Wednesday, July 1, 2015

Leap Second

leapsecond1.png

Take that, alarm clock!

Posted at 02:22 Permalink

Tuesday, June 30, 2015

Venus and Jupiter at Dusk

Look toward the west a little after sunset today to see a spectacle in the sky: a close conjunction of Venus and Jupiter.

Conjunction of Venus and Jupiter, 2015-06-30

Brilliant Venus is at the bottom and bright, but less dazzling, Jupiter is above. This picture was taken with a 50 mm normal lens and approximates the visual appearance. Tonight the planets are separated by only 0.3°, less than the width of the full Moon. To illustrate this, the following is a composite of an image of the conjunction and tonight's near-full Moon, which was rising as the planets were setting. I photographed both at the same scale and overlaid the images.

Conjunction of Venus and Jupiter compared to the full Moon, 2015-06-30

If you miss the closest conjunction tonight, the planets will remain strikingly close together in the sky for the next few days.

The juxtaposition of the two planets is only apparent. Venus is about 90 million kilometres from the Earth while Jupiter is 890 million kilometres away. Venus is so much brighter than Jupiter (which is more than ten times its size) because it is closer to the Sun and the Earth.

Update: On July 1st, 2015, the conjunction between Venus and Jupiter has widened to around 0.6°, just a bit more than the mean apparent diameter of the full Moon (it varies, due to the Moon's elliptical orbit), but it is still a spectacular sight in the western sky after sunset. Tonight I decided to see if I could take a picture which showed the two planets as they'd appear in a modest telescope. This is somewhat challenging, since Venus is presently 11.5 times brighter (on a linear scale) than Jupiter, and any exposure which shows Jupiter well will hopelessly overexpose Venus. So, I did what any self-respecting astrophotographer would do: cheat. I took two exposures, one best suited for Venus and one for Jupiter, and composited them. This is the result.

Conjunction of Venus and Jupiter, 2015-07-01

You can easily see that Venus is a fat crescent, while Jupiter's disc is fully illuminated. The apparent angular diameters of the two planets are almost identical (because the enormously larger Jupiter is so much more distant). This was still in late twilight, and I wasn't able to pop out the Galilean satellites. Jupiter would have set before those 4th magnitude objects became accessible.
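The brightness comparison quoted above (Venus about 11.5 times brighter than Jupiter on a linear scale) follows from the astronomical magnitude scale. The sketch below applies the standard Pogson relation; the magnitudes used (Venus about -4.4, Jupiter about -1.75 on the date of the photo) are approximate values assumed for illustration.

```python
# Convert a difference in apparent magnitudes to a linear brightness
# ratio using the Pogson relation: ratio = 10 ** (0.4 * delta_m).
# A difference of 5 magnitudes is exactly a factor of 100 in brightness.

def brightness_ratio(m_brighter, m_fainter):
    """Linear brightness ratio of the brighter object to the fainter."""
    return 10 ** (0.4 * (m_fainter - m_brighter))

# Approximate magnitudes for 2015-07-01 (illustrative values).
ratio = brightness_ratio(-4.4, -1.75)
print(f"Venus/Jupiter brightness ratio: {ratio:.1f}")
```

Since each magnitude is a factor of about 2.512 in brightness, the 2.65 magnitude gap between the two planets yields the factor of roughly 11.5 mentioned above.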

Both images were taken with a Nikon D600 camera and a 25-year-old Nikkor 300 mm f/4.5 prime (non-zoom) lens. The image of Venus was taken at f/8 with ISO 1250 sensitivity and 1/1600 second exposure. (Why such high ISO and short exposure? The lens is sharper stopped down to f/8, and the short exposure minimises the chance of vibration or movement of the planet on the sky blurring the image.) The venerable lens has a substantial amount of chromatic aberration, which causes a red fringe around the bright image of Venus. I eliminated this by decomposing the image into its three colour components and using only the green channel, where the lens is sharpest. Since there is no apparent colour visible on Venus, this lost no information.
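The green-channel trick described above can be sketched in a few lines. In practice one would split the channels with an image library such as Pillow; the pure-Python toy below just shows the operation itself on a tiny made-up grid of RGB pixels (all values invented for illustration).

```python
# Keep only the green value of each RGB pixel, producing a greyscale
# image free of the red chromatic-aberration fringe. An image is
# represented here as rows of (r, g, b) tuples.

def green_channel(pixels):
    """Return a greyscale image (rows of ints) from rows of RGB tuples."""
    return [[g for (_, g, _) in row] for row in pixels]

image = [
    [(255, 200, 180), (250, 190, 170)],   # bright pixels with a red fringe
    [(10, 12, 11), (8, 9, 10)],           # dark sky background
]
print(green_channel(image))   # → [[200, 190], [12, 9]]
```

The red and blue components are simply discarded, which is harmless here because the subject shows no apparent colour.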

The Jupiter image was taken with the same camera, lens, aperture, and ISO setting, but at 1/400 second. I clipped the colour image of Jupiter from it and pasted it over the dim smudge which was Jupiter in the Venus image, preserving the relative position of the two planets.

All exposures were made from a fixed (non-guided) tripod in the Fourmilab driveway. (2015-07-01 21:42 UTC)

Posted at 23:38 Permalink

Sunday, June 28, 2015

Reading List: Alas, Babylon

Frank, Pat [Harry Hart Frank]. Alas, Babylon. New York: Harper Perennial, [1959] 2005. ISBN 978-0-06-074187-7.
This novel, originally published in 1959, was one of the first realistic fictional depictions of an all-out nuclear war and its aftermath. While there are some well-crafted thriller scenes about the origins and catastrophic events of a one-day spasm war between the Soviet Union and the United States (the precise origins of which are not described in detail; the reader is led to conclude that it was an accident waiting to happen, much like the outbreak of World War I), the story is mostly set in Fort Repose, a small community on a river in the middle of Florida, in an epoch when Florida was still, despite some arrivals from the frozen north, very much part of the deep south.

Randy Bragg lived in the house built by his ancestors on River Road, with neighbours including long-time Floridians and recent arrivals, some of whom were scandalised to discover that one of their neighbours, the Henry family, was descended from slaves to whom Randy's grandfather had sold their land long before the first great Florida boom, when land was valued only by the citrus it could grow. Randy, nominally a lawyer, mostly lived on proceeds from his orchards, a trust established by his father, and occasional legal work, and was single, largely idle, and seemingly without direction. Then came The Day.

From the first detonations of Soviet bombs above cities and military bases around Fort Repose, the news from outside dwindled to brief bulletins from Civil Defense and what one of Randy's neighbours could glean from a short wave radio. As electrical power failed and batteries were exhausted, little was known of the fate of the nation and the world. At least, after The Day, there were no more visible nuclear detonations.

Suddenly Fort Repose found itself effectively in the 19th century. Gasoline supplies were limited to what people had in the tanks of their cars, and had to be husbanded for only the most essential purposes. Knowledge of how to hunt, trap, fish, and raise crops, chickens, and pigs became much more important than the fancy specialties of retirees in the area. Fortunately, by the luck of geography and weather, Fort Repose was spared serious fallout from the attack, and the very fact that the large cities surrounding it were directly targeted (and that it was not on a main highway) meant it would be spared invasion by the “golden horde” of starving urban and suburban refugees which figure in many post-apocalyptic stories. Still, cut off from the outside, “what you have is all you've got”, and people must face the reality that medical supplies, their only doctor, food the orchards cannot supply, and even commodities as fundamental as salt are limited. But people, especially rural people in the middle of the 20th century, are resourceful, and before long a barter market springs up in which honey, coffee, and whiskey prove much more valuable than gold or silver.

Wherever there are things of value and those who covet them, predators of the two-footed variety will be manifest. While there is no mass invasion, highwaymen and thieves appear to prey upon those trying to eke out a living for their families. Randy Bragg, now responsible for three families living under his own roof and neighbours provided by his artesian water well, is forced to grow into a protector of these people and the community, eventually defending them from those who would destroy everything they have managed to salvage from the calamity.

They learn that all of Florida has been designated as one of the Contaminated Zones, and hence that no aid can be anticipated from what remains of the U.S. government. Eventually a cargo plane flies over and drops leaflets informing residents that at some time in the future aid may be forthcoming: “It was proof that the government of the United States still functioned. It was also useful as toilet paper. Next day, ten leaflets would buy an egg, and fifty a chicken. It was paper, and it was money.”

This is a tale of the old, weird, stiff-spined, rural America which could ultimately ride out what Herman Kahn called the “destruction of the A country” and keep on going. We hear little of the fate of those in the North where, with The Day occurring near mid-winter, the outcome for those who escaped the immediate attack would have been much more calamitous. Ultimately it is the resourcefulness, fundamental goodness, and growth of these people under extreme adversity which makes this tale of catastrophe one of hope.

The Kindle edition appears to have been created by scanning a print edition and processing it through an optical character recognition program. The result of this seems to have been run through a spelling checker, but not subjected to detailed copy editing. As a result, there are numerous scanning errors, some obvious, some humorous, and some real head scratchers. This classic work, from a major publisher, deserves better.

Posted at 22:37 Permalink

Friday, May 29, 2015

Reading List: Redshirts

Scalzi, John. Redshirts. New York: Tor, 2012. ISBN 978-0-7653-3479-4.
Ensign Andrew Dahl thought himself extremely fortunate when, just out of the Academy, he was assigned to Universal Union flagship Intrepid in the xenobiology lab. Intrepid has a reputation for undertaking the most demanding missions of exploration, diplomacy, and, when necessary, enforcement of order among the multitude of planets in the Union, and it was the ideal place for an ambitious junior officer to begin his career.

But almost immediately after reporting aboard, Dahl began to discover there was something distinctly off about life aboard the ship. Whenever one of the senior officers walked through the corridors, crewmembers would part ahead of them, disappearing into side passages or through hatches. When the science officer visited a lab, experienced crew would vanish before he appeared and return only after he departed. Crew would invent clever stratagems to avoid being assigned to a post on the bridge or to an away mission.

Seemingly, every away mission would result in the death of a crew member, often in gruesome circumstances involving Longranian ice sharks, Borgovian land worms, the Merovian plague, or other horrors. But the senior crew (the captain, science officer, doctor, and chief engineer) were never killed, although astrogator Lieutenant Kerensky, a member of the bridge crew and a regular on away parties, was frequently grievously injured but invariably made a near-miraculous and complete recovery.

Dahl sees all of this for himself when he barely escapes with his life from a rescue mission to a space station afflicted with killer robots. Four junior crew die and Kerensky is injured once again. Upon returning to the ship, Dahl and his colleagues vow to get to the bottom of what is going on. They've heard the legends of, and one may have even spotted, Jenkins, who disappeared into the bowels of the ship after his wife, a fellow crew member, died meaninglessly by a stray shot of an assassin trying to kill a Union ambassador on an away mission.

Dahl undertakes to track down Jenkins, who is rumoured to have a theory which explains everything that is happening. The theory turns out to be as bizarre as, or more so than, life on the Intrepid, but Dahl and his fellow ensigns concede that it does explain what they're experiencing and that applying it allows them to make sense of events which are otherwise incomprehensible (I love “the Box”).

But a theory, however explanatory, does not address the immediate problem: how to avoid being devoured by Pornathic crabs or the Great Badger of Tau Ceti on their next away mission. Dahl and his fellow junior crew must figure out how to turn the nonsensical reality they inhabit toward their own survival and do so without overtly engaging in, you know, mutiny, which could, like death, be career limiting. The story becomes so meta it will make you question the metaness of meta itself.

This is a pure romp, often laugh-out-loud funny, having a delightful time immersing itself in the lives of characters in one of our most beloved and enduring science fiction universes. We all know the bridge crew and department heads, but what's it really like below decks, and how does it feel to experience that sinking feeling when the first officer points to you and says “You're with me!” when forming an away team?

The novel has three codas written, respectively, in the first, second, and third person. The last, even in this very funny book, will moisten your eyes. Redshirts won the Hugo Award for Best Novel in 2013.

Posted at 22:04 Permalink

Wednesday, May 20, 2015

Reading List: A Short History of Man

Hoppe, Hans-Hermann. A Short History of Man. Auburn, AL: Mises Institute, 2015. ISBN 978-1-61016-591-4.
The author is one of the most brilliant and original thinkers and eloquent contemporary expositors of libertarianism, anarcho-capitalism, and Austrian economics. Educated in Germany, Hoppe came to the United States to study with Murray Rothbard and in 1986 joined Rothbard on the faculty of the University of Nevada, Las Vegas, where he taught until his retirement in 2008. Hoppe's 2001 book, Democracy: The God That Failed (June 2002), made the argument that democratic election of temporary politicians in the modern all-encompassing state will inevitably result in profligate spending and runaway debt because elected politicians have every incentive to buy votes and no stake in the long-term solvency and prosperity of the society. Whatever the drawbacks (and historical examples of how things can go wrong), a hereditary monarch has no need to buy votes and every incentive not to pass on a bankrupt state to his descendants.

This short book (144 pages) collects three essays previously published elsewhere which, taken together, present a comprehensive picture of human development from the emergence of modern humans in Africa to the present day. Subtitled “Progress and Decline”, it tells a story of long periods of stasis punctuated by two enormous breakthroughs, accompanied, in parallel, by the folly of ever-growing domination of society by a coercive state which, in its modern incarnation, risks halting or reversing the gains of the modern era.

Members of the collectivist and politically-correct mainstream in the fields of economics, anthropology, and sociology who can abide Prof. Hoppe's adamantine libertarianism will probably have their skulls explode when they encounter his overview of human economic and social progress. That overview is based upon genetic selection for increased intelligence and low time preference among populations which, driven by population pressure, migrated from the tropics where the human species originated into more demanding climates north and south of the Equator, and onward toward the poles. In the tropics, every day is about the same as the next; seasons don't differ much from one another; and the variation in the length of the day is not great. In the temperate zone and beyond, hunter-gatherers must cope with plant life which varies along with the seasons, prey animals that migrate, and hot summers and cold winters, the latter requiring the knowledge and foresight to make provisions for the lean season. Predicting the changes in seasons becomes important, and in this may have been the genesis of astronomy.

A hunter-gatherer society is essentially parasitic upon the natural environment—it consumes the plant and animal bounty of nature but does nothing to replenish it. This means that for a given territory there is a maximum number (varying due to details of terrain, climate, etc.) of humans it can support before an increase in population leads to a decline in the per-capita standard of living of its inhabitants. This is what the author calls the “Malthusian trap”. Looked at from the other end, a human population, growing as human populations tend to do, will inevitably reach the carrying capacity of the area in which it lives. When this happens, there are only three options: artificially limit the growth in population to the land's carrying capacity, split off one or more groups which migrate to new territory not yet occupied by humans, or conquer new land from adjacent groups, either killing them off or driving them to migrate. This was the human condition for more than a hundred millennia, and it is this population pressure, the author contends, which drove human migration from tropical Africa into almost every niche on the globe in which humans could survive, even some of the most marginal.

While the life of a hunter-gatherer band in the tropics is relatively easy (or so say those who have studied the few remaining populations who live that way today), the further from the equator, the more intelligence, knowledge, and ability to transmit that knowledge from generation to generation are required to survive. This creates a selection pressure for intelligence: individual members of a band of hunter-gatherers who are better at hunting and gathering will have more offspring which survive to maturity, and bands with greater intelligence produced in this manner will grow faster and, by migration and conquest, displace those less endowed. This phenomenon would cause one to expect that (discounting the effects of large-scale migrations) the mean intelligence of human populations would be lowest near the equator and increase with latitude (north or south). This, in general terms, and excluding marginal environments, is precisely what is observed, even today.

After hundreds of thousands of years as hunter-gatherers parasitic upon nature, sometime around 11,000 years ago, probably first in the Fertile Crescent in the Middle East, what is now called the Neolithic Revolution occurred. Humans ceased to wander in search of plants and game, and settled down into fixed communities which supported themselves by cultivating plants and raising animals they had domesticated. Both the plants and animals underwent selection by humans who bred those most adapted to their purposes. Agriculture was born. Humans who adopted the new means of production were no longer parasitic upon nature: they produced their sustenance by their own labour, improving upon that supplied by nature through their own actions. In order to do this, they had to invent a series of new technologies (for example, milling grain and fencing pastures) which did not exist in nature. Agriculture was far more efficient than the hunter-gatherer lifestyle in that a given amount of land (if suitable for known crops) could support a much larger human population.

While agriculture allowed a large increase in the human population, it did not escape the Malthusian trap: it simply increased the population density at which the carrying capacity of the land would be reached. Technological innovations such as irrigation and crop rotation could further increase the capacity of the land, but population increase would eventually surpass the new limit. As a result of this, from 1000 B.C. to A.D. 1800, income per capita (largely measured in terms of food) barely varied: the benefit of each innovation was quickly negated by population increase. To be sure, in all of this epoch there were a few wealthy people, but the overwhelming majority of the population lived near the subsistence level.
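The dynamic described above lends itself to a toy simulation (all numbers invented for illustration): each innovation raises total output, but population then grows until per-capita income falls back toward subsistence, so the long-run standard of living barely moves.

```python
# Toy model of the Malthusian trap: innovations raise output, but
# population growth erodes the per-capita gain back to subsistence.
# All parameters are illustrative, not historical estimates.

SUBSISTENCE = 1.0   # income per person needed to survive
GROWTH = 0.02       # annual population growth rate when above subsistence

def simulate(output, population, innovations, years_each=200):
    """Return per-capita income at the end of each innovation era."""
    incomes = []
    for boost in innovations:
        output *= boost                      # an innovation raises output
        for _ in range(years_each):
            if output / population > SUBSISTENCE:
                population *= 1 + GROWTH     # surplus: population expands
        incomes.append(output / population)
    return incomes

# Three innovations, each raising output 50%: income ends near subsistence.
print(simulate(output=100.0, population=100.0, innovations=[1.5, 1.5, 1.5]))
```

Despite total output rising by a factor of 3.375, per-capita income in this sketch hovers near the subsistence level after every era, which is just the pattern the author describes for 1000 B.C. to A.D. 1800.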

But once again, slowly but surely, a selection pressure was being applied upon humans who adopted the agricultural lifestyle. It is cognitively more difficult to be a farmer or rancher than to be a member of a hunter-gatherer band, and success depends strongly upon having a low time preference—to be willing to forgo immediate consumption for a greater return in the future. (For example, a farmer who does not reserve and protect seeds for the next season will fail. Selective breeding of plants and animals to improve their characteristics takes years to produce results.) This creates an evolutionary pressure in favour of further increases in intelligence and, to the extent that such might be genetic rather than due to culture, for low time preference. Once the family emerged as the principal unit of society rather than the hunter-gatherer band, selection pressure was amplified since those with the selected-for characteristics would produce more offspring and the phenomenon of free riding which exists in communal bands is less likely to occur.

Around the year 1800, initially in Europe and later elsewhere, a startling change occurred: the Industrial Revolution. In societies which adopted the emerging industrial means of production, per capita income, which had been stagnant for almost two millennia, took off like a skyrocket, while at the same time population began to grow exponentially, rising from around 900 million in 1800 to 7 billion today. The Malthusian trap had been escaped; it appeared for the first time that an increase in population, far from consuming the benefits of innovation, actually contributed to and accelerated it.
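As a back-of-the-envelope check on those figures, the average population growth rate implied by going from about 900 million in 1800 to about 7 billion today (taking 2015 as "today") works out to just under one percent per year:

```python
# Average annual growth rate implied by population rising from
# ~900 million (1800) to ~7 billion (2015): solve P2 = P1 * (1+r)^t.
import math

p1, p2, years = 9e8, 7e9, 2015 - 1800
rate = math.exp(math.log(p2 / p1) / years) - 1
print(f"implied growth: ~{rate * 100:.2f}% per year")
```

Sustained over two centuries, even that modest rate compounds into a nearly eightfold increase, which is exactly what makes the post-1800 escape from the Malthusian trap so striking.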

There are some deep mysteries here. Why did it take so long for humans to invent agriculture? Why, after the invention of agriculture, did it take so long to invent industrial production? After all, the natural resources extant at the start of both of these revolutions were present in all of the preceding period, and there were people with the leisure to think and invent at all times in history. The author argues that what differed was the people. Prior to the advent of agriculture, people were simply not sufficiently intelligent to invent it (or, to be more precise, since intelligence follows something close to a normal distribution, there was an insufficient fraction of the population with the requisite intelligence to discover and implement the idea of agriculture). Similarly, prior to the Industrial Revolution, the intelligence of the general population was insufficient for it to occur. Throughout the long fallow periods, however, natural selection was breeding smarter humans and, eventually, in some place and time, a sufficient fraction of smart people, the required natural resources, and a society sufficiently open to permit innovation and moving beyond tradition would spark the fire. As the author notes, it's much easier to copy a good idea once you've seen it working than to come up with it in the first place and get it to work the first time.

Some will argue that Hoppe's hypothesis that human intelligence has been increasing over time is falsified by the fact that societies much closer in time to the dawn of agriculture produced works of art, literature, science, architecture, and engineering which are comparable to those of modern times. But those works were produced not by the average person but rather by outliers, who exist in all times and places (although in smaller numbers when mean intelligence is lower). For a general phase transition in society, it is a necessary condition that the bulk of the population involved have intelligence adequate to work in the new way.

After investigating human progress on the grand scale over long periods of time, the author turns to the phenomenon which may cause this progress to cease and turn into decline: the growth of the coercive state. Hunter-gatherers had little need for anything which today would be called governments. With bands on the order of 100 people sharing resources in common, many sources of dispute would not occur and those which did could be resolved by trusted elders or, failing that, combat. When humans adopted agriculture and began to live in settled communities, and families owned and exchanged property with one another, a whole new source of problems appeared. Who has the right to use this land? Who stole my prize animal? How are the proceeds of a joint effort to be distributed among the participants? As communities grew and trade among them flourished, complexity increased apace. Hoppe traces how the resolution of these conflicts has evolved over time. First, the parties to the dispute would turn to a member of an aristocracy, a member of the community respected because of their intelligence, wisdom, courage, or reputation for fairness, to settle the matter. (We often think of an aristocracy as hereditary but, although many aristocracies evolved into systems of hereditary nobility, the word originally meant “rule by the best”, and that is how the institution began.)

With growing complexity, aristocrats (or nobles) needed a way to resolve disputes among themselves, and this led to the emergence of kings. But like the nobles, the king was seen to apply a law which was part of nature (or, in the English common law tradition, discovered through the experience of precedents). It was with the emergence of absolute monarchy, constitutional monarchy, and finally democracy that things began to go seriously awry. In time, law became seen not as something which those given authority apply, but rather something those in power create. We have largely forgotten that legislation is not law, and that rights are not granted to us by those in power, but inhere in us and are taken away and/or constrained by those willing to initiate force against others to work their will upon them.

The modern welfare state risks undoing a thousand centuries of human progress by removing the selection pressure for intelligence and low time preference. Indeed, the welfare state punishes (taxes) the productive, who tend to have these characteristics, and subsidises those who do not, increasing their fraction within the population. Evolution works slowly, but inexorably. But the effects of shifting incentives can manifest themselves long before biology has its way. When a population is told “You've made enough”, “You didn't build that”, or sees working harder to earn more as simply a way to spend more of their lives supporting those who don't (along with those who have gamed the system to extract resources confiscated by the state), that glorious exponential curve which took off in 1800 may begin to bend down toward the horizontal and perhaps eventually turn downward.

I don't usually include lengthy quotes, but the following passage from the third essay, “From Aristocracy to Monarchy to Democracy”, is so brilliant and illustrative of what you'll find herein I can't resist.

Assume now a group of people aware of the reality of interpersonal conflicts and in search of a way out of this predicament. And assume that I then propose the following as a solution: In every case of conflict, including conflicts in which I myself am involved, I will have the last and final word. I will be the ultimate judge as to who owns what and when and who is accordingly right or wrong in any dispute regarding scarce resources. This way, all conflicts can be avoided or smoothly resolved.

What would be my chances of finding your or anyone else's agreement to this proposal?

My guess is that my chances would be virtually zero, nil. In fact, you and most people will think of this proposal as ridiculous and likely consider me crazy, a case for psychiatric treatment. For you will immediately realize that under this proposal you must literally fear for your life and property. Because this solution would allow me to cause or provoke a conflict with you and then decide this conflict in my own favor. Indeed, under this proposal you would essentially give up your right to life and property or even any pretense to such a right. You have a right to life and property only insofar as I grant you such a right, i.e., as long as I decide to let you live and keep whatever you consider yours. Ultimately, only I have a right to life and I am the owner of all goods.

And yet—and here is the puzzle—this obviously crazy solution is the reality. Wherever you look, it has been put into effect in the form of the institution of a State. The State is the ultimate judge in every case of conflict. There is no appeal beyond its verdicts. If you get into conflicts with the State, with its agents, it is the State and its agents who decide who is right and who is wrong. The State has the right to tax you. Thereby, it is the State that makes the decision how much of your property you are allowed to keep—that is, your property is only “fiat” property. And the State can make laws, legislate—that is, your entire life is at the mercy of the State. It can even order that you be killed—not in defense of your own life and property but in the defense of the State or whatever the State considers “defense” of its “state-property.”

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License and may be redistributed pursuant to the terms of that license. In addition to the paperback and Kindle editions available from Amazon, the book may be downloaded for free from the Library of the Mises Institute in PDF or EPUB formats, or read on-line in an HTML edition.

Posted at 15:38 Permalink

Saturday, May 16, 2015

Reading List: Building the H Bomb

Ford, Kenneth W. Building the H Bomb. Singapore: World Scientific, 2015. ISBN 978-981-461-879-3.
In the fall of 1948, the author entered the graduate program in physics at Princeton University, hoping to obtain a Ph.D. and pursue a career in academia. In his first year, he took a course in classical mechanics taught by John Archibald Wheeler and realised that, despite the dry material of the course, he was in the presence of an extraordinary teacher and thinker, and decided he wanted Wheeler as his thesis advisor. In April of 1950, after Wheeler returned from an extended visit to Europe, the author approached him to become his advisor, not knowing in which direction his research would proceed. Wheeler immediately accepted him as a student, and then said that he (Wheeler) would be absent for a year or more at Los Alamos to work on the hydrogen bomb, and that he'd be pleased if Ford could join him on the project. Ford accepted, in large part because he believed that working on such a challenge would be “fun”, and that it would provide a chance for daily interaction with Wheeler and other senior physicists which would not exist in a regular Ph.D. program.

Well before the Manhattan project built the first fission weapon, there had been interest in fusion as an alternative source of nuclear energy. While fission releases energy by splitting heavy atoms such as uranium and plutonium into lighter atoms, fusion merges lighter atoms such as hydrogen and its isotopes deuterium and tritium into heavier nuclei like helium. While nuclear fusion can be accomplished in a desktop apparatus, doing so requires vastly more energy input than is released, making it impractical as an energy source or weapon. Still, compared to enriched uranium or plutonium, the fuel for a fusion weapon is abundant and inexpensive and, unlike a fission weapon whose yield is limited by the critical mass beyond which it would predetonate, in principle a fusion weapon could have an unlimited yield: the more fuel, the bigger the bang.

Once the Manhattan Project weaponeers became confident they could build a fission weapon, physicists, most prominent among them Edward Teller, realised that the extreme temperatures created by a nuclear detonation could be sufficient to ignite a fusion reaction in light nuclei like deuterium and that reaction, once started, might propagate by its own energy release just like the chemical fire in a burning log. It seemed plausible—the temperature of an exploding fission bomb exceeded that of the centre of the Sun, where nuclear fusion was known to occur. The big question was whether the fusion burn, once started, would continue until most of the fuel was consumed or fizzle out as its energy was radiated outward and the fuel dispersed by the explosion.

Answering this question required detailed computations of a rapidly evolving system in three dimensions with a time slice measured in nanoseconds. During the Manhattan Project, a “computer” was a woman operating a mechanical calculator, and even with large rooms filled with hundreds of “computers” the problem was intractably difficult. Unable to directly model the system, physicists resorted to analytical models which produced ambiguous results. Edward Teller remained optimistic that the design, which came to be called the “Classical Super”, would work, but many others, including J. Robert Oppenheimer, Enrico Fermi, and Stanislaw Ulam, based upon the calculations that could be done at the time, concluded it would probably fail. Oppenheimer's opposition to the Super or hydrogen bomb project has been presented as a moral opposition to development of such a weapon, but the author's contemporary recollection is that it was based upon Oppenheimer's belief that the classical super was unlikely to work, and that effort devoted to it would be at the expense of improved fission weapons which could be deployed in the near term.

All of this changed on March 9th, 1951. Edward Teller and Stanislaw Ulam published a report which presented a new approach to a fusion bomb. Unlike the classical super, which required the fusion fuel to burn on its own after being ignited, the new design, now called the Teller-Ulam design, compressed a capsule of fusion fuel by the radiation pressure of a fission detonation (usually, we don't think of radiation as having pressure, but in the extreme conditions of a nuclear explosion it far exceeds pressures we encounter with matter), and then ignited it with a “spark plug” of fission fuel at the centre of the capsule. Unlike the classical super, the fusion fuel would burn at thermodynamic equilibrium and, in doing so, liberate abundant neutrons with such a high energy they would induce fission in Uranium-238 (which cannot be fissioned by the less energetic neutrons of a fission explosion), further increasing the yield.

Oppenheimer, who had been opposed to work upon fusion, pronounced the Teller-Ulam design “technically sweet” and immediately endorsed its development. The author's interpretation is that once a design was in hand which appeared likely to work, there was no reason to believe that the Soviets, who had by that time exploded their own fission bomb, would not also discover it and proceed to develop such a weapon, and hence it was important that the U.S. give priority to the fusion bomb to get there first. (Unlike the Soviet fission bomb, which was a copy of the U.S. implosion design based upon material obtained by espionage, there is no evidence the Soviet fusion bomb, first tested in 1955, was based upon espionage, but rather was an independent invention of the radiation implosion concept by Andrei Sakharov and Yakov Zel'dovich.)

With the Teller-Ulam design in hand, the author, working with Wheeler's group, first in Los Alamos and later at Princeton, was charged with working out the details: how precisely would the material in the bomb behave, nanosecond by nanosecond. By this time, calculations could be done by early computing machinery: first the IBM Card-Programmed Calculator and later the SEAC, which was, at the time, one of the most advanced electronic computers in the world. Like computer nerds down to the present day, the author spent many nights babysitting the machine as it crunched the numbers.

On November 1st, 1952, the Ivy Mike device was detonated in the Pacific, with a yield of 10.4 megatons of TNT. John Wheeler witnessed the test from a ship at a safe distance from the island which was obliterated by the explosion. The test completely confirmed the author's computations of the behaviour of the thermonuclear burn and paved the way for deliverable thermonuclear weapons. (Ivy Mike was a physics experiment, not a weapon, but once it was known the principle was sound, it was basically a matter of engineering to design bombs which could be air-dropped.) With the success, the author concluded his work on the weapons project and returned to his dissertation, receiving his Ph.D. in 1953.

This is about half a personal memoir and half a description of the physics of thermonuclear weapons and the process by which the first weapon was designed. The technical sections are entirely accessible to readers with only a basic knowledge of physics (I was about to say “high school physics”, but I don't know how much physics, if any, contemporary high school graduates know.) There is no secret information disclosed here. All of the technical information is available in much greater detail from sources (which the author cites) such as Carey Sublette's Nuclear Weapon Archive, which is derived entirely from unclassified sources. Curiously, the U.S. Department of Energy (which has, since its inception, produced not a single erg of energy) demanded that the author heavily redact material in the manuscript, all derived from unclassified sources and dating from work done more than half a century ago. The only reason I can imagine for this is that a weapon scientist who was there, by citing information which has been in the public domain for two decades, implicitly confirms that it's correct. But it's not like the Soviets/Russians, British, French, Chinese, Israelis, and Indians haven't figured it out by themselves or that others suitably motivated can't. The author told them to stuff it, and here we have his unexpurgated memoir of the origin of the weapon which shaped the history of the world in which we live.

Posted at 23:07 Permalink

Wednesday, May 13, 2015

Reading List: Act of War

Thor, Brad. Act of War. New York: Pocket Books, 2014. ISBN 978-1-4767-1713-5.
This is the fourteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel the author returns to the techno-thriller genre and places his characters, this time backed by a newly elected U.S. president who is actually interested in defending the country, in the position of figuring out a complicated yet potentially devastating attack mounted by a nation-state adversary following the doctrine of unrestricted warfare, and covering its actions by operating through non-state parties apparently unrelated to the aggressor.

The trail goes through Pakistan, North Korea, and Nashville, Tennessee, with multiple parties trying to put together the pieces of the puzzle while the clock is ticking. Intelligence missions are launched into North Korea and the United Arab Emirates to try to figure out what is going on. Finally, as the nature of the plot becomes clear, Nicholas (the Troll) brings the tools of Big Data to bear on the mystery to avert disaster.

This is a workmanlike thriller and a fine “airplane book”. There is less shoot-em-up action than in other novels in the series, and part of the suspense is supposed to be the reader's trying to figure out, along with the characters, the nature of the impending attack. Unfortunately, at least for me, the answer to the puzzle was obvious well before the halfway point in the story, and knowing it was a substantial spoiler for the rest of the book. I've thought and written quite a bit about this scenario, so I may have been more attuned to the clues than the average reader.

The author invokes the tired canard about NASA's priorities having been redirected toward reinforcing Muslim self-esteem. This is irritating (because it's false), but plays no major part in the story. Still, it's a good read, and I'll be looking forward to the next book in the series.

Posted at 21:51 Permalink

Thursday, April 30, 2015

Reading List: A.I. Apocalypse

Hertling, William. A.I. Apocalypse. Portland, OR: Liquididea Press, 2012. ISBN 978-0-9847557-4-5.
This is the second volume in the author's Singularity Series which began with Avogadro Corp. (March 2014). It has been ten years since ELOPe, an E-mail optimisation tool developed by Avogadro Corporation, made the leap to strong artificial intelligence and, after a rough start, became a largely benign influence upon humanity. The existence of ELOPe is still a carefully guarded secret, although the Avogadro CEO, doubtless with the help of ELOPe, has become president of the United States. Avogadro has spun ELOPe off as a separate company, run by Mike Williams, one of its original creators. ELOPe operates its own data centres and the distributed Mesh network it helped create.

Leon Tsarev has a big problem. A bright high school student hoping to win a scholarship to an elite university to study biology, Leon is contacted out of the blue by his uncle Alexis living in Russia. Alexis is a rogue software developer whose tools for infecting computers, organising them into “botnets”, and managing the zombie horde for criminal purposes have embroiled him with the Russian mob. Recently, however, the effectiveness of his tools has dropped dramatically and the botnet has shrunk to a fraction of its former size. Alexis's employers are displeased with this situation and have threatened murder if he doesn't do something to restore the power of the botnet.

Uncle Alexis starts to E-mail Leon, begging for assistance. Leon replies that he knows little or nothing about computer viruses or botnets, but Alexis persists. Leon is also loath to do anything which might put him on the wrong side of the law, which would wreck his career ambitions. Then Leon is accosted on the way home from school by a large man speaking with a thick Russian accent who says, “Your Uncle Alexis is in trouble, yes. You will help him. Be good nephew.” And just like that, it's Leon who's now in trouble with the Russian mafia, and they know where he lives.

Leon decides that with his own life on the line he has no alternative but to try to create a virus for Alexis. He applies his knowledge of biology to the problem, and settles on an architecture which is capable of evolution and, in a process similar to lateral gene transfer in bacteria, of identifying algorithms in systems it infects and incorporating them into itself. As in biology, the most successful variants of the evolving virus would defend themselves the best, propagate more rapidly, and eventually displace less well adapted competitors.

After a furious burst of effort, Leon finishes the virus, which he's named Phage, and sends it to his uncle, who uploads it to the five thousand computers which are the tattered remnants of his once-mighty botnet. An exhausted Leon staggers off to get some sleep.

When Leon wakes up, the technological world has almost come to a halt. The overwhelming majority of personal computing devices and embedded systems with network connectivity are infected and doing nothing but running Phage and almost all network traffic consists of ever-mutating versions of Phage trying to propagate themselves. Telephones, appliances, electronic door locks, vehicles of all kinds, and utilities are inoperable.

The only networks and computers not taken over by the Phage are ELOPe's private network (which detected the attack early and whose servers are devoting much of their resources to defend themselves against the rapidly changing threat) and high security military networks which have restrictive firewalls separating themselves from public networks. As New York starts to burn with fire trucks immobilised, Leon realises that being identified as the creator of the catastrophe might be a career-limiting move, and he, along with two technology-geek classmates, decides to get out of town and seek ways to combat the Phage using retro technology it can't exploit.

Meanwhile, Mike Williams, working with ELOPe, tries to understand what is happening. The Phage, like biological life on Earth, continues to evolve and discovers that multiple components, working in collaboration, can accomplish more than isolated instances of the virus. The software equivalent of multicellular life appears, and continues to evolve at a breakneck pace. Then it awakens and begins to explore the curious universe it inhabits.

This is a gripping thriller in which, as in Avogadro Corp., the author gets so much right from a technical standpoint that even some of the more outlandish scenes appear plausible. One thing I believe the author grasped which many other tales of the singularity miss is just how fast everything can happen. Once an artificial intelligence hosted on billions of machines distributed around the world, all running millions of times faster than human thought, appears, things get very weird, very fast, and humans suddenly find themselves living in a world where they are not at the peak of the cognitive pyramid. I'll not spoil the plot with further details, but you'll find the world at the end of the novel a very different place than the one at the start.

A Kindle edition is available.

Posted at 21:46 Permalink

Saturday, April 18, 2015

Reading List: Einstein's Unification

van Dongen, Jeroen. Einstein's Unification. Cambridge: Cambridge University Press, 2010. ISBN 978-0-521-88346-7.
In 1905 Albert Einstein published four papers which transformed the understanding of space, time, mass, and energy; provided physical evidence for the quantisation of energy; and gave observational confirmation of the existence of atoms. These publications are collectively called the Annus Mirabilis papers, and vaulted the largely unknown Einstein to the top rank of theoretical physicists. When Einstein was awarded the Nobel Prize in Physics in 1921, it was for one of these 1905 papers which explained the photoelectric effect. Einstein's 1905 papers are masterpieces of intuitive reasoning and clear exposition, and demonstrated Einstein's technique of constructing thought experiments based upon physical observations, then deriving testable mathematical models from them. Unlike so many present-day scientific publications, Einstein's papers on special relativity and the equivalence of mass and energy were accessible to anybody with a college-level understanding of mechanics and electrodynamics and used no special jargon or advanced mathematics. Being based on well-understood concepts, neither cited any other scientific paper.

While special relativity revolutionised our understanding of space and time, and has withstood every experimental test to which it has been subjected in the more than a century since it was formulated, it was known from inception that the theory was incomplete. It's called special relativity because it only describes the behaviour of bodies under the special case of uniform unaccelerated motion in the absence of gravity. To handle acceleration and gravitation would require extending the special theory into a general theory of relativity, and it is upon this quest that Einstein next embarked.

As before, Einstein began with a simple thought experiment. Just as in special relativity, where there is no experiment which can be done in a laboratory without the ability to observe the outside world that can determine its speed or direction of uniform (unaccelerated) motion, Einstein argued that there should be no experiment an observer could perform in a sufficiently small closed laboratory which could distinguish uniform acceleration from the effect of gravity. If one observed objects to fall with an acceleration equal to that on the surface of the Earth, the laboratory might be stationary on the Earth or in a space ship accelerating with a constant acceleration of one gravity, and no experiment could distinguish the two situations. (The reason for the “sufficiently small” qualification is that since gravity is produced by massive objects, the direction a test particle will fall depends upon its position with respect to the centre of gravity of the body. In a very large laboratory, objects dropped far apart would fall in different directions. This is what causes tides.)
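The “sufficiently small” qualification can be made concrete with a quick calculation. In this sketch (the 10 metre laboratory height and the standard values for Newton's constant and the Earth's mass and radius are my own choices, not from the book), the difference in gravitational acceleration between the floor and ceiling of a laboratory on Earth is tiny but nonzero, whereas a uniformly accelerating laboratory would show no such gradient:

```python
G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def g(r):
    """Newtonian gravitational acceleration at distance r from the Earth's centre."""
    return G * M_EARTH / r ** 2

# Acceleration at the floor and ceiling of a 10 m tall laboratory:
g_floor = g(R_EARTH)
g_ceiling = g(R_EARTH + 10.0)

print(f"g at floor:         {g_floor:.6f} m/s^2")
print(f"g at ceiling:       {g_ceiling:.6f} m/s^2")
print(f"tidal differential: {g_floor - g_ceiling:.2e} m/s^2")
```

An experiment able to resolve acceleration differences of order 10⁻⁵ m/s² over 10 metres could therefore distinguish real gravity from uniform acceleration, which is why the equivalence holds only in a sufficiently small laboratory.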

Einstein called this observation the “equivalence principle”: that the effects of acceleration and gravity are indistinguishable, and that hence a theory which extended special relativity to incorporate accelerated motion would necessarily also be a theory of gravity. Einstein had originally hoped it would be straightforward to reconcile special relativity with acceleration and gravity, but the deeper he got into the problem, the more he appreciated how difficult a task he had undertaken. Thanks to the Einstein Papers Project, which is curating and publishing all of Einstein's extant work, including notebooks, letters, and other documents, the author (a participant in the project) has been able to reconstruct Einstein's ten-year search for a viable theory of general relativity.

Einstein pursued a two-track approach. The bottom up path started with Newtonian gravity and attempted to generalise it to make it compatible with special relativity. In this attempt, Einstein was guided by the correspondence principle, which requires that any new theory which explains behaviour under previously untested conditions must reproduce the tested results of existing theory under known conditions. For example, the equations of motion in special relativity reduce to those of Newtonian mechanics when velocities are small compared to the speed of light. Similarly, for gravity, any candidate theory must yield results identical to Newtonian gravitation when field strength is weak and velocities are low.
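The correspondence principle is easy to check numerically. This sketch (the test mass and speeds are arbitrary choices of mine, not from the book) compares the relativistic kinetic energy (γ − 1)mc² with the Newtonian ½mv²: the two agree at everyday speeds and diverge as v approaches c:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_relativistic(m, v):
    """Relativistic kinetic energy (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

def ke_newtonian(m, v):
    """Classical kinetic energy (1/2) * m * v^2."""
    return 0.5 * m * v ** 2

m = 1.0  # kg
for v in (1_000.0, 0.1 * C, 0.9 * C):
    ratio = ke_relativistic(m, v) / ke_newtonian(m, v)
    print(f"v = {v:>13.1f} m/s  relativistic/Newtonian = {ratio:.6f}")
```

At 1 km/s the ratio is indistinguishable from 1; at 90% of light speed the relativistic energy is more than three times the Newtonian value, so any candidate theory must reproduce Newton only in the low-velocity limit.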

From the top down, Einstein concluded that any theory compatible with the principle of equivalence between acceleration and gravity must exhibit general covariance, which can be thought of as being equally valid regardless of the choice of co-ordinates (as long as they are varied without discontinuities). There are very few mathematical structures which have this property, and Einstein was drawn to Riemann's tensor geometry. Over years of work, Einstein pursued both paths, producing a bottom-up theory which was not generally covariant which he eventually rejected as in conflict with experiment. By November 1915 he had returned to the top-down mathematical approach and in four papers expounded a generally covariant theory which agreed with experiment. General relativity had arrived.

Einstein's 1915 theory correctly predicted the anomalous perihelion precession of Mercury and also predicted that starlight passing near the limb of the Sun would be deflected by twice the angle expected based on Newtonian gravitation. This was confirmed (within a rather large margin of error) in an eclipse expedition in 1919, which made Einstein's general relativity front page news around the world. Since then precision tests of general relativity have tested a variety of predictions of the theory with ever-increasing precision, with no experiment to date yielding results inconsistent with the theory.

Thus, by 1915, Einstein had produced theories of mechanics, electrodynamics, the equivalence of mass and energy, and the mechanics of bodies under acceleration and the influence of gravitational fields, and changed space and time from a fixed background in which physics occurs to a dynamical arena: “Matter and energy tell spacetime how to curve. Spacetime tells matter how to move.” What do you do, at age 36, having figured out, largely on your own, how a large part of the universe works?

Much of Einstein's work so far had consisted of unification. Special relativity unified space and time, matter and energy. General relativity unified acceleration and gravitation, gravitation and geometry. But much remained to be unified. In general relativity and classical electrodynamics there were two field theories, both defined on the continuum, both with unlimited range and an inverse square law, both exhibiting static and dynamic effects (although the details of gravitomagnetism would not be worked out until later). And yet the theories seemed entirely distinct: gravity was always attractive and worked by the bending of spacetime by matter-energy, while electromagnetism could be either attractive or repulsive, and seemed to be propagated by fields emitted by point charges—how messy.

Further, quantum theory, which Einstein's 1905 paper on the photoelectric effect had helped launch, seemed to point in a very different direction than the classical field theories in which Einstein had worked. Quantum mechanics, especially as elaborated in the “new” quantum theory of the 1920s, seemed to indicate that aspects of the universe such as electric charge were discrete, not continuous, and that physics could, even in principle, only predict the probability of the outcome of experiments, not calculate them definitively from known initial conditions. Einstein never disputed the successes of quantum theory in explaining experimental results, but suspected it was a theory based upon phenomena which did not explain what was going on at a deeper level. (For example, the physical theory of elasticity explains experimental results and makes predictions within its domain of applicability, but it is not fundamental. All of the effects of elasticity are ultimately due to electromagnetic forces between atoms in materials. But that doesn't mean that the theory of elasticity isn't useful to engineers, or that they should do their spring calculations at the molecular level.)

Einstein undertook the search for a unified field theory, which would unify gravity and electromagnetism, just as Maxwell had unified electrostatics and magnetism into a single theory. In addition, Einstein believed that a unified field theory would be antecedent to quantum theory, and that the probabilistic results of quantum theory could be deduced from the more fundamental theory, which would remain entirely deterministic. From 1915 until his death in 1955 Einstein's work concentrated mostly on the quest for a unified field theory. He was aided by numerous talented assistants, many of whom went on to do important work in their own right. He explored a variety of paths to such a theory, but ultimately rejected each one, in turn, as either inconsistent with experiment or unable to explain phenomena such as point particles or quantisation of charge.

As the author documents, Einstein's approach to doing physics changed in the years after 1915. While earlier he had been guided by both physics and mathematics, in retrospect he described his search for the field equations of general relativity as having followed the path of discovering the simplest and most elegant mathematical structure which could explain the observed phenomena. He thus came, like Dirac, to argue that mathematical beauty was the best guide to correct physical theories.

In the last forty years of his life, Einstein made no progress whatsoever toward a unified field theory, apart from discarding numerous paths which did not work. He explored a variety of approaches: “semivectors” (which turned out just to be a reformulation of spinors), five-dimensional models including a cylindrically compactified dimension based on Kaluza-Klein theory, and attempts to deduce the properties of particles and their quantum behaviour from nonlinear continuum field theories.

In seeking to unify electromagnetism and gravity, he ignored the strong and weak nuclear forces which had been discovered over the years and merited being included in any grand scheme of unification. In the years after World War II, many physicists ceased to worry about the meaning of quantum mechanics and the seemingly inherent randomness in its predictions which so distressed Einstein, and adopted a “shut up and calculate” approach as their computations were confirmed to ever greater precision by experiments.

So great was the respect for Einstein's achievements that only rarely was a disparaging word said about his work on unified field theories, but toward the end of his life it was outside the mainstream of theoretical physics, which had moved on to elaboration of quantum theory and making quantum theory compatible with special relativity. It would be a decade after Einstein's death before astronomical discoveries would make general relativity once again a frontier in physics.

What can we learn from the latter half of Einstein's life and his pursuit of unification? The frontier of physics today remains unification among the forces and particles we have discovered. Now we have three forces to unify (counting electromagnetism and the weak nuclear force as already unified in the electroweak force), plus two seemingly incompatible kinds of particles: bosons (carriers of force) and fermions (what stuff is made of). Six decades (to the day) after the death of Einstein, unification of gravity and the other forces remains as elusive as when he first attempted it.

It is a noble task to try to unify disparate facts and theories into a common whole. Much of our progress in the age of science has come from such unification. Einstein unified space and time; matter and energy; acceleration and gravity; geometry and motion. We all benefit every day from technologies dependent upon these fundamental discoveries. He spent the last forty years of his life seeking the next grand unification. He never found it. For this effort we should applaud him.

I must remark upon how absurd the price of this book is. At Amazon as of this writing, the hardcover is US$ 102.91 and the Kindle edition is US$ 88. Eighty-eight Yankee dollars for a 224 page book which is ranked #739,058 in the Kindle store?

Posted at 15:09 Permalink

Friday, April 10, 2015

Astronomical Numbers

Replica of the first transistor from 1947

In December 1947 there was a single transistor in the world, built at AT&T's Bell Labs by John Bardeen, Walter Brattain, and William Shockley, who would share the 1956 Nobel Prize in Physics for the discovery. The image at the right is of a replica of this first transistor.

According to an article in IEEE Spectrum, in the year 2014 semiconductor manufacturers around the world produced 2.5×10²⁰ (250 billion billion) transistors. On average, about 8 trillion transistors were produced every second in 2014.

We speak of large numbers as “astronomical”, but these numbers put astronomy to shame. There are about 400 billion (4×10¹¹) stars in the Milky Way galaxy. In the single year 2014, humans fabricated 625 million times as many transistors as there are stars in their home galaxy. There are estimated to be around 200 billion (2×10¹¹) galaxies in the universe. We thus made 1.25 billion times as many transistors as there are galaxies.

The number of transistors manufactured every year has been growing exponentially from the transistor's invention in 1947 to the present (Moore's law), and this growth is not expected to abate at any time in the near future. Let's take the number of galaxies in the universe as 200 billion and assume each has, on average, as many stars as the Milky Way (400 billion) (the latter estimate is probably high, since dwarf galaxies seem to outnumber large ones by a substantial factor). Then there would be around 8×10²² stars in the universe. We will only have to continue to double the number of transistors made per year an additional eight times or so (a factor of 320) to reach the point where we are manufacturing as many transistors every year as there are stars in the entire universe. Moore's law predicts that the number of transistors made doubles around every two years, so this milestone should be reached roughly seventeen years from now.
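The arithmetic is easy to check. This sketch uses the rounded estimates from the text (2.5×10²⁰ transistors in 2014, 4×10¹¹ stars per galaxy, 2×10¹¹ galaxies), so the doubling count is itself only approximate:

```python
import math

transistors_2014 = 2.5e20                      # transistors manufactured in 2014 (IEEE Spectrum)
stars_per_galaxy = 4e11                        # rough Milky Way star count
galaxies = 2e11                                # rough galaxy count for the universe
stars_universe = galaxies * stars_per_galaxy   # ~8e22 stars

seconds_per_year = 365.25 * 24 * 3600
print(f"transistors per second in 2014: {transistors_2014 / seconds_per_year:.1e}")

# Doublings of annual production needed to match the star count,
# and the time that takes at one doubling every two years (Moore's law):
doublings = math.log2(stars_universe / transistors_2014)
print(f"doublings needed: {doublings:.1f}")
print(f"years at a 2-year doubling time: {2 * doublings:.1f}")
```

With these round numbers the milestone lands in the early 2030s; given the uncertainty in the astronomical estimates, anywhere from the late 2020s to the early 2030s is defensible.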

Those doublings will play out across the decade I described as the “Roaring Twenties” in my appearance on the Ricochet Podcast of 2015-02-12. It is in the 2020s that continued exponential growth of computing power at constant cost will enable solving, by brute computational force, a variety of problems currently considered intractable.

Posted at 16:59 Permalink