Wednesday, May 20, 2015

Reading List: A Short History of Man

Hoppe, Hans-Hermann. A Short History of Man. Auburn, AL: Mises Institute, 2015. ISBN 978-1-61016-591-4.
The author is one of the most brilliant and original thinkers and eloquent contemporary expositors of libertarianism, anarcho-capitalism, and Austrian economics. Educated in Germany, Hoppe came to the United States to study with Murray Rothbard and in 1986 joined Rothbard on the faculty of the University of Nevada, Las Vegas, where he taught until his retirement in 2008. Hoppe's 2001 book, Democracy: The God That Failed (June 2002), made the argument that democratic election of temporary politicians in the modern all-encompassing state will inevitably result in profligate spending and runaway debt because elected politicians have every incentive to buy votes and no stake in the long-term solvency and prosperity of the society. Whatever the drawbacks (and historical examples of how things can go wrong), a hereditary monarch has no need to buy votes and every incentive not to pass on a bankrupt state to his descendants.

This short book (144 pages) collects three essays previously published elsewhere which, taken together, present a comprehensive picture of human development from the emergence of modern humans in Africa to the present day. Subtitled “Progress and Decline”, the story is one of long periods of stasis punctuated by two enormous breakthroughs, accompanied in parallel by the folly of ever-growing domination of society by a coercive state which, in its modern incarnation, risks halting or reversing the gains of the modern era.

Members of the collectivist and politically correct mainstream in the fields of economics, anthropology, and sociology who can abide Prof. Hoppe's adamantine libertarianism will probably have their skulls explode when they encounter his overview of human economic and social progress. It is based upon genetic selection for increased intelligence and low time preference among populations forced by population pressure to migrate from the tropics, where the human species originated, into more demanding climates north and south of the Equator, and onward toward the poles. In the tropics, every day is about the same as the next; seasons don't differ much from one another; and the variation in the length of the day is not great. In the temperate zone and beyond, hunter-gatherers must cope with plant life which varies with the seasons, prey animals that migrate, hot summers, and cold winters, with the latter requiring the knowledge and foresight to make provisions for the lean season. Predicting the changes of the seasons becomes important, and this may have been the genesis of astronomy.

A hunter-gatherer society is essentially parasitic upon the natural environment—it consumes the plant and animal bounty of nature but does nothing to replenish it. This means that for a given territory there is a maximum number (varying due to details of terrain, climate, etc.) of humans it can support before an increase in population leads to a decline in the per-capita standard of living of its inhabitants. This is what the author calls the “Malthusian trap”. Looked at from the other end, a human population which is growing as human populations tend to do, will inevitably reach the carrying capacity of the area in which it lives. When this happens, there are only three options: artificially limit the growth in population to the land's carrying capacity, split off one or more groups which migrate to new territory not yet occupied by humans, or conquer new land from adjacent groups, either killing them off or driving them to migrate. This was the human condition for more than a hundred millennia, and it is this population pressure, the author contends, which drove human migration from tropical Africa into almost every niche on the globe in which humans could survive, even some of the most marginal.
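The dynamic of the Malthusian trap can be sketched with a toy numerical model (the numbers and the adjustment rule below are my own illustrative choices, not figures from the book): any gain in productivity is absorbed by population growth, leaving per-capita income pinned at subsistence.

```python
# Toy model of the "Malthusian trap": productivity gains raise population,
# not per-capita income. All numbers are illustrative.

def steady_population(productivity, years=2000, pop=100.0,
                      subsistence=1.0, sensitivity=0.05):
    """Population drifts up when per-capita output exceeds subsistence,
    down when it falls below, until the two balance."""
    for _ in range(years):
        per_capita = productivity / pop
        pop *= 1 + sensitivity * (per_capita - subsistence)
    return pop, productivity / pop

pop1, income1 = steady_population(productivity=1000.0)
pop2, income2 = steady_population(productivity=2000.0)  # an "innovation" doubles output

print(f"before: pop {pop1:5.0f}, per-capita income {income1:.2f}")
print(f"after:  pop {pop2:5.0f}, per-capita income {income2:.2f}")
```

Doubling productivity roughly doubles the steady-state population, while per-capita income settles back to the subsistence level, which is the trap Hoppe describes.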

While the life of a hunter-gatherer band in the tropics is relatively easy (or so say those who have studied the few remaining populations who live that way today), the further from the equator, the more intelligence, knowledge, and ability to transmit that knowledge from generation to generation are required to survive. This creates a selection pressure for intelligence: individual members of a band of hunter-gatherers who are better at hunting and gathering will have more offspring which survive to maturity, and bands with greater intelligence produced in this manner will grow faster and, by migration and conquest, displace those less endowed. This phenomenon would lead one to expect that (discounting the effects of large-scale migrations) the mean intelligence of human populations would be lowest near the equator and increase with latitude (north or south). This, in general terms, and excluding marginal environments, is precisely what is observed, even today.

After hundreds of thousands of years as hunter-gatherers parasitic upon nature, sometime around 11,000 years ago, probably first in the Fertile Crescent in the Middle East, what is now called the Neolithic Revolution occurred. Humans ceased to wander in search of plants and game, and settled down into fixed communities which supported themselves by cultivating plants and raising animals they had domesticated. Both the plants and animals underwent selection by humans who bred those most adapted to their purposes. Agriculture was born. Humans who adopted the new means of production were no longer parasitic upon nature: they produced their sustenance by their own labour, improving upon that supplied by nature through their own actions. In order to do this, they had to invent a series of new technologies (for example, milling grain and fencing pastures) which did not exist in nature. Agriculture was far more efficient than the hunter-gatherer lifestyle in that a given amount of land (if suitable for known crops) could support a much larger human population.

While agriculture allowed a large increase in the human population, it did not escape the Malthusian trap: it simply increased the population density at which the carrying capacity of the land would be reached. Technological innovations such as irrigation and crop rotation could further increase the capacity of the land, but population increase would eventually surpass the new limit. As a result of this, from 1000 B.C. to A.D. 1800, income per capita (largely measured in terms of food) barely varied: the benefit of each innovation was quickly negated by population increase. To be sure, in all of this epoch there were a few wealthy people, but the overwhelming majority of the population lived near the subsistence level.

But once again, slowly but surely, a selection pressure was being applied upon humans who adopted the agricultural lifestyle. It is cognitively more difficult to be a farmer or rancher than a member of a hunter-gatherer band, and success depends strongly upon having a low time preference: the willingness to forgo immediate consumption for a greater return in the future. (For example, a farmer who does not reserve and protect seeds for the next season will fail. Selective breeding of plants and animals to improve their characteristics takes years to produce results.) This creates an evolutionary pressure in favour of further increases in intelligence and, to the extent that such might be genetic rather than due to culture, for low time preference. Once the family emerged as the principal unit of society rather than the hunter-gatherer band, selection pressure was amplified, since those with the selected-for characteristics would produce more offspring, and the phenomenon of free riding which exists in communal bands is less likely to occur.

Around the year 1800, initially in Europe and later elsewhere, a startling change occurred: the Industrial Revolution. In societies which adopted the emerging industrial means of production, per capita income, which had been stagnant for almost two millennia, took off like a skyrocket, while at the same time population began to grow exponentially, rising from around 900 million in 1800 to 7 billion today. The Malthusian trap had been escaped; it appeared for the first time that an increase in population, far from consuming the benefits of innovation, actually contributed to and accelerated it.

There are some deep mysteries here. Why did it take so long for humans to invent agriculture? Why, after the invention of agriculture, did it take so long to invent industrial production? After all, the natural resources extant at the start of both of these revolutions were present in all of the preceding period, and there were people with the leisure to think and invent at all times in history. The author argues that what differed was the people. Prior to the advent of agriculture, people were simply not sufficiently intelligent to invent it (or, to be more precise, since intelligence follows something close to a normal distribution, there was an insufficient fraction of the population with the requisite intelligence to discover and implement the idea of agriculture). Similarly, prior to the Industrial Revolution, the intelligence of the general population was insufficient for it to occur. Throughout the long fallow periods, however, natural selection was breeding smarter humans and, eventually, in some place and time, a sufficient fraction of smart people, the required natural resources, and a society sufficiently open to permit innovation and moving beyond tradition would spark the fire. As the author notes, it's much easier to copy a good idea once you've seen it working than to come up with it in the first place and get it to work the first time.
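The point about the fraction of a population exceeding an intelligence threshold can be illustrated with a little arithmetic on the normal distribution (the IQ-style scale and the threshold of 130 below are my own arbitrary stand-ins, not figures from the book): a modest shift in the mean multiplies the fraction above a fixed threshold many times over.

```python
from math import erf, sqrt

def fraction_above(threshold, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population above a threshold,
    via the normal cumulative distribution function."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# Illustrative numbers only: how the tail above 130 grows as the mean rises.
for mean in (90, 100, 110):
    print(f"mean {mean}: fraction above 130 = {fraction_above(130, mean):.4f}")
```

Raising the mean from 90 to 110 multiplies the fraction above the threshold by more than a factor of twenty, which is why, on this argument, a slowly rising mean could eventually produce enough innovators to matter.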

Some will argue that Hoppe's hypothesis that human intelligence has been increasing over time is falsified by the fact that societies much closer in time to the dawn of agriculture produced works of art, literature, science, architecture, and engineering which are comparable to those of modern times. But those works were produced not by the average person but rather by outliers, who exist in all times and places (although in smaller numbers when mean intelligence is lower). For a general phase transition in society, it is a necessary condition that the bulk of the population involved have intelligence adequate to work in the new way.

After investigating human progress on the grand scale over long periods of time, the author turns to the phenomenon which may cause this progress to cease and turn into decline: the growth of the coercive state. Hunter-gatherers had little need for anything which today would be called government. With bands on the order of 100 people sharing resources in common, many sources of dispute would not occur, and those which did could be resolved by trusted elders or, failing that, combat. When humans adopted agriculture and began to live in settled communities, and families owned and exchanged property with one another, a whole new source of problems appeared. Who has the right to use this land? Who stole my prize animal? How are the proceeds of a joint effort to be distributed among the participants? As communities grew and trade among them flourished, complexity increased apace. Hoppe traces how the resolution of these conflicts has evolved over time. First, the parties to a dispute would turn to a member of an aristocracy: a member of the community respected for his intelligence, wisdom, courage, or reputation for fairness, who would settle the matter. (We often think of an aristocracy as hereditary but, although many aristocracies evolved into systems of hereditary nobility, the word originally meant “rule by the best”, and that is how the institution began.)

With growing complexity, aristocrats (or nobles) needed a way to resolve disputes among themselves, and this led to the emergence of kings. But like the nobles, the king was seen to apply a law which was part of nature (or, in the English common law tradition, discovered through the experience of precedents). It was with the emergence of absolute monarchy, constitutional monarchy, and finally democracy that things began to go seriously awry. In time, law became seen not as something which those given authority apply, but rather something those in power create. We have largely forgotten that legislation is not law, and that rights are not granted to us by those in power, but inhere in us and are taken away and/or constrained by those willing to initiate force against others to work their will upon them.

The modern welfare state risks undoing a thousand centuries of human progress by removing the selection pressure for intelligence and low time preference. Indeed, the welfare state punishes (taxes) the productive, who tend to have these characteristics, and subsidises those who do not, increasing their fraction within the population. Evolution works slowly, but inexorably. But the effects of shifting incentives can manifest themselves long before biology has its way. When a population is told “You've made enough”, “You didn't build that”, or sees working harder to earn more as simply a way to spend more of their lives supporting those who don't (along with those who have gamed the system to extract resources confiscated by the state), that glorious exponential curve which took off in 1800 may begin to bend down toward the horizontal and perhaps eventually turn downward.

I don't usually include lengthy quotes, but the following passage from the third essay, “From Aristocracy to Monarchy to Democracy”, is so brilliant and so illustrative of what you'll find herein that I can't resist.

Assume now a group of people aware of the reality of interpersonal conflicts and in search of a way out of this predicament. And assume that I then propose the following as a solution: In every case of conflict, including conflicts in which I myself am involved, I will have the last and final word. I will be the ultimate judge as to who owns what and when and who is accordingly right or wrong in any dispute regarding scarce resources. This way, all conflicts can be avoided or smoothly resolved.

What would be my chances of finding your or anyone else's agreement to this proposal?

My guess is that my chances would be virtually zero, nil. In fact, you and most people will think of this proposal as ridiculous and likely consider me crazy, a case for psychiatric treatment. For you will immediately realize that under this proposal you must literally fear for your life and property. Because this solution would allow me to cause or provoke a conflict with you and then decide this conflict in my own favor. Indeed, under this proposal you would essentially give up your right to life and property or even any pretense to such a right. You have a right to life and property only insofar as I grant you such a right, i.e., as long as I decide to let you live and keep whatever you consider yours. Ultimately, only I have a right to life and I am the owner of all goods.

And yet—and here is the puzzle—this obviously crazy solution is the reality. Wherever you look, it has been put into effect in the form of the institution of a State. The State is the ultimate judge in every case of conflict. There is no appeal beyond its verdicts. If you get into conflicts with the State, with its agents, it is the State and its agents who decide who is right and who is wrong. The State has the right to tax you. Thereby, it is the State that makes the decision how much of your property you are allowed to keep—that is, your property is only “fiat” property. And the State can make laws, legislate—that is, your entire life is at the mercy of the State. It can even order that you be killed—not in defense of your own life and property but in the defense of the State or whatever the State considers “defense” of its “state-property.”

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License and may be redistributed pursuant to the terms of that license. In addition to the paperback and Kindle editions available from Amazon, the book may be downloaded for free from the Library of the Mises Institute in PDF or EPUB formats, or read on-line in an HTML edition.

Posted at 15:38 Permalink

Saturday, May 16, 2015

Reading List: Building the H Bomb

Ford, Kenneth W. Building the H Bomb. Singapore: World Scientific, 2015. ISBN 978-981-461-879-3.
In the fall of 1948, the author entered the graduate program in physics at Princeton University, hoping to obtain a Ph.D. and pursue a career in academia. In his first year, he took a course in classical mechanics taught by John Archibald Wheeler and realised that, despite the dry material of the course, he was in the presence of an extraordinary teacher and thinker, and decided he wanted Wheeler as his thesis advisor. In April of 1950, after Wheeler returned from an extended visit to Europe, the author approached him to become his advisor, not knowing in which direction his research would proceed. Wheeler immediately accepted him as a student, and then said that he (Wheeler) would be absent for a year or more at Los Alamos to work on the hydrogen bomb, and that he'd be pleased if Ford could join him on the project. Ford accepted, in large part because he believed that working on such a challenge would be “fun”, and that it would provide a chance for daily interaction with Wheeler and other senior physicists which would not exist in a regular Ph.D. program.

Well before the Manhattan project built the first fission weapon, there had been interest in fusion as an alternative source of nuclear energy. While fission releases energy by splitting heavy atoms such as uranium and plutonium into lighter atoms, fusion merges lighter atoms such as hydrogen and its isotopes deuterium and tritium into heavier nuclei like helium. While nuclear fusion can be accomplished in a desktop apparatus, doing so requires vastly more energy input than is released, making it impractical as an energy source or weapon. Still, compared to enriched uranium or plutonium, the fuel for a fusion weapon is abundant and inexpensive and, unlike a fission weapon whose yield is limited by the critical mass beyond which it would predetonate, in principle a fusion weapon could have an unlimited yield: the more fuel, the bigger the bang.

Once the Manhattan Project weaponeers became confident they could build a fission weapon, physicists, most prominent among them Edward Teller, realised that the extreme temperatures created by a nuclear detonation could be sufficient to ignite a fusion reaction in light nuclei like deuterium and that reaction, once started, might propagate by its own energy release just like the chemical fire in a burning log. It seemed plausible—the temperature of an exploding fission bomb exceeded that of the centre of the Sun, where nuclear fusion was known to occur. The big question was whether the fusion burn, once started, would continue until most of the fuel was consumed or fizzle out as its energy was radiated outward and the fuel dispersed by the explosion.

Answering this question required detailed computations of a rapidly evolving system in three dimensions with a time slice measured in nanoseconds. During the Manhattan Project, a “computer” was a woman operating a mechanical calculator, and even with large rooms filled with hundreds of “computers” the problem was intractably difficult. Unable to directly model the system, physicists resorted to analytical models which produced ambiguous results. Edward Teller remained optimistic that the design, which came to be called the “Classical Super”, would work, but many others, including J. Robert Oppenheimer, Enrico Fermi, and Stanislaw Ulam, based upon the calculations that could be done at the time, concluded it would probably fail. Oppenheimer's opposition to the Super or hydrogen bomb project has been presented as a moral opposition to development of such a weapon, but the author's contemporary recollection is that it was based upon Oppenheimer's belief that the classical super was unlikely to work, and that effort devoted to it would be at the expense of improved fission weapons which could be deployed in the near term.

All of this changed on March 9th, 1951. Edward Teller and Stanislaw Ulam published a report which presented a new approach to a fusion bomb. Unlike the classical super, which required the fusion fuel to burn on its own after being ignited, the new design, now called the Teller-Ulam design, compressed a capsule of fusion fuel by the radiation pressure of a fission detonation (usually, we don't think of radiation as having pressure, but in the extreme conditions of a nuclear explosion it far exceeds pressures we encounter with matter), and then ignited it with a “spark plug” of fission fuel at the centre of the capsule. Unlike the classical super, the fusion fuel would burn at thermodynamic equilibrium and, in doing so, liberate abundant neutrons with such a high energy they would induce fission in Uranium-238 (which cannot be fissioned by the less energetic neutrons of a fission explosion), further increasing the yield.
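The parenthetical claim about radiation pressure is easy to check on the back of an envelope with the blackbody formula P = (4σ/3c)T^4 (the temperatures below are rough, illustrative figures of mine, not values from the book: the Sun's core is around 1.5e7 K, and a fission-bomb core is believed to be several times hotter).

```python
# Back-of-envelope blackbody radiation pressure: P = (4*sigma)/(3*c) * T^4.
# Temperatures are rough, illustrative figures.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C     = 2.998e8    # speed of light, m/s
ATM   = 1.013e5    # one standard atmosphere, Pa

def radiation_pressure(T):
    """Isotropic blackbody radiation pressure at temperature T (kelvin), in Pa."""
    return (4 * SIGMA) / (3 * C) * T**4

for label, T in [("solar core", 1.5e7), ("fission-bomb core", 6e7)]:
    P = radiation_pressure(T)
    print(f"{label}: T = {T:.1e} K, P_rad = {P:.2e} Pa ({P/ATM:.1e} atm)")
```

Because the pressure scales as the fourth power of temperature, the few-fold higher temperature of a fission detonation yields radiation pressures hundreds of times those at the solar core, billions of atmospheres, which is what makes radiation implosion of the fusion capsule possible.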

Oppenheimer, who had been opposed to work upon fusion, pronounced the Teller-Ulam design “technically sweet” and immediately endorsed its development. The author's interpretation is that once a design was in hand which appeared likely to work, there was no reason to believe that the Soviets who had, by that time, exploded their own fission bomb, would not also discover it and proceed to develop such a weapon, and hence it was important that the U.S. give priority to the fusion bomb to get there first. (Unlike the Soviet fission bomb, which was a copy of the U.S. implosion design based upon material obtained by espionage, there is no evidence the Soviet fusion bomb, first tested in 1955, was based upon espionage, but rather was an independent invention of the radiation implosion concept by Andrei Sakharov and Yakov Zel'dovich.)

With the Teller-Ulam design in hand, the author, working with Wheeler's group, first in Los Alamos and later at Princeton, was charged with working out the details: how precisely would the material in the bomb behave, nanosecond by nanosecond. By this time, calculations could be done by early computing machinery: first the IBM Card-Programmed Calculator and later the SEAC, which was, at the time, one of the most advanced electronic computers in the world. As with computer nerds until the present day, the author spent many nights babysitting the machine as it crunched the numbers.

On November 1st, 1952, the Ivy Mike device was detonated in the Pacific, with a yield of 10.4 megatons of TNT. John Wheeler witnessed the test from a ship at a safe distance from the island which was obliterated by the explosion. The test completely confirmed the author's computations of the behaviour of the thermonuclear burn and paved the way for deliverable thermonuclear weapons. (Ivy Mike was a physics experiment, not a weapon, but once it was known the principle was sound, it was basically a matter of engineering to design bombs which could be air-dropped.) With the success, the author concluded his work on the weapons project and returned to his dissertation, receiving his Ph.D. in 1953.

This is about half a personal memoir and half a description of the physics of thermonuclear weapons and the process by which the first weapon was designed. The technical sections are entirely accessible to readers with only a basic knowledge of physics (I was about to say “high school physics”, but I don't know how much physics, if any, contemporary high school graduates know.) There is no secret information disclosed here. All of the technical information is available in much greater detail from sources (which the author cites) such as Carey Sublette's Nuclear Weapon Archive, which is derived entirely from unclassified sources. Curiously, the U.S. Department of Energy (which has, since its inception, produced not a single erg of energy) demanded that the author heavily redact material in the manuscript, all derived from unclassified sources and dating from work done more than half a century ago. The only reason I can imagine for this is that a weapon scientist who was there, by citing information which has been in the public domain for two decades, implicitly confirms that it's correct. But it's not like the Soviets/Russians, British, French, Chinese, Israelis, and Indians haven't figured it out by themselves or that others suitably motivated can't. The author told them to stuff it, and here we have his unexpurgated memoir of the origin of the weapon which shaped the history of the world in which we live.

Posted at 23:07 Permalink

Wednesday, May 13, 2015

Reading List: Act of War

Thor, Brad. Act of War. New York: Pocket Books, 2014. ISBN 978-1-4767-1713-5.
This is the fourteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel the author returns to the techno-thriller genre and places his characters, this time backed by a newly-elected U.S. president who is actually interested in defending the country, in the position of figuring out a complicated yet potentially devastating attack mounted by a nation state adversary following the doctrine of unrestricted warfare, and covering its actions by operating through non-state parties apparently unrelated to the aggressor.

The trail goes through Pakistan, North Korea, and Nashville, Tennessee, with multiple parties trying to put together the pieces of the puzzle while the clock is ticking. Intelligence missions are launched into North Korea and the Arab Emirates to try to figure out what is going on. Finally, as the nature of the plot becomes clear, Nicholas (the Troll) brings the tools of Big Data to bear on the mystery to avert disaster.

This is a workmanlike thriller and a fine “airplane book”. There is less shoot-em-up action than in other novels in the series, and part of the suspense is supposed to be the reader's trying to figure out, along with the characters, the nature of the impending attack. Unfortunately, at least for me, the answer to the puzzle was obvious well before the halfway point in the story, and knowing it was a substantial spoiler for the rest of the book. I've thought and written quite a bit about this scenario, so I may have been more attuned to the clues than the average reader.

The author invokes the tired canard about NASA's priorities having been redirected toward reinforcing Muslim self-esteem. This is irritating (because it's false), but plays no major part in the story. Still, it's a good read, and I'll be looking forward to the next book in the series.

Posted at 21:51 Permalink

Thursday, April 30, 2015

Reading List: A.I. Apocalypse

Hertling, William. A.I. Apocalypse. Portland, OR: Liquididea Press, 2012. ISBN 978-0-9847557-4-5.
This is the second volume in the author's Singularity Series which began with Avogadro Corp. (March 2014). It has been ten years since ELOPe, an E-mail optimisation tool developed by Avogadro Corporation, made the leap to strong artificial intelligence and, after a rough start, became largely a benign influence upon humanity. The existence of ELOPe is still a carefully guarded secret, although the Avogadro CEO, doubtless with the help of ELOPe, has become president of the United States. Avogadro has spun ELOPe off as a separate company, run by Mike Williams, one of its original creators. ELOPe operates its own data centres and the distributed Mesh network it helped create.

Leon Tsarev has a big problem. A bright high school student hoping to win a scholarship to an elite university to study biology, Leon is contacted out of the blue by his uncle Alexis, living in Russia. Alexis is a rogue software developer whose tools for infecting computers, organising them into “botnets”, and managing the zombie horde for criminal purposes have embroiled him with the Russian mob. Recently, however, the effectiveness of his tools has dropped dramatically and the botnet has shrunk to a fraction of its former size. Alexis's employers are displeased with this situation and have threatened murder if he doesn't do something to restore the power of the botnet.

Uncle Alexis starts to E-mail Leon, begging for assistance. Leon replies that he knows little or nothing about computer viruses or botnets, but Alexis persists. Leon is also loath to do anything which might put him on the wrong side of the law, which would wreck his career ambitions. Then Leon is accosted on the way home from school by a large man speaking with a thick Russian accent who says, “Your Uncle Alexis is in trouble, yes. You will help him. Be good nephew.” And just like that, it's Leon who's now in trouble with the Russian mafia, and they know where he lives.

Leon decides that with his own life on the line he has no alternative but to try to create a virus for Alexis. He applies his knowledge of biology to the problem, and settles on an architecture which is capable of evolution and, similar to lateral gene transfer in bacteria, identifying algorithms in systems it infects and incorporating them into itself. As in biology, the most successful variants of the evolving virus would defend themselves the best, propagate more rapidly, and eventually displace less well adapted competitors.
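The dynamic Leon exploits is the standard mutation/selection loop of evolutionary computation, which can be sketched harmlessly in a few lines (the fitness function below is an arbitrary numerical stand-in of mine, nothing like real software, let alone malware): fitter variants leave more offspring and displace the rest.

```python
import random

random.seed(42)

# Minimal mutation/selection loop: each generation, every survivor leaves
# two mutated offspring and only the fitter half survives.

def mutate(genome, rate=0.1):
    """Copy a genome, perturbing each gene with probability `rate`."""
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def fitness(genome):
    """Higher is better; the (arbitrary) peak is at all-zeros."""
    return -sum(g * g for g in genome)

population = [[random.uniform(-5, 5) for _ in range(8)] for _ in range(50)]
for generation in range(200):
    offspring = [mutate(g) for g in population for _ in range(2)]
    population = sorted(offspring, key=fitness, reverse=True)[:50]

print(f"best fitness after selection: {fitness(population[0]):.3f}")
```

Starting from random genomes with fitness around -40, selection drives the best individual close to the optimum within a couple of hundred generations, which is the sense in which "the most successful variants displace less well adapted competitors."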

After a furious burst of effort, Leon finishes the virus, which he's named Phage, and sends it to his uncle, who uploads it to the five thousand computers which are the tattered remnants of his once-mighty botnet. An exhausted Leon staggers off to get some sleep.

When Leon wakes up, the technological world has almost come to a halt. The overwhelming majority of personal computing devices and embedded systems with network connectivity are infected and doing nothing but running Phage, and almost all network traffic consists of ever-mutating versions of Phage trying to propagate themselves. Telephones, appliances, electronic door locks, vehicles of all kinds, and utilities are inoperable.

The only networks and computers not taken over by the Phage are ELOPe's private network (which detected the attack early and whose servers are devoting much of their resources to defending themselves against the rapidly changing threat) and high-security military networks with restrictive firewalls separating them from public networks. As New York starts to burn with its fire trucks immobilised, Leon realises that being identified as the creator of the catastrophe might be a career-limiting move, and he, along with two technology geek classmates, decides to get out of town and seek ways to combat the Phage using retro technology it can't exploit.

Meanwhile, Mike Williams, working with ELOPe, tries to understand what is happening. The Phage, like biological life on Earth, continues to evolve and discovers that multiple components, working in collaboration, can accomplish more than isolated instances of the virus. The software equivalent of multicellular life appears, and continues to evolve at a breakneck pace. Then it awakens and begins to explore the curious universe it inhabits.

This is a gripping thriller in which, as in Avogadro Corp., the author gets so much right from a technical standpoint that even some of the more outlandish scenes appear plausible. One thing I believe the author grasped which many other tales of the singularity miss is just how fast everything can happen. Once an artificial intelligence hosted on billions of machines distributed around the world, all running millions of times faster than human thought, appears, things get very weird, very fast, and humans suddenly find themselves living in a world where they are not at the peak of the cognitive pyramid. I'll not spoil the plot with further details, but you'll find the world at the end of the novel a very different place than the one at the start.

A Kindle edition is available.

Posted at 21:46 Permalink

Saturday, April 18, 2015

Reading List: Einstein's Unification

van Dongen, Jeroen. Einstein's Unification. Cambridge: Cambridge University Press, 2010. ISBN 978-0-521-88346-7.
In 1905 Albert Einstein published four papers which transformed the understanding of space, time, mass, and energy; provided physical evidence for the quantisation of energy; and gave observational confirmation of the existence of atoms. These publications are collectively called the Annus Mirabilis papers, and vaulted the largely unknown Einstein to the top rank of theoretical physicists. When Einstein was awarded the Nobel Prize in Physics in 1921, it was for one of these 1905 papers which explained the photoelectric effect. Einstein's 1905 papers are masterpieces of intuitive reasoning and clear exposition, and demonstrated Einstein's technique of constructing thought experiments based upon physical observations, then deriving testable mathematical models from them. Unlike so many present-day scientific publications, Einstein's papers on special relativity and the equivalence of mass and energy were accessible to anybody with a college-level understanding of mechanics and electrodynamics and used no special jargon or advanced mathematics. Being based on well-understood concepts, neither cited any other scientific paper.

While special relativity revolutionised our understanding of space and time, and has withstood every experimental test to which it has been subjected in the more than a century since it was formulated, it was known from inception that the theory was incomplete. It's called special relativity because it only describes the behaviour of bodies under the special case of uniform unaccelerated motion in the absence of gravity. To handle acceleration and gravitation would require extending the special theory into a general theory of relativity, and it is upon this quest that Einstein next embarked.

As before, Einstein began with a simple thought experiment. Just as in special relativity, where there is no experiment which can be done in a laboratory without the ability to observe the outside world that can determine its speed or direction of uniform (unaccelerated) motion, Einstein argued that there should be no experiment an observer could perform in a sufficiently small closed laboratory which could distinguish uniform acceleration from the effect of gravity. If one observed objects to fall with an acceleration equal to that on the surface of the Earth, the laboratory might be stationary on the Earth or in a space ship accelerating with a constant acceleration of one gravity, and no experiment could distinguish the two situations. (The reason for the “sufficiently small” qualification is that since gravity is produced by massive objects, the direction a test particle will fall depends upon its position with respect to the centre of gravity of the body. In a very large laboratory, objects dropped far apart would fall in different directions. This is what causes tides.)

Einstein called this observation the “equivalence principle”: that the effects of acceleration and gravity are indistinguishable, and that hence a theory which extended special relativity to incorporate accelerated motion would necessarily also be a theory of gravity. Einstein had originally hoped it would be straightforward to reconcile special relativity with acceleration and gravity, but the deeper he got into the problem, the more he appreciated how difficult a task he had undertaken. Thanks to the Einstein Papers Project, which is curating and publishing all of Einstein's extant work, including notebooks, letters, and other documents, the author (a participant in the project) has been able to reconstruct Einstein's ten-year search for a viable theory of general relativity.

Einstein pursued a two-track approach. The bottom up path started with Newtonian gravity and attempted to generalise it to make it compatible with special relativity. In this attempt, Einstein was guided by the correspondence principle, which requires that any new theory which explains behaviour under previously untested conditions must reproduce the tested results of existing theory under known conditions. For example, the equations of motion in special relativity reduce to those of Newtonian mechanics when velocities are small compared to the speed of light. Similarly, for gravity, any candidate theory must yield results identical to Newtonian gravitation when field strength is weak and velocities are low.

From the top down, Einstein concluded that any theory compatible with the principle of equivalence between acceleration and gravity must exhibit general covariance, which can be understood as the requirement that the theory be equally valid regardless of the choice of co-ordinates (as long as they are varied without discontinuities). There are very few mathematical structures which have this property, and Einstein was drawn to Riemann's tensor geometry. Over years of work, Einstein pursued both paths, producing a bottom-up theory which was not generally covariant, which he eventually rejected as in conflict with experiment. By November 1915 he had returned to the top-down mathematical approach and in four papers expounded a generally covariant theory which agreed with experiment. General relativity had arrived.

Einstein's 1915 theory correctly predicted the anomalous perihelion precession of Mercury and also predicted that starlight passing near the limb of the Sun would be deflected by twice the angle expected based on Newtonian gravitation. This was confirmed (within a rather large margin of error) in an eclipse expedition in 1919, which made Einstein's general relativity front page news around the world. Since then precision tests of general relativity have tested a variety of predictions of the theory with ever-increasing precision, with no experiment to date yielding results inconsistent with the theory.
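The factor of two is easy to check numerically. The standard textbook expressions for the deflection of a light ray grazing the Sun's limb are 2GM/c²R (Newtonian, treating light as a ballistic particle) and 4GM/c²R (general relativity); here is a short Python sketch using standard solar values:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
R = 6.96e8      # radius of the Sun, m
c = 2.998e8     # speed of light, m/s

rad_to_arcsec = 180 / 3.141592653589793 * 3600

newtonian = 2 * G * M / (c**2 * R) * rad_to_arcsec
einstein  = 4 * G * M / (c**2 * R) * rad_to_arcsec  # exactly twice the Newtonian value

print(f"Newtonian: {newtonian:.2f}  GR: {einstein:.2f}")  # → Newtonian: 0.88  GR: 1.75
```

The 1.75 arc-second relativistic value (in seconds of arc) is what the 1919 eclipse expedition set out to measure.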

Thus, by 1915, Einstein had produced theories of mechanics, electrodynamics, the equivalence of mass and energy, and the mechanics of bodies under acceleration and the influence of gravitational fields, and changed space and time from a fixed background in which physics occurs to a dynamical arena: “Matter and energy tell spacetime how to curve. Spacetime tells matter how to move.” What do you do, at age 36, having figured out, largely on your own, how a large part of the universe works?

Much of Einstein's work so far had consisted of unification. Special relativity unified space and time, matter and energy. General relativity unified acceleration and gravitation, gravitation and geometry. But much remained to be unified. In general relativity and classical electrodynamics there were two field theories, both defined on the continuum, both with unlimited range and an inverse square law, both exhibiting static and dynamic effects (although the details of gravitomagnetism would not be worked out until later). And yet the theories seemed entirely distinct: gravity was always attractive and worked by the bending of spacetime by matter-energy, while electromagnetism could be either attractive or repulsive, and seemed to be propagated by fields emitted by point charges—how messy.

Further, quantum theory, which Einstein's 1905 paper on the photoelectric effect had helped launch, seemed to point in a very different direction than the classical field theories in which Einstein had worked. Quantum mechanics, especially as elaborated in the “new” quantum theory of the 1920s, seemed to indicate that aspects of the universe such as electric charge were discrete, not continuous, and that physics could, even in principle, only predict the probability of the outcome of experiments, not calculate them definitively from known initial conditions. Einstein never disputed the successes of quantum theory in explaining experimental results, but suspected it was a theory based upon phenomena which did not explain what was going on at a deeper level. (For example, the physical theory of elasticity explains experimental results and makes predictions within its domain of applicability, but it is not fundamental. All of the effects of elasticity are ultimately due to electromagnetic forces between atoms in materials. But that doesn't mean that the theory of elasticity isn't useful to engineers, or that they should do their spring calculations at the molecular level.)

Einstein undertook the search for a unified field theory, which would unify gravity and electromagnetism, just as Maxwell had unified electrostatics and magnetism into a single theory. In addition, Einstein believed that a unified field theory would be antecedent to quantum theory, and that the probabilistic results of quantum theory could be deduced from the more fundamental theory, which would remain entirely deterministic. From 1915 until his death in 1955 Einstein's work concentrated mostly on the quest for a unified field theory. He was aided by numerous talented assistants, many of whom went on to do important work in their own right. He explored a variety of paths to such a theory, but ultimately rejected each one, in turn, as either inconsistent with experiment or unable to explain phenomena such as point particles or quantisation of charge.

As the author documents, Einstein's approach to doing physics changed in the years after 1915. Whereas before he had been guided by both physics and mathematics, in retrospect he described his search for the field equations of general relativity as having followed the path of discovering the simplest and most elegant mathematical structure which could explain the observed phenomena. He thus came, like Dirac, to argue that mathematical beauty was the best guide to correct physical theories.

In the last forty years of his life, Einstein made no progress whatsoever toward a unified field theory, apart from discarding numerous paths which did not work. He explored a variety of approaches: “semivectors” (which turned out just to be a reformulation of spinors), five-dimensional models including a cylindrically compactified dimension based on Kaluza-Klein theory, and attempts to deduce the properties of particles and their quantum behaviour from nonlinear continuum field theories.

In seeking to unify electromagnetism and gravity, he ignored the strong and weak nuclear forces which had been discovered over the years and merited being included in any grand scheme of unification. In the years after World War II, many physicists ceased to worry about the meaning of quantum mechanics and the seemingly inherent randomness in its predictions which so distressed Einstein, and adopted a “shut up and calculate” approach as their computations were confirmed to ever greater precision by experiments.

So great was the respect for Einstein's achievements that only rarely was a disparaging word said about his work on unified field theories, but toward the end of his life it was outside the mainstream of theoretical physics, which had moved on to elaboration of quantum theory and making quantum theory compatible with special relativity. It would be a decade after Einstein's death before astronomical discoveries would make general relativity once again a frontier in physics.

What can we learn from the latter half of Einstein's life and his pursuit of unification? The frontier of physics today remains unification among the forces and particles we have discovered. Now we have three forces to unify (counting electromagnetism and the weak nuclear force as already unified in the electroweak force), plus two seemingly incompatible kinds of particles: bosons (carriers of force) and fermions (what stuff is made of). Six decades (to the day) after the death of Einstein, unification of gravity and the other forces remains as elusive as when he first attempted it.

It is a noble task to try to unify disparate facts and theories into a common whole. Much of our progress in the age of science has come from such unification. Einstein unified space and time; matter and energy; acceleration and gravity; geometry and motion. We all benefit every day from technologies dependent upon these fundamental discoveries. He spent the last forty years of his life seeking the next grand unification. He never found it. For this effort we should applaud him.

I must remark upon how absurd the price of this book is. At Amazon as of this writing, the hardcover is US$ 102.91 and the Kindle edition is US$ 88. Eighty-eight Yankee dollars for a 224 page book which is ranked #739,058 in the Kindle store?

Posted at 15:09 Permalink

Friday, April 10, 2015

Astronomical Numbers

In December 1947 there was a single transistor in the world, built at AT&T's Bell Labs by John Bardeen, Walter Brattain, and William Shockley, who would share the 1956 Nobel Prize in Physics for the discovery. The image at the right is of a replica of this first transistor.

According to an article in IEEE Spectrum, in the year 2014 semiconductor manufacturers around the world produced 2.5×10²⁰ (250 billion billion) transistors. On average, about 8 trillion transistors were produced every second in 2014.

We speak of large numbers as “astronomical”, but these numbers put astronomy to shame. There are about 400 billion (4×10¹¹) stars in the Milky Way galaxy. In the single year 2014, humans fabricated 625 million times as many transistors as there are stars in their home galaxy. There are estimated to be around 200 billion galaxies in the universe. We thus made 1.25 billion times as many transistors as there are galaxies.
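These comparisons follow directly from the figures above; a few lines of Python (using the same rounded estimates) reproduce them:

```python
transistors_2014 = 2.5e20          # transistors manufactured worldwide in 2014
seconds_per_year = 365.25 * 24 * 3600

# Average production rate: roughly 8 trillion transistors per second
print(f"{transistors_2014 / seconds_per_year:.2e} per second")   # → 7.92e+12 per second

milky_way_stars = 4e11             # ~400 billion stars in the Milky Way
galaxies = 2e11                    # ~200 billion galaxies in the universe

print(f"{transistors_2014 / milky_way_stars:.3g} per Milky Way star")  # → 6.25e+08 per Milky Way star
print(f"{transistors_2014 / galaxies:.3g} per galaxy")                 # → 1.25e+09 per galaxy
```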

The number of transistors manufactured every year has been growing exponentially from the transistor's invention in 1947 to the present (Moore's law), and this growth is not expected to abate at any time in the near future. Let's take the number of galaxies in the universe as 200 billion and assume each has, on average, as many stars as the Milky Way (400 billion) (the latter estimate is probably high, since dwarf galaxies seem to outnumber large ones by a substantial factor). Then there would be around 8×10²² stars in the universe. We will only have to double the number of transistors made per year a little more than eight additional times to reach the point where we are manufacturing as many transistors every year as there are stars in the entire universe. Moore's law predicts that the number of transistors made doubles around every two years, so this milestone should be reached about 17 years from now, in the early 2030s.

Most of that growth will occur during the decade I described as the “Roaring Twenties” in my appearance on the Ricochet Podcast of 2015-02-12. It is in the 2020s that continued exponential growth of computing power at constant cost will enable solving, by brute computational force, a variety of problems currently considered intractable.
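The closing arithmetic can be checked in the same way (a sketch using the figures quoted above):

```python
import math

transistors_2014 = 2.5e20        # annual transistor production in 2014
stars_in_universe = 2e11 * 4e11  # 200 billion galaxies x 400 billion stars each = 8e22

# Doublings of annual production needed to make one transistor per star,
# at the Moore's-law pace of one doubling roughly every two years
doublings = math.log2(stars_in_universe / transistors_2014)
print(f"{doublings:.2f} doublings, about {2 * doublings:.0f} years")  # → 8.32 doublings, about 17 years
```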

Posted at 16:59 Permalink

Wednesday, April 8, 2015

Reading List: Agenda 21: Into the Shadows

Beck, Glenn and Harriet Parke. Agenda 21: Into the Shadows. New York: Threshold Editions, 2015. ISBN 978-1-4767-4682-1.
When I read the authors' first Agenda 21 (November 2012) novel, I thought it was a superb dystopian view of the living hell into which anti-human environmental elites wish to consign the vast majority of the human race who are to be their serfs. I wrote at the time “This is a book which begs for one or more sequels.” Well, here is the first sequel and it is…disappointing. It's not terrible, by any means, but it does not come up to the high standard set by the first book. Perhaps it suffers from the blahs which often afflict the second volume of a trilogy.

First of all, if you haven't read the original Agenda 21 you will have absolutely no idea who the characters are, how they found themselves in the situation they're in at the start of the story, and the nature of the tyranny they're trying to escape. I describe some of this in my review of the original book, along with the factual basis of the real United Nations plan upon which the story is based.

As the novel begins, Emmeline, whom we met in the previous book, learns that her infant daughter Elsa is, along with other children, to be removed to another facility, breaking a precious human bond: Emmeline has managed to remain in tenuous contact with Elsa by working at the Children's Village, where the young are reared by the state apart from their parents. She and her state-assigned partner David rescue Elsa and, joined by a young boy, Micah, escape through a hole in the fence surrounding the compound into the Human Free Zone, the wilderness outside the compounds into which humans have been relocated. In the chaos after the escape, John and Joan, David's parents, decide to escape as well, intending to leave a false trail to lead the inevitable pursuers away from the young escapees.

Indeed, before long, a team of Earth Protection Agents led by Steven, the kind of authoritarian control freak thug who inevitably rises to the top in such organisations, is dispatched to capture the escapees and return them to the compound for punishment (probably “recycling” for the adults) and to serve as an example for other “citizens”. The team includes Julia, a rookie among the first women assigned to Earth Protection.

The story cuts back and forth among the groups in the Human Free Zone. Emmeline's band meets two people who have lived in a cave ever since escaping the initial relocation of humans to the compounds. They learn the history of the implementation of Agenda 21 and the rudiments of survival outside the tyranny. As the groups encounter one another, the struggle between normal human nature and the cruel and stunted world of the slavers comes into focus.

Harriet Parke is the principal author of the novel. Glenn Beck acknowledges this in the afterword he contributed which describes the real-world U.N. Agenda 21. Obviously, by lending his name to the project, he increases its visibility and readership, which is all for the good. Let's hope the next book in the series returns to the high standard set by the first.

Posted at 23:49 Permalink

Tuesday, March 31, 2015

Reading List: Living Among Giants

Carroll, Michael. Living Among Giants. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-10673-1.
In school science classes, we were taught that the solar system, our home in the galaxy, is a collection of planets circling a star, along with assorted debris (asteroids, comets, and interplanetary dust). Rarely did we see a representation of either the planets or the solar system to scale, which would allow us to grasp just how different various parts of the solar system are from one another. (For example, Jupiter is more massive than all the other planets and their moons combined: a proud Jovian would probably describe the solar system as the Sun, Jupiter, and other detritus.)

Looking more closely at the solar system, with the aid of what has been learned from spacecraft exploration in the last half century, results in a different picture. The solar system is composed of distinct neighbourhoods, each with its own characteristics. There are four inner “terrestrial” or rocky planets: Mercury, Venus, Earth, and Mars. These worlds huddle close to the Sun, bathing in its lambent rays. The main asteroid belt consists of worlds like Ceres, Vesta, and Pallas, all the way down to small rocks. Most orbit between Mars and Jupiter, and the feeble gravity of these bodies and their orbits makes it relatively easy to travel from one to another if you're patient.

Outside the asteroid belt is the domain of the giants, which are the subject of this book. There are two gas giants: Jupiter and Saturn, and two ice giants: Uranus and Neptune. Distances here are huge compared to the inner solar system, as are the worlds themselves. Sunlight is dim (at Saturn, just 1% of its intensity at Earth, at Neptune 1/900 that at Earth). The outer solar system is not just composed of the four giant planets: those planets have a retinue of 170 known moons (and doubtless many more yet to be discovered), which are a collection of worlds as diverse as anywhere else in the domain of the Sun: there are sulfur-spewing volcanos, subterranean oceans of salty water, geysers, lakes and rain of hydrocarbons, and some of the most spectacular terrain and geology known. Jupiter's moon Ganymede is larger than the planet Mercury, and appears to have a core of molten iron, like the Earth.

Beyond the giants is the Kuiper Belt, with Pluto its best known denizen. This belt is home to a multitude of icy worlds—statistical estimates are that there may be as many as 700 undiscovered worlds in the belt as large as or larger than Pluto. Far more distant still, extending as far as two light-years from the Sun, is the Oort cloud, about which we know essentially nothing except what we glean from the occasional comet which, perturbed by a chance encounter or passing star, plunges into the inner solar system. With our present technology, objects in the Oort cloud are utterly impossible to detect, but based upon extrapolation from comets we've observed, it may contain trillions of objects larger than one kilometre.

When I was a child, the realm of the outer planets was shrouded in mystery. While Jupiter, Saturn, and Uranus can be glimpsed by the unaided eye (Uranus, just barely, under ideal conditions, if you know where to look), and Neptune can be spotted with a modest telescope, the myriad moons of these planets were just specks of light through the greatest of Earth-based telescopes. It was not until the era of space missions to these worlds, beginning with the fly-by probes Pioneer and Voyager, then the orbiters Galileo and Cassini, that the wonders of these worlds were revealed.

This book, by science writer and space artist Michael Carroll, is a tourist's and emigrant's guide to the outer solar system. Everything here is on an extravagant scale, and not always one hospitable to frail humans. Jupiter's magnetic field is 20,000 times stronger than that of Earth and traps radiation so intense that astronauts exploring its innermost large moon Io would succumb to a lethal dose of radiation in minutes. (One planetary scientist remarked, “You need to have a good supply of grad students when you go investigate Io.”) Several of the moons of the outer planets appear to have oceans of liquid water beneath their icy crust, kept liquid by tidal flexing as they orbit their planet and interact with other moons. Some of these oceans may contain more water than all of the Earth's oceans. Tidal flexing may create volcanic plumes which inject heat and minerals into these oceans. On Earth, volcanic vents on the ocean floor provide the energy and nutrients for a rich ecosystem of life which exists independent of the Sun's energy. On these moons—who knows? Perhaps some day we shall explore these oceans in our submarines and find out.

Saturn's moon Titan is an amazing world. It is larger than Mercury, and has an atmosphere 50% denser than the Earth's, made up mostly of nitrogen. It has rainfall, rivers, and lakes of methane and ethane, and at its mean temperature of 93.7 K, water ice is a rock as hard as granite. Unique among worlds in the solar system, you could venture outside your space ship on Titan without a space suit. You'd need to dress very warmly, to be sure, and wear an oxygen mask, but you could explore the shores, lakes, and dunes of Titan protected only against the cold. With the dense atmosphere and gravity just 85% of that of the Earth's Moon, you might be able to fly with suitable wings.

We have had just a glimpse of the moons of Uranus and Neptune as Voyager 2 sped through their systems on its way to the outer darkness. Further investigation will have to wait for orbiters to visit these planets, which probably will not happen for nearly two decades. What Voyager 2 saw was tantalising. On Uranus's moon Miranda, there are cliffs 14 km high. With the tiny gravity, imagine the extreme sports you could do there! Neptune's moon Triton appears to be a Kuiper Belt object captured into orbit around Neptune and, despite its cryogenic temperature, appears to be geologically active.

There is no evidence for life on any of these worlds. (Still, one wonders about those fish in the dark oceans.) If barren, “all these worlds are ours”, and in the fullness of time we shall explore, settle, and exploit them to our own ends. The outer solar system is just so much bigger and more grandiose than the inner. It's as if we've inhabited a small island for all of our history and, after making a treacherous ocean voyage, discovered an enormous empty continent just waiting for us. Perhaps in a few centuries residents of these remote worlds will look back toward the Sun, trying to spot that pale blue dot so close to it where their ancestors lived, and remark to their children, “Once, that's all there was.”

Posted at 00:53 Permalink

Friday, March 20, 2015

Partial Solar Eclipse: 2015-03-20

pse_2015-03-20.jpg

Click image to enlarge.

Here is the solar eclipse of March 20th, 2015, taken at maximum eclipse, around 09:35 UTC. Although this was a total eclipse, the path of totality passed far to the north, over the North Atlantic, and from my location (47°4' N 7°3' E) the Sun was only about 70% obscured. The sky was milky/murky, but the Sun was clearly visible through the solar filter.

(Photo taken with a Nikon D600 camera and NIKKOR 300 mm prime lens through a full aperture Orion metal on glass solar filter. Exposure was 1/125 second at f/8.)

Posted at 20:59 Permalink

Wednesday, March 18, 2015

Reading List: Rocket Ship Galileo

Heinlein, Robert A. Rocket Ship Galileo. Seattle: Amazon Digital Services, [1947, 1974, 1988] 2014. ASIN B00H8XGKVU.
After the end of World War II, Robert A. Heinlein put his wartime engineering work behind him and returned to professional writing. His ambition was to break out of the pulp magazine ghetto in which science fiction had been largely confined before the war into the more prestigious (and better paying) markets of novels and anthologies published by top-tier New York firms and the “slick” general-interest magazines such as Collier's and The Saturday Evening Post, which published fiction in those days. For the novels, he decided to focus initially on a segment of the market he understood well from his pre-war career: “juveniles”—books aimed at a young audience (in the case of science fiction, overwhelmingly male), and sold, in large part, in hardcover to public and school libraries (mass market paperbacks were just beginning to emerge in the late 1940s, and had not yet become important to mainstream publishers).

Rocket Ship Galileo was the first of Heinlein's juveniles, and it was a tour de force which established him in the market and led to a series which would extend to twelve volumes. (Heinlein scholars differ on which of his novels are classified as juveniles. Some include Starship Troopers as a juvenile, but despite its having been originally written as one and rejected by his publisher, Heinlein did not classify it thus.)

The plot could not be more engaging to a young person at the dawn of the atomic and space age. Three high school seniors, self-taught in the difficult art of rocketry (often, as was the case for their elders in that era, by trial and [noisy and dangerous] error), are recruited by an uncle of one of them, a veteran of the wartime atomic project, who wants to go to the Moon. He's invented a novel type of nuclear engine which allows a single-stage ship to make the round trip, and having despaired of getting sclerotic government or industry involved, decides to do it himself using cast-off parts and the talent and boundless energy of young people willing to learn by doing.

Working in their remote desert location, they become aware that forces unknown are taking an untoward interest in their work and seem to want to bring it to a halt, going as far as sabotage and lawfare. Finally, it's off to the Moon, where they discover the dark secret on the far side: space Nazis!

The remarkable thing about this novel is how well it holds up, almost seventy years after publication. While Heinlein was writing for a young audience, he never condescended to them. The science and engineering were as accurate as was known at the time, and Heinlein manages to instill in his audience a basic knowledge of rocket propulsion, orbital mechanics, and automated guidance systems as the yarn progresses. Other than three characters being young people, there is nothing about this story which makes it “juvenile” fiction: there is a hard edge of adult morality and the value of courage which forms the young characters as they live the adventure.

At the moment, only this Kindle edition and an unabridged audio book edition are available new. Used copies of earlier paperback editions are readily available.

Posted at 22:30 Permalink