Reading List: A Short History of Man
Wednesday, May 20, 2015 15:38
Reading List: Building the H Bomb
Saturday, May 16, 2015 23:07
Reading List: Act of War
Wednesday, May 13, 2015 21:51
Reading List: A.I. Apocalypse
Thursday, April 30, 2015 21:46
Reading List: Einstein's Unification
Saturday, April 18, 2015 15:09
Wednesday, May 20, 2015 15:38
- Hoppe, Hans-Hermann.
A Short History of Man.
Auburn, AL: Mises Institute, 2015.
The author is one of the most brilliant and original thinkers,
and most eloquent contemporary expositors,
of libertarianism, anarcho-capitalism, and Austrian economics.
Educated in Germany, Hoppe came to
the United States to study with Murray Rothbard,
and in 1986 joined Rothbard on the faculty of the University
of Nevada, Las Vegas, where he taught until his retirement in 2008.
Hoppe's 2001 book,
Democracy: The God That Failed
(June 2002), made the argument that democratic election of
temporary politicians in the modern all-encompassing state
will inevitably result in profligate spending and runaway
debt because elected politicians have every incentive to
buy votes and no stake in the long-term solvency and prosperity of
the society. Whatever the drawbacks (and historical examples of how
things can go wrong), a hereditary monarch has no need to buy votes
and every incentive not to pass on a bankrupt state to his descendants.
This short book (144 pages) collects three essays previously published
elsewhere which, taken together, present a comprehensive picture
of human development from the emergence of modern humans in
Africa to the present day. Subtitled “Progress and Decline”,
the story is one of long periods of stasis punctuated by two enormous
breakthroughs, with, in parallel, the folly of ever-growing domination of society by
a coercive state which, in its modern incarnation, risks halting or
reversing the gains of the modern era.
Members of the collectivist and politically-correct mainstream in the
fields of economics, anthropology, and sociology who can abide
Prof. Hoppe's adamantine libertarianism will probably have their
skulls explode when they encounter his overview of human economic and
social progress, which is based upon genetic selection for increased
intelligence and low time preference among populations
forced to migrate due to population pressure from the tropics where
the human species originated into more demanding climates north and
south of the Equator, and onward toward the poles. In the tropics,
every day is about the same as the next; seasons don't differ much from
one another; and the variation in the length of the day is not great.
In the temperate zone and beyond, hunter-gatherers must cope with
plant life which varies along with the seasons, prey animals that
migrate, hot summers and cold winters, with the latter requiring the
knowledge and foresight of how to make provisions for the lean season.
Predicting the changes in seasons becomes important, and in this may
have been the genesis of astronomy.
A hunter-gatherer society is essentially parasitic upon the natural
environment—it consumes the plant and animal bounty of nature
but does nothing to replenish it. This means that for a given
territory there is a maximum number (varying due to details of terrain,
climate, etc.) of humans it can support before an increase in population
leads to a decline in the per-capita standard of living of its inhabitants.
This is what the author calls the “Malthusian trap”. Looked
at from the other end, a human population which is growing as human
populations tend to do, will inevitably reach the carrying capacity
of the area in which it lives. When this happens, there are only three
options: artificially limit the growth in population to the land's
carrying capacity, split off one or more groups which migrate to new
territory not yet occupied by humans, or conquer new land from adjacent
groups, either killing them off or driving them to migrate. This was
the human condition for more than a hundred millennia, and it is this
population pressure, the author contends, which drove human migration from
tropical Africa into almost every niche on the globe in which humans could
survive, even some of the most marginal.
While the life of a hunter-gatherer band in the tropics is relatively
easy (or so say those who have studied the few remaining populations
who live that way today), the further from the equator the more intelligence,
knowledge, and the ability to transmit it from generation to
generation is required to survive. This creates a selection pressure for
intelligence: individual members of a band of hunter-gatherers who are
better at hunting and gathering will have more offspring which survive to
maturity and bands with greater intelligence produced in this manner
will grow faster and by migration and conquest displace those less endowed.
This phenomenon would cause one to expect that (discounting the effects
of large-scale migrations) the mean intelligence of human populations would
be the lowest near the equator and increase with latitude (north or south).
This, in general terms, and excluding marginal environments, is
precisely what is observed, even today.
After hundreds of thousands of years as hunter-gatherers parasitic upon
nature, sometime around 11,000 years ago, probably first in the
Fertile Crescent in the Middle East, what is now called the Neolithic Revolution
occurred. Humans ceased to wander in search of plants and game, and
settled down into fixed communities which supported themselves by cultivating
plants and raising animals they had domesticated. Both the plants
and animals underwent selection by humans who bred those most adapted to
their purposes. Agriculture was born. Humans who adopted the new means
of production were no longer parasitic upon nature: they produced their
sustenance by their own labour, improving upon that supplied by nature through
their own actions. In order to do this, they had to invent a series of
new technologies (for example, milling grain and fencing pastures) which
did not exist in nature. Agriculture was far more efficient than the
hunter-gatherer lifestyle in that a given amount of land (if suitable
for known crops) could support a much larger human population.
While agriculture allowed a large increase in the human population, it
did not escape the Malthusian trap: it simply increased the population
density at which the carrying capacity of the land would be reached.
Technological innovations such as irrigation and crop rotation could further increase
the capacity of the land, but population increase would eventually surpass
the new limit. As a result of this, from 1000 B.C.
to A.D. 1800, income per capita (largely
measured in terms of food) barely varied: the benefit of each innovation was
quickly negated by population increase. To be sure, in all of this epoch
there were a few wealthy people, but the overwhelming majority of the
population lived near the subsistence level.
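The trap described here, in which every gain in the land's carrying capacity is eventually eaten by population growth, can be sketched in a few lines of toy code. All of the numbers are illustrative assumptions, not figures from the book: each person can produce up to two subsistence units per year if the land permits, total output is capped by the land's capacity, and population grows one percent per year whenever income exceeds subsistence.

```python
def step(pop, capacity):
    """One year of a toy Malthusian economy.

    Total output is capped by the land's carrying capacity; income is
    output per head, in units where 1.0 = bare subsistence."""
    income = min(2.0 * pop, capacity) / pop
    if income > 1.0:
        pop *= 1.01          # surplus food -> population growth
    return pop, income

pop, capacity = 100.0, 400.0
for year in range(2000):
    if year == 1000:
        capacity *= 2.0      # an innovation (say, irrigation) doubles capacity
    pop, income = step(pop, capacity)

# Despite the doubling of capacity, income ends up back near subsistence.
print(f"population {pop:.0f}, income {income:.2f}")
```

Run it and the population roughly doubles after the innovation while per-capita income sinks back to about 1.0, which is exactly the stagnation from 1000 B.C. to A.D. 1800 described above.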
But once again, slowly but surely, a selection pressure was being applied
upon humans who adopted the agricultural lifestyle. It is cognitively more
difficult to be a farmer or rancher than to be a member of a
hunter-gatherer band, and success depends strongly upon having a low
time preference—to be willing to forgo immediate consumption for
a greater return in the future. (For example, a farmer who does not reserve and
protect seeds for the next season will fail. Selective breeding of plants
and animals to improve their characteristics takes years to produce
results.) This creates an evolutionary
pressure in favour of further increases in intelligence and, to the extent that
such might be genetic rather than due to culture, for low
time preference. Once the family
emerged as the principal unit of society rather than the hunter-gatherer band,
selection pressure was amplified since those with the selected-for characteristics
would produce more offspring, and the phenomenon of free riding
which exists in communal bands is less likely to occur.
Around the year 1800, initially in Europe and later elsewhere, a startling
change occurred: the Industrial Revolution.
In societies which adopted the emerging industrial means of
production, per capita income, which had been stagnant for almost two millennia,
took off like a skyrocket,
while at the same time population began to
grow exponentially, rising from around 900 million in 1800 to 7 billion today.
The Malthusian trap had been escaped; it appeared for the first time that an increase
in population, far from consuming the benefits of innovation, actually contributed
to and accelerated it.
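As a sanity check on the figures just quoted, the implied average growth rate is a one-liner; I take “today” to be 2015, the date of this note, as an assumption.

```python
import math

# World population: ~0.9 billion in 1800 to ~7 billion in 2015.
p0, p1, years = 0.9e9, 7.0e9, 2015 - 1800

rate = math.log(p1 / p0) / years     # continuously compounded annual rate
print(f"average growth: {rate:.2%} per year")            # ~0.95% per year
print(f"doubling time: {math.log(2) / rate:.0f} years")  # ~73 years
```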
There are some deep mysteries here. Why did it take so long for humans to
invent agriculture? Why, after the invention of agriculture, did it take
so long to invent industrial production? After all, the natural resources
extant at the start of both of these
revolutions were present in all of the preceding period, and there were people
with the leisure to think and invent at all times in history. The author
argues that what differed was the people. Prior to the advent of
agriculture, people were simply not sufficiently intelligent to invent it
(or, to be more precise, since intelligence follows something close to a
normal distribution,
there was an insufficient fraction of the population with the requisite
intelligence to discover and implement the idea of agriculture). Similarly,
prior to the Industrial Revolution, the intelligence of the general population
was insufficient for it to occur. Throughout the long fallow periods, however,
natural selection was breeding smarter humans and, eventually, in some place
and time, a sufficient fraction of smart people, the required natural resources, and
a society sufficiently open to permit innovation and moving beyond tradition
would spark the fire. As the author notes, it's much easier to copy a good
idea once you've seen it working than to come up with it in the first place and get
it to work the first time.
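The “sufficient fraction” argument turns on a property of any roughly normally distributed trait: a small shift in the mean produces an outsized change in the upper tail. A quick illustration, using an IQ-like scale with standard deviation 15 and an arbitrary, hypothetical “able to innovate” threshold of 130; neither number comes from the book.

```python
from math import erfc, sqrt

def tail_fraction(mean, sd, threshold):
    """Fraction of a normal(mean, sd) population above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2.0))

# A five-point rise in the mean roughly doubles the upper tail.
print(f"mean 100: {tail_fraction(100, 15, 130):.2%}")  # ~2.28%
print(f"mean 105: {tail_fraction(105, 15, 130):.2%}")  # ~4.78%
```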
Some will argue that Hoppe's hypothesis that human intelligence has
been increasing over time is falsified by the fact that societies much
closer in time to the dawn of agriculture produced works of art,
literature, science, architecture, and engineering which are
comparable to those of modern times. But those works were produced
not by the average person but rather outliers which exist in all times
and places (although in smaller numbers when mean intelligence is
lower). For a general advance
in society, it is a necessary condition that the bulk of the
population involved have intelligence adequate to work in the new way.
After investigating human progress on the grand scale over long periods of time,
the author turns to the phenomenon which may cause this progress to cease and
turn into decline: the growth of the coercive state. Hunter-gatherers had little
need for anything which today would be called governments. With bands on the
order of 100 people sharing resources in common, many sources of dispute would not
occur and those which did could be resolved by trusted elders or, failing that,
combat. When humans adopted agriculture and began to live in settled
communities, and families owned and exchanged property with one another,
a whole new source of problems appeared. Who has the right to use this land?
Who stole my prize animal? How are the proceeds of a joint effort to be
distributed among the participants? As communities grew and trade among them
flourished, complexity increased apace. Hoppe traces how the resolution of these
conflicts has evolved over time. First, the parties to the dispute would turn to
a member of an aristocracy, a member of the community respected because of their
intelligence, wisdom, courage, or reputation for fairness, to settle the matter.
(We often think of an aristocracy as hereditary but, although many aristocracies
evolved into systems of hereditary nobility, the word originally meant “rule by
the best”, and that is how the institution began.)
With growing complexity, aristocrats (or nobles) needed a way to
resolve disputes among themselves, and this led to the emergence of
kings. But like the nobles, the king was seen to apply a law which
was part of nature (or, in the English common law tradition,
discovered through the experience of precedents). It was with the
emergence of absolute monarchy, constitutional monarchy, and finally
democracy that things began to go seriously awry. In time, law became
seen not as something which those given authority apply, but
rather something those in power create. We have largely
forgotten that legislation is not law,
and that rights are not granted
to us by those in power, but inhere in us and are taken away and/or
constrained by those willing to initiate force against others to work
their will upon them.
The modern welfare state risks undoing a thousand centuries of human
progress by removing the selection pressure for intelligence and low
time preference. Indeed, the welfare state punishes (taxes) the
productive, who tend to have these characteristics, and subsidises
those who do not, increasing their fraction within the population.
Evolution works slowly, but inexorably. But the effects of shifting
incentives can manifest themselves long before biology has its way.
When a population is told “You've made enough”, “You
didn't build that”, or sees working harder to earn more as
simply a way to spend more of their lives supporting those who don't
(along with those who have gamed the system to extract resources
confiscated by the state), that glorious exponential curve which took
off in 1800 may begin to bend down toward the horizontal and perhaps
eventually turn downward.
I don't usually include lengthy quotes, but the following passage from
the third essay, “From Aristocracy to Monarchy to Democracy”,
is so brilliant and illustrative of what you'll find herein that
I can't resist.
Assume now a group of people aware of the reality of interpersonal
conflicts and in search of a way out of this predicament. And assume
that I then propose the following as a solution: In every case of
conflict, including conflicts in which I myself am involved, I will
have the last and final word. I will be the ultimate judge as to who
owns what and when and who is accordingly right or wrong in any
dispute regarding scarce resources. This way, all conflicts can be
avoided or smoothly resolved.
What would be my chances of finding your or anyone else's
agreement to this proposal?
My guess is that my chances would be virtually zero, nil. In fact, you
and most people will think of this proposal as ridiculous and likely
consider me crazy, a case for psychiatric treatment. For you will
immediately realize that under this proposal you must literally fear
for your life and property. Because this solution would allow me to
cause or provoke a conflict with you and then decide this conflict in
my own favor. Indeed, under this proposal you would essentially give
up your right to life and property or even any pretense to such a
right. You have a right to life and property only insofar as I grant
you such a right, i.e., as long as I decide to let you live and keep
whatever you consider yours. Ultimately, only I have a right to life
and I am the owner of all goods.
And yet—and here is the puzzle—this obviously crazy solution
is the reality. Wherever you look, it has been put into effect in the
form of the institution of a State. The State is the ultimate judge in
every case of conflict. There is no appeal beyond its verdicts. If you
get into conflicts with the State, with its agents, it is the State
and its agents who decide who is right and who is wrong. The State has
the right to tax you. Thereby, it is the State that makes the decision
how much of your property you are allowed to keep—that is, your
property is only “fiat” property. And the State can make
laws, legislate—that is, your entire life is at the mercy of the
State. It can even order that you be killed—not in defense of
your own life and property but in the defense of the State or whatever
the State considers “defense” of its…
This work is licensed under a Creative Commons
International License and may be redistributed pursuant to the
terms of that license. In addition to the paperback and
Kindle editions available from Amazon,
the book may be downloaded for free from the
Library of the Mises Institute
in a variety of formats, or read on-line.
Saturday, May 16, 2015 23:07
- Ford, Kenneth W.
Building the H Bomb.
Singapore: World Scientific, 2015.
In the fall of 1948, the author entered the graduate program in physics
at Princeton University, hoping to obtain a Ph.D. and pursue a
career in academia. In his first year, he took a course in
classical mechanics taught by John
Archibald Wheeler and realised that, despite the dry material
of the course, he was in the presence of an extraordinary
teacher and thinker, and decided he wanted Wheeler as his thesis
advisor. In April of 1950, after Wheeler returned from an extended
visit to Europe, the author approached him to become his advisor,
not knowing in which direction his research would proceed. Wheeler
immediately accepted him as a student, and then said that he (Wheeler)
would be absent for a year or more at Los Alamos to work on the
hydrogen bomb, and that he'd be pleased if Ford could join him on
the project. Ford accepted, in large part because he believed that
working on such a challenge would be “fun”, and that it
would provide a chance for daily interaction with Wheeler and other
senior physicists which would not exist in a regular Ph.D. program.
Well before the Manhattan project built the first fission weapon,
there had been interest in fusion as an alternative source of
nuclear energy. While fission releases energy by splitting heavy
atoms such as uranium and plutonium into lighter atoms, fusion
merges lighter atoms such as hydrogen and its isotopes deuterium
and tritium into heavier nuclei like helium. While nuclear fusion
can be accomplished in a desktop apparatus, doing so requires vastly
more energy input than is released, making it impractical as an
energy source or weapon. Still, compared to enriched uranium or
plutonium, the fuel for a fusion weapon is abundant and inexpensive
and, unlike a fission weapon whose yield is limited by the critical
mass beyond which it would predetonate, in principle a fusion weapon
could have an unlimited yield: the more fuel, the bigger the bang.
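The energy bookkeeping behind fusion can be checked with simple mass-defect arithmetic. For the easiest reaction, deuterium plus tritium, the products weigh slightly less than the reactants and the difference emerges as energy; the masses below are standard table values in unified atomic mass units.

```python
U_TO_MEV = 931.494            # energy equivalent of one atomic mass unit (MeV)

m_D, m_T = 2.014102, 3.016049      # deuterium, tritium
m_He4, m_n = 4.002602, 1.008665    # helium-4, neutron

# D + T -> He-4 + n: the "missing" mass becomes kinetic energy.
defect = (m_D + m_T) - (m_He4 + m_n)
energy_mev = defect * U_TO_MEV
print(f"D + T releases {energy_mev:.1f} MeV per reaction")  # ~17.6 MeV
```

Per kilogram of fuel this works out to several times the energy of fission and millions of times that of chemical explosives, which is why yield is limited only by the amount of fuel.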
Once the Manhattan Project weaponeers became confident they could
build a fission weapon, physicists, most prominent among them Edward Teller,
realised that the extreme temperatures created by a nuclear
detonation could be sufficient to ignite a fusion reaction in
light nuclei like deuterium and that reaction, once
started, might propagate by its own energy release just like
the chemical fire in a burning log. It seemed plausible—the
temperature of an exploding fission bomb exceeded that of the
centre of the Sun, where nuclear fusion was known to occur. The
big question was whether the fusion burn, once started, would
continue until most of the fuel was consumed or fizzle out as its
energy was radiated outward and the fuel dispersed by the explosion.
Answering this question required detailed computations of a rapidly
evolving system in three dimensions with a time slice measured in
nanoseconds. During the Manhattan Project, a “computer”
was a woman operating a mechanical calculator, and even with
large rooms filled with hundreds of “computers” the
problem was intractably difficult. Unable to directly model the
system, physicists resorted to analytical models which produced
ambiguous results. Edward Teller remained optimistic that the
design, which came to be called the “Classical Super”,
would work, but many others, among them
J. Robert Oppenheimer and
Enrico Fermi,
based upon the calculations that could be done at the time, concluded
it would probably fail. Oppenheimer's opposition to the Super or
hydrogen bomb project has been presented as a moral opposition to
development of such a weapon, but the author's contemporary recollection
is that it was based upon Oppenheimer's belief that the classical
super was unlikely to work, and that effort devoted to it would
be at the expense of improved fission weapons which could be
deployed in the near term.
All of this changed on March 9th, 1951. Edward Teller and Stanislaw Ulam
published a report which presented a new approach to a fusion bomb.
Unlike the classical super, which required the fusion fuel to burn
on its own after being ignited, the new design, now called the Teller-Ulam
design, compressed a capsule of fusion fuel by the radiation pressure of
a fission detonation (usually, we don't think of radiation as having
pressure, but in the extreme conditions of a nuclear explosion it
far exceeds pressures we encounter with matter), and then ignited it
with a “spark plug” of fission fuel at the centre of
the capsule. Unlike the classical super, the fusion fuel would burn at
thermodynamic equilibrium and, in doing so, liberate abundant
neutrons with such a high energy they would induce fission in
Uranium-238 (which cannot be fissioned by the less energetic neutrons of
a fission explosion), further increasing the yield.
Oppenheimer, who had been opposed to work upon fusion, pronounced the
Teller-Ulam design “technically sweet” and immediately
endorsed its development. The author's interpretation is that once
a design was in hand which appeared likely to work, there was no
reason to believe that the Soviets who had, by that time, exploded
their own fission bomb, would not also discover it and proceed to
develop such a weapon, and hence it was important that the U.S.
give priority to the fusion bomb to get there first. (Unlike the
Soviet fission bomb, which was a copy of the U.S. implosion design
based upon material obtained by espionage, there is no evidence the
Soviet fusion bomb, first tested in 1955, was based upon espionage; it
appears rather to have been an independent invention of the radiation
implosion concept.)
With the Teller-Ulam design in hand, the author, working with Wheeler's
group, first in Los Alamos and later at Princeton, was charged with
working out the details: how precisely would the material in the bomb
behave, nanosecond by nanosecond. By this time, calculations could
be done by early computing machinery: first IBM punched-card equipment
and later a stored-program machine which
was, at the time, one of the most advanced electronic computers in
the world. As with computer nerds until the present day, the author spent
many nights babysitting the machine as it crunched the numbers.
On November 1st, 1952, the
Ivy Mike device was
detonated in the Pacific, with a yield of 10.4 megatons of TNT. John
Wheeler witnessed the test from a ship at a safe distance
from the island which was obliterated by the explosion. The test
completely confirmed the author's computations of the behaviour
of the thermonuclear burn and paved the way for deliverable
thermonuclear weapons. (Ivy Mike was a physics experiment, not
a weapon, but once it was known the principle was sound, it was basically
a matter of engineering to design bombs which could be air-dropped.)
With the success, the author concluded his work on the weapons project
and returned to his dissertation, receiving his Ph.D. in 1953.
This is about half a personal memoir and half a description of the
physics of thermonuclear weapons and the process by which the first
weapon was designed. The technical sections are entirely accessible
to readers with only a basic knowledge of physics (I was about to say
“high school physics”, but I don't know how much physics,
if any, contemporary high school graduates know.) There is no
secret information disclosed here. All of the technical information
is available in much greater detail from sources (which the author
cites) such as Carey Sublette's
Nuclear Weapon Archive,
which is derived entirely from unclassified sources. Curiously, the
U.S. Department of Energy (which has, since its inception, produced
not a single erg of energy) demanded that the author
redact material in the manuscript, all derived from unclassified
sources and dating from work done more than half a century ago. The only
reason I can imagine for this is that a weapon scientist who was
there, by citing information which has been in the public domain for
two decades, implicitly confirms that it's correct. But it's not like
the Soviets/Russians, British, French, Chinese, Israelis, and
Indians haven't figured it out by themselves or that others
suitably motivated can't. The author told them to stuff it, and here
we have his unexpurgated memoir of the origin of the weapon which
shaped the history of the world in which we live.
Wednesday, May 13, 2015 21:51
- Thor, Brad.
Act of War.
New York: Pocket Books, 2014.
This is the fourteenth in the author's Scot
Harvath series, which began with
The Lions of Lucerne (October 2010). In this
novel the author returns to the techno-thriller genre and places
his characters, this time backed by a newly-elected U.S. president who
is actually interested in defending the country, in the position of
figuring out a complicated yet potentially devastating attack mounted by
a nation state adversary following the doctrine of unrestricted
warfare, and covering its actions by operating through non-state
parties apparently unrelated to the aggressor.
The trail goes through Pakistan, North Korea, and Nashville, Tennessee, with
multiple parties trying to put together the pieces of the puzzle while
the clock is ticking. Intelligence missions are launched into North Korea
and the United Arab Emirates to try to figure out what is going on. Finally,
as the nature of the plot becomes clear, Nicholas (the Troll) brings
the tools of Big Data to bear on the mystery to avert disaster.
This is a workmanlike thriller and a fine “airplane book”.
There is less shoot-em-up action than in other novels in the series, and
a part of the suspense is supposed to be the reader's trying to figure
out, along with the characters, the nature of the impending attack.
Unfortunately, at least for me, the answer to the puzzle was obvious
well before the halfway point in the story, and knowing it was
a substantial spoiler for the rest of the book. I've thought and written
quite a bit about this scenario, so I may have been more attuned to
the clues than the average reader.
The author invokes the tired canard about NASA's priorities having
been redirected toward reinforcing Muslim self-esteem. This is
irritating (because it's false), but plays no major part in the
story. Still, it's a good read, and I'll be looking forward to the
next book in the series.
Thursday, April 30, 2015 21:46
- Hertling, William.
A.I. Apocalypse.
Portland, OR: Liquididea Press, 2012.
This is the second volume in the author's Singularity Series
which began with Avogadro Corp.
(March 2014). It has been ten years since ELOPe, an
E-mail optimisation tool developed by Avogadro Corporation,
made the leap to strong artificial intelligence and, after
a rough start, became largely a benign influence upon humanity.
The existence of ELOPe is still a carefully guarded secret,
although the Avogadro CEO, doubtless with the help of ELOPe,
has become president of the United States. Avogadro has spun
ELOPe off as a separate company, run by Mike Williams, one
of its original creators. ELOPe operates its own data centres
and the distributed Mesh network it helped create.
Leon Tsarev has a big problem. A bright high school student
hoping to win a scholarship to an elite university to study
biology, Leon is contacted out of the blue by his uncle Alexis
living in Russia. Alexis is a rogue software developer whose
tools for infecting computers, organising them into
“botnets”, and managing the zombie horde for
criminal purposes have embroiled him with the Russian mob.
Recently, however, the effectiveness of his tools has
dropped dramatically and the botnet shrunk to a fraction
of its former size. Alexis's employers are displeased with
this situation and have threatened murder if he doesn't
do something to restore the power of the botnet.
Uncle Alexis starts to E-mail Leon, begging for assistance.
Leon replies that he knows little or nothing about
computer viruses or botnets, but Alexis persists. Leon is
also loath to do anything which might put him on the wrong
side of the law, which would wreck his career ambitions.
Then Leon is accosted on the way home from school by a
large man speaking with a thick Russian accent who says,
“Your Uncle Alexis is in trouble, yes. You will help
him. Be good nephew.” And just like that, it's Leon
who's now in trouble with the Russian mafia, and they know
where he lives.
Leon decides that with his own life on the line he has no
alternative but to try to create a virus for Alexis. He
applies his knowledge of biology to the problem, and settles
on an architecture which is capable of evolution and, similar
to lateral gene transfer in bacteria, of identifying algorithms
in the systems it infects and incorporating them into itself. As
in biology, the most successful variants of the evolving
virus would defend themselves best, propagate more
rapidly, and eventually displace less well adapted competitors.
After a furious burst of effort, Leon finishes the virus,
which he's named Phage, and sends it to his uncle, who
uploads it to the five thousand computers which are the
tattered remnants of his once-mighty botnet. An exhausted
Leon staggers off to get some sleep.
When Leon wakes up, the technological world has almost
come to a halt. The overwhelming majority of personal
computing devices and embedded systems with network
connectivity are infected and doing nothing but running
Phage and almost all network traffic consists of ever-mutating
versions of Phage trying to propagate themselves. Telephones,
appliances, electronic door locks, vehicles of all kinds,
and utilities are inoperable.
The only networks and computers not taken over by the Phage
are ELOPe's private network (which detected the attack early
and whose servers are devoting much of their resources to
defend themselves against the rapidly changing threat) and
high security military networks which have restrictive firewalls
separating themselves from public networks. As New York starts
to burn with fire trucks immobilised, Leon realises that being
identified as the creator of the catastrophe might be a career
limiting move, and he, along with two technology geek classmates
decide to get out of town and seek ways to combat the Phage
using retro technology it can't exploit.
Meanwhile, Mike Williams, working with ELOPe, tries to understand
what is happening. The Phage, like biological life on Earth, continues
to evolve and discovers that multiple components, working in
collaboration, can accomplish more than isolated instances of the
virus. The software equivalent of multicellular life appears,
and continues to evolve at a breakneck pace. Then it awakens and
begins to explore the curious universe it inhabits.
This is a gripping thriller in which, as in Avogadro Corp.,
the author gets so much right from a technical standpoint that
even some of the more outlandish scenes appear plausible.
One thing I believe the author grasped which many other
tales of the singularity miss is just how fast everything
can happen. Once an artificial intelligence hosted on billions of
machines distributed around the world, all running millions of times
faster than human thought, appears, things get very weird, very
fast, and humans suddenly find themselves living in a world where
they are not at the peak of the cognitive pyramid. I'll not spoil
the plot with further details, but you'll find the world at the
end of the novel a very different place than the one at the start.
A Kindle edition is available.
Saturday, April 18, 2015 15:09
- van Dongen, Jeroen.
Einstein's Unification.
Cambridge: Cambridge University Press, 2010.
In 1905 Albert Einstein published four papers which transformed the
understanding of space, time, mass, and energy; provided physical evidence for
the quantisation of energy; and gave observational confirmation of the
existence of atoms. These publications are collectively called the
Annus Mirabilis papers,
and vaulted the largely unknown Einstein to the top rank of theoretical
physicists. When Einstein was awarded the Nobel Prize in Physics in
1921, it was for one of these 1905 papers which explained the
photoelectric effect. Einstein's 1905 papers are masterpieces of
intuitive reasoning and clear exposition, and demonstrated
Einstein's technique of constructing thought experiments based
upon physical observations, then deriving testable mathematical
models from them. Unlike so many present-day scientific publications,
Einstein's papers on special relativity and the
equivalence of mass and energy
were accessible to anybody with a college-level understanding
of mechanics and electrodynamics and used no special jargon or
advanced mathematics. Being based on well-understood concepts,
neither cited any other scientific paper.
While special relativity revolutionised our understanding of space
and time, and has withstood every experimental test to which it
has been subjected in the more than a century since it was
formulated, it was known from inception that the theory was
incomplete. It's called special relativity because
it only describes the behaviour of bodies under the special
case of uniform unaccelerated motion in the absence of
gravity. To handle acceleration and gravitation would require
extending the special theory into a general theory of
relativity, and it is upon this quest that Einstein next embarked.
As before, Einstein began with a simple thought experiment. Just as in
special relativity, where there is no experiment which can be done in
a laboratory without the ability to observe the outside world that
can determine its speed or direction of uniform (unaccelerated) motion,
Einstein argued that there should be no experiment an observer could
perform in a sufficiently small closed laboratory which could distinguish
uniform acceleration from the effect of gravity. If one observed objects to
fall with an acceleration equal to that on the surface of the Earth,
the laboratory might be stationary on the Earth or in a space ship
accelerating with a constant acceleration of one gravity, and
no experiment could distinguish the two situations. (The reason for
the “sufficiently small” qualification is that since
gravity is produced by massive objects, the direction a test particle
will fall depends upon its position with respect to the centre of
gravity of the body. In a very large laboratory, objects dropped
far apart would fall in different directions. This is what causes
tidal effects.)
Einstein called this observation the principle of equivalence:
that the effects of acceleration and gravity are indistinguishable,
and that hence a theory which extended special relativity to
incorporate accelerated motion would necessarily also be a
theory of gravity. Einstein had originally hoped it would be
straightforward to reconcile special relativity with acceleration
and gravity, but the deeper he got into the problem, the more he
appreciated how difficult a task he had undertaken. Thanks to the Einstein Papers
Project, which is curating and publishing all of Einstein's extant
work, including notebooks, letters, and other documents, the author
(a participant in the project) has been able to reconstruct Einstein's
ten-year search for a viable theory of general relativity.
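The tidal effect behind the “sufficiently small” qualification can be estimated with ordinary Newtonian gravity (a standard estimate, my addition rather than the author's): differentiating the acceleration g = GM/r² gives the variation in acceleration across a laboratory of radial extent Δr.

```latex
% Tidal variation in gravitational acceleration across a laboratory
g(r) = \frac{GM}{r^2}
\quad\Longrightarrow\quad
\Delta g \approx \left|\frac{dg}{dr}\right| \Delta r
= \frac{2GM}{r^3}\,\Delta r
```

The larger the laboratory, the larger Δg, and the easier it becomes to distinguish real gravity from uniform acceleration.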
Einstein pursued a two-track approach. The bottom up path started with
Newtonian gravity and attempted to generalise it to make it compatible
with special relativity. In this attempt, Einstein was guided by the
correspondence principle, which requires that any new theory which explains
behaviour under previously untested conditions must reproduce the
tested results of existing theory under known conditions. For example,
the equations of motion in special relativity reduce to those of
Newtonian mechanics when velocities are small compared to the speed of
light. Similarly, for gravity, any candidate theory must yield results
identical to Newtonian gravitation when field strength is weak and
velocities are low.
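The correspondence principle at work can be shown with a standard textbook expansion (my illustration, not drawn from the book): the relativistic kinetic energy reduces to the Newtonian ½mv² when v ≪ c.

```latex
% Relativistic kinetic energy and its low-velocity limit
E_k = (\gamma - 1)\,m c^2,
\qquad
\gamma = \left(1 - \frac{v^2}{c^2}\right)^{-1/2}
\approx 1 + \frac{v^2}{2c^2} + \frac{3v^4}{8c^4} + \cdots
% Substituting the expansion:
E_k \approx \frac{1}{2} m v^2 + \frac{3}{8}\,\frac{m v^4}{c^2} + \cdots
\;\longrightarrow\; \frac{1}{2} m v^2 \quad (v \ll c)
```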
From the top down, Einstein concluded that any theory compatible with
the principle of equivalence between acceleration and gravity must
be generally covariant, which can be thought of as being equally valid regardless of the choice
of co-ordinates (as long as they are varied without discontinuities).
There are very few mathematical structures which have this property,
and Einstein was drawn to
tensor geometry. Over years of
work, Einstein pursued both paths, producing a bottom-up theory which
was not generally covariant and which he eventually rejected as in conflict
with experiment. By November 1915 he had returned to the top-down
mathematical approach and in four papers expounded a generally covariant
theory which agreed with experiment. General relativity had arrived.
Einstein's 1915 theory correctly predicted the
perihelion precession of Mercury and also predicted that
starlight passing near the limb of the Sun would be deflected
by twice the angle expected based on Newtonian gravitation. This
was confirmed (within a rather large margin of error) in an
eclipse expedition in 1919, which made Einstein's general relativity
front page news around the world. Since then precision
tests of general
relativity have tested a variety of predictions of the theory
with ever-increasing precision, with no experiment to date yielding
results inconsistent with the theory.
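For concreteness, these are the two predicted deflection angles for a light ray passing a mass M with impact parameter b; both are standard results, not taken from the book under review.

```latex
% Deflection of light passing a mass M at impact parameter b
\theta_{\text{Newtonian}} = \frac{2GM}{c^2 b},
\qquad
\theta_{\text{GR}} = \frac{4GM}{c^2 b}
% For a ray grazing the Sun's limb these evaluate to roughly
% 0.87 and 1.75 seconds of arc respectively.
```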
Thus, by 1915, Einstein had produced theories of mechanics, electrodynamics,
the equivalence of mass and energy, and the mechanics of bodies under
acceleration and the influence of gravitational fields, and changed
space and time from a fixed background in which physics occurs to
a dynamical arena: “Matter and energy tell spacetime how to
curve. Spacetime tells matter how to move.” What do you do,
at age 36, having figured out, largely on your own, how a large part
of the universe works?
Much of Einstein's work so far had consisted of unification. Special
relativity unified space and time, matter and energy. General
relativity unified acceleration and gravitation, gravitation
and geometry. But much remained to be unified. In
general relativity and classical electrodynamics there were
two field theories, both defined on the continuum, both with
unlimited range and an inverse square law, both exhibiting static
and dynamic effects (although the details of gravitomagnetism
would not be worked out until later). And yet the theories seemed
entirely distinct: gravity was always attractive and worked by
the bending of spacetime by matter-energy, while electromagnetism
could be either attractive or repulsive, and seemed to be propagated
by fields emitted by point charges—how messy.
Further, quantum theory, which Einstein's 1905 paper on the
photoelectric effect had helped launch, seemed to point in a very
different direction than the classical field theories in which
Einstein had worked. Quantum mechanics, especially as elaborated
in the “new” quantum theory of the 1920s, seemed to
indicate that aspects of the universe such as electric charge
were discrete, not continuous, and that physics could, even in
principle, only predict the probabilities of the outcomes of experiments,
not calculate them definitively from known initial conditions.
Einstein never disputed the successes of quantum theory in
explaining experimental results, but suspected it was a phenomenological
theory which did not explain what was going on at
a deeper level. (For example, the physical theory of elasticity
explains experimental results and makes predictions within its
domain of applicability, but it is not fundamental. All
of the effects of elasticity are ultimately due to electromagnetic
forces between atoms in materials. But that doesn't mean that the
theory of elasticity isn't useful to engineers, or that they should
do their spring calculations at the molecular level.)
Einstein undertook the search for a
unified field theory,
which would unify gravity and electromagnetism, just as Maxwell had
unified electrostatics and magnetism into a single theory. In
addition, Einstein believed that a unified field theory would be
antecedent to quantum theory, and that the probabilistic results of
quantum theory could be deduced from the more fundamental theory, which
would remain entirely deterministic. From 1915 until his death in 1955
Einstein's work concentrated mostly on the quest for a unified field
theory. He was aided by numerous talented assistants, many of whom
went on to do important work in their own right. He explored
a variety of paths to such a theory, but ultimately rejected each
one, in turn, as either inconsistent with experiment or unable
to explain phenomena such as point particles or the quantisation of charge.
As the author documents, Einstein's approach to doing physics changed in
the years after 1915. While before he was guided both by physics and
mathematics, in retrospect he recalled and described his search for
the field equations of general relativity as having followed the path
of discovering the simplest and most elegant mathematical structure which
could explain the observed phenomena. He thus came, like Dirac, to argue
that mathematical beauty was the best guide to correct physical theories.
In the last forty years of his life, Einstein made no progress whatsoever
toward a unified field theory, apart from discarding numerous paths
which did not work. He explored a variety of approaches:
“semivectors” (which turned out just to be a reformulation
of spinors); five-dimensional models including a cylindrically
compactified dimension based on Kaluza-Klein theory;
and attempts to deduce the properties of particles and their
quantum behaviour from nonlinear continuum field theories.
In seeking to unify electromagnetism and gravity,
he ignored the strong and weak nuclear forces which had been discovered
over the years and merited being included in any grand scheme of
unification. In the years after World War II, many physicists ceased
to worry about the meaning of quantum mechanics and the seemingly
inherent randomness in its predictions which so distressed Einstein, and
adopted a “shut up and calculate” approach as their
computations were confirmed to ever greater precision by experiments.
So great was the respect for Einstein's achievements that only rarely
was a disparaging word said about his work on unified field theories,
but toward the end of his life it was outside the mainstream of
theoretical physics, which had moved on to elaboration of quantum
theory and making quantum theory compatible with special relativity.
It would be a decade after Einstein's death before astronomical
discoveries would make general relativity once again a frontier in physics.
What can we learn from the latter half of Einstein's life and his
pursuit of unification? The frontier of physics today remains
unification among the forces and particles we have discovered. Now we
have three forces to unify (counting electromagnetism and the weak
nuclear force as already unified in the electroweak force), plus two
seemingly incompatible kinds of particles: bosons (carriers of force)
and fermions (what stuff is made of). Six decades (to the day) after
the death of Einstein, unification of gravity and the other forces
remains as elusive as when he first attempted it.
It is a noble task to try to unify disparate facts and theories into a
common whole. Much of our progress in the age of science has come from
such unification. Einstein unified space and time; matter and energy;
acceleration and gravity; geometry and motion. We all benefit every
day from technologies dependent upon these fundamental discoveries.
He spent the last forty years of his life seeking the next grand
unification. He never found it. For this effort we should applaud him.
I must remark upon how absurd the price of this book is. At Amazon as of this writing,
the hardcover is US$ 102.91 and the
Kindle edition is US$ 88. Eighty-eight Yankee dollars
for a 224 page book which is ranked #739,058 in the Kindle store?