Philosophy

Aratus of Soli. Phænomena. Edited, with introduction, translation, and commentary by Douglas Kidd. Cambridge: Cambridge University Press, [c. 275 B.C.] 1997. ISBN 0-521-58230-X.

September 2001 Permalink

Arkes, Hadley. Natural Rights and the Right to Choose. Cambridge: Cambridge University Press, 2002. ISBN 0-521-81218-6.

June 2003 Permalink

Awret, Uziel, ed. The Singularity. Exeter, UK: Imprint Academic, 2016. ISBN 978-1-84540-907-4.
For more than half a century, the prospect of a technological singularity has been part of the intellectual landscape of those envisioning the future. In 1965, in a paper titled “Speculations Concerning the First Ultraintelligent Machine”, statistician I. J. Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, mostly because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment.

Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.

  1. Progress in computing comes to a halt before reaching parity with human brain power, due to technological limits, economics (inability to afford the new technologies required, or lack of applications to fund the intermediate steps), or intervention by authority (for example, regulation motivated by a desire to avoid the risks and displacement due to super-human intelligence).
  2. Computing continues to advance, but we find that the human brain is either far more complicated than we believed it to be, or that something is going on in there which cannot be modelled or simulated by a deterministic computational process. The goal of human-level artificial intelligence recedes into the distant future.
  3. Blooie! Human-level machine intelligence is achieved, successive generations of machine intelligences run away to approach the physical limits of computation, and before long machine intelligence exceeds that of humans to the degree that humans surpass the intelligence of mice (or maybe insects).

Now, the thing about this is that many people will dismiss such speculation as science fiction having nothing to do with the “real world” they inhabit. But there's no more conservative form of forecasting than observing a trend which has been in existence for a long time (in the case of growth in computing power, more than a century, spanning multiple generations of very different hardware and technologies), extrapolating it into the future, and then asking, “What happens then?” When you go through this exercise and an answer pops out which seems to indicate that within the lives of many people now living, an event completely unprecedented in the history of our species—the emergence of an intelligence which far surpasses that of humans—might happen, the prospects and consequences bear some serious consideration.
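
The “straight lines on semi-log paper” exercise is, at bottom, a couple of lines of arithmetic. Here is a minimal sketch of it; the baseline, doubling time, and brain-equivalence figure are all assumptions chosen only to illustrate how the calculation works, not endorsed estimates, and plausible alternative choices shift the answer by decades.

```python
# Toy extrapolation of exponential growth in computing per dollar.
# All the constants below are illustrative assumptions, not measurements.
import math

flops_per_dollar_2000 = 1e6    # assumed baseline: ~10^6 FLOPS per dollar in 2000
doubling_time_years = 1.5      # assumed doubling period for compute per dollar
brain_flops = 1e16             # one (controversial) brain-equivalence estimate
budget_dollars = 1000          # "personal computer" price point

target = brain_flops / budget_dollars                  # FLOPS per dollar required
doublings = math.log2(target / flops_per_dollar_2000)  # doublings needed to reach it
crossover_year = 2000 + doublings * doubling_time_years
print(f"Crossover around {crossover_year:.0f}")        # ~2035 with these assumptions
```

The point of the exercise is not the particular year it spits out, but that for any fixed threshold and any sustained doubling time the crossover arrives within decades rather than centuries.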

The present book, based upon two special issues of the Journal of Consciousness Studies, attempts to examine the probability, nature, and consequences of a singularity from a variety of intellectual disciplines and viewpoints. The volume begins with an essay by philosopher David Chalmers originally published in 2010: “The Singularity: A Philosophical Analysis”, which attempts to trace various paths to a singularity and evaluate their probability. Chalmers does not attempt to estimate the time at which a singularity may occur—he argues that if it happens any time within the next few centuries, it will be an epochal event in human history which is worth thinking about today. Chalmers contends that the argument for artificial intelligence (AI) is robust because there appear to be multiple paths by which we could get there, and hence AI does not depend upon a fragile chain of technological assumptions which might break at any point in the future. We could, for example, continue to increase the performance and storage capacity of our computers, to such an extent that the “deep learning” techniques already used in computing applications, combined with access to a vast amount of digital data on the Internet, may cross the line of human intelligence. Or, we may continue our progress in reverse-engineering the microstructure of the human brain and apply our ever-growing computing power to emulating it at a low level (this scenario is discussed in detail in Robin Hanson's The Age of Em [September 2016]). Or, since human intelligence was produced by the process of evolution, we might set our supercomputers to simulate evolution itself (which we're already doing to some extent with genetic algorithms) in order to evolve super-human artificial intelligence (not only would computer-simulated evolution run much faster than biological evolution, it would not be random, but rather directed toward desired results, much like selective breeding of plants or livestock).

Regardless of the path or paths taken, the outcome will be one of the three discussed above, which reduce to two cases: either a singularity or no singularity. Assume, arguendo, that the singularity occurs, whether before 2050 as some optimists project or many decades later. What will it be like? Will it be good or bad? Chalmers writes,

I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or to hold it in what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment?

What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself? And what, if anything, would constrain the qualia of a machine?

Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry?

Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact of its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what?

Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced? After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter?

Deep questions, these. The book presents Chalmers' paper as a “target essay”, then invites contributors, in twenty-six chapters, to discuss the issues it raises. A concluding essay by Chalmers replies to these contributions and defends his arguments against their authors' objections. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”.

Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing since societies became sufficiently wealthy to indulge them, without any notable resolution in more than two millennia, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't. They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments—they're a threat to the job security of those who make a living disputing questions their predecessors have been puzzling over at least since Athens.

Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument with evidence that simulating the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future, even if computing power continues to grow at the rate of the last century. There is even a case made that, if a singularity is feasible, it is overwhelmingly probable that we are living not in a top-level physical universe but in a simulation run by post-singularity super-intelligences, who may be motivated to turn off our simulation before we reach our own singularity and threaten them.

This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:

If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный [uncultured] and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.

March 2017 Permalink

Barrow, John D. The Infinite Book. New York: Vintage Books, 2005. ISBN 1-4000-3224-5.
Don't panic—despite the title, this book is only 330 pages! The author having already written an entire book about nothing (The Book of Nothing, May 2001), I suppose it's only natural that he would take on the other end of the scale. Unlike Rudy Rucker's Infinity and the Mind, long the standard popular work on the topic, Barrow spends only about half of the book on the mathematics of infinity. Philosophical, metaphysical, and theological views of the infinite in a variety of cultures are discussed, as well as the history of the infinite in mathematics, including a biographical portrait of the ultimately tragic life of Georg Cantor. The physics of an infinite universe (and whether we can ever determine if our own universe is infinite), the paradoxes of an infinite number of identical copies of ourselves necessarily existing in an infinite universe, the possibility of machines which perform an infinite number of tasks in finite time, whether we're living in a simulation (and how we might discover we are), and the practical and moral consequences of immortality and time travel are also explored.

Mathematicians and scientists have traditionally been very wary of the infinite (indeed, the appearance of infinities is considered an indication of the limitations of theories in modern physics), and Barrow presents any number of paradoxes which illustrate that, as he titles chapter four, “infinity is not a big number”: it is fundamentally different and requires a distinct kind of intuition if nonsensical results are to be avoided. One of the most delightful examples is Zhihong Xia's five-body configuration of point masses which, under Newtonian gravitation, expands to infinite size in finite time. (Don't worry: the finite speed of light, formation of an horizon if two bodies approach too closely, and the emission of gravitational radiation keep this from working in the relativistic universe we inhabit. As the author says [p. 236], “Black holes might seem bad but, like growing old, they are really not so bad when you consider the alternatives.”)

This is an enjoyable and enlightening read, but I found it didn't come up to the standard set by The Book of Nothing and The Constants of Nature (June 2003). Like the latter book, this one is set in a hideously inappropriate font for a work on mathematics: the digit “1” is almost indistinguishable from the letter “I”. If you look very closely at the top serif on the “1” you'll note that it rises toward the right while the “I” has a horizontal top serif. But why go to the trouble of distinguishing the two characters and then making the two glyphs so nearly identical you can't tell them apart without a magnifying glass? In addition, the horizontal bar of the plus sign doesn't line up with the minus sign, which makes equations look awful.

This isn't the author's only work on infinity; he's also written a stage play, Infinities, which was performed in Milan in 2002 and 2003.

September 2007 Permalink

Berman, Morris. The Twilight of American Culture. New York: W. W. Norton, 2000. ISBN 0-393-32169-X.

April 2003 Permalink

Bloom, Allan. The Closing of the American Mind. New York: Touchstone Books, 1988. ISBN 0-671-65715-1.

June 2001 Permalink

Bostrom, Nick. Superintelligence. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.

Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities to an extent greater than that by which a Boeing 747 exceeds the capabilities of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.

Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.

As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet by self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.

As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, rather than a self-aware being with its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

September 2014 Permalink

Cahill, Thomas. Sailing the Wine-Dark Sea: Why the Greeks Matter. New York: Doubleday, 2003. ISBN 0-385-49553-6.

November 2003 Permalink

Carr, Bernard, ed. Universe or Multiverse? Cambridge: Cambridge University Press, 2007. ISBN 0-521-84841-5.
Before embarking upon his ultimately successful quest to discover the laws of planetary motion, Johannes Kepler tried to explain the sizes of the orbits of the planets from first principles: developing a mathematical model of the orbits based upon nested Platonic solids. Since, at the time, the solar system was believed by most to be the entire universe (with the fixed stars on a sphere surrounding it), it seemed plausible that the dimensions of the solar system would be fixed by fundamental principles of science and mathematics. Even though he eventually rejected his model as inaccurate, he never completely abandoned it—it was for later generations of astronomers to conclude that there is nothing fundamental whatsoever about the structure of the solar system: it is simply a contingent product of the history of its condensation from the solar nebula, and could have been entirely different. With the discovery of planets around other stars in the late twentieth century, we now know that not only do planetary systems vary widely, many are substantially more weird than most astronomers or even science fiction writers would have guessed.

Since the completion of the Standard Model of particle physics in the 1970s, a major goal of theoretical physicists has been to derive, from first principles, the values of the more than twenty-five “free parameters” of the Standard Model (such as the masses of particles, relative strengths of forces, and mixing angles). At present, these values have to be measured experimentally and put into the theory “by hand”, and there is no accepted physical explanation for why they have the values they do. Further, many of these values appear to be “fine-tuned” to allow the existence of life in the universe (or at least, life which resembles ourselves)—a tiny change, for example, in the mass ratio of the up and down quarks and the electron would result in a universe with no heavy elements or chemistry; it's hard to imagine any form of life which could be built out of just protons or neutrons. The emergence of a Standard Model of cosmology has only deepened the mystery, adding additional apparently fine-tunings to the list. Most stunning is the cosmological constant, which appears to have a nonzero value which is 124 orders of magnitude smaller than predicted from a straightforward calculation from quantum physics.

One might take these fine-tunings as evidence of a benevolent Creator (which is, indeed, discussed in chapters 25 and 26 of this book), or of our living in a simulation crafted by a clever programmer intent on optimising its complexity and degree of interestingness (chapter 27). But most physicists shy away from such deus ex machina and “we is in machina” explanations and seek purely physical reasons for the values of the parameters we measure.

Now let's return for a moment to Kepler's attempt to derive the orbits of the planets from pure geometry. The orbit of the Earth appears, in fact, fine-tuned to permit the existence of life. Were it more elliptical, or substantially closer to or farther from the Sun, persistent liquid water on the surface would not exist, as seems necessary for terrestrial life. The apparent fine-tuning can be explained, however, by the high probability that the galaxy contains a multitude of planetary systems of every possible variety, and such a large ensemble is almost certain to contain a subset (perhaps small, but not void) in which an earthlike planet is in a stable orbit within the habitable zone of its star. Since we can only have evolved and exist in such an environment, we should not be surprised to find ourselves living on one of these rare planets, even though such environments represent an infinitesimal fraction of the volume of the galaxy and universe.

As efforts to explain the particle physics and cosmological parameters have proved frustrating, and theoretical investigations into cosmic inflation and string theory have suggested that the values of the parameters may have simply been chosen at random by some process, theorists have increasingly been tempted to retrace the footsteps of Kepler and step back from trying to explain the values we observe, and instead view them, like the masses and the orbits of the planets, as the result of an historical process which could have produced very different results. The apparent fine-tuning for life is like the properties of the Earth's orbit—we can only measure the parameters of a universe which permits us to exist! If they didn't, we wouldn't be here to do the measuring.

But note that like the parallel argument for the fine-tuning of the orbit of the Earth, this only makes sense if there are a multitude of actually existing universes with different random settings of the parameters, just as only a large ensemble of planetary systems can contain a few like the one in which we find ourselves. This means that what we think of as our universe (everything we can observe or potentially observe within the Hubble volume) is just one domain in a vastly larger “multiverse”, most or all of which may remain forever beyond the scope of scientific investigation.
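
The selection effect at the heart of this argument is easy to demonstrate with a toy simulation. The sketch below assumes a single parameter drawn uniformly at random for each “universe” and an arbitrary cutoff below which observers can exist; both are invented for illustration and have nothing to do with real cosmological parameters.

```python
# Toy anthropic selection: observers arise only in "universes" whose parameter
# happens to fall in a narrow life-permitting range, so every observer measures
# an apparently fine-tuned value even though nothing forces it to be small.
import random

random.seed(1)
N = 1_000_000
life_permitting_cutoff = 1e-4    # invented threshold below which observers can exist

ensemble = [random.random() for _ in range(N)]                 # one parameter per universe
observed = [x for x in ensemble if x < life_permitting_cutoff]

print(f"mean over all universes:       {sum(ensemble) / N:.3f}")
print(f"mean over observed universes:  {sum(observed) / len(observed):.1e}")
print(f"fraction permitting observers: {len(observed) / N:.1e}")
```

Averaged over the whole ensemble the parameter is unremarkable, but conditioned on anyone being present to measure it, it always looks exquisitely fine-tuned, which is all the anthropic argument claims.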

Now such a breathtaking concept provides plenty for physicists, cosmologists, philosophers, and theologians to chew upon, and macerate it they do in this thick (517 page), heavy (1.2 kg), and expensive (USD 85) volume, which is drawn from papers presented at conferences held between 2001 and 2005. Contributors include two Nobel laureates (Steven Weinberg and Frank Wilczek), and just about everybody else prominent in the multiverse debate, including Martin Rees, Stephen Hawking, Max Tegmark, Andrei Linde, Alexander Vilenkin, Renata Kallosh, Leonard Susskind, James Hartle, Brandon Carter, Lee Smolin, George Ellis, Nick Bostrom, John Barrow, Paul Davies, and many more. The editor's goal was that the papers be written for the intelligent layman: like articles in the pre-dumbed-down Scientific American or “front of book” material in Nature or Science. In fact, the chapters vary widely in technical detail and difficulty; if you don't follow this stuff closely, your eyes may glaze over in some of the more equation-rich chapters.

This book is far from a cheering section for multiverse theories: both sides are presented and, in fact, the longest chapter is that of Lee Smolin, which deems the anthropic principle and anthropic arguments entirely nonscientific. Many of these papers are available in preliminary form for free on the arXiv preprint server; if you can obtain a list of the chapter titles and authors from the book, you can read most of the content for free. Renata Kallosh's chapter contains an excellent example of why one shouldn't blindly accept the recommendations of a spelling checker. On p. 205, she writes “…the gaugino condensate looks like a fractional instant on effect…”—that's supposed to be “instanton”!

August 2007 Permalink

D'Souza, Dinesh. Life After Death: The Evidence. Washington: Regnery Publishing, 2009. ISBN 978-1-59698-099-0.
Ever since the Enlightenment, and to an increasing extent today, there has been a curious disconnect between the intellectual élite and the population at large. The overwhelming majority of human beings who have ever lived believed in their survival, in one form or another, after death, while materialists, reductionists, and atheists argue that this is nothing but wishful thinking, that there is no physical mechanism by which consciousness could survive the dissolution of the neural substrate in which it is instantiated, and point to the lack of any evidence for survival after death. And yet a large majority of people alive today beg to differ. As atheist H. G. Wells put it in a very different context, they sense that “Worlds may freeze and suns may perish, but there stirs something within us now that can never die again.” Who is right?

In this slim (256 page) volume, the author examines the scientific, philosophical, historical, and moral evidence for and implications of survival after death. He explicitly excludes religious revelation (except in the final chapter, where some evidence he cites as historical may be deemed by others to be argument from scriptural authority). Having largely excluded religion from the argument, he explores the near-universality of belief in life after death across religious traditions and notes the common threads uniting them.

But traditions and beliefs do not in any way address the actual question: does our individual consciousness, in some manner, survive the death of our bodies? While materialists discard such a notion as absurd, the author argues that there is nothing in our present-day understanding of physics, evolutionary biology, or neuroscience which excludes this possibility. In fact, the complete failure so far to understand the physical basis of consciousness can be taken as evidence that it may be a phenomenon independent of its physical instantiation: structured information which could conceivably transcend the hardware on which it currently operates.

Computer users think nothing these days of backing up their old computer, loading the backups onto a new machine (which may use a different processor and operating system), and with a little upward compatibility magic, having everything work pretty much as before. Do your applications and documents from the old computer die when you turn it off for the last time? Are they reincarnated when you load them into the replacement machine? Will they live forever as long as you continue to transfer them to successive machines, or on backup tapes? This may seem a silly analogy, but consider that materialists consider your consciousness and self to be nothing other than a pattern of information evolving in a certain way according to the rules of neural computation. Do the thought experiment: suppose nanotechnological robots replaced your meat neurons one by one with mechanical analogues with the same external electrochemical interface. Eventually your brain would be entirely different physically, but would your consciousness change at all? Why? If it's just a bunch of components, then replacing protein components with silicon (or whatever) components which work in the same way should make no difference at all, shouldn't it?

A large part of what living organisms do is sense their external environment and interact with it. Unicellular organisms swim along the gradient of increasing nutrient concentration. Other than autonomic internal functions of which we are aware only when they misbehave, humans largely experience the world through our sensory organs, and through the internal sense of self which is our consciousness. Is it not possible that the latter is much like the former—something external to the meatware of our body which is picked up by a sensory organ, in this case the neural networks of the brain?

If this be the case, in the same sense that the external world does not cease to exist when our eyes, ears, olfactory, and tactile sensations fail at the time of death or due to injury, is it not plausible that dissolution of the brain, which receives and interacts with our external consciousness, need not mean the end of that incorporeal being?

Now, this is pretty out-there stuff, which might cause the author to run from the room in horror should he hear me expound it. Fine: this humble book reviewer spent a substantial amount of time contributing to a project seeking evidence for the existence of global, distributed consciousness, and has concluded that such has been demonstrated to exist by the standards accepted by most of the “hard” sciences. But let's get back to the book itself.

One thing you won't find here is evidence based upon hauntings, spiritualism, or other supposed contact with the dead (although I must admit, Chicago election returns are awfully persuasive as to the ability of the dead to intervene in affairs of the living). The author does explore near death experiences, noting their universality across very different cultures and religious traditions, and evidence for reincarnation, which he concludes is unpersuasive (but see the research of Ian Stevenson and decide for yourself). The exploration of a physical basis for the existence of other worlds (for example, Heaven and Hell) cites the “multiverse” paradigm, and invites sceptics of that “theory of anything” to denounce it as “just as plausible as life after death”—works for me.

Excuse me for taking off on a tangent here, but it is, in a formal sense. If you believe in an infinite chaotically inflating universe with random initial conditions, or in Many Worlds in One (October 2006), then Heaven and Hell explicitly exist, not only once in the multiverse, but an infinity of times. For every moment in your life at which you may have ceased to exist, there is a universe somewhere out there, either elsewhere in the multiverse or in some distant region far from our cosmic horizon in this universe, where there's an observable universe identical to our own up to that instant which diverges thence into one which grants you eternal reward or torment for your actions. In an infinite universe with random initial conditions, every possibility occurs an infinite number of times. Think about it, or better yet, don't.

The chapter on morality is particularly challenging and enlightening. Every human society has had a code of morality (different in the details, but very much the same at the core), and most of these societies have based their moral code upon a belief in cosmic justice in an afterlife. It's self-evident that bad guys sometimes win at the expense of good guys in this life, but belief that the score will be settled in the long run has provided a powerful incentive for mortals to conform to the norms which their societies prescribe as good. (I've deliberately written the last sentence in the post-modern idiom; I consider many moral norms absolutely good or bad based on gigayears of evolutionary history, but I needn't introduce that into evidence to prove my case, so I won't.) From an evolutionary standpoint, morality is a survival trait of the family or band: the hunter who shares the kill with his family and tribe will have more descendants than the gluttonous loner. A tribe which produces males who sacrifice themselves to defend their women and children will produce more offspring than the tribe whose males value only their own individual survival.

Morality, then, is, at the group level, a selective trait, and consequently it's no surprise that it's universal among human societies. But if, as serious atheists such as Bertrand Russell (as opposed to the lower-grade atheists we get today) worried, morality has been linked to religion and belief in an afterlife in every single human society to date, then how is morality (a survival characteristic) to be maintained in the absence of these beliefs? And if evolution has selected us to believe in the afterlife for the behavioural advantages that belief confers in the here and now, then how successful will the atheists be in extinguishing a belief which has conferred a behavioural selective advantage upon thousands of generations of our ancestors? And how will societies which jettison such belief fare in competition with those which keep it alive?

I could write much more about this book, but then you'd have to read a review even longer than the book, so I'll spare you. If you're interested in this topic (as you'll probably eventually be as you get closer to the checkered flag), this is an excellent introduction, and the end notes provide a wealth of suggestions for additional reading. I doubt this book will shake the convictions of either the confirmed believers or the stalwart sceptics, but it will provide much for both to think about, and perhaps motivate some folks whose approach is “I'll deal with that when the time comes” (which has been pretty much my own) to consider the consequences of what may come next.

February 2010 Permalink

Deutsch, David. The Beginning of Infinity. New York: Viking, 2011. ISBN 978-0-670-02275-5.
Were it possible to communicate with the shades of departed geniuses, I suspect Richard Feynman would be dismayed at the prospect of a distinguished theoretical physicist committing phil-oss-o-phy in public, while Karl Popper would be pumping his fist in exultation and shouting “Yes!”. This is a challenging book and, at almost 500 pages in the print edition, a rather long one, but it is a masterpiece well worthy of the investment in reading it, and then, after an interval to let its implications sink in, reading it again because there is so much here that you're unlikely to appreciate it all in a single reading.

The author attempts nothing less ambitious than a general theory of the creation of knowledge and its implications for the future of the universe. (In what follows, I shall take a different approach than the author in explaining the argument, but I think we arrive at the same place.) In all human endeavours: science, art, morals, politics and governance, technology, economics, etc., what we ultimately seek are good explanations—models which allow us to explain a complex objective reality and make predictions about its behaviour. The author rejects the arguments of the relativists and social constructionists that no such objective reality exists, as well as those of empiricists and advocates of inductive reasoning that our models come purely from observation of events. Instead, he contends that explanations come from conjectures which originate in the human mind (often sparked by experience), which are then tested against objective reality and alternative conjectures, in a process which (in the absence of constraints which obstruct the process, such as reliance on received wisdom instead of free inquiry) inevitably converges upon explanations which are minimal and robust in the sense that almost any small change destroys their predictive power.

For example, if I were so inclined, I could invent a myth involving gods and goddesses and their conflicting wills and goals which would exactly replicate the results of Newton's laws of mechanics. But this would be a bad explanation because the next person could come up with their own myth involving an entirely different pantheon which produced the same results. All of the excess baggage contributes nothing to the explanation, while there's no way you can simplify “F=ma” without breaking the entire structure.

And yet all of our explanations, however elegant and well-tested, are simply the best explanations we've found so far, and likely to be incomplete when we try to apply them to circumstances outside the experiences which motivated us to develop them. Newton's laws fail to describe the motion of objects at a substantial fraction of the speed of light, and it's evident from fundamental conflicts in their theoretical structure that our present theories of the very small (quantum mechanics) and the very large (general relativity) are inadequate to describe circumstances which obtained in the early universe and in gravitational collapse of massive objects.

What is going on here, contends Deutsch, is nothing other than evolution, with the creation of conjectures within the human mind serving as variation, and criticism of them based on confrontation with reality performing selection. Just as biological evolution managed over four billion years or so to transform the ancestral cell into human brains capable of comprehending structures from subatomic particles to cosmology, the spark which was ignited in the brains of our ancestors is able, in principle, to explain everything, either by persistence in the process of conjecture and criticism (variation and selection), or by building the tools (scientific instruments, computers, and eventually perhaps our own intellectually transcendent descendants) necessary to do so. The emergence of the human brain was a phase transition in the history of the Earth and, perhaps, the universe. Humans are universal explainers.

Let's consider the concept of universality. While precisely defined in computing, it occurs in many guises. For example, a phonetic alphabet (as opposed to a pictographic writing system) is capable of encoding all possible words made up of the repertoire of sounds it expresses, including those uninvented and never yet spoken. A positional number system can encode all possible numbers without the need to introduce new symbols for numbers larger or smaller than those encountered so far. The genetic code, discovered as best we can determine through a process of chemical evolution on the early Earth, is universal: the same code, with a different string of nucleotides, can encode both brewer's yeast and Beethoven. Less than five million years ago the human lineage diverged from the common ancestor of present-day humans and chimpanzees, and between that time and today the human mind made the “leap to universality”, with the capacity to generate explanations, test them against reality, transmit them to other humans as memes, and store them extrasomatically as oral legends and, eventually, written records.

Universality changes all the rules and potential outcomes. It is a singularity in the mathematical sense that one cannot predict the future subsequent to its emergence from events preceding it. For example, an extraterrestrial chemist monitoring Earth prior to the emergence of the first replicator could have made excellent predictions about the chemical composition of the oceans and its interaction with the energy and material flows in the environment, but at the moment that first replicating cell appeared, the potential for things the meticulous chemist wouldn't remotely imagine came into existence: stromatolites, an oxygen-rich atmosphere, metazoans, flowers, beetles, dinosaurs, boot prints on the Moon, and the designated hitter rule. So it is with the phase transition to universality of the human mind. It is now impossible to predict based on any model not taking that singularity into account the fate of the Earth, the Sun, the solar system, or the galaxy. Barring societal collapse, it appears probable that within this century individual wealthy humans (and a few years thereafter, everybody) will have the ability to launch self-replicating von Neumann probes into the galaxy with the potential of remaking it in their own image in an eyeblink compared to the age of the universe (unless they encounter probes launched by another planet full of ambitious universal explainers, which makes for another whole set of plot lines).

But universality and evolutionary epistemology have implications much closer to home and the present. Ever since the Enlightenment, Western culture has developed and refined the scientific method, the best embodiment of the paradigm of conjecture and criticism in the human experience. And yet, at the same time, the institutions of governance of our societies have been largely variations on the theme of “who shall rule?”, and the moral underpinnings of our societies have either been based upon received wisdom from sacred texts, tradition, or the abdication of judgement inherent in multicultural relativism. The author argues that in all of these “non-scientific” domains objective truth exists just as it does in mechanics and chemistry, and that we can discover it and ever improve our explanations of it by precisely the same process we use in science: conjecture and criticism. Perversely, many of the institutions we've created impede this process. Consider how various political systems value compromise. But if there is a right answer and a wrong answer, you don't get a better explanation by splitting the difference. It's as if, faced with a controversy between geocentric and heliocentric models of the solar system, you came up with a “compromise” that embodied the “best of both”. In fact, Tycho did precisely that, and it worked even worse than the alternatives. The value of democracy is not that it generates good policies—manifestly it doesn't—but rather that it provides the mechanism for getting rid of bad policies and those who advocate them and eventually selecting the least bad policies based upon present knowledge, always subject to revision based on what we'll discover tomorrow.

The Enlightenment may also be thought of as a singularity. While there have been brief episodes in human history where our powers as universal explainers have been unleashed (Athens and Florence come to mind, although there have doubtless been a multitude of others throughout history which have left us no record—it is tragic to think of how many Galileos were born and died in static tribal societies), our post-Enlightenment world is the only instance which has lasted for centuries and encompassed a large part of the globe. The normal state of human civilisation seems to be a static or closed society dominated by tradition and taboos which extinguish the inborn spark of universal explanation which triggers the runaway exponential growth of knowledge and power. The dynamic (or open) society is a precious thing which has brought unprecedented prosperity to the globe and stands on the threshold of remaking the universe as we wish it to be.

If this spark be not snuffed by ignorance, nihilism, adherence to tradition and authority, and longing for the closure of some final utopia, however confining, but instead lights the way to a boundless frontier of uncertainty and new problems to comprehend and solve, then David Deutsch will be celebrated as one of the visionaries who pointed the way to this optimistic destiny of our species and its inheritors.

September 2011 Permalink

Edmonds, David and John Eidinow. Wittgenstein's Poker. London: Faber and Faber, 2001. ISBN 0-571-20909-2.
A U.S. edition of this book, ISBN 0-06-093664-9, was published in September 2002.

December 2002 Permalink

Frankfurt, Harry G. On Bullshit. Princeton: Princeton University Press, 2005. ISBN 0-691-12294-6.
This tiny book (just 67 9½×15 cm pages—I'd estimate about 7300 words) illustrates that there is no topic, however mundane or vulgar, which a first-rate philosopher cannot make so complicated and abstruse that it appears profound. The author, a professor emeritus of philosophy at Princeton University, first published this essay in 1986 in the Raritan Review. In it, he tackles the momentous conundrum of what distinguishes bullshit from lies. Citing authorities including Wittgenstein and Saint Augustine, he concludes that while the liar is ultimately grounded in the truth (being aware that what he is saying is counterfactual and crafting a lie to make the person to whom he tells it believe that), the bullshitter is entirely decoupled (or, perhaps in his own estimation, liberated) from truth and falsehood, and is simply saying whatever it takes to have the desired effect upon the audience.

Throughout, it's obvious that we're in the presence of a phil-oss-o-pher doing phil-oss-o-phy right out in the open. For example, on p. 33 we have:

It is in this sense that Pascal's (Fania Pascal, an acquaintance of Wittgenstein in the 1930s, not Blaise—JW) statement is unconnected to a concern with the truth; she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true.
(The Punctuator applauds the use of colons and semicolons in the passage quoted above!)

All of this is fine, but it seems to me that the author misses an important aspect of bullshit: the fact that in many cases—perhaps the overwhelming majority—the bullshittee is perfectly aware of being bullshitted by the bullshitter, and the bullshitter is conversely aware that the figurative bovid excrement emitted is being dismissed as such by those whose ears it befouls. Now, this isn't always the case: sometimes you find yourself in a tight situation faced with a difficult question and manage to bullshit your way through, but in the context of a “bull session”, only the most naïve would assume that what was said was sincere and indicative of the participants' true beliefs: the author cites bull sessions as a venue in which people can try on beliefs other than their own in a non-threatening environment.

July 2007 Permalink

Hicks, Stephen R. C. Explaining Postmodernism. Phoenix: Scholargy, 2004. ISBN 1-59247-642-2.
Starting more than ten years ago, with the mass pile-on to the Internet and the advent of sites with open content and comment posting, I have been puzzled by the extent of the anger, hatred, and nihilism which is regularly vented in such fora. Of all the people of my generation with whom I have associated over the decades (excepting, of course, a few genuine nut cases), I barely recall anybody who seemed to express such an intensely negative outlook on life and the world, or who was so instantly ready to impute “evil” (a word used incessantly for the slightest difference of opinion) to those with opposing views, or to inject ad hominem arguments or obscenity into discussions of fact and opinion. Further, this was not at all confined to traditionally polarising topics; in fact, having paid little attention to most of the hot-button issues in the 1990s, I first noticed it in nerdy discussions of topics such as the merits of different microprocessors, operating systems, and programming languages—matters which would seem unlikely to inspire partisans on various sides to such passion and vituperation, and which in my experience had only rarely done so in the past. After a while, I began to notice one fairly consistent pattern: the most inflamed in these discussions, those whose venting seemed entirely disproportionate to the stakes in the argument, were almost entirely those who came of age in the mid-1970s or later; before the year 2000 I had begun to call them “hate kiddies”, but I still didn't understand why they were that way. One can speak of “the passion of youth”, of course, which is a real phenomenon, but this seemed something entirely different and off the scale of what I recall my contemporaries expressing in similar debates when we were of comparable age.

This has been one of those mysteries that's puzzled me for some years, as the phenomenon itself seemed to be getting worse, not better, and with little evidence that age and experience cause the original hate kiddies to grow out of their youthful excess. Then along comes this book which, if it doesn't completely explain it, at least seems to point toward one of the proximate causes: the indoctrination in cultural relativist and “postmodern” ideology which began during the formative years of the hate kiddies and has now almost entirely pervaded academia apart from the physical sciences and engineering (particularly in the United States, whence most of the hate kiddies hail). In just two hundred pages of main text, the author traces the origins and development of what is now called postmodernism to the “counter-enlightenment” launched by Rousseau and Kant, developed by the German philosophers of the 18th and 19th centuries, then transplanted to the U.S. in the 20th. But the philosophical underpinnings of postmodernism, which are essentially an extreme relativism which goes as far as denying the existence of objective truth or the meaning of texts, don't explain the near monolithic adherence of its champions to the extreme collectivist political Left. You'd expect that philosophical relativism would lead its believers to conclude that all political tendencies were equally right or wrong, and that the correct political policy was as impossible to determine as ultimate scientific truth.

Looking at the philosophy espoused by postmodernists alongside the policy views they advocate and teach their students leads to the following contradictions, which are summarised on p. 184:

  • On the one hand, all truth is relative; on the other hand, postmodernism tells it like it really is.
  • On the one hand, all cultures are equally deserving of respect; on the other, Western culture is uniquely destructive and bad.
  • Values are subjective—but sexism and racism are really evil. (There's that word!—JW)
  • Technology is bad and destructive—and it is unfair that some people have more technology than others.
  • Tolerance is good and dominance is bad—but when postmodernists come to power, political correctness follows.

The author concludes that it is impossible to explain these and other apparent paradoxes and the uniformly Left politics of postmodernists without understanding the history and the failures of collectivist political movements dating from Rousseau's time. On p. 173 is an absolutely wonderful chart which traces the mutation and consistent failure of socialism in its various guises from Marx to the present. With each failure, the response has been not to question the premises of collectivism itself, but rather to redefine its justification, means, and end. As failure has followed failure, postmodernism represents an abject retreat from reason and objectivity itself, either using the philosophy in a Machiavellian way to promote collectivist ideology, or to urge acceptance of the contradictions themselves in the hope of creating what Nietzsche called ressentiment, which leads directly to the “everybody is evil”, “nothing works”, and “truth is unknowable” irrationalism and nihilism which renders those who believe it pliable in the hands of agenda-driven manipulators.

Based on some of the source citations and the fact that this work was supported in part by The Objectivist Center, the author appears to be a disciple of Ayn Rand, which is confirmed by his Web site. Although the author's commitment to rationalism and individualism, and disdain for their adversaries, permeates the argument, the more peculiar and eccentric aspects of the Objectivist creed are absent. For its size, insight, and crystal clear reasoning and exposition, I know of no better introduction to how postmodernism came to be, and how it is being used to advance a collectivist ideology which has been thoroughly discredited by sordid experience. And I think I'm beginning to comprehend how the hate kiddies got that way.

May 2007 Permalink

Hitchens, Christopher. Why Orwell Matters. New York: Basic Books, 2002. ISBN 0-465-03049-1.

December 2002 Permalink

Hoover, Herbert. American Individualism. Introduction by George H. Nash. Stanford, CA: Hoover Institution Press, [1922] 2016. ISBN 978-0-8179-2015-9.
After the end of World War I, Herbert Hoover and the American Relief Administration he headed provided food aid to the devastated nations of Central Europe, saving millions from famine. Upon returning to the United States in the fall of 1919, he was dismayed by what he perceived to be an inoculation into his own country of the diseases of socialism, autocracy, and other forms of collectivism, whose pernicious consequences he had observed first-hand in Europe and at the peace conference after the end of the conflict. In 1920, he wrote, “Every wind that blows carries to our shores an infection of social disease from this great ferment; every convulsion there has an economic reaction upon our own people.”

Hoover sensed that in the aftermath of the war, which had left some collectivists nostalgic for the national mobilisation and top-down direction of the economy under “war socialism”, and amid growing domestic unrest (steel and police strikes, lynchings and race riots, and bombing attacks by anarchists), it was necessary to articulate the principles upon which American society and its government were founded. These principles, he believed, were distinct from those of the Old World, being the deliberate creation of people who had come to the new continent expressly to escape the ruinous doctrines of the societies they left behind.

After assuming the post of Secretary of Commerce in the newly inaugurated Harding administration in 1921, and faced with massive coal and railroad strikes which threatened the economy, Hoover felt a new urgency to reassert his vision of American principles. In December 1922, American Individualism was published. The short book (at 72 pages, more of a long pamphlet) was based upon a magazine article he had published the previous March in World's Work.

Hoover argues that five or six philosophies of social and economic organisation are contending for dominance: among them Autocracy, Socialism, Syndicalism, Communism, and Capitalism. Against these he contrasts American Individualism, which he believes developed among a population freed by emigration and distance from shackles of the past such as divine right monarchy, hereditary aristocracy, and static social classes. These people became individuals, acting on their own initiative and in concert with one another without top-down direction because they had to: with a small and hands-off government, it was the only way to get anything done. Hoover writes,

Forty years ago [in the 1880s] the contact of the individual with the Government had its largest expression in the sheriff or policeman, and in debates over political equality. In those happy days the Government offered but small interference with the economic life of the citizen.

But with the growth of cities, industrialisation, and large enterprises such as railroads and steel manufacturing, a threat to this frontier individualism emerged: the reduction of workers to a proletariat or serfdom due to the imbalance between their power as individuals and the huge companies that employed them. It is there that government action was required to protect the other component of American individualism: the belief in equality of opportunity. Hoover believes, and supports, intervention in the economy to prevent the concentration of economic power in the hands of a few, and to guard, through taxation and other means, against the emergence of a hereditary aristocracy of wealth. Yet this poses its own risks,

But with the vast development of industry and the train of regulating functions of the national and municipal government that followed from it; with the recent vast increase in taxation due to the war;—the Government has become through its relations to economic life the most potent force for maintenance or destruction of our American individualism.

One of the challenges American society must face as it adapts is avoiding the risk of utopian ideologies imported from Europe seizing this power to try to remake the country and its people along other lines. Just ten years later, as Hoover's presidency gave way to the New Deal, this fearful prospect would become a reality.

Hoover examines the philosophical, spiritual, economic, and political aspects of this unique system of individual initiative tempered by constraints and regulation in the interest of protecting the equal opportunity of all citizens to rise as high as their talent and effort permit. Despite the problems cited by radicals bent on upending the society, he contends things are working pretty well. He cites “the one percent”: “Yet any analysis of the 105,000,000 of us would show that we harbor less than a million of either rich or impecunious loafers.” Well, the percentage of very rich seems about the same today, but after half a century of welfare programs which couldn't have been more effective in destroying the family and the initiative of those at the bottom of the economic ladder had that been their intent, and an education system of which a federal commission was to write in 1983, “If an unfriendly foreign power had attempted to impose on America …, we might well have viewed it as an act of war”, a nation with three times the population seems to have developed a much larger unemployable and dependent underclass.

Hoover also judges the American system to have performed well in achieving its goal of a classless society with upward mobility through merit. He observes, speaking of the Harding administration of which he is a member,

That our system has avoided the establishment and domination of class has a significant proof in the present Administration in Washington. Of the twelve men comprising the President, Vice-President, and Cabinet, nine have earned their own way in life without economic inheritance, and eight of them started with manual labor.

Let's see how that has held up, almost a century later. Taking the 17 people in equivalent positions at the end of the Obama administration in 2016 (President, Vice President, and heads of the 15 executive departments), we find that only 1 of the 17 inherited wealth (I'm inferring from the description of parents in their biographies) but that precisely zero had any experience with manual labour. If attending an Ivy League university can be taken as a modern badge of membership in a ruling class, 11 of the 17 (65%) meet this test (if you consider Stanford a member of an “extended Ivy League”, the figure rises to 70%).

Although published in a different century in a very different America, much of what Hoover wrote remains relevant today. Just as Hoover warned of bad ideas from Europe crossing the Atlantic and taking root in the United States, the Frankfurt School in Germany was laying the groundwork for the deconstruction of Western civilisation and individualism, and in the 1930s, its leaders would come to America to infect academia. As Hoover warned, “There is never danger from the radical himself until the structure and confidence of society has been undermined by the enthronement of destructive criticism.” Destructive criticism is precisely what these “critical theorists” specialised in, and today, in many parts of the humanities and social sciences, even in the most eminent institutions, the rot is so deep that they are essentially a write-off.

Undoing a century of bad ideas is not the work of a few years, but Hoover's optimistic and pragmatic view of the redeeming merit of individualism unleashed is a bracing antidote to the gloom one may feel when surveying the contemporary scene.

December 2016 Permalink

Lewis, C. S. The Abolition of Man. New York: HarperCollins, [1944] 1947. ISBN 0-06-065294-2.
This short book (or long essay—the main text is but 83 pages) is subtitled “Reflections on education with special reference to the teaching of English in the upper forms of schools” but, in fact, is much more: one of the pithiest and most eloquent defences of traditional values I recall having read. Writing in the final years of World War II, when moral relativism was just beginning to infiltrate the secondary school curriculum, he uses as the point of departure an English textbook he refers to as “The Green Book” (actually The Control of Language: A critical approach to reading and writing, by Alex King and Martin Ketley), which he dissects as attempting to “debunk” the development of a visceral sense of right and wrong in students in the guise of avoiding emotionalism and sentimentality.

From his description of “The Green Book”, it seems pretty mild compared to the postmodern, multicultural, and politically correct propaganda aimed at present-day students, but then perhaps it takes an observer with the acuity of a C. S. Lewis to detect the poison in such a dilute form. He also identifies the associated perversion of language which accompanies the subversion of values. On p. 28 is this brilliant observation, which I only began to notice myself more than sixty years after Lewis identified it. “To abstain from calling it good and to use, instead, such predicates as ‘necessary’, ‘progressive’, or ‘efficient’ would be a subterfuge. They could be forced by argument to answer the questions ‘necessary for what?’, ‘progressing toward what?’, ‘effecting what?’; in the last resort they would have to admit that some state of affairs was in their opinion good for its own sake.” But of course the “progressives” and champions of “efficiency” don't want you to spend too much time thinking about the end point of where they want to take you.

Although Lewis's Christianity informs much of his work, religion plays little part in this argument. He uses the Chinese word Tao (道) or “The Way” to describe what he believes to be a set of values shared, to some extent, by all successful civilisations, which must be transmitted to each successive generation if civilisation is to be preserved. To illustrate the universality of these principles, he includes a 19-page appendix listing the pillars of Natural Law, with illustrations taken from texts and verbal traditions of the Ancient Egyptian, Jewish, Old Norse, Babylonian, Hindu, Confucian, Greek, Roman, Christian, Anglo-Saxon, American Indian, and Australian Aborigine cultures. It seems like those bent on jettisoning these shared values are often motivated by disdain for the frequently-claimed divine origin of such codes of values. But their very universality suggests that, regardless of what myths cultures invent to package them, they represent an encoding of how human beings work and the distillation of millennia of often tragic trial-and-error experimentation in search of rules which allow members of our fractious species to live together and accomplish shared goals.

An on-line edition is available, although I doubt it is authorised, as the copyright for this work was last renewed in 1974.

May 2007 Permalink

Minogue, Kenneth. Alien Powers. New Brunswick, NJ: Transaction Publishers, [1985] 2007. ISBN 978-0-7658-0365-8.
No, this isn't a book about Roswell. Subtitled “The Pure Theory of Ideology”, it is a challenging philosophical exploration of ideology, ideological politics, and ideological arguments and strategies in academia and the public arena. By “pure theory”, the author means to explore what is common to all ideologies, regardless of their specifics. (I should note here, as does the author, that in sloppy contemporary discourse “ideology” is often used simply to denote a political viewpoint. In this work, the author restricts it to closed intellectual systems which ascribe a structural cause to events in the world, posit a mystification which prevents people from understanding what is revealed to the ideologue, and predict an inevitable historical momentum [“progress”] toward liberation from the unperceived oppression of the present.)

Despite the goal of seeking a pure theory, independent of any specific ideology, a great deal of time is necessarily spent on Marxism, since although the roots of modern ideology can be traced (like so many other pernicious things) to Rousseau and the French Revolution, it was Marx and Engels who elaborated the first complete ideological system, providing the intellectual framework for those that followed. Marxism, Fascism, Nazism, racism, nationalism, feminism, environmentalism, and many other belief systems are seen as instantiations of a common structure of ideology. In essence, this book can be seen as a “Content Wizard” for cranking out ideological creeds: plug in the oppressor and oppressed, the supposed means of mystification and path to liberation, and out pops a complete ideological belief system ready for an enterprising demagogue to start peddling. The author shows how ideological arguments, while masquerading as science, are the cuckoo's egg in the nest of academia, as they subvert and shortcut the adversarial process of inquiry and criticism with a revelation not subject to scrutiny. The attractiveness of such bogus enlightenment to second-rate minds and indolent intellects goes a long way to explaining the contemporary prevalence in the academy of ideologies so absurd that only an intellectual could believe them.

The author writes clearly, and often with wit and irony so dry it may go right past unless you're paying attention. But this is nonetheless a difficult book: it is written at such a level of philosophical abstraction and with so many historical and literary references that many readers, including this one, find it heavy going indeed. I can't recall any book on a similar topic this formidable since chapter two through the end of Allan Bloom's The Closing of the American Mind. If you want to really understand the attractiveness of ideology to otherwise intelligent and rational people, and how ideology corrupts the academic and political spheres (with numerous examples of how slippery ideological arguments can be), this is an enlightening read, but you're going to have to work to make the most of it.

This book was originally published in 1985. This edition includes a new introduction by the author, and two critical essays reflecting upon the influence of the book and its message from a contemporary perspective where the collapse of the Soviet Union and the end of the Cold War have largely discredited Marxism in the political arena, yet left its grip and that of other ideologies upon the humanities and the social sciences in Western universities, if anything, only stronger.

March 2008 Permalink

Ortega y Gasset, José. The Revolt of the Masses. New York: W. W. Norton, [1930, 1932, 1964] 1993. ISBN 0-393-31095-7.
This book, published more than seventy-five years ago, when the twentieth century was only three decades old, is a simply breathtaking diagnosis of the crises that manifested themselves in that century and the prognosis for human civilisation. The book was published in Spanish in 1930; this English translation, authorised and approved by the author, by a translator who requested to remain anonymous, first appeared in 1932 and has been in print ever since.

I have encountered few works so short (just 190 pages) which are so densely packed with enlightening observations and thought-provoking ideas. When I read a book, if I encounter a paragraph that I find striking, either in the writing or the idea it embodies, I usually add it to my “quotes” archive for future reference. If I did so with this book, I would find myself typing in a large portion of the entire text. This is not an easy read, not due to the quality of the writing and translation (which are excellent), nor the complexity of the concepts and arguments therein, but simply due to the sheer number of insights packed in here, each of which makes you stop and ponder its derivation and implications.

The essential theme of the argument anticipated the crunchy/soggy analysis of society by more than 65 years. In brief, over-achieving self-motivated elites create liberal democracy and industrial economies. Liberal democracy and industry lead to the emergence of the “mass man”, self-defined as not of the elite and hostile to existing elite groups and institutions. The mass man, by strength of numbers and through the democratic institutions which enabled his emergence, seizes the levers of power and begins to use the State to gratify his immediate desires. But, unlike the elites who created the State, the mass man does not think or plan in the long term, and is disinclined to make the investments and sacrifices which were required to create the civilisation in the first place, and remain necessary if it is to survive. In this consists the crisis of civilisation, and grasping this single concept explains much of the history of the seven decades which followed the appearance of the book and events today. Suddenly some otherwise puzzling things start to come into focus, such as why it is, in a world enormously more wealthy than that of the nineteenth century, with abundant and well-educated human resources and technological capabilities which dwarf those of that epoch, there seems to be so little ambition to undertake large-scale projects, and why those which are embarked upon are so often bungled.

In a single footnote on p. 119, Ortega y Gasset explains what the brilliant Hans-Hermann Hoppe spent an entire book doing: why hereditary monarchies, whatever their problems, are usually better stewards of the national patrimony than democratically elected leaders. In pp. 172–186 he explains the curious drive toward European integration which has motivated conquerors from Napoleon through Hitler, and collectivist bureaucratic schemes such as the late, unlamented Soviet Union and the odious present-day European Union. On pp. 188–190 he explains why a cult of youth emerges in mass societies, and why they produce as citizens people who behave like self-indulgent perpetual adolescents. In another little single-sentence footnote on p. 175 he envisions the disintegration of the British Empire, then at its zenith, and the cultural fragmentation of the post-colonial states. I'm sure that few of the author's intellectual contemporaries could have imagined their descendants living among the achievements of Western civilisation yet largely ignorant of its history or cultural heritage; the author nails it in chapters 9–11, explaining why it was inevitable and tracing the consequences for the civilisation, then in chapter 12 he forecasts the fragmentation of science into hyper-specialised fields and the implications of that. On pp. 184–186 he explains the strange attraction of Soviet communism for European intellectuals who otherwise thought themselves individualists—recall, this is but six years after the death of Lenin. And still there is more…and more…and more. This is a book you can probably re-read every year for five years in a row and get something more out of it every time.

A full-text online edition is available, which is odd since the copyright of the English translation was last renewed in 1960 and should still be in effect, yet the site which hosts this edition claims that all their content is in the public domain.

June 2006 Permalink

Popper, Karl R. The Open Society and Its Enemies. Vol. 1: The Spell of Plato. 5th ed., rev. Princeton: Princeton University Press, [1945, 1950, 1952, 1957, 1962] 1966. ISBN 0-691-01968-1.
The two hundred intricately-argued pages of main text are accompanied by more than a hundred pages of notes in small type. Popper states that “The text of the book is self-contained and may be read without these Notes. However, a considerable amount of material which is likely to interest all readers of the book will be found here, as well as some references and controversies which may not be of general interest.” My recommendation? Read the notes. If you skip them, you'll miss Popper's characterisation of Plato as the first philosopher to adopt a geometrical (as opposed to arithmetic) model of the world along with his speculations based on the sum of the square roots of 2 and 3 (known to Plato) differing from π by less than 1.5 parts per thousand (Note 9 to Chapter 6), or the exquisitely lucid exposition (written in 1942!) of why international law and institutions must ultimately defend the rights of human individuals as opposed to the sovereignty of nation states (Note 7 to Chapter 9). The second volume, which dissects the theories of Hegel and Marx, is currently out of print in the U.S. but a U.K. edition is available.
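
As an aside, the near coincidence Popper attributes to Plato is easy to check for yourself; here is a minimal sketch in Python (my own illustration, not anything from Popper's notes):

    from math import sqrt, pi

    plato = sqrt(2) + sqrt(3)          # approximately 3.14626
    rel_diff = abs(plato - pi) / pi    # relative difference from pi
    print(plato, pi, rel_diff)         # rel_diff is about 0.0015, i.e. 1.5 parts per thousand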

December 2003 Permalink

Popper, Karl R. The Open Society and Its Enemies. Vol. 2: Hegel and Marx. London: Routledge, [1945, 1962, 1966, 1995] 2003. ISBN 0-415-27842-2.
After tracing the Platonic origins of utopian schemes of top-down social engineering in Volume 1 (December 2003), Popper now turns to the best-known modern exemplars of the genre, Hegel and Marx, starting out by showing Aristotle's contribution to Hegel's philosophy. Popper considers Hegel a complete charlatan and his work a blizzard of obfuscation intended to dull the mind to such an extent that it can believe that the Prussian monarchy (which paid the salaries of Hegel and his acolytes) was the epitome of human freedom. For a work of serious philosophical criticism (there are more than a hundred pages of end notes in small type), Popper is forthrightly acerbic and often quite funny in his treatment of Hegel, whom he disposes of in only 55 pages of this 470-page book. (Popper's contemporary, Wittgenstein, gets much the same treatment. See note 51 to chapter 11, for example, in which he calls the Tractatus “reinforced dogmatism that opens wide the door to the enemy, deeply significant metaphysical nonsense…”. One begins to comprehend what possessed Wittgenstein, a year after the publication of this book, to brandish a fireplace poker at Popper.)

Readers who think of Popper as an icon of libertarianism may be surprised at his remarkably positive treatment of Marx, of whom he writes (chapter 13), “Science progresses through trial and error. Marx tried, and although he erred in his main doctrines, he did not try in vain. He opened and sharpened our eyes in many ways. A return to pre-Marxian social science is inconceivable. All modern writers are indebted to Marx, even if they do not know it. … One cannot do justice to Marx without recognizing his sincerity. His open-mindedness, his sense of facts, his distrust of verbiage, and especially of moralizing verbiage, made him one of the world's most influential fighters against hypocrisy and pharisaism. He had a burning desire to help the oppressed, and was fully conscious of the need for proving himself in deeds, and not only in words.”

To be sure, this encomium is the prelude to a detailed critique of Marx's deterministic theory of history and dubious economic notions, but unlike Hegel, Marx is given credit for trying to make sense of phenomena which few others even attempted to study scientifically. Many of the flaws in Marx's work, Popper argues, may be attributed to Marx having imbibed too deeply and uncritically the work of Hegel, and the crimes committed in the name of Marxism the result of those treating his work as received dogma, as opposed to a theory subject to refutation, as Marx himself would have viewed it.

Also surprising is his condemnation, with almost Marxist vehemence, of nineteenth century “unrestrained capitalism”, and enthusiasm for government intervention in the economy and the emergence of the modern welfare state (chapter 20 in particular). One must observe, with the advantage of sixty years hindsight, that F.A. Hayek's less sanguine contemporary perspective in The Road to Serfdom (May 2002) has proved more prophetic. Of particular interest is Popper's advocacy of “piecemeal social engineering”, as opposed to grand top-down systems such as “scientific socialism”, as the genuinely scientific method of improving society, permitting incremental progress by experiments on the margin which are subject to falsification by their results, in the same manner Popper argues the physical sciences function in The Logic of Scientific Discovery.

Permit me to make a few remarks about the physical properties of this book. The paperback seems to have a spine made of triple-reinforced neutronium, and cannot be induced to lie flat by any of the usual stratagems. In fact, when reading the book, one must either use two hands to hold it open or else wedge it open with three fingers against the spine in order to read complete lines of text. This is tiring, particularly since the book is also quite heavy. If you happen to doze off whilst reading (which I'll confess happened a few times during some of the more intricate philosophical arguments), the thing will pop out of your hand, snap shut like a bear trap, and fly off in some random direction—Zzzzzz … CLACK … thud! I don't know what the problem is with the binding—I have any number of O'Reilly paperbacks about the same size and shape which lie flat without the need for any extreme measures. The text is set in a type font in which the distinction between roman and italic type is very subtle—sometimes I had to take off my glasses (I am nearsighted) and eyeball the text close-up to see if a word was actually emphasised, and that runs the risk of a bloody nose if your thumb should slip and the thing snap shut.

A U.S. edition of this volume is now back in print; for a while only Volume 1 was available from Princeton University Press. The U.K. edition of Volume 1 from Routledge remains available.

November 2005 Permalink

Rand, Ayn. Atlas Shrugged. New York: Dutton, [1957, 1992] 2005. ISBN 978-0-525-94892-6.
There is nothing I could possibly add by way of commentary on this novel, a classic of twentieth century popular fiction, one of the most discussed books of the epoch, and, more than fifty years after publication, still (at this writing) in the top two hundred books by sales rank at Amazon.com. Instead, I will confine my remarks to my own reactions upon reading this work for the third time and how it speaks to events of the present day.

I first read Atlas Shrugged in the summer of that most eventful year, 1968. I enjoyed it immensely, finding it not just a gripping story, but also, as Rand intended, a thorough (and in some ways, too thorough) exposition of her philosophy as glimpsed in The Fountainhead, which I'd read a few years earlier. I took it as an allegorical story about the pernicious effects and ultimate consequences of collectivism and the elevation of altruism over self-interest and need above earned rewards, but viewed the world in which it was set and the events which occurred there much as I did those of Orwell's 1984 and Heinlein's If This Goes On—: a cautionary tale showing the end point of trends visible in the contemporary world. But the world of Atlas Shrugged, like those of Orwell and Heinlein, seemed very remote from that of 1968—we were going to the Moon, and my expectations for the future were more along the lines of 2001 than Rand's dingy and decaying world. Also, it was 1968, for Heaven's sake, and I perceived the upheavals of the time (with a degree of naïveté and wrongheadedness I find breathtaking at this remove) as a sovereign antidote to the concentration of power and oppression of the individual, which would set things aright long before productive people began to heed Galt's call to shed the burden of supporting their sworn enemies.

My next traverse through Atlas Shrugged was a little before 1980. The seventies had taken a lot of the gloss off the bright and shiny 1968 vision of the future, and having run a small business for the latter part of that sorry decade, the encroachment of ever-rising taxes, regulation, and outright obstruction by governments at all levels was very much on my mind, which, along with the monetary and financial crises created by those policies plus a rising swamp of mysticism, pseudoscience, and the ascendant anti-human pagan cult of environmentalism, made it entirely plausible to me that the U.S. might tip over into the kind of accelerating decline described in the middle part of the novel. This second reading of the book left me with a very different impression than the first. This time I could see, from my own personal experience and in the daily news, precisely the kind of events foreseen in the story. It was no longer a cautionary tale but instead a kind of hitch-hiker's guide to the road to serfdom. Curiously, this reading of the book caused me to shrug off the funk of demoralisation and discouragement and throw myself back into the entrepreneurial fray. I believed that the failure of collectivism was so self-evident that a turning point was at hand, and the landslide election of Reagan shortly thereafter appeared to bear this out. The U.S. was committed to a policy of lower taxes, rolling back regulations, standing up to aggressive collectivist regimes around the world, and opening the High Frontier with economical, frequent, and routine access to space (remember that?). While it was hardly the men of the mind returning from Galt's Gulch, it was good enough for me, and I decided to make the best of it and contribute what I could to what I perceived as the turnaround. As a footnote, it's entirely possible that if I hadn't reread Atlas Shrugged around this time, I would have given up on entrepreneurship and gone back to work for the Man—so in a way, this book was in the causal tree which led to Autodesk and AutoCAD. In any case, although working myself to exhaustion and observing the sapping of resources by looters and moochers after Autodesk's initial public stock offering in 1985, I still felt myself surfing on a wave of unbounded opportunity and remained unreceptive to Galt's pitch in 1987. In 1994? Well….

What with the eruption of the most recent financial crisis, the veer toward the hard left in the United States, and increasing talk of productive people opting to “go Galt”, I decided it was time for another pass through Atlas Shrugged, so I started reading it for the third time in early April 2010 and finished it in a little over two weeks, including some marathon sessions where I just didn't want to put it down, even though I knew the characters, principal events, and the ending perfectly well. What was different, and strikingly so, from the last read three decades ago, was how astonishingly prescient this book, published in 1957, was about events unfolding in the world today. As I noted above, in 1968 I viewed it as a dystopia set in an unspecified future. By 1980, many of the trends described in the book were clearly in place, but few of their ultimate dire consequences had become evident. In 2010, however, the novel is almost like reading a paraphrase of the history of the last quarter century. “Temporary crises”, “states of emergency”, “pragmatic responses”, calls to “sacrifice for the common good” and to “share the wealth” which seemed implausible then are the topics of speeches by present day politicians and news headlines. Further, the infiltration of academia and the news media by collectivists, their undermining the language and (in the guise of “postmodernism”) the foundations of rational thought and objective reality, which were entirely beneath the radar (at least to me) as late as 1980, are laid out here as clear as daylight, with the simultaneously pompous and vaporous prattling of soi-disant intellectuals which doubtless made the educated laugh when the book first appeared now having become commonplace in the classrooms of top tier universities and journals of what purport to be the humanities and social sciences. What once seemed a fantastic nightmare painted on a grand romantic canvas is in the process of becoming a shiveringly accurate prophecy.

So, where are we now? Well (if you'll allow me to use the word) objectively, I found the splice between our real-life past and present to be around the start of chapter 5 of part II, “Account Overdrawn”. This is about 500 pages into the hardback edition of 1168 pages, or around 40%. Obviously, this is the crudest of estimates—many things occur before that point which haven't yet in the real world and many afterward have already come to pass. Yet still, it's striking: who would have imagined piracy on the high seas to be a headline topic in the twenty-first century? On this reading I was also particularly struck by chapter 8 of part III, “The Egoist” (immediately following Galt's speech), which directly addresses a question I expect will soon intrude into the public consciousness: the legitimacy or lack thereof of nominally democratic governments. This is something I first wrote about in 1988, but never expected to actually see come onto the agenda. A recent Rasmussen poll, however, finds that just 21% of voters in the United States now believe that their federal government has the “consent of the governed”. At the same time, more than 40% of U.S. tax filers pay no federal income tax at all, and a majority receive more in federal benefits than they pay in taxes. The top 10% of taxpayers (by Adjusted Gross Income) pay more than 70% of all personal income taxes collected. This makes it increasingly evident that the government, if it is not one already, runs the risk of becoming a racket in which the non-taxpaying majority use the coercive power of the state to shake down a shrinking taxpaying minority. This is precisely the vicious cycle which reaches its endpoint in this chapter, where the government loses all legitimacy in the eyes of not only its victims, but even its beneficiaries and participants. I forecast that should this trend continue (and that's the way to bet), within two years we will see crowds of people in the U.S. holding signs demanding “By what right?”.

In summary, I very much enjoyed revisiting this classic; given that it was the third time through and I don't consider myself to have changed all that much in the many years since the first time, this didn't come as a surprise. What I wasn't expecting was how differently the story is perceived based on events in the real world up to the time it's read. From the current perspective, it is eerily prophetic. It would be amusing to go back and read reviews at the time of its publication to see how many anticipated that happening. The ultimate lesson of Atlas Shrugged is that the looters subsist only by the sanction of their victims and through the product of their minds, which cannot be coerced. This is an eternal truth, which is why this novel, which states it so clearly, endures.

The link above is to the hardbound “Centennial Edition”. There are trade paperback, mass market paperback, and Kindle editions available as well. I'd avoid the mass market paperback, as the type is small and the spines of books this thick tend to disintegrate as you read them. At current Amazon prices, the hardcover isn't all that much more than the trade paperback and will be more durable if you plan to keep it around or pass it on to others. I haven't seen the Kindle transfer; if it's well done, it would be marvellous, as any print edition of this book is more than a handful.

April 2010 Permalink

Rucker, Rudy. The Lifebox, the Seashell, and the Soul. New York: Thunder's Mouth Press, 2005. ISBN 1-56025-722-9.
I read this book in manuscript form. An online excerpt is available.

September 2004 Permalink

Scully, Matthew. Dominion. New York: St. Martin's Press, 2002. ISBN 0-312-26147-0.

February 2003 Permalink

Sokal, Alan and Jean Bricmont. Fashionable Nonsense. New York: Picador, [1997] 1998. ISBN 978-0-312-20407-5.
There are many things to mock in the writings of “postmodern”, “deconstructionist”, and “critical” intellectuals, but one of the most entertaining for readers with a basic knowledge of science and mathematics is the propensity of many of these “scholars” to sprinkle their texts with words and concepts from mathematics and the physical sciences, all used entirely out of context and in total ignorance of their precise technical definitions, and without the slightest persuasive argument that there is any connection, even at a metaphorical level, between the mis-quoted science and the topic being discussed. This book, written by two physicists, collects some of the most egregious examples of such obscurantist writing by authors (all French—who would have guessed?) considered eminent in their fields. From Jacques Lacan's hilariously muddled attempts to apply topology and mathematical logic to psychoanalysis to Luce Irigaray's invoking fluid mechanics to argue that science is a male social construct, the passages quoted here at length are a laugh riot for those willing to momentarily put aside the consequences of their being taken seriously by many in the squishier parts of academia. Let me quote just one to give you a flavour—this passage is by Paul Virilio:

When depth of time replaces depths of sensible space; when the commutation of interface supplants the delimitation of surfaces; when transparence re-establishes appearances; then we begin to wonder whether that which we insist on calling space isn't actually light, a subliminary, para-optical light of which sunlight is only one phase or reflection. This light occurs in a duration measured in instantaneous time exposure rather than the historical and chronological passage of time. The time of this instant without duration is “exposure time”, be it over- or underexposure. Its photographic and cinematographic technologies already predicted the existence and the time of a continuum stripped of all physical dimensions, in which the quantum of energetic action and the punctum of cinematic observation have suddenly become the last vestiges of a vanished morphological reality. Transferred into the eternal present of a relativity whose topological and teleological thickness and depth belong to this final measuring instrument, this speed of light possesses one direction, which is both its size and dimension and which propagates itself at the same speed in all radial directions that measure the universe. (pp. 174–175)

This paragraph, which recalls those bright college days punctuated by deferred exhalations accompanied by “Great weed, man!”, was a single 193 word sentence in the original French; the authors deem it “the most perfect example of diarrhea of the pen that we have ever encountered.”

The authors survey several topics in science and mathematics which are particularly attractive to these cargo cult confidence men and women, and, dare I say, deconstruct their babblings. In all, I found the authors' treatment of the postmodernists remarkably gentle. While they do not hesitate to ridicule their gross errors and misappropriation of scientific concepts, they carefully avoid drawing the (obvious) conclusion that such ignorant nonsense invalidates the entire arguments being made. I suspect this is due to the authors, both of whom identify themselves as men of the Left, being sympathetic to the conclusions of those they mock. They're kind of stuck, forced to identify and scorn the irrational misuse of concepts from the hard sciences, while declining to examine the absurdity of the rest of the argument, which the chart from Explaining Postmodernism (May 2007) so brilliantly explains.

Alan Sokal is the perpetrator of the famous hoax which took in the editors of Social Text with his paper “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”, which appears in full here, along with comments on construction of the parody and remarks on the motivation behind it.

This book was originally published in French under the title Impostures intellectuelles. This English edition contains some material added to address critical comments on the French edition, and includes the original French language text of passages whose translation might be challenged as unfaithful to whatever the heck the original was trying to say.

June 2011 Permalink

Sowell, Thomas. The Quest for Cosmic Justice. New York: Touchstone Books, 1999. ISBN 0-684-86463-0.

October 2003 Permalink

Spengler, Oswald. The Decline of the West: An Abridged Edition. Oxford: Oxford University Press, [1918, 1922, 1932, 1959, 1961] 1991. ISBN 0-19-506634-0.
Only rarely do I read abridged editions. I chose this volume simply because it was the only readily-available English translation of the work. In retrospect, I don't think I could have handled much more Spengler, at least in one dose. Even in English, reading Spengler conjures up images of great mountain ranges of polysyllabic German philosophical prose. For example, chapter 21 begins with the following paragraph. “Technique is as old as free-moving life itself. The original relation between a waking-microcosm and its macrocosm—‘Nature’—consists in a mental sensation which rises from mere sense-impressions to sense-judgement, so that already it works critically (that is, separatingly) or, what comes to the same thing, causal-analytically”. In this abridged edition the reader need cope only with a mere 415 pages of such text. It is striking the extent to which today's postmodern nostrums of cultural relativism were anticipated by Spengler.

April 2004 Permalink

Staley, Kent W. The Evidence for the Top Quark. Cambridge: Cambridge University Press, 2004. ISBN 0-521-82710-8.
A great deal of nonsense and intellectual nihilism has been committed in the name of “science studies”. Here, however, is an exemplary volume which shows not only how the process of scientific investigation should be studied, but also why. The work is based on the author's dissertation in philosophy, which explored the process leading to the September 1994 publication of the “Evidence for top quark production in p̄p collisions at √s = 1.8 TeV” paper in Physical Review D. This paper is a quintessential example of Big Science: more than four hundred authors, sixty pages of intricate argumentation from data produced by a detector weighing more than two thousand tons, and automated examination of millions and millions of collisions between protons and antiprotons accelerated to almost the speed of light by the Tevatron, all to search, over a period of months, for an elementary particle which cannot be observed in isolation, and finally reporting “evidence” for its existence (but not “discovery” or “observation”) based on a total of just twelve events “tagged” by three different algorithms, when a total of about 5.7 events would have been expected due to other causes (“background”) purely by chance alone.
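
To get a rough feel for how slender such “evidence” is, one can treat the count naïvely as a simple Poisson process. The sketch below (my own back-of-the-envelope illustration in Python, far cruder than the collaboration's actual statistical analysis) asks how often a background averaging 5.7 events would produce 12 or more by chance:

    from math import exp

    def poisson_tail(k_min, mean):
        """Probability of seeing at least k_min events from a Poisson background with the given mean."""
        term = exp(-mean)        # P(0 events)
        below = 0.0
        for k in range(k_min):   # accumulate P(0) .. P(k_min - 1)
            below += term
            term *= mean / (k + 1)
        return 1.0 - below

    print(poisson_tail(12, 5.7))   # about 0.014: roughly a 1.4% chance from background fluctuation alone

The significance quoted in the paper itself came from a considerably more elaborate procedure, so this figure is only indicative of the scale of the problem the collaboration faced.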

Through extensive scrutiny of contemporary documents and interviews with participants in the collaboration which performed the experiment, the author provides a superb insight into how science on this scale is done, and the process by which the various kinds of expertise distributed throughout a large collaboration come together to arrive at the consensus that they have found something worthy of publication. He explores the controversies about the paper both within the collaboration and subsequent to its publication, and evaluates claims that choices made by the experimenters may have produced a bias in the results, and/or that choosing experimental “cuts” after having seen data from the detector might constitute “tuning on the signal”: physicist-speak for choosing the criteria for experimental success after having seen the results from the experiment, a violation of the “predesignation” principle usually assumed in statistical tests.

In the final two, more philosophical, chapters, the author introduces the concept of “Error-Statistical Evidence”, and evaluates the analysis in the “Evidence” paper in those terms, concluding that despite all the doubt and controversy, the decision making process was, in the end, ultimately objective. (And, of course, subsequent experimentation has shown the information reported in the Evidence paper to have been essentially correct.)

Popular accounts of high energy physics sometimes gloss over the fantastically complicated and messy observations which go into a reported result to such an extent you might think experimenters are just sitting around looking at a screen, waiting for a little ball to pop out with a “t” or whatever stencilled on the side. This book reveals the subtlety of the actual data from these experiments, and the intricate chain of reasoning from the multitudinous electronic signals issuing from a particle detector to the claim of having discovered a new particle. This is not, however, remotely a work of popularisation. While the book attempts to make the physics accessible to philosophers of science and the philosophy comprehensible to physicists, readers in each camp will find the portions outside their own speciality tough going. A reader without a basic understanding of the standard model of particle physics and the principles of statistical hypothesis testing will probably end up bewildered and may not make it to the end, but those who do will be rewarded with a detailed understanding of high energy particle physics experiments and the operation of large collaborations of researchers which is difficult to obtain anywhere else.

August 2006 Permalink

Taleb, Nassim Nicholas. The Black Swan. New York: Random House, 2007. ISBN 978-1-4000-6351-2.
If you are interested in financial markets, investing, the philosophy of science, modelling of socioeconomic systems, theories of history and historicism, or the rôle of randomness and contingency in the unfolding of events, this is a must-read book. The author largely avoids mathematics (except in the end notes) and makes his case in quirky and often acerbic prose (there's something about the French that really gets his goat) which works effectively.

The essential message of the book, explained by example in a wide variety of contexts, is (and I'll be rather more mathematical here in the interest of concision) that while many (but certainly not all) natural phenomena can be well modelled by a Gaussian (“bell curve”) distribution, phenomena in human society (for example, the distribution of wealth, population of cities, book sales by authors, casualties in wars, performance of stocks, profitability of companies, frequency of words in language, etc.) are best described by scale-invariant power law distributions. While Gaussian processes converge rapidly upon a mean and standard deviation and rare outliers have little impact upon these measures, in a power law distribution the outliers dominate.

Consider this example. Suppose you wish to determine the mean height of adult males in the United States. If you go out and pick 1000 men at random and measure their height, then compute the average, absent sampling bias (for example, picking them from among college basketball players), you'll obtain a figure which is very close to that you'd get if you included the entire male population of the country. If you replaced one of your sample of 1000 with the tallest man in the country, or with the shortest, his inclusion would have a negligible effect upon the average, as the difference from the mean of the other 999 would be divided by 1000 when computing the average. Now repeat the experiment, but try instead to compute mean net worth. Once again, pick 1000 men at random, compute the net worth of each, and average the numbers. Then, replace one of the 1000 by Bill Gates. Suddenly Bill Gates's net worth dwarfs that of the other 999 (unless one of them randomly happened to be Warren Buffett, say)—the one single outlier dominates the result of the entire sample.
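
Here is a minimal numerical illustration of that asymmetry, a sketch in Python with entirely made-up figures (the 272 cm is roughly the tallest height ever reliably recorded, and the $100 billion fortune is hypothetical):

    import random

    random.seed(1)

    # Heights are roughly Gaussian: one extreme value barely moves the mean.
    heights = [random.gauss(175, 7) for _ in range(1000)]
    with_tallest = heights[:-1] + [272]
    print(sum(heights) / 1000, sum(with_tallest) / 1000)   # the means differ by only about 0.1 cm

    # Net worth is fat-tailed: a single outlier dominates the mean.
    worths = [random.paretovariate(1.2) * 50_000 for _ in range(1000)]
    with_tycoon = worths[:-1] + [100e9]
    print(sum(worths) / 1000, sum(with_tycoon) / 1000)     # the mean jumps by roughly $100 million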

Power laws are everywhere in the human experience (heck, I even found one in AOL search queries), and yet so-called “social scientists” (Thomas Sowell once observed that almost any word is devalued by preceding it with “social”) blithely assume that the Gaussian distribution can be used to model the variability of the things they measure, and that extrapolations from past experience are predictive of the future. The entry of many people trained in physics and mathematics into the field of financial analysis has swelled the ranks of those who naïvely assume human action behaves like inanimate physical systems.

The problem with a power law is that as long as you haven't yet seen the very rare yet stupendously significant outlier, it looks pretty much like a Gaussian, and so your model based upon that (false) assumption works pretty well—until it doesn't. The author calls these unimagined and unmodelled rare events “Black Swans”—you can see a hundred, a thousand, a million white swans and consider each as confirmation of your model that “all swans are white”, but it only takes a single black swan to falsify your model, regardless of how much data you've amassed and how long it has correctly predicted things before it utterly failed.

Moving from ornithology to finance, one of the most common causes of financial calamities in the last few decades has been the appearance of Black Swans, wrecking finely crafted systems built on the assumption of Gaussian behaviour and extrapolation from the past. Much of the current calamity in hedge funds and financial derivatives comes directly from strategies for “making pennies by risking dollars” which never took into account the possibility of the outlier which would wipe out the capital at risk (not to mention that of the lenders to these highly leveraged players who thought they'd quantified and thus tamed the dire risks they were taking).

The Black Swan need not be a destructive bird: for those who truly understand it, it can point the way to investment success. The original business concept of Autodesk was a bet on a Black Swan: I didn't have any confidence in our ability to predict which product would be a success in the early PC market, but I was pretty sure that if we fielded five products or so, one of them would be a hit on which we could concentrate after the market told us which was the winner. A venture capital fund does the same thing: because the upside of a success can be vastly larger than what you lose on a dud, you can win, and win big, while writing off 90% of all of the ventures you back. Investors can fashion a similar strategy using options and option-equivalent investments (for example, resource stocks with a high cost of production), diversifying a small part of their portfolio across a number of extremely high risk investments with unbounded upside while keeping the bulk in instruments (for example sovereign debt) as immune as possible to Black Swans.
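
A toy expected-value calculation makes the logic of such asymmetric bets concrete (the figures below are invented purely for illustration, not Autodesk's or anyone's actual numbers):

    # Stake one unit on each of ten long-shot ventures.  Suppose nine go to
    # zero and one returns fifty times its stake (hypothetical numbers):
    stakes = 10          # total capital at risk
    payoff = 1 * 50      # the single winner's return
    print(payoff / stakes)   # 5.0: a 400% gain overall despite a 90% failure rate

    # Contrast the inverse strategy: win one unit nine times out of ten, but
    # lose fifty units on the tenth occasion.
    print(9 * 1 - 1 * 50)    # -41: "making pennies by risking dollars" eventually loses big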

There is much more to this book than the matters upon which I have chosen to expound here. What you need to do is lay your hands on this book, read it cover to cover, think it over for a while, then read it again—it is so well written and entertaining that this will be a joy, not a chore. I find it beyond charming that this book was published by Random House.

January 2009 Permalink

Taleb, Nassim Nicholas. Fooled by Randomness. 2nd ed. New York: Random House, [2004] 2005. ISBN 978-0-8129-7521-5.
This book, which preceded the author's bestselling The Black Swan (January 2009), explores a more general topic: randomness and, in particular, how humans perceive and often misperceive its influence in their lives. As with all of Taleb's work, it is simultaneously quirky, immensely entertaining, and so rich in wisdom and insights that you can't possibly absorb them all in a single reading.

The author's central thesis, illustrated from real-world examples, tests you perform on yourself, and scholarship in fields ranging from philosophy to neurobiology, is that the human brain evolved in an environment in which assessment of probabilities (and especially conditional probabilities) and nonlinear outcomes was unimportant to reproductive success, and consequently our brains adapted to make decisions according to a set of modular rules called “heuristics”, which researchers have begun to tease out by experimentation. While our brains are capable of abstract thinking and, with the investment of time required to master it, mathematical reasoning about probabilities, the parts of the brain we use to make many of the important decisions in our lives are the much older and more instinctual parts from which our emotions spring. This means that otherwise apparently rational people may do things which, if looked at dispassionately, appear completely insane and against their rational self-interest. This is particularly apparent in the world of finance, in which the author has spent much of his career, and which offers abundant examples of individual and collective delusional behaviour both before and after the publication of this work.

But let's step back from the arcane world of financial derivatives and consider a much simpler and easier to comprehend investment proposition: Russian roulette. A diabolical billionaire makes the following proposition: play a round of Russian roulette (put one cartridge in a six shot revolver, spin the cylinder to randomise its position, put the gun to your temple and pull the trigger). If the gun goes off, you don't receive any payoff and besides, you're dead. If there's just the click of the hammer falling on an empty chamber, you receive one million dollars. Further, as a winner, you're invited to play again on the same date next year, when the payout if you win will be increased by 25%, and so on in subsequent years as long as you wish to keep on playing. You can quit at any time and keep your winnings.

Now suppose a hundred people sign up for this proposition, begin to play the game year after year, and none chooses to take their winnings and walk away from the table. (For connoisseurs of Russian roulette, this is the variety of the game in which the cylinder is spun before each shot, not where the live round continues to advance each time the hammer drops on an empty chamber: in that case there would be no survivors beyond the sixth round.) For each round, on average, 1/6 of the players are killed and out of the game, reducing the number who play next year. Out of the original 100 players in the first round, one would expect, on average, around 83 survivors to participate in the second round, where the payoff will be 1.25 million.

What do we have, then, after ten years of this game? Again, on average, we expect around 16 survivors, each of whom will be paid more than seven million dollars for the tenth round alone, and who will have collected a total of more than 33 million dollars over the ten-year period. If the game were to go on for twenty years, we would expect around 3 survivors from the original hundred, each of whom would have “earned” more than a third of a billion dollars.
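As a check on the arithmetic, here is a small Python sketch computing the expected values quoted above (actual runs of the game would, of course, be random; this simply tracks the averages).

    # Expected (average) outcomes of the Russian roulette game described above:
    # 100 players, a 5/6 chance of surviving each annual round, a payout of
    # $1 million for the first round, growing 25% per year thereafter.
    players = 100.0            # expected number of players still alive
    payout = 1.0               # this year's payout, in millions of dollars
    collected = 0.0            # total collected so far by a player who is still alive
    for year in range(1, 21):
        players *= 5 / 6       # on average, 1/6 of the remaining players die
        collected += payout    # survivors of this round pocket this year's payout
        if year in (1, 10, 20):
            print(f"after year {year}: ~{players:.0f} survivors, "
                  f"paid {payout:.2f}M this round, {collected:.1f}M in total")
        payout *= 1.25         # next year's payout is 25% larger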

Would you expect these people to be regular guests on cable business channels, sought out by reporters from financial publications for their “hot hand insights on Russian roulette”, or lionised for their consistent and rapidly rising financial results? No—they would be immediately recognised as precisely what they were: lucky (and consequently very wealthy) fools who, each year they continue to play the game, run the same 1 in 6 risk of blowing their brains out.

Keep this Russian roulette analogy in mind the next time you see an interview with the “sizzling hot” hedge fund manager who has managed to obtain 25% annual return for his investors over the last five years, or when your broker pitches a mutual fund with a “great track record”, or you read the biography of a businessman or investor who always seems to have made the “right call” at the right time. All of these are circumstances in which randomness, and hence luck, plays an important part. Just as with Russian roulette, there will inevitably be big winners with a great “track record”, and they're the only ones you'll see because the losers have dropped out of the game (and even if they haven't yet, they aren't newsworthy). So the question you have to ask yourself is not how great the track record of a given individual is, but rather the size of the original cohort from which the individual was selected at the start of the period of the track record. The rate at which hedge fund managers “blow up” and lose all of their investors' money in one disastrous market excursion is lower than that of the players blown away in Russian roulette, but not by all that much. There are a lot of trading strategies which will yield high and consistent returns until they don't, at which time they suffer sudden and disastrous losses which are always reported as “unexpected”. Unexpected by the geniuses who devised the strategy, the fools who put up the money to back it, and the clueless journalists who report the debacle, but entirely predictable to anybody who modelled the risks being run in the light of actual behaviour of markets, not some egghead's ideas of how they “should” behave.

Shall we try another? You go to your doctor for a routine physical, and as part of the laboratory work on your blood, she orders a screening test for a rare but serious disease which afflicts only one person in a thousand but which can be treated if detected early. The screening test has a 5% false positive rate (in 5% of the people tested who do not actually have the disease, it erroneously says that they do) and a 0% false negative rate (if you have the disease, the test will always report that you do). You return to the doctor's office for the follow-up visit and she tells you that you tested positive for the disease. What is the probability you actually have it?

Spoiler warning: Plot and/or ending details follow.  
Did you answer 95%? If you did, you're among the large majority of people, not just among the general population but also practising clinicians, who come to the same conclusion. And you'd be just as wrong as them. In fact, the odds you have the disease are a little less than 2%. Here's how it works. Let's assume an ensemble of 10,000 randomly selected people are tested. On average, ten of these people will have the disease, and all of them will test positive for it (no false negatives). But among that population, 500 people who do not have the disease will also test positive due to the 5% false positive rate of the test. That means that, on average (it gets tedious repeating this, but the natterers will be all over me if I don't do so in every instance), there will be, of 10,000 people tested, a total of 510 positive results, of which 10 actually have the disease. Hence, if you're the recipient of a positive test result, the probability you have the disease is 10/510, or a tad less than 2%. So, before embarking upon a demanding and potentially dangerous treatment regime, you're well advised to get some other independent tests to confirm that you are actually afflicted.
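For those who prefer to see the computation spelled out, here is a short Python sketch of the same calculation, done once by counting within a notional population and once directly via Bayes' theorem.

    # The base-rate calculation from the screening-test example, done two ways.
    prevalence = 1 / 1000       # one person in a thousand has the disease
    false_positive = 0.05       # 5% of healthy people test positive anyway
    false_negative = 0.0        # the test never misses a real case

    # Counting within a notional population of 10,000 people:
    population = 10_000
    sick = population * prevalence                           # 10 people
    true_positives = sick * (1 - false_negative)             # 10 positive tests
    false_positives = (population - sick) * false_positive   # about 500 positive tests
    print(true_positives / (true_positives + false_positives))   # about 0.02

    # The same thing via Bayes' theorem:
    p_positive = prevalence * (1 - false_negative) + (1 - prevalence) * false_positive
    print(prevalence * (1 - false_negative) / p_positive)        # about 0.02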
Spoilers end here.  
In making important decisions in life, we often rely upon information from past performance and reputation without taking into account how much those results may be affected by randomness, luck, and the “survivor effect” (the Russian roulette players who brag of their success in the game are necessarily those who aren't yet dead). When choosing a dentist, you can be pretty sure that a practitioner who is recommended by a variety of his patients whom you respect will do an excellent job drilling your teeth. But this is not the case when choosing an oncologist, since all of the people who give him glowing endorsements are necessarily those who did not die under his care, even if their survival is due to spontaneous remission instead of the treatment they received. In such a situation, you need to, as it were, interview the dead alongside the survivors, or, that being difficult, compare the actual rate of survival among comparable patients with the same condition.

Even when we make decisions with our higher cognitive faculties rather than animal instincts, it's still easy to get it wrong. While the mathematics of probability and statistics has been put into a completely rigorous form, there are assumptions in how it is applied to real world situations which can lead to the kinds of calamities one reads about regularly in the financial press. One of the reasons physical scientists transmogrify so easily into Wall Street “quants” is that they are trained in, and entirely comfortable with, statistical tools and probabilistic analysis. The reason they so frequently run off the cliff, taking their clients' fortunes in the trailer behind them, is that their training assumes systems in which nature doesn't change the rules, nor does she cheat. Most physical processes will exhibit well-behaved Gaussian or Poisson distributions, with outliers making a vanishingly small contribution to mean and median values. In financial markets and other human systems none of these conditions obtain: the rules change all the time, and often change profoundly before more than a few participants even perceive they have; any action in the market will provoke a reaction by other actors, often nonlinear and with unpredictable delays; and in human systems the Pareto and other wildly non-Gaussian power law distributions are often the norm.

We live in a world in which randomness reigns in many domains, and where we are bombarded with “news and information” which is probably in excess of 99% noise to 1% signal, with no obvious way to extract the signal except with the benefit of hindsight, which doesn't help in making decisions on what to do today. This book will dramatically deepen your appreciation of this dilemma in our everyday lives, and provide a philosophical foundation for accepting the rôle randomness and luck play in the world, and how, looked at with the right kind of eyes (and investment strategy), randomness can be your friend.

February 2011 Permalink

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems: physical, economic, financial, and social, which perform well in the face of uncertainty and infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which are behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant population, taken as a whole, is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners, whims which will always be catered to by one or more among the current, ever-changing population of restaurants.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile, since while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain dislocations wherever two crystals meet, with each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
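The program referred to above isn't reproduced here, but the idea is easy to sketch. The following is a minimal, self-contained simulated annealing solver for a small random travelling salesman instance; the cooling schedule, move type, and parameter values are my own illustrative choices, not those of the original program. Note that the random acceptance of occasionally worse tours is what keeps the search from freezing into the nearest local minimum, just as thermal jostling does for the metal.

    import math
    import random

    random.seed(42)

    def tour_length(cities, order):
        """Total length of a closed tour visiting the cities in the given order."""
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def anneal(cities, steps=50_000, t_start=1.0, t_end=1e-3):
        """Simulated annealing for the travelling salesman problem."""
        order = list(range(len(cities)))
        random.shuffle(order)
        cur_len = tour_length(cities, order)
        best, best_len = order[:], cur_len
        for step in range(steps):
            # exponential cooling schedule from t_start down to t_end
            t = t_start * (t_end / t_start) ** (step / steps)
            # propose reversing a random segment of the tour (a "2-opt" move)
            i, j = sorted(random.sample(range(len(order)), 2))
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            cand_len = tour_length(cities, candidate)
            # always accept improvements; accept worse tours with a probability
            # which shrinks as the temperature drops -- the essential randomness
            if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / t):
                order, cur_len = candidate, cand_len
                if cur_len < best_len:
                    best, best_len = order[:], cur_len
        return best, best_len

    cities = [(random.random(), random.random()) for _ in range(30)]
    tour, length = anneal(cities)
    print(f"best tour found has length {length:.3f}")

Set the starting temperature to something tiny and the search promptly gets stuck; keep it high throughout and it never settles: the schedule of gradually diminishing randomness is what does the work, exactly as with the metal.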

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly-optimised to accomplish their goals with the minimum resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, they will, at best, survive, but not, like nature, often benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (while pocketing huge salaries and bonuses, while bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets, and when the foundation of the system started to crumble, suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”, otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to sell on the news, exercise the option, paying US$1500 for the 100 shares, and then immediately sell them at US$19, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)
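A few lines of Python, using the same made-up SLUJ numbers, confirm the arithmetic of the trade.

    # The SLUJ call-option trade worked through above (prices in US$).
    shares = 100
    strike = 15.0
    premium = 7.0          # total cost of the option on 100 shares (7 cents/share)

    def call_trade_profit(price_at_exercise):
        """Profit on the trade if exercised (or left to expire) at this share price."""
        if price_at_exercise <= strike:
            return -premium                                  # option expires worthless
        return shares * (price_at_exercise - strike) - premium

    print(call_trade_profit(19.0))                  # 393.0: the scenario in the text
    print(call_trade_profit(12.0))                  # -7.0: SLUJ never reaches the strike
    print(call_trade_profit(19.0) / premium * 100)  # about 5614% return on the premium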

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15, I pocket an immediate premium of seven cents per share, increasing my income from owning the stock by a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I buy a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a so-called “long straddle”. Now, as long as the price remains within the strike prices of the put and call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, that option will be in the money and will pay off more the further the underlying price veers from the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.
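Here is the payoff profile of such a position at expiration, sketched in Python with made-up strikes and premiums. (Strictly speaking, a position built from out-of-the-money options at two different strikes is usually called a “strangle” rather than a straddle, but the logic is the same.) Inside the band the loss is small and bounded; outside it, the gain grows without limit the further the price moves.

    # Payoff at expiration of the volatility bet described above: buy an
    # out-of-the-money put (strike below the current price) and an
    # out-of-the-money call (strike above it) on the same security.
    def position_profit(price_at_expiry, put_strike, call_strike, premiums_paid):
        """Profit of the position as a function of where the underlying ends up."""
        put_payoff = max(put_strike - price_at_expiry, 0.0)
        call_payoff = max(price_at_expiry - call_strike, 0.0)
        return put_payoff + call_payoff - premiums_paid

    # Underlying at 100 today; strikes at 80 and 120; total premiums of 2.
    for price in (100, 110, 80, 60, 150):
        print(price, position_profit(price, 80.0, 120.0, 2.0))
    # Inside the 80-120 band the position loses only the 2 paid in premiums;
    # a large move in either direction pays off more the further it goes.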

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described who was clobbered in 2008 could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting to usually have them expire worthless, but under the circumstance which halved the value of his portfolio, would pay off enough to compensate for the shock. (If worried only about a plunge he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)

I have only described a small fraction of the many topics covered in this masterpiece, and described none of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). Fragility versus antifragility is one of those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?” and have the tools to answer the question. Further, it provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in and figure out how what you've learned applies to your own life and act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”

April 2018 Permalink

Taleb, Nassim Nicholas. Skin in the Game. New York: Random House, 2018. ISBN 978-0-425-28462-9.
This book is volume four in the author's Incerto series, following Fooled by Randomness (February 2011), The Black Swan (January 2009), and Antifragile (April 2018). In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn't in order to choose wisely among the myriad alternatives available.

The title, “Skin in the Game”, is an aphorism which refers to an individual's sharing the risks and rewards of an undertaking in which they are involved. This is often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers. If the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers' desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as school children or members of the armed forces has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, that probably won't happen until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the consequences of their recommendations.

Ancestral human society was all about skin in the game. In a small band of hunter/gatherers, everybody can see and is aware of the actions of everybody else. Slackers who do not contribute to the food supply are likely to be cut loose to fend for themselves. When the hunt fails, nobody eats until the next kill. If a conflict develops with a neighbouring band, those who decide to fight instead of running away or surrendering are in the front line of the battle and will be the first to suffer in case of defeat.

Nowadays we are far more “advanced”. As the author notes, “Bureaucracy is a construction by which a person is conveniently separated from the consequences of his or her actions.” As populations have exploded, layers and layers of complexity have been erected, removing authority ever farther from those under its power. We have built mechanisms which have immunised a ruling class of decision makers from the consequences of their decisions: they have little or no skin in the game.

Less than a third of all Roman emperors died in their beds. Even though they were at the pinnacle of the largest and most complicated empire in the West, they regularly paid the ultimate price for their errors either in battle or through palace intrigue by those dissatisfied with their performance. Today the geniuses responsible for the 2008 financial crisis, which destroyed the savings of hundreds of millions of innocent people and picked the pockets of blameless taxpayers to bail out the institutions they wrecked, not only suffered no punishment of any kind, but in many cases walked away with large bonuses or golden parachute payments and today are listened to when they pontificate on the current scene, rather than being laughed at or scorned as they would be in a rational world. We have developed institutions which shift the consequences of bad decisions from those who make them to others, breaking the vital feedback loop by which we converge upon solutions which, if not perfect, at least work well enough to get the job done without the repeated catastrophes that result from ivory tower theories being implemented on a grand scale in the real world.

Learning and Evolution

Being creatures who have evolved large brains, we're inclined to think that learning is something that individuals do, by observing the world, drawing inferences, testing hypotheses, and taking on knowledge accumulated by others. But the overwhelming majority of creatures who have ever lived, and of those alive today, do not have large brains—indeed, many do not have brains at all. How have they learned to survive and proliferate, filling every niche on the planet where environmental conditions are compatible with biochemistry based upon carbon atoms and water? How have they, over the billions of years since life arose on Earth, inexorably increased in complexity, most recently producing a species with a big brain able to ponder such questions?

The answer is massive parallelism, exhaustive search, selection for survivors, and skin in the game, or, putting it all together, evolution. Every living creature has skin in the ultimate game of whether it will produce offspring that inherit its characteristics. Every individual is different, and the process of reproduction introduces small variations in progeny. Change the environment, and the characteristics of those best adapted to reproduce in it will shift and, eventually, the population will consist of organisms adapted to the new circumstances. The critical thing to note is that while each organism has skin in the game, many may, and indeed must, lose the game and die before reproducing. The individual organism does not learn, but the species does and, stepping back another level, the ecosystem as a whole learns and adapts as species appear, compete, die out, or succeed and proliferate. This simple process has produced all of the complexity we observe in the natural world, and it works because every organism and species has skin in the game: its adaptation to its environment has immediate consequences for its survival.

None of this is controversial or new. What the author has done in this book is to apply this evolutionary epistemology to domains far beyond its origins in biology—in fact, to almost everything in the human experience—and demonstrate that both success and wisdom are generated when this process is allowed to work, but failure and folly result when it is thwarted by institutions which take the skin out of the game.

How does this apply in present-day human society? Consider one small example of a free market in action. The restaurant business is notoriously risky. Restaurants come and go all the time, and most innovations in the business fall flat on their face and quickly disappear. And yet most cities have, at any given time, a broad selection of restaurants with a wide variety of menus, price points, ambiance, and service to appeal to almost any taste. Each restaurant has skin in the game: those which do not attract sufficient customers (or, having once been successful, fail to adapt when customers' tastes change) go out of business and are replaced by new entrants. And yet for all the churning and risk to individual restaurants, the restaurant “ecosystem” is remarkably stable, providing customers options closely aligned with their current desires.

To a certain kind of “expert” endowed with a big brain (often crammed into a pointy head), found in abundance around élite universities and government agencies, all of this seems messy, chaotic, and (the horror!) inefficient. Consider the money lost when a restaurant fails, the cooks and waiters who lose their jobs, having to find a new restaurant to employ them, the vacant building earning nothing for its owner until a new tenant is found—certainly there must be a better way. Why, suppose instead we design a standardised set of restaurants based upon a careful study of public preferences, then roll out this highly-optimised solution to the problem. They might be called “public feeding centres”. And they would work about as well as the name implies.

Survival and Extinction

Evolution ultimately works through extinction. Individuals who are poorly adapted to their environment (or, in a free market, companies which poorly serve their customers) fail to reproduce (or, in the case of a company, survive and expand). This leaves a population better adapted to its environment. When the environment changes, or a new innovation appears (for example, electricity in an age dominated by steam power), a new sorting out occurs which may see the disappearance of long-established companies that failed to adapt to the new circumstances. It is a tautology that the current population consists entirely of survivors, but there is a deep truth within this observation which is at the heart of evolution. As long as there is a direct link between performance in the real world and survival—skin in the game—evolution will work to continually optimise and refine the population as circumstances change.

This evolutionary process works just as powerfully in the realm of ideas as in biology and commerce. Ideas have consequences, and for the process of selection to function, those consequences, good or ill, must be borne by those who promulgate the idea. Consider inventions: an inventor who creates something genuinely useful and brings it to market (recognising that there are many possible missteps and opportunities for bad luck or timing to disrupt this process) may reap great rewards which, in turn, will fund elaboration of the original invention and development of related innovations. The new invention may displace existing technologies and cause them, and those who produce them, to become obsolete and disappear (or be relegated to a minor position in the market). Both the winner and loser in this process have skin in the game, and the outcome of the game is decided by the evaluation of the customers expressed in the most tangible way possible: what they choose to buy.

Now consider an academic theorist who comes up with some intellectual “innovation” such as “Modern Monetary Theory” (which basically says that a government can print as much paper money as it wishes to pay for what it wants without collecting taxes or issuing debt as long as full employment has not been achieved). The theory and the reputation of those who advocate it are evaluated by their peers: other academics and theorists employed by institutions such as national treasuries and central banks. Such a theory is not launched into a market to fend for itself among competing theories: it is “sold” to those in positions of authority and imposed from the top down upon an economy, regardless of the opinions of those participating in it. Now, suppose the brilliant new idea is implemented and results in, say, total collapse of the economy and civil society? What price do those who promulgated the theory and implemented it pay? Little or nothing, compared to the misery of those who lost their savings, jobs, houses, and assets in the calamity. Many of the academics will have tenure and suffer no consequences whatsoever: they will refine the theory, or else publish erudite analyses of how the implementation was flawed and argue that the theory “has never been tried”. Some senior officials may be replaced, but will doubtless land on their feet and continue to pull down large salaries as lobbyists, consultants, or pundits. The bureaucrats who patiently implemented the disastrous policies are civil servants: their jobs and pensions are as eternal as anything in this mortal sphere. And, before long, another bright, new idea will bubble forth from the groves of academe.

(If you think this hypothetical example is unrealistic, see the career of one Robert Rubin. “Bob”, during his association with Citigroup between 1999 and 2009, received total compensation of US$126 million for his “services” as a director, advisor, and temporary chairman of the bank, during which time he advocated the policies which eventually brought it to the brink of collapse in 2008 and vigorously fought attempts to regulate the financial derivatives which eventually triggered the global catastrophe. During his tenure at Citigroup, shareholders of its stock lost 70% of their investment, and eventually the bank was bailed out by the federal government using money taken by coercive taxation from cab drivers and hairdressers who had no culpability in creating the problems. Rubin walked away with his “winnings” and paid no price, financial, civil, or criminal, for his actions. He is one of the many poster boys and girls for the “no skin in the game club”. And lest you think that, chastened, the academics and pointy-heads in government would regain their grounding in reality, I have just one phrase for you, “trillion dollar coin”, which “Nobel Prize” winner Paul Krugman declared to be “the most important fiscal policy debate of our lifetimes”.)

Intellectual Yet Idiot

A cornerstone of civilised society, dating from at least the Code of Hammurabi (c. 1754 B.C.), is that those who create risks must bear those risks: an architect whose building collapses and kills its owner is put to death. This is the fundamental feedback loop which enables learning. When it is broken, when those who create risks (academics, government policy makers, managers of large corporations, etc.) are able to transfer those risks to others (taxpayers, those subject to laws and regulations, customers, or the public at large), the system does not learn; evolution breaks down; and folly runs rampant. This phenomenon is manifested most obviously in the modern proliferation of the affliction the author calls the “intellectual yet idiot” (IYI). These are people who are evaluated by their peers (other IYIs), not tested against the real world. They are the equivalent of a list of movies chosen based upon the opinions of high-falutin' snobbish critics as opposed to box office receipts. They strive for the approval of others like themselves and, inevitably, spiral into ever more abstract theories disconnected from ground truth, ascending ever higher into the sky.

Many IYIs achieve distinction in one narrow field and then assume that qualifies them to pronounce authoritatively on any topic whatsoever. As was said by biographer Roy Harrod of John Maynard Keynes,

He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,

Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains.

The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilisters”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses dare to push back, having a limited tolerance for fatuous nonsense, or being scolded by those who have been consistently wrong about, well, everything, and dare vote for candidates and causes which make sense to them and seem better-aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few.

One of the reasons we seem to have many more IYIs around than we used to, and that they have more influence over our lives, is related to scaling. As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them from the blandishments of IYIs.

The Lindy Effect

How can you sift the good and the enduring from the mass of ephemeral fads and bad ideas that swirl around us every day? The Lindy effect is a powerful tool. Lindy's delicatessen in New York City was a favoured hangout for actors who observed that the amount of time a show had been running on Broadway was the best predictor of how long it would continue to run. A show that has run for three months will probably last for at least three months more. A show that has made it to the one year mark probably has another year or more to go. In other words, the best test for whether something will stand the test of time is whether it has already withstood the test of time. This may, at first, seem counterintuitive: a sixty year old person has a shorter expected lifespan remaining than a twenty year old. But the Lindy effect applies only to nonperishable things such as “ideas, books, technologies, procedures, institutions, and political systems”.

Thus, a book which has been in print continuously for a hundred years is likely to be in print a hundred years from now, while this season's hot best-seller may be forgotten a few years hence. The latest political or economic theory filling up pages in the academic journals and coming onto the radar of the IYIs in the think tanks, media punditry, and (shudder) government agencies, is likely to be forgotten and/or discredited in a few years while those with a pedigree of centuries or millennia continue to work for those more interested in results than trendiness.

Religion is Lindy. If you disregard all of the spiritual components to religion, long-established religions are powerful mechanisms to transmit accumulated wisdom, gained through trial-and-error experimentation and experience over many generations, in a ready-to-use package for people today. One disregards or scorns this distilled experience at one's own great risk. Conversely, one should be as sceptical about “innovation” in ancient religious traditions and brand-new religions as one is of shiny new ideas in any other field.

(A few more technical notes…. As I keep saying, “Once Pareto gets into your head, you'll never get him out.” It's no surprise to find that the Lindy effect is deeply related to the power-law distribution of many things in human experience. It's simply another way to say that the lifetime of nonperishable goods is distributed according to a power law just like incomes, sales of books, music, and movie tickets, use of health care services, and commission of crimes. Further, the Lindy effect is similar to J. Richard Gott's Copernican statement of the Doomsday argument, with the difference that Gott provides lower and upper bounds on survival time for a given confidence level predicted solely from a random observation that something has existed for a known time.)
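For the curious, Gott's 95% rule in its usual form says that something observed at a random moment of its life will, with 95% confidence, survive for between 1/39 and 39 times its current age. Here is a tiny Python sketch of that bound; this is my illustration of the standard formula, not a calculation from the book.

    # Gott's "Copernican" bounds: if you observe something at a random moment in
    # its lifetime, the fraction of its life already elapsed is uniformly
    # distributed, which yields confidence bounds on its remaining lifetime.
    def gott_bounds(age, confidence=0.95):
        """Lower and upper bounds on remaining lifetime at the given confidence."""
        tail = (1 - confidence) / 2
        return age * tail / (1 - tail), age * (1 - tail) / tail

    # A show that has already run for 10 years: with 95% confidence it should
    # run for between about 3 more months and 390 more years.
    print(gott_bounds(10.0))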

Uncertainty, Risk, and Decision Making

All of these observations inform dealing with risk and making decisions based upon uncertain information. The key insight is that in order to succeed, you must first survive. This may seem so obvious as to not be worth stating, but many investors, including those responsible for blow-ups which make the headlines and take many others down with them, forget this simple maxim. It is deceptively easy to craft an investment strategy which will yield modest, reliable returns year in and year out—until it doesn't. Such strategies tend to be vulnerable to “tail risks”, in which an infrequently-occurring event (such as 2008) can bring down the whole house of cards and wipe out the investor and the fund. Once you're wiped out, you're out of the game: you're like the loser in a Russian roulette tournament who, after the gun goes off, has no further worries about the probability of that event. Once you accept that you will never have complete information about a situation, you can begin to build a strategy which will prevent your blowing up under any set of circumstances, and may even be able to profit from volatility. This is discussed in more detail in the author's earlier Antifragile.

The Silver Rule

People and institutions who have skin in the game are likely to act according to the Silver Rule: “Do not do to others what you would not like them to do to you.” This rule, combined with putting the skin of those “defence intellectuals” sitting in air-conditioned offices into the games they launch in far-off lands around the world, would do much to save the lives and suffering of the young men and women they send to do their bidding.

August 2019 Permalink

Unger, Roberto Mangabeira and Lee Smolin. The Singular Universe and the Reality of Time. Cambridge: Cambridge University Press, 2015. ISBN 978-1-107-07406-4.
In his 2013 book Time Reborn (June 2013), Lee Smolin argued that, despite its extraordinary effectiveness in understanding the behaviour of isolated systems, what he calls the “Newtonian paradigm” is inadequate to discuss cosmology: the history and evolution of the universe as a whole. In this book, Smolin and philosopher Roberto Mangabeira Unger expand upon that observation and present the case that the current crisis in cosmology, with its appeal to multiple universes and mathematical structures which are unobservable, even in principle, is a consequence of the philosophical, scientific, and mathematical tools we've been employing since the dawn of science attempting to be used outside their domain of applicability, and that we must think differently when speaking of the universe as a whole, which contains all of its own causes and obeys no laws outside itself. The authors do not present their own theories to replace those of present-day cosmology (although they discuss the merits of several proposals), but rather describe their work as a “proposal in natural philosophy” which might guide investigators searching for those new theories.

In brief, the Newtonian paradigm is that the evolution of physical systems is described by differential equations which, given a set of initial conditions, permit calculating the evolution of a system in the future. Since the laws of physics at the microscopic level are reversible, given complete knowledge of the state of a system at a given time, its past can equally be determined. Quantum mechanics modifies this only in that rather than calculating the position and momentum of particles (or other observables), we calculate the deterministic evolution of the wave function which gives the probability of observing them in specific states in the future.
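As a concrete and entirely standard illustration of the paradigm (my example, not one from the book): a differential equation such as that of the harmonic oscillator, together with initial conditions, determines the future state, and because the microscopic law is reversible, running the same law with the velocity reversed recovers the past. The function name and parameter values below are hypothetical choices for the sketch.

    # The "Newtonian paradigm" in miniature: a timeless law (here a harmonic
    # oscillator, with acceleration a = -k x) plus initial conditions,
    # integrated forward in time with the velocity Verlet method.
    def evolve(x, v, k=1.0, dt=1e-3, steps=10_000):
        """Integrate dx/dt = v, dv/dt = -k x for steps*dt units of time."""
        for _ in range(steps):
            v -= 0.5 * dt * k * x   # half kick with the old position
            x += dt * v             # drift
            v -= 0.5 * dt * k * x   # half kick with the new position
        return x, v

    x0, v0 = 1.0, 0.0               # the initial conditions
    x1, v1 = evolve(x0, v0)         # the state 10 time units later
    xb, vb = evolve(x1, -v1)        # reverse the velocity and apply the same law...
    print(x1, v1)
    print(xb, -vb)                  # ...which recovers (almost exactly) x0, v0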

This paradigm divides physics into two components: laws (differential equations) and initial conditions (specification of the initial state of the system being observed). The laws, although they allow calculating the evolution of the system in time, are themselves timeless: they do not change and are unaffected by the interaction of objects. But if the laws are timeless and not subject to back-reaction by the objects whose interaction they govern, where did they come from and where do they exist? While conceding that these aren't matters which working scientists spend much time thinking about, in the context of cosmology they pose serious philosophical problems. If the universe is all that is and contains all of its own causes, there is no place for laws which are outside the universe, cannot be acted upon by objects within it, and have no apparent cause.

Further, because mathematics has been so effective in expressing the laws of physics we've deduced from experiments and observations, many scientists have come to believe that mathematics can be a guide to exploring physics and cosmology: that some mathematical objects we have explored are, in a sense, homologous to the universe, and that learning more about the mathematics can be a guide to discoveries about reality.

One of the most fundamental discoveries in cosmology, which has happened within the lifetimes of many readers of this book, including me, is that the universe has a history. When I was a child, some scientists (a majority, as I recall) believed the universe was infinite and eternal, and that observers at any time in the past or future would observe, at the largest scales, pretty much the same thing. Others argued for an origin at a finite time in the past, with the early universe having a temperature and density much greater than at present—this theory was mocked as the “big bang”. Discovery of the cosmic background radiation and objects in the distant universe which did not at all resemble those we see nearby decisively decided this dispute in favour of the big bang, and recent precision measurements have allowed determination of when it happened and how the universe evolved subsequently.

If the universe has a finite age, this makes the idea of timeless laws even more difficult to accept. If the universe is eternal, one can accept that the laws we observe have always been that way and always will be. But if the universe had an origin we can observe, how did the laws get baked into the universe? What happened before the origin we observe? If every event has a cause, what was the cause of the big bang?

The authors argue that in cosmology—a theory encompassing the entire universe—a global privileged time must govern all events. Time flows not from some absolute clock as envisioned by Newtonian physics or the elastic time of special and general relativity, but from causality: every event has one or more causes, and these causes are unique. Depending upon their position and state of motion, observers will disagree about the durations measured by their own clocks, and on the order in which things at different positions in space occurred (the relativity of simultaneity), but they will always observe a given event to have the same cause(s), which precede it. This relational notion of time, they argue, is primordial, and space may be emergent from it.

Given this absolute and privileged notion of time (which many physicists would dispute, although the authors argue it does not conflict with relativity), that time is defined by the causality of events which cause change in the universe, and that there is a single universe with nothing outside it and which contains all of its own causes, then is it not plausible to conclude that the “laws” of physics which we observe are not timeless laws somehow outside the universe or grounded in a Platonic mathematics beyond the universe, but rather have their own causes, within the universe, and are subject to change: just as there is no “unmoved mover”, there is no timeless law? The authors, particularly Smolin, suggest that just as we infer laws from observing regularities in the behaviour of systems within the universe when performing experiments in various circumstances, these laws emerge as the universe develops “habits” as interactions happen over and over. In the present cooled-down state of the universe, it's very much set in its ways, and since everything has happened innumerable times we observe the laws to be unchanging. But closer to the big bang or at extreme events in the subsequent universe, those habits haven't been established and true novelty can occur. (Indeed, simply by synthesising a protein with a hundred amino acids at random, you're almost certain to have created a molecule which has never existed before in the observable universe, and it may be harder to crystallise the first time than subsequently. This appears to be the case. This is my observation, not the authors'.)

Further, not only may the laws change, but entirely new kinds of change may occur: change itself can change. For example, on Earth, change was initially governed entirely by the laws of physics and chemistry (with chemistry ultimately based upon physics). But with the emergence of life, change began to be driven by evolution which, while at the molecular level was ultimately based upon chemistry, created structures which equilibrium chemistry never could, and dramatically changed the physical environment of the planet. This was not just change, but a novel kind of change. If it happened here, in our own recent (in cosmological time) history, why should we assume other novel kinds of change did not emerge in the early universe, or will not continue to manifest themselves in the future?

This is a very difficult and somewhat odd book. It is written in two parts, each by one of the co-authors, largely independent of one another. There is a twenty-page appendix in which the authors discuss their disagreements with one another, some of which are fundamental. I found Unger's part tedious, repetitive, and embodying all of the things I dislike about academic philosophers. He has some important things to say, but I found that slogging through almost 350 pages of it was like watching somebody beat a moose to death with an aluminium baseball bat: I believe a good editor, or even a mediocre one, could have cut this to 50 pages without losing anything, and made the argument more clearly than trying to dig it out of this blizzard of words. Lee Smolin is one of the most lucid communicators among present-day research scientists, and his part is clear, well-argued, and a delight to read; it's just that you have to slog through the swamp to get there.

While suggesting we may have been thinking about cosmology all wrong, this is not a book which proposes either an immediate theoretical or experimental programme to explore these new ideas. Instead, it intends to plant the seed that, apart from time and causality, everything may be emergent, and that when we think about the early universe we cannot rely upon the fixed framework of our cooled-down universe with its regularities. Some of this is obvious and non-controversial: before there were atoms, there was no periodic table of the elements. But was there a time before there was conservation of energy, or before locality?

September 2015 Permalink

van Dongen, Jeroen. Einstein's Unification. Cambridge: Cambridge University Press, 2010. ISBN 978-0-521-88346-7.
In 1905 Albert Einstein published four papers which transformed the understanding of space, time, mass, and energy; provided physical evidence for the quantisation of energy; and gave observational confirmation of the existence of atoms. These publications are collectively called the Annus Mirabilis papers, and vaulted the largely unknown Einstein to the top rank of theoretical physicists. When Einstein was awarded the Nobel Prize in Physics in 1921, it was for the 1905 paper which explained the photoelectric effect. Einstein's 1905 papers are masterpieces of intuitive reasoning and clear exposition, and demonstrate Einstein's technique of constructing thought experiments based upon physical observations, then deriving testable mathematical models from them. Unlike so many present-day scientific publications, Einstein's papers on special relativity and the equivalence of mass and energy were accessible to anybody with a college-level understanding of mechanics and electrodynamics, and used no special jargon or advanced mathematics. Being based on well-understood concepts, neither cited any other scientific paper.

While special relativity revolutionised our understanding of space and time, and has withstood every experimental test to which it has been subjected in the more than a century since it was formulated, it was known from inception that the theory was incomplete. It's called special relativity because it only describes the behaviour of bodies under the special case of uniform unaccelerated motion in the absence of gravity. To handle acceleration and gravitation would require extending the special theory into a general theory of relativity, and it is upon this quest that Einstein next embarked.

As before, Einstein began with a simple thought experiment. Just as in special relativity no experiment performed in a closed laboratory (one unable to observe the outside world) can determine its speed or direction of uniform (unaccelerated) motion, Einstein argued that there should be no experiment an observer could perform in a sufficiently small closed laboratory which could distinguish uniform acceleration from the effect of gravity. If one observed objects to fall with an acceleration equal to that on the surface of the Earth, the laboratory might be stationary on the Earth or aboard a space ship accelerating at a constant one gravity, and no experiment could distinguish the two situations. (The reason for the “sufficiently small” qualification is that since gravity is produced by massive objects, the direction in which a test particle falls depends upon its position with respect to the centre of gravity of the body. In a very large laboratory, objects dropped far apart would fall in different directions. This is what causes tides.)
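
To get a feel for the “sufficiently small” qualification, here is a minimal back-of-the-envelope sketch (mine, not from the book, using round illustrative values and plain Newtonian gravity) comparing the acceleration of gravity at the Earth's surface with the tidal difference in acceleration across a ten-metre laboratory. Across ten metres the difference is only a few parts per million of g, which is why a modest laboratory cannot tell gravity from uniform acceleration.

```python
# Rough estimate: surface gravity versus the tidal variation across a
# small laboratory.  Illustrative values; Newtonian approximation only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r = 6.371e6          # radius of the Earth, m

g = G * M_earth / r**2                 # surface gravity, ~9.8 m/s^2

d = 10.0                               # vertical extent of the laboratory, m
tidal = 2 * G * M_earth * d / r**3     # radial tidal difference across it, m/s^2

print(f"Surface gravity:          {g:.2f} m/s^2")
print(f"Tidal difference (10 m):  {tidal:.2e} m/s^2")   # roughly 3e-5 m/s^2
```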

Einstein called this observation the “equivalence principle”: that the effects of acceleration and gravity are indistinguishable, and that hence a theory which extended special relativity to incorporate accelerated motion would necessarily also be a theory of gravity. Einstein had originally hoped it would be straightforward to reconcile special relativity with acceleration and gravity, but the deeper he got into the problem, the more he appreciated how difficult a task he had undertaken. Thanks to the Einstein Papers Project, which is curating and publishing all of Einstein's extant work, including notebooks, letters, and other documents, the author (a participant in the project) has been able to reconstruct Einstein's ten-year search for a viable theory of general relativity.

Einstein pursued a two-track approach. The bottom up path started with Newtonian gravity and attempted to generalise it to make it compatible with special relativity. In this attempt, Einstein was guided by the correspondence principle, which requires that any new theory which explains behaviour under previously untested conditions must reproduce the tested results of existing theory under known conditions. For example, the equations of motion in special relativity reduce to those of Newtonian mechanics when velocities are small compared to the speed of light. Similarly, for gravity, any candidate theory must yield results identical to Newtonian gravitation when field strength is weak and velocities are low.
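
As a concrete illustration of the correspondence requirement (again my own sketch, not the author's, with illustrative values), one can compare the relativistic kinetic energy, (γ − 1)mc², with the Newtonian ½mv² at various speeds: the ratio approaches one as v/c goes to zero, just as any candidate theory of gravity must reproduce Newton's results in the weak-field, low-velocity limit.

```python
# Correspondence principle in miniature: relativistic kinetic energy
# (gamma - 1) m c^2 approaches the Newtonian (1/2) m v^2 as v/c -> 0.
import math

c = 2.998e8   # speed of light, m/s
m = 1.0       # test mass, kg

for frac in (0.001, 0.01, 0.1, 0.5):
    v = frac * c
    gamma = 1.0 / math.sqrt(1.0 - frac**2)
    ke_rel = (gamma - 1.0) * m * c**2
    ke_newt = 0.5 * m * v**2
    print(f"v = {frac:5.3f} c   relativistic/Newtonian KE = {ke_rel / ke_newt:.6f}")
```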

From the top down, Einstein concluded that any theory compatible with the principle of equivalence between acceleration and gravity must exhibit general covariance, which can be thought of as meaning the theory is equally valid regardless of the choice of co-ordinates (as long as they are varied without discontinuities). There are very few mathematical structures which have this property, and Einstein was drawn to Riemann's tensor geometry. Over years of work, Einstein pursued both paths, producing a bottom-up theory which was not generally covariant and which he eventually rejected as being in conflict with experiment. By November 1915 he had returned to the top-down mathematical approach, and in four papers expounded a generally covariant theory which agreed with experiment. General relativity had arrived.

Einstein's 1915 theory correctly predicted the anomalous perihelion precession of Mercury, and also predicted that starlight passing near the limb of the Sun would be deflected by twice the angle expected based on Newtonian gravitation. The latter prediction was confirmed (within a rather large margin of error) by an eclipse expedition in 1919, which made Einstein's general relativity front-page news around the world. Since then, experiments of ever-increasing precision have tested a variety of the theory's predictions, with none to date yielding results inconsistent with the theory.
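
For a sense of the numbers involved, here is a quick sketch (my own, not from the book, using approximate values for the Sun) of the deflection of light grazing the solar limb: general relativity predicts 4GM/(c²R), about 1.75 arcseconds, twice the roughly 0.87 arcseconds one obtains from a naïve Newtonian calculation.

```python
# Deflection of starlight grazing the Sun: general relativity predicts
# 4GM/(c^2 R), twice the Newtonian estimate of 2GM/(c^2 R).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the Sun, kg
R_sun = 6.957e8      # radius of the Sun, m
c = 2.998e8          # speed of light, m/s

rad_to_arcsec = 180.0 / math.pi * 3600.0

deflection_gr = 4 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec
deflection_newton = deflection_gr / 2

print(f"General relativity: {deflection_gr:.2f} arcseconds")      # ~1.75"
print(f"Newtonian estimate: {deflection_newton:.2f} arcseconds")  # ~0.87"
```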

Thus, by 1915, Einstein had produced theories of mechanics, electrodynamics, the equivalence of mass and energy, and the mechanics of bodies under acceleration and the influence of gravitational fields, and changed space and time from a fixed background in which physics occurs to a dynamical arena: “Matter and energy tell spacetime how to curve. Spacetime tells matter how to move.” What do you do, at age 36, having figured out, largely on your own, how a large part of the universe works?

Much of Einstein's work so far had consisted of unification. Special relativity unified space and time, matter and energy. General relativity unified acceleration and gravitation, gravitation and geometry. But much remained to be unified. In general relativity and classical electrodynamics there were two field theories, both defined on the continuum, both with unlimited range and an inverse square law, both exhibiting static and dynamic effects (although the details of gravitomagnetism would not be worked out until later). And yet the theories seemed entirely distinct: gravity was always attractive and worked by the bending of spacetime by matter-energy, while electromagnetism could be either attractive or repulsive, and seemed to be propagated by fields emitted by point charges—how messy.

Further, quantum theory, which Einstein's 1905 paper on the photoelectric effect had helped launch, seemed to point in a very different direction from the classical field theories in which Einstein had worked. Quantum mechanics, especially as elaborated in the “new” quantum theory of the 1920s, seemed to indicate that aspects of the universe such as electric charge were discrete, not continuous, and that physics could, even in principle, only predict the probability of the outcome of experiments, not calculate outcomes definitively from known initial conditions. Einstein never disputed the successes of quantum theory in explaining experimental results, but suspected it was a phenomenological theory which did not explain what was going on at a deeper level. (For example, the physical theory of elasticity explains experimental results and makes predictions within its domain of applicability, but it is not fundamental. All of the effects of elasticity are ultimately due to electromagnetic forces between atoms in materials. But that doesn't mean that the theory of elasticity isn't useful to engineers, or that they should do their spring calculations at the molecular level.)

Einstein undertook the search for a unified field theory, which would unify gravity and electromagnetism, just as Maxwell had unified electricity and magnetism into a single theory. In addition, Einstein believed that a unified field theory would be antecedent to quantum theory, and that the probabilistic results of quantum theory could be deduced from the more fundamental theory, which would remain entirely deterministic. From 1915 until his death in 1955, Einstein's work concentrated mostly on the quest for a unified field theory. He was aided by numerous talented assistants, many of whom went on to do important work in their own right. He explored a variety of paths to such a theory, but ultimately rejected each one, in turn, as either inconsistent with experiment or unable to explain phenomena such as point particles or quantisation of charge.

As the author documents, Einstein's approach to doing physics changed in the years after 1915. While he had previously been guided by both physics and mathematics, in retrospect he described his search for the field equations of general relativity as having followed the path of discovering the simplest and most elegant mathematical structure which could explain the observed phenomena. He thus came, like Dirac, to argue that mathematical beauty was the best guide to correct physical theories.

In the last forty years of his life, Einstein made no progress whatsoever toward a unified field theory, apart from discarding numerous paths which did not work. He explored a variety of approaches: “semivectors” (which turned out just to be a reformulation of spinors), five-dimensional models including a cylindrically compactified dimension based on Kaluza-Klein theory, and attempts to deduce the properties of particles and their quantum behaviour from nonlinear continuum field theories.

In seeking to unify electromagnetism and gravity, he ignored the strong and weak nuclear forces which had been discovered over the years and merited being included in any grand scheme of unification. In the years after World War II, many physicists ceased to worry about the meaning of quantum mechanics and the seemingly inherent randomness in its predictions which so distressed Einstein, and adopted a “shut up and calculate” approach as their computations were confirmed to ever greater precision by experiments.

So great was the respect for Einstein's achievements that only rarely was a disparaging word said about his work on unified field theories, but toward the end of his life it was outside the mainstream of theoretical physics, which had moved on to elaboration of quantum theory and making quantum theory compatible with special relativity. It would be a decade after Einstein's death before astronomical discoveries would make general relativity once again a frontier in physics.

What can we learn from the latter half of Einstein's life and his pursuit of unification? The frontier of physics today remains unification among the forces and particles we have discovered. Now we have three forces to unify (counting electromagnetism and the weak nuclear force as already unified in the electroweak force), plus two seemingly incompatible kinds of particles: bosons (carriers of force) and fermions (what stuff is made of). Six decades (to the day) after the death of Einstein, unification of gravity and the other forces remains as elusive as when he first attempted it.

It is a noble task to try to unify disparate facts and theories into a common whole. Much of our progress in the age of science has come from such unification. Einstein unified space and time; matter and energy; acceleration and gravity; geometry and motion. We all benefit every day from technologies dependent upon these fundamental discoveries. He spent the last forty years of his life seeking the next grand unification. He never found it. For this effort we should applaud him.

I must remark upon how absurd the price of this book is. At Amazon as of this writing, the hardcover is US$ 102.91 and the Kindle edition is US$ 88. Eighty-eight Yankee dollars for a 224-page book which is ranked #739,058 in the Kindle store?

April 2015 Permalink