Over My Shoulder:
John Walker's Reading List

The most frequently asked question from visitors to this site has to be “Where do you get all those crazy ideas?” Well, I read a lot of books…. Starting in January 2001, I decided to keep a list of what I've read to share with folks with similar interests. I read all kinds of stuff—technical books, science fiction, trash novels, history, fringe science, political screeds—you name it. My taste in literature is as indiscriminate as it is voracious.

A book's appearing on this list does not necessarily mean I recommend you read it, nor even that it's worth reading at all in my opinion; it simply means that I've read it. Books so awful I couldn't bear to finish are not included in the list, but that's a rare occurrence (none I can recall since 1999).

Click on titles to order books on-line from Amazon.com. As an Amazon.com associate, I earn a commission when you order books from this page.

Books I've re-read, on the other hand, are included: works sufficiently enlightening or entertaining to revisit deserve mention alongside new discoveries.

Computer books are included only if I read them cover-to-cover (or, equivalently, the whole thing in non-linear order); computer books I use as references are not included, nor are other reference books. You may consider some of the works listed here controversial and/or disreputable; their appearance does not constitute an endorsement of the views expressed in the volume. According to Shannon's information theory, you gain information only from messages which are not predictable; getting inside the head of somebody you disagree with and dissecting arguments which reach conclusions different from your own is an excellent way to broaden your perspective, if only about how others think.
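Shannon's point can be made concrete with a one-line calculation: the information conveyed by a message of probability p is its surprisal, −log₂ p bits, so a perfectly predictable message conveys nothing at all. A minimal sketch in Python (the probabilities below are invented purely for illustration):

```python
import math

def surprisal_bits(p: float) -> float:
    """Information gained from receiving a message of probability p, in bits."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return math.log2(1.0 / p)

# A message you saw coming tells you nothing new...
print(surprisal_bits(1.0))   # 0.0
# ...while one you judged only 1% likely carries log2(100), about 6.64 bits.
print(surprisal_bits(0.01))
```

Which is exactly why reading authors you already agree with teaches you so little.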

Since I live in Europe, I sometimes read books not yet available in the U.S. These books are linked to an Amazon site in the UK, France, or Germany where the book may be purchased. If you're an Amazon customer, you can order books from any Amazon subsidiary for delivery worldwide; they even already know your payment and shipping information! The only exceptions are heavily promoted bestsellers with movie or television tie-ins, and you'll rarely if ever see such books in this list.

If there's a book you'd like to recommend, there's a form at the bottom of the page for that purpose. Happy page-turning!

2001

January 2001

Goldsmith, Donald. The Runaway Universe. New York: Perseus Books, 2000. ISBN 0-7382-0068-9.

 Permalink

Rees, Martin. Just Six Numbers: The Deep Forces That Shape the Universe. New York: Basic Books, 2000. ISBN 0-465-03672-4.

 Permalink

Doran, Jamie and Piers Bizony. Starman: The Truth Behind the Legend of Yuri Gagarin. London: Bloomsbury, 1998. ISBN 0-7475-3688-0.

 Permalink

Hitchens, Christopher. No One Left to Lie To. London: Verso, 2000. ISBN 1-85984-284-4.

 Permalink

February 2001

Ward, Peter D. and Donald Brownlee. Rare Earth: Why Complex Life Is Uncommon in the Universe. New York: Copernicus, 2000. ISBN 0-387-98701-0.

 Permalink

Godwin, Robert, ed. Gemini 6: The NASA Mission Reports. Burlington, Ontario, Canada: Apogee Books, 2000. ISBN 1-896522-61-0.

 Permalink

Knuth, Donald E. Literate Programming. Stanford: Center for the Study of Language and Information, 1992. ISBN 0-937073-80-6.

 Permalink

Sowell, Thomas. Inside American Education. New York: Free Press, 1993. ISBN 0-02-930330-3.

 Permalink

Guderian, Heinz. Panzer Leader. New York: Da Capo Press, 1996. ISBN 0-306-80689-4.

 Permalink

Krauss, Lawrence. Quintessence: The Mystery of Missing Mass in the Universe. New York: Basic Books, 2000. ISBN 0-465-03740-2.

 Permalink

March 2001

Gold, Thomas. The Deep Hot Biosphere. New York: Copernicus, 1999. ISBN 0-387-98546-8.

 Permalink

Lee, Henry and Jerry Labriola. Famous Crimes Revisited. Southington CT: Strong Books, 2001. ISBN 1-928782-14-0.

 Permalink

Rucker, Rudy. The Hollow Earth. New York: Avon, 1990. ISBN 0-380-75535-1.

 Permalink

Hoyle, Fred, Geoffrey Burbidge, and Jayant V. Narlikar. A Different Approach to Cosmology. Cambridge: Cambridge University Press, 2000. ISBN 0-521-66223-0.

 Permalink

Dana, Richard Henry. Two Years Before the Mast. New York: Signet, [1840, 1869] 2000. ISBN 0-451-52759-3.

 Permalink

April 2001

Adams, Fred and Greg Laughlin. The Five Ages of the Universe. New York: The Free Press, 1999. ISBN 0-684-85422-8.

 Permalink

Cassutt, Michael. Red Moon. New York: Tor, 2001. ISBN 0-312-87440-5.

 Permalink

Kane, Gordon. Supersymmetry. New York: Perseus Publishing, 2000. ISBN 0-7382-0203-7.

 Permalink

Knuth, Donald E. and Silvio Levy. The CWEB System of Structured Documentation. Reading, MA: Addison-Wesley, 1994. ISBN 0-201-57569-8.

 Permalink

Robinson, Kim Stanley. Green Mars. London: HarperCollins, 1994. ISBN 0-586-21390-2.

 Permalink

Kranz, Gene. Failure Is Not an Option. New York: Simon & Schuster, 2000. ISBN 0-7432-0079-9.

 Permalink

May 2001

Gott, J. Richard III. Time Travel in Einstein's Universe. New York: Houghton Mifflin, 2001. ISBN 0-395-95563-7.

 Permalink

Kraft, Christopher C. Flight: My Life in Mission Control. New York: Dutton, 2001. ISBN 0-525-94571-7.

 Permalink

Barrow, John D. The Book of Nothing. New York: Pantheon Books, 2000. ISBN 0-375-42099-1.

 Permalink

Schildt, Herbert. STL Programming from the Ground Up. Berkeley: Osborne, 1999. ISBN 0-07-882507-5.

 Permalink

Bovard, James. Feeling Your Pain. New York: St. Martin's Press, 2000. ISBN 0-312-23082-6.

 Permalink

Cassutt, Michael. Missing Man. New York: Tor, 1998. ISBN 0-8125-7786-8.

 Permalink

Scheider, Walter. A Serious But Not Ponderous Book About Nuclear Energy. Ann Arbor MI: Cavendish Press, 2001. ISBN 0-9676944-2-6.

 Permalink

June 2001

Callender, Craig and Nick Huggett, eds. Physics Meets Philosophy at the Planck Scale. Cambridge: Cambridge University Press, 2001. ISBN 0-521-66445-4.

 Permalink

Duesberg, Peter H. Inventing the AIDS Virus. Washington: Regnery, 1996. ISBN 0-89526-470-6.

 Permalink

Sammon, Bill. At Any Cost. New York: Regnery Publishing, 2001. ISBN 0-89526-227-4.

 Permalink

Douglas, John and Mark Olshaker. The Cases that Haunt Us. New York: Scribner, 2000. ISBN 0-684-84600-4.

 Permalink

Peron, Jim. Zimbabwe: The Death of a Dream. Johannesburg: Amagi Books, 2000. ISBN 0-620-26191-9.

 Permalink

Bloom, Allan. The Closing of the American Mind. New York: Touchstone Books, 1988. ISBN 0-671-65715-1.

 Permalink

July 2001

Smith, Michael. Station X. New York: TV Books, 1999. ISBN 1-57500-094-6.

 Permalink

Benford, Gregory. Timescape. New York: Bantam Books, 1980. ISBN 0-553-29709-0.

 Permalink

Radosh, Ronald. Commies. San Francisco: Encounter Books, 2001. ISBN 1-893554-05-8.

 Permalink

Einstein, Albert. Autobiographical Notes. Translated and edited by Paul Arthur Schilpp. La Salle, Illinois: Open Court, [1949] 1996. ISBN 0-8126-9179-2.

 Permalink

Erdman, Paul. The Set-Up. New York: St. Martin's, 1998. ISBN 0-312-96805-1.

 Permalink

Pinnock, Don. Gangs, Rituals and Rites of Passage. Cape Town: African Sun Press, 1997. ISBN 1-874915-08-3.

 Permalink

Aczel, Amir D. The Mystery of the Aleph. New York: Four Walls Eight Windows, 2000. ISBN 1-56858-105-X.

 Permalink

August 2001

Gray, Jeremy J. The Hilbert Challenge. Oxford: Oxford University Press, 2000. ISBN 0-19-850651-1.

 Permalink

Grisham, John. The Brethren. New York: Island Books, 2001. ISBN 0-440-23667-3.

 Permalink

Pellegrino, Charles. Ghosts of the Titanic. New York: Avon, 2000. ISBN 0-380-72472-3.

 Permalink

Verne, Jules. Autour de la lune. Paris: Poche, [1870] 1974. ISBN 2-253-00587-8.
Now available online at this site.

 Permalink

Forsyth, Frederick. The Fourth Protocol. New York: Bantam Books, 1985. ISBN 0-553-25113-9.

 Permalink

Smith, Michael. The Emperor's Codes. New York: Arcade Publishing, 2000. ISBN 1-55970-568-X.

 Permalink

September 2001

Burkett, B.G. and Glenna Whitley. Stolen Valor: How the Vietnam Generation Was Robbed of Its Heroes and Its History. Dallas: Verity Press, 1998. ISBN 1-56530-284-2.

 Permalink

Wade, Wyn Craig. The Titanic: End of a Dream. New York: Penguin, 1986. ISBN 0-14-016691-2.

 Permalink

Aratus of Soli. Phænomena. Edited, with introduction, translation, and commentary by Douglas Kidd. Cambridge: Cambridge University Press, [c. 275 B.C.] 1997. ISBN 0-521-58230-X.

 Permalink

Barks, Carl. A Cold Bargain. Prescott, AZ: Gladstone, [1957, 1960] 1989. ISBN 0-944599-24-9.

 Permalink

Latour, Bruno and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton: Princeton University Press, 1986. ISBN 0-691-02832-X.

 Permalink

Hall, Eldon C. Journey to the Moon: The History of the Apollo Guidance Computer. Reston, VA: AIAA, 1996. ISBN 1-56347-185-X.

 Permalink

October 2001

Ferro, Marc. Suez — 1956. Bruxelles: Éditions Complexe, 1982. ISBN 2-87027-101-8.

 Permalink

Pickover, Clifford A. Surfing through Hyperspace. Oxford: Oxford University Press, 1999. ISBN 0-19-514241-1.

 Permalink

Hergé [Georges Remi]. Les aventures de Tintin au pays des Soviets. Bruxelles: Casterman, [1930] 1999. ISBN 2-203-00100-3.

 Permalink

Mauldin, Bill. Back Home. Mattituck, New York: Amereon House, 1947. ISBN 0-89190-856-0.

 Permalink

Pickover, Clifford A. Black Holes: A Traveler's Guide. New York: John Wiley & Sons, 1998. ISBN 0-471-19704-1.

 Permalink

Churchill, Winston S. and Dwight D. Eisenhower. The Churchill-Eisenhower Correspondence, 1953–1955. Edited by Peter G. Boyle. Chapel Hill, NC: University of North Carolina Press, 1990. ISBN 0-8078-4951-0.

 Permalink

Hart-Davis, Adam. Eurekaaargh! A Spectacular Collection of Inventions that Nearly Worked. London: Michael O'Mara Books, 1999. ISBN 1-85479-484-1.

 Permalink

Darling, David J. Life Everywhere: The Maverick Science of Astrobiology. New York: Basic Books, 2001. ISBN 0-465-01563-8.

 Permalink

November 2001

Rogers, Lesley. Sexing the Brain. New York: Columbia University Press, 2001. ISBN 0-231-12010-9.

 Permalink

Roberts, Russell. The Invisible Heart: An Economic Romance. Cambridge, MA: MIT Press, 2001. ISBN 0-262-18210-6.

 Permalink

Linenger, Jerry M. Off the Planet. New York: McGraw-Hill, 2000. ISBN 0-07-137230-X.

 Permalink

Kaku, Michio. Hyperspace. New York: Anchor Books, 1994. ISBN 0-385-47705-8.

 Permalink

Aron, Leon. Yeltsin: A Revolutionary Life. New York: St. Martin's, 2000. ISBN 0-312-25185-8.

 Permalink

Pratchett, Terry and Gray Jolliffe. The Unadulterated Cat. London: Orion Books, 1989. ISBN 0-7528-3715-X.

 Permalink

December 2001

Hersh, Seymour M. The Samson Option. London: Faber and Faber, 1991, 1993. ISBN 0-571-16819-1.

 Permalink

Wendt, Guenter and Russell Still. The Unbroken Chain. Burlington, Canada: Apogee Books, 2001. ISBN 1-896522-84-X.

 Permalink

Hicks, Roger and Frances Schultz. Medium and Large Format Photography. New York: Amphoto Books, 2001. ISBN 0-8174-4557-9.

 Permalink

De-la-Noy, Michael. Scott of the Antarctic. Stroud, Gloucestershire, England: Sutton Publishing, 1997. ISBN 0-7509-1512-9.

 Permalink

Barnum, Phineas T. Art of Money Getting. Bedford, Massachusetts: Applewood Books, [1880] 1999. ISBN 1-55709-494-2.
Now available online at this site.

 Permalink

Morgan, Elaine. The Scars of Evolution. Oxford: Oxford University Press, [1990] 1994. ISBN 0-19-509431-X.

 Permalink

Smith, L. Neil. The WarDove. Culver City, California: Pulpless.Com, [1986] 1999. ISBN 1-58445-027-4.

 Permalink

Mauldin, Bill. Up Front. New York: W. W. Norton, [1945] 2000. ISBN 0-393-05031-9.

 Permalink

Bjornson, Adrian. A Universe that We Can Believe. Woburn, Massachusetts: Addison Press, 2000. ISBN 0-9703231-0-7.

 Permalink

2002

January 2002

Rhodes, Richard. Why They Kill. New York: Vintage Books, 1999. ISBN 0-375-70248-2.

 Permalink

Seife, Charles. Zero: The Biography of a Dangerous Idea. New York: Penguin Books, 2000. ISBN 0-14-029647-6.

 Permalink

Jones, Peter. The 1848 Revolutions. 2nd ed. Harlow, England: Pearson Education, 1991. ISBN 0-582-06106-7.

 Permalink

Goldberg, Bernard. Bias. Washington: Regnery Publishing, 2002. ISBN 0-89526-190-1.

 Permalink

Hawking, Stephen. The Universe in a Nutshell. New York: Bantam Books, 2001. ISBN 0-553-80202-X.

 Permalink

Buchanan, Patrick J. The Death of the West. New York: Thomas Dunne Books, 2002. ISBN 0-312-28548-5.

 Permalink

Conquest, Robert. The Great Terror: A Reassessment. New York: Oxford University Press, 1990. ISBN 0-19-507132-8.

 Permalink

February 2002

Ricks, Thomas E. Making the Corps. New York: Touchstone Books, 1998. ISBN 0-684-84817-1.

 Permalink

Warraq, Ibn [pseud.]. Why I Am Not a Muslim. Amherst, NY: Prometheus Books, 1995. ISBN 0-87975-984-4.

 Permalink

Smith, L. Neil. The American Zone. New York: Tor Books, 2001. ISBN 0-312-87369-7.

 Permalink

Adams, Scott. God's Debris: A Thought Experiment. Kansas City: Andrews McMeel, 2001. ISBN 0-7407-2190-9.

 Permalink

Maor, Eli. e: The Story of a Number. Princeton: Princeton University Press, [1994] 1998. ISBN 0-691-05854-7.

 Permalink

Hanson, Victor Davis. Carnage and Culture. New York: Doubleday, 2001. ISBN 0-385-50052-1.

 Permalink

March 2002

Noonan, Peggy. When Character Was King. New York: Viking, 2001. ISBN 0-670-88235-6.

 Permalink

Behe, Michael J., William A. Dembski, and Stephen C. Meyer. Science and Evidence for Design in the Universe. San Francisco: Ignatius Press, 2000. ISBN 0-89870-809-5.

 Permalink

Zamyatin, Yevgeny. We. Translated by Mirra Ginsburg. New York: Eos Books, [1921] 1999. ISBN 0-380-63313-2.

 Permalink

Yates, Raymond F. A Boy and a Battery. rev. ed. New York: Harper & Row, 1959. ISBN 0-06-026651-1.

 Permalink

Shull, Jim. The Beginner's Guide to Pinhole Photography. Buffalo, NY: Amherst Media, 1999. ISBN 0-936262-70-2.

 Permalink

Zelman, Aaron and L. Neil Smith. Hope. Hartford, WI: Mazel Freedom Press, 2001. ISBN 0-9642304-5-3.

 Permalink

Chomsky, Noam. Year 501: The Conquest Continues. Boston: South End Press, 1993. ISBN 0-89608-444-2.

 Permalink

Smith, L. Neil. Lever Action. Las Vegas: Mountain Media, 2001. ISBN 0-9670259-1-5.

 Permalink

Adams, Ansel. Examples: The Making of 40 Photographs. Boston: Little, Brown, 1983. ISBN 0-8212-1750-X.

 Permalink

Webb, James. Fields of Fire. New York: Bantam Books, [1978] 2001. ISBN 0-553-58385-9.

 Permalink

April 2002

McConnell, Brian. Beyond Contact: A Guide to SETI and Communicating with Alien Civilizations. Sebastopol, CA: O'Reilly, 2001. ISBN 0-596-00037-5.

 Permalink

Barks, Carl. The Sunken City and Luck of the North. Prescott, AZ: Gladstone, [1949, 1954] 1989. ISBN 0-944599-27-3.

 Permalink

Yates, Raymond F. Atomic Experiments for Boys. New York: Harper & Brothers, 1952. LCCN 52-007879.
This book is out of print. You may be able to locate a copy through abebooks.com; that's where I found mine.

 Permalink

Lamb, David. The Africans. New York: Vintage Books, 1987. ISBN 0-394-75308-9.

 Permalink

Koman, Victor. Kings of the High Frontier. Centreville, VA: Final Frontier, 1998. ISBN 0-9665662-0-3.

 Permalink

Simmons, Steve. Using the View Camera. rev. ed. New York: AMPHOTO, 1992. ISBN 0-8174-6353-4.

 Permalink

Bastiat, Frédéric. The Law. 2nd ed. Translated by Dean Russell. Irvington-on-Hudson, NY: Foundation for Economic Education, [1850, 1950] 1998. ISBN 1-57246-073-3.
You may be able to obtain this book more rapidly directly from the publisher. The original French text, this English translation, and a Spanish translation are available online.

 Permalink

Clarke, Arthur C. and Michael Kube-McDowell. The Trigger. New York: Bantam Books, [1999] 2000. ISBN 0-553-57620-8.

 Permalink

Toole, John Kennedy. A Confederacy of Dunces. New York: Grove Press, [1982] 1987. ISBN 0-8021-3020-8.

 Permalink

May 2002

Richelson, Jeffrey T. The Wizards of Langley: Inside the CIA's Directorate of Science and Technology. Boulder, CO: Westview Press, 2001. ISBN 0-8133-6699-2.

 Permalink

Hayek, Friedrich A. The Road to Serfdom. Chicago: University of Chicago Press, [1944] 1994. ISBN 0-226-32061-8.

 Permalink

Pickover, Clifford A. Time: A Traveler's Guide. Oxford: Oxford University Press, 1998. ISBN 0-19-513096-0.

 Permalink

Satter, David. Age of Delirium: The Decline and Fall of the Soviet Union. New Haven, CT: Yale University Press, [1996] 2001. ISBN 0-300-08705-5.

 Permalink

Stewart, Ian. Flatterland. Cambridge, MA: Perseus Publishing, 2001. ISBN 0-7382-0442-0.

 Permalink

Wells, H. G. The Time Machine. London: Everyman, [1895, 1935] 1995. ISBN 0-460-87735-6.
Now available online at this site.

 Permalink

Stumpf, David K. Titan II: A History of a Cold War Missile Program. Fayetteville, AR: The University of Arkansas Press, 2000. ISBN 1-55728-601-9.

 Permalink

June 2002

Lamb, David. The Arabs. 2nd ed. New York: Vintage Books, 2002. ISBN 1-4000-3041-2.

 Permalink

Harris, Robert. Fatherland. New York: Harper, [1992] 1995. ISBN 0-06-100662-9.

 Permalink

Visser, Matt. Lorentzian Wormholes: From Einstein to Hawking. New York: Springer-Verlag, 1996. ISBN 1-56396-653-0.

 Permalink

Charpak, Georges and Henri Broch. Devenez sorciers, devenez savants. Paris: Odile Jacob, 2002. ISBN 2-7381-1093-2.

 Permalink

Hoppe, Hans-Hermann. Democracy: The God That Failed. New Brunswick, NJ: Transaction Publishers, 2001. ISBN 0-7658-0868-4.
As of June 2002, the paperback edition of this book cited above is in short supply. The hardcover, ISBN 0-7658-0088-8, remains generally available.

 Permalink

Nugent, Ted and Shemane Nugent. Kill It and Grill It. Washington: Regnery Publishing, 2002. ISBN 0-89526-164-2.

 Permalink

Fallaci, Oriana. La rage et l'orgueil. Paris: Plon, 2002. ISBN 2-259-19712-4.
An English translation of this book was published in October 2002.

 Permalink

July 2002

Dyson, George. Project Orion: The True Story of the Atomic Spaceship. New York: Henry Holt, 2002. ISBN 0-8050-5985-7.

 Permalink

LaHaye, Tim and Jerry B. Jenkins. Left Behind. Wheaton, IL: Tyndale House, 1995. ISBN 0-8423-2912-9.

 Permalink

Light, Michael and Andrew Chaikin. Full Moon. New York: Alfred A. Knopf, 1999. ISBN 0-375-40634-4.

 Permalink

Baer, Robert. See No Evil. New York: Crown Publishers, 2002. ISBN 0-609-60987-4.

 Permalink

Lips, Ferdinand. Gold Wars. New York: FAME, 2001. ISBN 0-9710380-0-7.

 Permalink

LaHaye, Tim and Jerry B. Jenkins. Tribulation Force. Wheaton, IL: Tyndale House, 1996. ISBN 0-8423-2921-8.

 Permalink

Fregosi, Paul. Jihad in the West. Amherst, NY: Prometheus Books, 1998. ISBN 1-57392-247-1.

 Permalink

Adams, Ansel. Singular Images. Dobbs Ferry, NY: Morgan & Morgan, 1974. ISBN 0-87100-046-6.

 Permalink

Brink, Anthony. Debating AZT: Mbeki and the AIDS Drug Controversy. Pietermaritzburg, South Africa: Open Books, 2000. ISBN 0-620-26177-3.
I bought this volume in a bookshop in South Africa; none of the principal on-line booksellers have ever heard of it. The complete book is now available on the Web.

 Permalink

Meyssan, Thierry. L'effroyable imposture. Chatou, France: Editions Carnot, 2002. ISBN 2-912362-44-X.
An English translation of this book was published in August 2002.

 Permalink

Deary, Terry. The Cut-Throat Celts. London: Hippo, 1997. ISBN 0-590-13972-X.

 Permalink

August 2002

DiLorenzo, Thomas J. The Real Lincoln. Roseville, CA: Prima Publishing, 2002. ISBN 0-7615-3641-8.

 Permalink

Caldwell, Brian. We All Fall Down. Haverford, PA: Infinity Publishing.Com, 2001. ISBN 0-7414-0499-0.

 Permalink

Thompson, Milton O. and Curtis Peebles. Flying Without Wings: NASA Lifting Bodies and the Birth of the Space Shuttle. Washington: Smithsonian Institution Press, 1999. ISBN 1-56098-832-0.

 Permalink

Pratchett, Terry. The Amazing Maurice and His Educated Rodents. New York: HarperCollins, 2001. ISBN 0-06-001233-1.

 Permalink

Radosh, Ronald and Joyce Milton. The Rosenberg File. 2nd ed. New Haven, CT: Yale University Press, 1997. ISBN 0-300-07205-8.

 Permalink

Smith, Edward E. The Skylark of Space. Lincoln, NE: University of Nebraska Press, [1928, 1946, 1947, 1950, 1958] 2001. ISBN 0-8032-9286-4.
“Doc” Smith revised the original 1928 edition of this book for each of four subsequent editions. This “Commemorative Edition” is a reprint of the most recent (1958) revision. It contains a variety of words: “fission”, “fusion”, “megaton”, “neutron”, etc., which did not figure in the English language when the novel was completed in 1920 (it was not published until 1928). Earlier editions may have more of a “golden age” feel, but this was Smith's last word on the story. The original illustrations by O. G. Estes Jr. are reproduced, along with an introduction by Vernor Vinge which manages to misspell protagonist Richard Seaton's name throughout.

 Permalink

Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. ISBN 1-57955-008-8.
The full text of this book may now be read online.

 Permalink

September 2002

Hey, Anthony J. G., ed. Feynman and Computation. Boulder, CO: Westview Press, 2002. ISBN 0-8133-4039-X.

 Permalink

Prechter, Robert R., Jr. Conquer the Crash. Chichester, England: John Wiley & Sons, 2002. ISBN 0-470-84982-7.

 Permalink

Hackworth, David H. and Eilhys England. Steel My Soldiers' Hearts. New York: Rugged Land, 2002. ISBN 1-59071-002-9.

 Permalink

Dalrymple, Theodore. Life at the Bottom. Chicago: Ivan R. Dee, 2001. ISBN 1-56663-382-6.

 Permalink

Kyne, Peter B. The Go-Getter. New York: Henry Holt, 1921. ISBN 0-8050-0548-X.

 Permalink

Veeck, Bill and Ed Lynn. Thirty Tons a Day. New York: Viking, 1972. ISBN 0-670-70157-2.
This book is out of print. Used copies are generally available at Amazon.com.

 Permalink

Wells, H. G. The Last War. Lincoln, NE: University of Nebraska Press, [1914] 2001. ISBN 0-8032-9820-X.
This novel was originally published in 1914 as The World Set Free. Only the title has been changed in this edition.

 Permalink

Yates, Raymond F. A Boy and a Motor. New York: Harper & Brothers, 1944. LCCN 44-002179; ASIN 0-060-26666-X.
This book is out of print and used copies are not abundant. The enterprising young electrician who comes up empty-handed at the link above is encouraged to also check abebooks.com.

 Permalink

Brimelow, Peter. Alien Nation. New York: HarperPerennial, 1996. ISBN 0-06-097691-8.

 Permalink

October 2002

Verne, Jules. La chasse au météore. Version d'origine. Paris: Éditions de l'Archipel, [1901, 1986] 2002. ISBN 2-84187-384-6.
This novel, one of three written by Verne in 1901, remained unpublished at the time of his death in 1905. At the behest of Verne's publisher, Jules Hetzel, Verne's son Michel “revised” the text in an attempt to recast what Verne intended as a satirical work into the mold of an “Extraordinary Adventure”, butchering it in the opinion of many Verne scholars. In 1978 the original handwritten manuscript was discovered among a collection of Verne's papers. This edition, published under the direction of the Société Jules Verne, reproduces that text, and is the sole authentic edition. As of this writing, no English translation is available—all existing English editions are based upon the Michel Verne “revision”.

 Permalink

Heimann, Jim, ed. Future Perfect. Köln, Germany: TASCHEN, 2002. ISBN 3-8228-1566-7.

 Permalink

Coulter, Ann. Slander. New York: Crown Publishers, 2002. ISBN 1-4000-4661-0.

 Permalink

Ferro, Marc. Le choc de l'Islam. Paris: Odile Jacob, 2002. ISBN 2-7381-1146-7.

 Permalink

Gertz, Bill. Breakdown. Washington: Regnery Publishing, 2002. ISBN 0-89526-148-0.

 Permalink

Clancy, Tom. Red Rabbit. New York: G.P. Putnam's Sons, 2002. ISBN 0-399-14870-1.

 Permalink

Bussjaeger, Carl. Net Assets. Internet: North American Samizdat, 2002.
Net Assets is published as an electronic book which can be purchased on-line and downloaded in a variety of formats. Befitting its anarcho-libertarian theme, you can even pay for it with real money.

 Permalink

Fingleton, Eamonn. In Praise of Hard Industries. New York: Houghton Mifflin, 1999. ISBN 0-395-89968-0.
On page 39, Autodesk is cited as an example of a non-hard industry undeserving of praise. Dunno—didn't seem all that damned easy to me at the time.

 Permalink

Stafford, Thomas P. with Michael Cassutt. We Have Capture. Washington: Smithsonian Institution Press, 2002. ISBN 1-58834-070-8.

 Permalink

November 2002

Muravchik, Joshua. Heaven on Earth: The Rise and Fall of Socialism. San Francisco: Encounter Books, 2002. ISBN 1-893554-45-7.

 Permalink

Koman, Victor. Solomon's Knife. Mill Valley, CA: Pulpless.Com, [1989] 1999. ISBN 1-58445-072-X.

 Permalink

Shayler, David J. Apollo: The Lost and Forgotten Missions. London: Springer-Praxis, 2002. ISBN 1-85233-575-0.
Space history buffs will find this well-documented volume fully as fascinating as the title suggests. For a Springer publication, there are a dismaying number of copyediting errors, but I noted only a few errors of fact.

 Permalink

Guderian, Heinz. Achtung—Panzer!. Translated by Christopher Duffy. London: Arms & Armour Press, [1937] 1995. ISBN 1-85409-282-0.
This edition is presently out of print in the U.S., but used copies are generally available. The U.K. edition, ISBN 0-304-35285-3, identical except for the cover, remains in print.

 Permalink

Todd, Emmanuel. Après l'empire. Paris: Gallimard, 2002. ISBN 2-07-076710-8.
An English translation is scheduled to be published in January 2004.

 Permalink

December 2002

Rucker, Rudy. As Above, So Below. New York: Forge, 2002. ISBN 0-7653-0403-1.
If you enjoy this novel as much as I did, you'll probably also want to read Rudy's notes on the book.

 Permalink

Gladwell, Malcolm. The Tipping Point. Boston: Little, Brown, 2000. ISBN 0-316-31696-2.

 Permalink

Parker, Ian. Complete Rollei TLR User's Manual. Faringdon, England: Hove Foto Books, 1994. ISBN 1-874031-96-7.

 Permalink

Edmonds, David and John Eidinow. Wittgenstein's Poker. London: Faber and Faber, 2001. ISBN 0-571-20909-2.
A U.S. edition of this book, ISBN 0-06-093664-9, was published in September 2002.

 Permalink

Dornberger, Walter. V-2. Translated by James Cleugh and Geoffrey Halliday. New York: Ballantine Books, [1952] 1954. LCCN 54-007830.
This book has been out of print for more than forty years. Used copies are generally available via abebooks.com, but the original Viking Press hardcover can be quite expensive. It's wisest to opt for the mass-market Ballantine paperback reprint; copies in perfectly readable condition can usually be had for about US$5.

Dornberger's account is an insider's view of Peenemünde. For an historical treatment with more technical detail plus a description of postwar research using the V-2, see Ordway and Sharpe's 1979 The Rocket Team, ISBN 0-262-65013-4, also out of print but readily available used.

 Permalink

Hitchens, Christopher. Why Orwell Matters. New York: Basic Books, 2002. ISBN 0-465-03049-1.

 Permalink

Burkett, Elinor. Another Planet. New York: HarperCollins, 2001. ISBN 0-06-050585-0.

 Permalink

Smith, Edward E. Skylark Three. New York: Pyramid Books, [1930, 1948] 1963. ISBN 0-515-02233-0.
This book is out of print; use the link above to locate used paperback copies, which are cheap and abundant. An illustrated reprint edition is scheduled for publication in 2003 by the University of Nebraska Press as ISBN 0-8032-9303-8.

 Permalink

Graham, Kathryn A. Flight from Eden. Lincoln, NE: Writer's Showcase, 2001. ISBN 0-595-19940-2.

 Permalink

Trefil, James. Are We Unique?. New York: John Wiley & Sons, 1997. ISBN 0-471-24946-7.

 Permalink

2003

January 2003

Engels, Friedrich. The Condition of the Working Class in England. Translated by Florence Wischnewetzky; edited with a foreword by Victor Kiernan. London: Penguin Books, [1845, 1886, 1892] 1987. ISBN 0-14-044486-6.
A Web edition of this title is available online.

 Permalink

How, Edith A. People of Africa. London: Society for Promoting Christian Knowledge, 1921.
This book was found in a Cairo bookbinder's shop; I know of no source for printed copies, but an electronic edition is now available online at this site.

 Permalink

Kelly, Thomas J. Moon Lander. Washington: Smithsonian Institution Press, 2001. ISBN 1-56098-998-X.

 Permalink

Leeson, Nick with Edward Whitley. Rogue Trader. London: Warner Books, 1996. ISBN 0-7515-1708-9.

 Permalink

Crichton, Michael. Prey. New York: HarperCollins, 2002. ISBN 0-06-621412-2.

 Permalink

Erasmus, Desiderius. The Praise of Folly. Translated, with an introduction and commentary by Clarence H. Miller. New Haven, CT: Yale University Press, [1511, 1532] 1979. ISBN 0-300-02373-1.
This edition translates the Moriae Encomium into very colloquial American English. The effect is doubtless comparable to the original Latin on a contemporary reader (one, that is, who grasped the thousands of classical and scriptural allusions in the text, all nicely annotated here), but still it's somewhat jarring to hear Erasmus spout phrases such as “fit as a fiddle”, “bull [in] a china shop”, and “x-ray vision”. If you prefer a little more gravitas in your Erasmus, check out the 1688 English translation and the original Latin text available online at the Erasmus Text Project. After the first unauthorised edition was published in 1511, Erasmus revised the text for each of seven editions published between 1512 and 1532; the bulk of the changes were in the 1514 and 1516 editions. This translation is based on the 1532 edition published at Basel, and identifies the changes since 1511, giving the date of each.

 Permalink

Magueijo, João. Faster Than the Speed of Light. Cambridge, MA: Perseus Books, 2003. ISBN 0-7382-0525-7.

 Permalink

Gordon, Deborah M. Ants at Work. New York: The Free Press, 1999. ISBN 0-684-85733-2.

 Permalink

Liddy, G. Gordon. When I Was a Kid, This Was a Free Country. Washington: Regnery Publishing, 2002. ISBN 0-89526-175-8.

 Permalink

Orwell, George. Homage to Catalonia. San Diego: Harcourt Brace, [1938, 1952] 1987. ISBN 0-15-642117-8.
The orwell.ru site makes available electronic editions of this work in both English and Russian, which you can read online or download to read at your leisure. All of Orwell's works are in the public domain under Russia's 50-year copyright law.

 Permalink

Christensen, Mark. Build the Perfect Beast. New York: St. Martin's Press, 2001. ISBN 0-312-26873-4.
Here's the concept: a bunch of Southern California morons set out to reinvent the automobile in the 1990s. This would be far more amusing were it not written by one of them, who remains, after all the misadventures recounted in the text, fully as clueless as at the get-go, and enormously less irritating had his editor at St. Martin's Press—a usually respectable house—construed his mandate to extend beyond running the manuscript through a spelling checker. Three- and four-letter words are misspelled; technical terms are rendered phonetically (“Nacca-duct”, p. 314; “tinsel strength”, p. 369); factual howlers of all kinds litter the pages; and even the spelling of principal characters' names varies from page to page—on page 6 one person's name is spelled two different ways within five lines. This may be the only book ever issued by a major publisher which manages to misspell “Popsicle” in two entirely different ways (pp. 234, 350). When you fork out US$26.95 for a book, you deserve something better than a first-draft manuscript between hard covers. I've fact-checked many a manuscript with fewer errors than this book.

 Permalink

February 2003

Orizio, Riccardo. Talk of the Devil: Encounters with Seven Dictators. Translated by Avril Bardoni. London: Secker & Warburg, 2003. ISBN 0-436-20999-3.
A U.S. edition was published in April 2003.

 Permalink

Scully, Matthew. Dominion. New York: St. Martin's Press, 2002. ISBN 0-312-26147-0.

 Permalink

Ward, Peter D. and Donald Brownlee. The Life and Death of Planet Earth. New York: Times Books, 2003. ISBN 0-8050-6781-7.

 Permalink

Rose, Michael S. Goodbye, Good Men. Washington: Regnery Publishing, 2002. ISBN 0-89526-144-8.

 Permalink

Rosen, Milton W. The Viking Rocket Story. New York: Harper & Brothers, 1955. LCCN 55-006592.
This book is out of print. You can generally find used copies at abebooks.com.

 Permalink

Spencer, Robert. Islam Unveiled. San Francisco: Encounter Books, 2002. ISBN 1-893554-58-9.

 Permalink

Harris, Robert. Archangel. London: Arrow Books, 1999. ISBN 0-09-928241-0.
A U.S. edition is also in print.

 Permalink

Heinlein, Robert A. Have Space Suit—Will Travel. New York: Del Rey, [1958] 1977. ISBN 0-345-32441-2.

 Permalink

Hester, Elliott. Plane Insanity. New York: St. Martin's Press, 2002. ISBN 0-312-26958-7.

 Permalink

Hagen, Rose-Marie and Rainer Hagen. Bruegel: The Complete Paintings. Translated by Michael Claridge. Köln, Germany: TASCHEN, 2000. ISBN 3-8228-5991-5.

 Permalink

March 2003

Chancellor, Henry. Colditz. New York: HarperCollins, 2001. ISBN 0-06-001252-8.

 Permalink

Gémignani, Anne-Marie. Une femme au royaume des interdits. Paris: Presses de la Renaissance, 2003. ISBN 2-85616-888-4.

 Permalink

Carter, Bill and Merri Sue Carter. Latitude. Annapolis: Naval Institute Press, 2002. ISBN 1-55750-016-9.
Although I bought this book from Amazon, recently it's shown there as “out of stock”; you may want to order it directly from the publisher. Naturally, you'll also want to read Dava Sobel's 1995 Longitude, which I read before I began keeping this list.

 Permalink

Postrel, Virginia. The Future and Its Enemies. New York: Touchstone Books, 1998. ISBN 0-684-86269-7.
Additional references, updates, and a worth-visiting blog related to the topics discussed in this book are available at the author's Web site, www.dynamist.com.

 Permalink

Smith, Edward E. Skylark of Valeron. New York: Pyramid Books, [1934, 1935, 1949] 1963. LCCN 49-008714.
This book is out of print; use the link above to locate used copies. Paperbacks published in the 1960s and 70s are available in perfectly readable condition at modest cost—compare the offers, however, since some sellers quote outrageous prices for these mass-market paperbacks. University of Nebraska Press are in the process of re-issuing “Doc” Smith's Skylark novels, but they haven't yet gotten to this one.

 Permalink

Thompson, Hunter S. Kingdom of Fear. New York: Simon & Schuster, 2003. ISBN 0-684-87323-0.
Autodesk old-timers who recall the IPO era will find the story recounted on pages 153–157 amusing, particularly those also present at the first encounter.

 Permalink

Furland, Gerald K. Transfer. Chattanooga, TN: Intech Media, 1999. ISBN 0-9675322-0-5.
This novel is set in the U.S. during the implementation of technology similar to that described in my 1994 Unicard paper. This is one of those self-published, print-on-demand jobs: better than most. It reads like the first volume of a trilogy of which the balance has yet to appear. The cover price of US$19.95 is outrageous; Amazon sell it for US$9.99. Why do these authors have such difficulty correctly employing the possessive case?

 Permalink

Hitchens, Christopher. The Missionary Position: Mother Teresa in Theory and Practice. London: Verso, 1995. ISBN 1-85984-054-X.

 Permalink

Ekers, Ronald D. et al., eds. SETI 2020. Mountain View, CA: SETI Institute, 2002. ISBN 0-9666335-3-9.

 Permalink

Allin, Michael. Zarafa. New York: Walker and Company, 1998. ISBN 0-385-33411-7.

 Permalink

April 2003

Berman, Morris. The Twilight of American Culture. New York: W. W. Norton, 2000. ISBN 0-393-32169-X.

 Permalink

Williams, Jonathan, Joe Cribb, and Elizabeth Errington, eds. Money: A History. London: British Museum Press, 1997. ISBN 0-312-21212-7.

 Permalink

Schneider, Ben Ross, Jr. Travels in Computerland. Reading, MA: Addison-Wesley, 1974. ISBN 0-201-06737-4.
It's been almost thirty years since I first read this delightful little book, which is now sadly out of print. It's well worth the effort of tracking down a used copy. You can generally find one in readable condition for a reasonable price through the link above or through abebooks.com. If you're too young to have experienced the mainframe computer era, here's an illuminating and entertaining view of just how difficult it was to accomplish anything back then; for those of us who endured the iron age of computing, it is a superb antidote to nostalgia. The insights into organising and managing a decentralised, multidisciplinary project under budget and deadline constraints in an era of technological change are as valid today as they were in the 1970s. The glimpse of the embryonic Internet on pages 241–242 is a gem.

 Permalink

Greene, Graham. The Comedians. New York: Penguin Books, 1965. ISBN 0-14-018494-5.

 Permalink

Vertosick, Frank T., Jr. The Genius Within. New York: Harcourt, 2002. ISBN 0-15-100551-6.

 Permalink

Ferro, Marc. Les tabous de l'histoire. Paris: NiL, 2002. ISBN 2-84111-147-4.

 Permalink

Waugh, Auberon. Will This Do? New York: Carroll & Graf, 1991. ISBN 0-7867-0639-2.
This is about the coolest title for an autobiography I've yet encountered.

 Permalink

Begleiter, Steven H. The Art of Color Infrared Photography. Buffalo, NY: Amherst Media, 2002. ISBN 1-58428-065-4.

 Permalink

Warraq, Ibn [pseud.], ed. What the Koran Really Says. Amherst, NY: Prometheus Books, 2002. ISBN 1-57392-945-X.
This is a survey and reader of Western Koranic studies of the nineteenth and twentieth centuries. A wide variety of mutually conflicting interpretations are presented and no conclusions are drawn. The degree of detail may be more than some readers have bargained for: thirty-five pages (pp. 436–464, 472–479) discuss a single word. For a scholarly text there are a surprising number of typographical errors, many of which would have been found by a spelling checker.

 Permalink

LaHaye, Tim and Jerry B. Jenkins. Nicolae. Wheaton, IL: Tyndale House, 1997. ISBN 0-8423-2924-2.

 Permalink

Buckley, William F. The Redhunter. Boston: Little, Brown, 1999. ISBN 0-316-11589-4.
It's not often one spots an anachronism in one of WFB's historical novels. On page 330, two characters imitate “NBC superstar nightly newsers Chet Huntley and David Brinkley” in a scene set in late 1953. Huntley and Brinkley did not, in fact, begin their storied NBC broadcasts until October 29th, 1956.

 Permalink

May 2003

Minc, Alain. Épîtres à nos nouveaux maîtres. Paris: Grasset, 2002. ISBN 2-246-61981-5.

 Permalink

Williams, Andrew. The Battle of the Atlantic. New York: Basic Books, 2003. ISBN 0-465-09153-9.

 Permalink

Fussell, Paul. BAD. New York: Summit Books, 1991. ISBN 0-671-67652-0.

 Permalink

Feynman, Richard P. Feynman Lectures on Computation. Edited by Anthony J.G. Hey and Robin W. Allen. Reading, MA: Addison-Wesley, 1996. ISBN 0-201-48991-0.
This book is derived from Feynman's lectures on the physics of computation given in the mid-1980s at Caltech. A companion volume, Feynman and Computation (see September 2002), contains updated versions of presentations by guest lecturers in this course.

 Permalink

Weschler, Lawrence. Mr. Wilson's Cabinet of Wonder. New York: Pantheon Books, 1995. ISBN 0-679-76489-5.
The Museum of Jurassic Technology has a Web site now!

 Permalink

Smith, Edward E. Skylark DuQuesne. New York: Pyramid Books, 1965. ISBN 0-515-03050-3.
This book is out of print; use the link above to locate used copies. Paperbacks are readily available in readable condition at modest cost. The ISBN given here is for a hardback dumped on the market at a comparable price by a library with no appreciation of the classics of science fiction. Unless you have the luck I did in finding such a copy, you're probably better off looking for a paperback.

 Permalink

Ray, Erik T. and Jason McIntosh. Perl and XML. Sebastopol, CA: O'Reilly, 2002. ISBN 0-596-00205-X.

 Permalink

Buckley, William F. Getting It Right. Washington: Regnery Publishing, 2003. ISBN 0-89526-138-3.

 Permalink

Manly, Peter L. Unusual Telescopes. Cambridge: Cambridge University Press, 1991. ISBN 0-521-48393-X.

 Permalink

Lindenberg, Daniel. Le rappel à l'ordre. Paris: Seuil, 2002. ISBN 2-02-055816-5.

 Permalink

Fussell, Paul. Uniforms. New York: Houghton Mifflin, 2002. ISBN 0-618-06746-9.

 Permalink

June 2003

Derbyshire, John. Prime Obsession. Washington: Joseph Henry Press, 2003. ISBN 0-309-08549-7.
This is simply the finest popular mathematics book I have ever read.

 Permalink

Adams, Scott. The Dilbert Future. New York: HarperBusiness, 1997. ISBN 0-88730-910-0.
Uh oh. He's on to the secret about Switzerland (chapter 4).

 Permalink

Wright, Robert. Nonzero. New York: Pantheon Books, 2000. ISBN 0-679-44252-9.
Yuck. Four hundred-plus pages of fuzzy thinking, tangled logic, and prose which manages to be simultaneously tortured and jarringly colloquial end up concluding that globalisation and the attendant extinction of liberty and privacy are not only good things, but possibly Divine (chapter 22). Appendix 1 contains the lamest description of the iterated prisoner's dilemma I have ever read, and the key results table on page 341 is wrong (top right entry, at least in the hardback). Bill Clinton loved this book. A paperback edition is now available.

 Permalink

Fenton, James. An Introduction to English Poetry. New York: Farrar, Straus and Giroux, 2002. ISBN 0-374-10464-6.

 Permalink

Barrow, John D. The Constants of Nature. New York: Pantheon Books, 2002. ISBN 0-375-42221-8.
The main body copy in this book is set in a type font in which the digit “1” is almost indistinguishable from the capital letter “I”. Almost—look closely at the top serif of the “1” and you'll note that it rises toward the right, while the “I” has a horizontal top serif. This struck my eye as ugly and antiquated, but I figured I'd quickly get used to it. Nope: it looked just as awful on the last page as in the first chapter. Oddly, the numbers on pages 73 and 74 use a proper digit “1”, as do numbers within block quotations.

 Permalink

King, David. The Commissar Vanishes. New York: Henry Holt, 1997. ISBN 0-8050-5295-X.

 Permalink

Kagan, Robert. Of Paradise and Power. New York: Alfred A. Knopf, 2003. ISBN 1-4000-4093-0.

 Permalink

Carpenter, [Malcolm] Scott and Kris Stoever. For Spacious Skies. New York: Harcourt, 2002. ISBN 0-15-100467-6.
This is the most detailed, candid, and well-documented astronaut memoir I've read (Collins' Carrying the Fire is a close second). Included is a pointed riposte to “the man malfunctioned” interpretation of Carpenter's MA-7 mission given in Chris Kraft's autobiography Flight (May 2001). Co-author Stoever is Carpenter's daughter.

 Permalink

Arkes, Hadley. Natural Rights and the Right to Choose. Cambridge: Cambridge University Press, 2002. ISBN 0-521-81218-6.

 Permalink

July 2003

Thomas, Dominique. Le Londonistan. Paris: Éditions Michalon, 2003. ISBN 2-84186-195-3.

 Permalink

Rees, Martin. Our Final Hour. New York: Basic Books, 2003. ISBN 0-465-06862-6.
Rees, the English Astronomer Royal, writes with a literary tic one has become accustomed to in ideologically biased news reporting. Almost every person he names is labeled to indicate Rees' approbation or disdain for that individual's viewpoint. Freeman Dyson—Freeman Dyson!—is dismissed as a “futurist”, Ray Kurzweil and Esther Dyson as “gurus”, and Bjørn Lomborg as an “anti-gloom environmental propagandist”, while those he approves of such as Kurt Gödel (“great logician”), Arnold Schwarzenegger (“greatest Austrian-American body”), Luis Alvarez (“Nobel physicist”), and Bill Joy (“co-founder of Sun Microsystems, and the inventor of the Java computer language”) get off easier. (“Inventor of Java” is perhaps a tad overstated: while Joy certainly played a key rôle in the development of Java, the programming language was principally designed by James Gosling. But that's nothing compared to note 152 on page 204, where the value given for the approximate number of nucleons in the human body is understated by fifty-six orders of magnitude.) The U.K. edition bears the marginally more optimistic title Our Final Century, but then everything takes longer in Britain.
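For scale, here's a back-of-the-envelope check of the correct figure; the 70 kg body mass is my own assumption, not a number from the book. Since electrons contribute negligible mass, essentially all of a body's mass is nucleons:

```python
# Rough count of nucleons (protons + neutrons) in a human body.
# Assumes a 70 kg body; electron mass is negligible, so essentially
# all of the body's mass resides in its nucleons.
body_mass_kg = 70.0
nucleon_mass_kg = 1.67e-27   # approximate mass of a proton or neutron

nucleons = body_mass_kg / nucleon_mass_kg
print(f"about {nucleons:.1e} nucleons")
```

The answer is on the order of 4×10²⁸, so a value understated by fifty-six orders of magnitude is rather more than a slip of the pen.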

 Permalink

Goldstuck, Arthur. The Aardvark and the Caravan: South Africa's Greatest Urban Legends. Johannesburg: Penguin Books, 1999. ISBN 0-14-029026-5.
This book is out of print. I bought my copy in a bookshop in South Africa during our 2001 solar eclipse expedition, but didn't get around to reading it until now. You can occasionally find used copies on abebooks.com, but the prices quoted are often more than I'd be willing to pay for this amusing but rather lightweight book.

 Permalink

Graham, Richard H. SR-71 Revealed. Osceola, WI: Motorbooks International, 1996. ISBN 0-7603-0122-0.
The author, who piloted SR-71s for seven years and later commanded the 9th Strategic Reconnaissance Wing, provides a view from the cockpit, including descriptions of long-classified operational missions. There's relatively little discussion of the plane's development history, engineering details, or sensors; if that's what you're looking for, Dennis Jenkins' Lockheed SR-71/YF-12 Blackbirds may be more to your liking. Colonel Graham is inordinately fond of the word “unique”, so much so that each time he uses it he places it in quotes as I have (correctly) done here.

 Permalink

Zakaria, Fareed. The Future of Freedom. New York: W. W. Norton, 2003. ISBN 0-393-04764-4.
The discussion of the merits of the European Union bureaucracy and World Trade Organisation on pages 241–248 will get you thinking. For a treatment of many of the same issues from a hard libertarian perspective, see Hans-Hermann Hoppe's Democracy: The God That Failed (June 2002).

 Permalink

Benford, Gregory, ed. Far Futures. New York: Tor, 1995. ISBN 0-312-86379-9.

 Permalink

Wells, H. G. Mind at the End of Its Tether and The Happy Turning. New York: Didier, 1946. LCCN 47-002117.
This thin volume, published in the year of the author's death, contains Wells' final essay, Mind at the End of Its Tether, along with The Happy Turning, his dreamland escape from grim, wartime England. If you've a low tolerance for blasphemy, you'd best give the latter a pass. The unrelenting pessimism of the former limited its appeal; press runs were small and it has rarely been reprinted. The link above will find all editions containing the main work, Mind at the End of Its Tether. Bear in mind when pricing used copies that both essays together are less than 90 pages, with Mind alone a mere 34.

 Permalink

O'Leary, Brian. The Making of an Ex-Astronaut. Boston: Houghton Mifflin, 1970. LCCN 70-112277.
This book is out of print. The link above will search for used copies at abebooks.com.

 Permalink

August 2003

Wood, Peter. Diversity: The Invention of a Concept. San Francisco: Encounter Books, 2003. ISBN 1-893554-62-7.

 Permalink

Jenkins, Dennis R. Magnesium Overcast: The Story of the Convair B-36. North Branch, MN: Specialty Press, [2001] 2002. ISBN 1-58007-042-6.
As alluded to by its nickname, the B-36, which first flew in 1946, was one big airplane. Its 70-metre wingspan is more than five metres greater than that of the present-day 747-400 (64.4 m), although the fuselage, at 49 metres, is shorter than the 70-metre 747. Later versions, starting in 1950, were powered by ten engines: six piston engines (with 28 cylinders each) driving propellers, and four J47 jet engines, modified to run on the same high-octane aviation gasoline as the piston engines. It could carry a bomb load of 39,000 kg—no subsequent U.S. bomber came close to this figure, which is the weight of an entire F-15E with maximum fuel and weapons load. Depending on winds and mission profile, a B-36 could stay aloft for more than 48 hours without refueling (for which it was not equipped), and 30-hour missions were routinely flown.

 Permalink

Milton, Julie and Richard Wiseman. Guidelines for Extrasensory Perception Research. Hatfield, UK: University of Hertfordshire Press, 1997. ISBN 0-900458-74-7.

 Permalink

Pickover, Clifford A. The Science of Aliens. New York: Basic Books, 1998. ISBN 0-465-07315-8.

 Permalink

Breslin, Jimmy. Can't Anybody Here Play This Game? Chicago: Ivan R. Dee, [1963] 2003. ISBN 1-56663-488-1.

 Permalink

Barnouw, Erik. Handbook of Radio Writing. Boston: Little, Brown, 1939. LCCN 39-030193.
This book is out of print. The link above will search for used copies which, while not abundant, when available are generally comparable in price to current hardbacks of similar length. The copy I read is the 1939 first edition. A second edition was published in 1945; I haven't seen one and don't know how it may differ.

 Permalink

Herrnstein, Richard J. and Charles Murray. The Bell Curve. New York: The Free Press, [1994] 1996. ISBN 0-684-82429-9.

 Permalink

Hitchens, Christopher. A Long Short War. New York: Plume, 2003. ISBN 0-452-28498-8.

 Permalink

Hanson, Victor Davis. Mexifornia. San Francisco: Encounter Books, 2003. ISBN 1-893554-73-2.

 Permalink

September 2003

Chambers, Whittaker. Witness. Washington: Regnery Publishing, [1952] 2002. ISBN 0-89526-789-6.

 Permalink

Jenkins, Dennis R. and Tony Landis. North American XB-70A Valkyrie. North Branch, MN: Specialty Press, 2002. ISBN 1-58007-056-6.

 Permalink

Standage, Tom. The Victorian Internet. New York: Berkley, 1998. ISBN 0-425-17169-8.

 Permalink

Moorcock, Michael. Behold the Man. London: Gollancz, [1969] 1999. ISBN 1-85798-848-5.
The link above is to the 1999 U.K. reprint, the only in-print edition as of this writing. I actually read a 1980 mass market paperback found at abebooks.com, where numerous inexpensive copies are offered.

 Permalink

Havil, Julian. Gamma: Exploring Euler's Constant. Princeton: Princeton University Press, 2003. ISBN 0-691-09983-9.

 Permalink

Fleming, Thomas. The New Dealers' War. New York: Basic Books, 2001. ISBN 0-465-02464-5.

 Permalink

Dyson, Freeman J. The Sun, the Genome, and the Internet. Oxford: Oxford University Press, 1999. ISBN 0-19-513922-4.
The text in this book is set in a hideous flavour of the Adobe Caslon font in which little curlicue ligatures connect the letter pairs “ct” and “st” and, in addition, the “ligatures” for “ff”, “fi”, “fl”, and “ft” lop off most of the bar of the “f”, leaving it looking like a droopy “l”. This might have been elegant for chapter titles, but it's way over the top for body copy. Dyson's writing, of course, more than redeems the bad typography, but you gotta wonder why we couldn't have had the former without the latter.

 Permalink

Large, Christine. Hijacking Enigma. Chichester, England: John Wiley & Sons, 2003. ISBN 0-470-86346-3.
The author, Director of the Bletchley Park Trust, recounts the story of the April 2000 theft and eventual recovery of Bletchley's rare Abwehr Enigma cipher machine, interleaved with a history of Bletchley's World War II exploits in solving the Enigma and its significance in the war. If the latter is your primary interest, you'll probably prefer Michael Smith's Station X (July 2001), which provides much more technical and historical detail. Readers who didn't follow the Enigma theft as it played out and aren't familiar with the names of prominent British news media figures may feel a bit at sea in places. A Web site devoted to the book is now available, and a U.S. edition is scheduled for publication later in 2003.

 Permalink

Gardner, Martin. How Not to Test a Psychic. Buffalo, NY: Prometheus Books, 1989. ISBN 0-87975-512-1.

 Permalink

October 2003

Sowell, Thomas. The Quest for Cosmic Justice. New York: Touchstone Books, 1999. ISBN 0-684-86463-0.

 Permalink

Spotts, Frederic. Hitler and the Power of Aesthetics. Woodstock, NY: Overlook Press, 2002. ISBN 1-58567-345-5.
A paperback edition is scheduled to be published in February 2004.

 Permalink

Nettle, Daniel and Suzanne Romaine. Vanishing Voices. Oxford: Oxford University Press, 2000. ISBN 0-19-513624-1.
Of the approximately 6000 languages in use in the world today, nearly 85 percent have fewer than 100,000 speakers—half fewer than 6000 speakers. Development and globalisation imperil the survival of up to 90% of these minority languages—many are already no longer spoken by children, which virtually guarantees their extinction. Few details are known of many of these vanishing languages; their disappearance will forever foreclose whatever insights they hold to the evolution and structure of human languages, the cultures of those who speak them, and the environments which shaped them. Somebody ought to write a book about this. Regrettably, these authors didn't. Instead, they sprinkle interesting factoids about endangered languages here and there amid a Chomsky-style post-colonial rant which attempts to conflate language diversity with biodiversity through an argument which, in the absence of evidence, relies on “proof through repeated assertion,” while simultaneously denying that proliferation and extinction of languages might be a process akin to Darwinian evolution rather than the more fashionable doctrines of oppression and exploitation. One can only shake one's head upon reading, “The same is true for Spanish, which is secure in Spain, but threatened in the United States.” (p. 48) or “Any language can, in fact, be turned to any purpose, perhaps by the simple incorporation of a few new words.” (p. 129). A paperback edition is now available.

 Permalink

Haig, Matt. Brand Failures. London: Kogan Page, 2003. ISBN 0-7494-3927-0.

 Permalink

Rousmaniere, John. Fastnet, Force 10. New York: W. W. Norton, [1980] 2000. ISBN 0-393-30865-0.

 Permalink

Webb, Stephen. If the Universe Is Teeming with Aliens…Where Is Everybody? New York: Copernicus, 2002. ISBN 0-387-95501-1.

 Permalink

Wilson, Robin. Four Colors Suffice. Princeton: Princeton University Press, 2002. ISBN 0-691-11533-8.

 Permalink

Rabinowitz, Dorothy. No Crueler Tyrannies. New York: Free Press, 2003. ISBN 0-7432-2834-0.

 Permalink

Brin, David. The Transparent Society. Cambridge, MA: Perseus Books, 1998. ISBN 0-7382-0144-8.
Having since spent some time pondering The Digital Imprimatur, I find the alternative Brin presents here rather more difficult to dismiss out of hand than when I first encountered it.

 Permalink

November 2003

Wodehouse, P. G. Psmith in the City. Woodstock, NY: Overlook Press, [1910] 2003. ISBN 1-58567-478-8.
The link above is to the only edition presently in print in the U.S., a hardcover which is rather pricey for such a thin volume. I actually read a 15-year-old mass market paperback; you can often find such copies at attractive prices on abebooks.com. If you're up for a larger dose of Psmith, you might consider The World of Psmith paperback published by Penguin in the U.K., which includes Psmith Journalist and Leave It to Psmith along with this novel.

 Permalink

Stepczynski, Marian. Dollar: Histoire, actualité et avenir de la monnaie impériale. Lausanne: Éditions Favre, 2003. ISBN 2-8289-0730-9.
In the final paragraph on page 81, in the sentence which begins «À fin septembre 1972», 2002 is intended, not 1972.

 Permalink

LaHaye, Tim and Jerry B. Jenkins. Soul Harvest. Wheaton, IL: Tyndale House, 1998. ISBN 0-8423-2925-0.
This is what happens when trilogies go bad. Paraphrasing the eternal programming language COBOL, “04 FILLER SIZE IS 90%”. According to the lumpen eschatology in which the Left Behind series (of which this is volume four) is grounded, the world will come to an end in a seven-year series of cataclysms and miracles loosely based on the book of Revelation in the New Testament of the Christian Bible. Okay, as a fictional premise, that works for me. The problem here is that while Saint John the Divine managed to recount this story in fewer than 1600 words, these authors have to date filled twelve volumes, with Tetragrammaton knows how many more yet to come, stringing readers of the series along for more years than the entire apocalypse is supposed to take to go down. It is an accomplishment of sorts to start with the very archetypal account of fire and brimstone, wormwood and rivers running with blood, and make it boring. Precisely one paragraph—half a page in this 425-page tome—is devoted to describing the impact of a “thousand mile square” asteroid in the middle of the Atlantic Ocean, while dozens, nay hundreds, of pages are filled with dialogue which, given the apparent attention span of the characters (or perhaps the authors, the target audience, or all of the above), recaps the current situation and recent events every five pages or so. I decided to read the first volume of the series, Left Behind (July 2002), after reading a magazine article about the social and political impact of the large number of people (more than fifty million copies of these books have been sold to date) who consider this stuff something more than fantasy. I opted for a “bargain box” of the first four volumes instead of just volume one and so, their having already got my money, decided to slog through all four. This was illogical—I should have reasoned, “I've already wasted my money; I'm not going to waste my time as well”—but I doubt many Vulcans buy these books in the first place. 
Time and again, whilst wading through endless snowdrifts of dialogue, I kept thinking, “This is like a comic book.” In this, as in size of their audience, the authors were way ahead of me.

 Permalink

Sacco, Joe. Palestine. Seattle: Fantagraphics Books, 2001. ISBN 1-56097-432-X.

 Permalink

Lime, Jean-Hugues. Le roi de Clipperton. Paris: Le Cherche Midi, 2002. ISBN 2-86274-947-8.
This fascinating novel, reminiscent of Lord of the Flies, is based on events which actually occurred during the Mexican occupation of Clipperton Island from 1910 through 1917. (After World War I, the island returned to French possession, as it remains today; it has been uninhabited since 1917.) There is one instance of bad astronomy here: in chapter 4, set on the evening of November 30th, 1910, the Moon is described as «…très lumineuse…. On y voyait comme en plein jour.» (“…very luminous;…. One could see like in broad daylight” [my translation]). But on that night, the Moon was not visible at all! Here is the sky above Clipperton at about 21:00 local time courtesy of Your Sky. (Note that in Universal time it's already the morning of December 1st, and that I have supplied the actual latitude of Clipperton, which is shown as one minute of latitude too far North in the map on page 8.) In fact, the Moon was only 17 hours before new as shown by Earth and Moon Viewer, and hence wasn't visible from anywhere on Earth on that night. Special thanks to the person who recommended this book using the recommendation form! This was an excellent read which I'd otherwise never have discovered.

 Permalink

Hazlitt, Henry. Economics in One Lesson. New York: Three Rivers Press, [1946, 1962] 1979. ISBN 0-517-54823-2.

 Permalink

Cahill, Thomas. Sailing the Wine-Dark Sea: Why the Greeks Matter. New York: Doubleday, 2003. ISBN 0-385-49553-6.

 Permalink

Ferguson, Niels and Bruce Schneier. Practical Cryptography. Indianapolis: Wiley Publishing, 2003. ISBN 0-471-22357-3.
This is one of the best technical books I have read in the last decade. Those who dismiss this volume as “Applied Cryptography Lite” are missing the point. While the latter provides in-depth information on a long list of cryptographic systems (as of its 1996 publication date), Practical Cryptography provides specific recommendations to engineers charged with implementing secure systems based on the state of the art in 2003, backed up with theoretical justification and real-world experience. The book is particularly effective in conveying just how difficult it is to build secure systems, and how “optimisation”, “features”, and failure to adopt a completely paranoid attitude when evaluating potential attacks on the system can lead directly to the bull's eye of disaster. Often-overlooked details such as entropy collection to seed pseudorandom sequence generators, difficulties in erasing sensitive information in systems which cache data, and vulnerabilities of systems to timing-based attacks are well covered here.

 Permalink

Vazsonyi, Balint. America's Thirty Years War. Washington: Regnery Publishing, 1998. ISBN 0-89526-354-8.

 Permalink

December 2003

von Dach, Hans. Total Resistance. Boulder, CO: Paladin Press, [1958] 1965. ISBN 0-87364-021-7.
This is an English translation of Swiss Army Major von Dach's Der totale Widerstand — Kleinkriegsanleitung für jedermann, published in 1958 by the Swiss Non-commissioned Officers' Association. It remains one of the best manuals for guerrilla warfare and civilian resistance to enemy occupation in developed countries. This is not a book for the faint-hearted: von Dach does not shrink from practical advice such as, “Fire upon the driver and the assistant driver with an air rifle. …the force of the projectile is great enough to wound them so that you can dispose of them right afterward with a bayonet.” and “The simplest and surest way to dispose of guards noiselessly is to kill them with an ax. Do not use the sharp edge but the blunt end of the ax.” There is strategic wisdom as well—making the case for a general public uprising when the enemy is near defeat, he observes, “This way you can also prevent your country from being occupied again even though by friendly forces. Past experience shows that even ‘allies’ and ‘liberators’ cannot be removed so easily. At least, it's harder to get them to leave than to enter.”

 Permalink

Seuss, Dr. [Theodor Seuss Geisel]. Horton Hears a Who! New York: Random House, 1954. ISBN 0-679-80003-4.

 Permalink

Ross, John F. Unintended Consequences. St. Louis: Accurate Press, 1996. ISBN 1-888118-04-0.
I don't know about you, but when I hear the phrases “first novel” and “small press” applied to the same book, I'm apt to emit an involuntary groan, followed by a wince upon hearing said volume is more than 860 pages in length. John Ross has created the rarest of exceptions to this prejudice. This is a big, sprawling, complicated novel with a multitude of characters (real and fictional) and a plot which spans most of the 20th century, and it works. What's even more astonishing is that it describes an armed insurrection against the United States government which is almost plausible. The information age has changed warfare at the national level beyond recognition; Ross explores what civil war might look like in the 21st century. The book is virtually free of typographical errors and I only noted a few factual errors—few bestsellers from the largest publishers manifest such attention to detail. Some readers may find this novel intensely offensive—the philosophy, morality, and tolerance for violence may be deemed “out of the mainstream” and some of the characterisations in the last 200 pages may be taken as embodying racial stereotypes—you have been warned.

 Permalink

Becker, Jasper. Hungry Ghosts: Mao's Secret Famine. New York: Henry Holt, [1996] 1998. ISBN 0-8050-5668-8.

 Permalink

Hirshfeld, Alan W. Parallax. New York: Henry Holt, 2001. ISBN 0-8050-7133-4.

 Permalink

Popper, Karl R. The Open Society and Its Enemies. Vol. 1: The Spell of Plato. 5th ed., rev. Princeton: Princeton University Press, [1945, 1950, 1952, 1957, 1962] 1966. ISBN 0-691-01968-1.
The two hundred intricately argued pages of main text are accompanied by more than a hundred pages of notes in small type. Popper states that “The text of the book is self-contained and may be read without these Notes. However, a considerable amount of material which is likely to interest all readers of the book will be found here, as well as some references and controversies which may not be of general interest.” My recommendation? Read the notes. If you skip them, you'll miss Popper's characterisation of Plato as the first philosopher to adopt a geometrical (as opposed to arithmetic) model of the world, along with his speculations based on the sum of the square roots of 2 and 3 (known to Plato) differing from π by less than 1.5 parts per thousand (Note 9 to Chapter 6), or the exquisitely lucid exposition (written in 1942!) of why international law and institutions must ultimately defend the rights of human individuals as opposed to the sovereignty of nation states (Note 7 to Chapter 9). The second volume, which dissects the theories of Hegel and Marx, is currently out of print in the U.S. but a U.K. edition is available.

 Permalink

Carlos [Ilich Ramírez Sánchez]. L'Islam révolutionnaire. Textes et propos recueillis, rassemblés et présentés par Jean-Michel Vernochet. Monaco: Éditions du Rocher, 2003. ISBN 2-268-04433-5.
Prior to his capture in Sudan in 1994 and “exfiltration” to a prison in France by the French DST, Carlos (“the Jackal”), nom de guerre of Venezuelan-born Ilich Ramírez Sánchez (a true red diaper baby, his brothers were named “Vladimir” and “Lenin”) was one of the most notorious and elusive terrorists of the latter part of the twentieth century. This is a collection of his writings and interviews from prison, mostly dating from the early months of 2003. I didn't plan it that way, but I found reading Carlos immediately after Popper's The Open Society and its Enemies (above) extremely enlightening, particularly in explaining the rather mysterious emerging informal alliance among Western leftists and intellectuals, the political wing of Islam, the remaining dribs and drabs of Marxism, and third world kleptocratic and theocratic dictators. Unlike some Western news media, Carlos doesn't shrink from the word “terrorism”, although he prefers to be referred to as a “militant revolutionary”, but this is in many ways a deeply conservative book. Carlos decries Western popular culture and its assault on traditional morality and family values in words which wouldn't seem out of place in a Heritage Foundation white paper. A convert to Islam in 1975, he admits he paid little attention to the duties and restrictions of his new religion until much later. He now believes that only Islam provides the framework to resist what he describes as U.S. totalitarian imperialism. Essentially, he's exchanged utopian Marxism for Islam as a comprehensive belief system. Now consider Popper: the essence of what he terms the open society, dating back to the Athens of Pericles, is the absence of any utopian vision, or plan, or theory of historical inevitability, religious or otherwise. Open societies have learned to distinguish physical laws (discovered through the scientific method) from social laws (or conventions), which are made by fallible humans and evolve as societies do. 
The sense of uncertainty and requirement for personal responsibility which come with an open society, replacing the certainties of tribal life and taboos which humans evolved with, induce what Popper calls the “strain of civilisation”, motivating utopian social engineers from Plato through Marx to attempt to create an ideal society, an endpoint of human social evolution, forever frozen in time. Look at Carlos; he finds the open-ended, make your own rules, everything's open to revision outlook of Western civilisation repellent. Communism having failed, he seizes upon Islam as a replacement. Now consider the motley anti-Western alliance I mentioned earlier. What unifies them is simply that they're anti-Western: Popper's enemies of the open society. All have a vision of a utopian society (albeit very different from one another), and all share a visceral disdain for Western civilisation, which doesn't need no steenkin' utopias but rather proceeds incrementally toward its goals, in a massively parallel trial and error fashion, precisely as the free market drives improvements in products and services.

 Permalink

McMath, Robert M. and Thom Forbes. What Were They Thinking?. New York: Three Rivers Press, 1998. ISBN 0-8129-3203-X.

 Permalink

  2004  

January 2004

Truss, Lynne. Eats, Shoots & Leaves. London: Profile Books, 2003. ISBN 1-86197-612-7.
A U.S. edition is now available.


 Permalink

O'Brien, Flann [Brian O'Nolan]. The Third Policeman. Normal, IL: Dalkey Archive Press, [1967] 1999. ISBN 1-56478-214-X.
This novel, one of the most frequently recommended books by visitors to this page, was completed in 1940 but not published until 1967, a year after the author's death. Perhaps the world was insufficiently weird before the High Sixties! This is one strange book; in some ways it anticipates surreal new wave science fiction such as John Brunner's Stand on Zanzibar and The Jagged Orbit, but O'Brien is doing something quite different here which I'll refrain from giving away. Don't read the (excellent) Introduction before you read the novel—there is one big, ugly spoiler therein.

 Permalink

Wellum, Geoffrey. First Light. London: Penguin Books, 2002. ISBN 0-14-100814-8.
A U.S. edition is available, but as of this date only in hardcover.

 Permalink

Didion, Joan. The White Album. New York: Farrar, Straus and Giroux, 1979. ISBN 0-374-52221-9.

 Permalink

Bolton, Andrew. Bravehearts: Men in Skirts. London: V&A Publications, 2003. ISBN 0-8109-6558-5.

 Permalink

Thernstrom, Abigail and Stephan Thernstrom. No Excuses: Closing the Racial Gap in Learning. New York: Simon & Schuster, 2003. ISBN 0-7432-0446-8.

 Permalink

Drosnin, Michael. The Bible Code 2. New York: Penguin Books, [2002] 2003. ISBN 0-14-200350-6.
What can you say about a book, published by Viking and Penguin as non-fiction, which claims the Hebrew Bible contains coded references to events in the present and future, put there by space aliens whose spacecraft remains buried under a peninsula on the Jordan side of the Dead Sea? Well, actually a number of adjectives come to mind, most of them rather pithy. The astonishing and somewhat disturbing thing, if the author is to be believed, is that he has managed to pitch this theory and the apocalyptic near-term prophecies he derives from it to major players on the world stage including Shimon Peres, Yasir Arafat, Clinton's chief of staff John Podesta in a White House meeting in 2000, and in a 2003 briefing at the Pentagon, to the head of the Defense Intelligence Agency and other senior figures at the invitation of Deputy Secretary of Defense Paul Wolfowitz. If this is the kind of input that's informing decisions about the Middle East, it's difficult to be optimistic about the future. When predicting an “atomic holocaust” for 2006 in The Bible Code 2, Drosnin neglects to mention that in chapter 6 of his original 1997 The Bible Code, he predicted it for either 2000 or 2006, but I suppose that's standard operating procedure in the prophecy biz.

 Permalink

Guéhenno, Jean-Marie. La fin de la démocratie. Paris: Flammarion, 1993. ISBN 2-08-081322-6.
This book, written over a decade ago, provides a unique take on what is now called “globalisation” and the evolution of transnational institutions. It has been remarkably prophetic in the years since its publication and a useful model for thinking about such issues today. Guéhenno argues that the concept of the nation-state emerged in Europe and North America due to their common history. The inviolability of borders, parliamentary democracy as a guarantor of liberty, and the concept of shared goals for the people of a nation are all linked to this peculiar history and consequently non-portable to regions with different histories and cultural heritages. He interprets most of the disastrous post-colonial history of the third world as a mistaken attempt to implant the European nation-state model where the precursors and prerequisites for it do not exist. The process of globalisation and the consequent transformation of hierarchical power structures, both political and economic, into self-organising and dynamic networks is seen as rendering the nation-state obsolete even in the West, bringing to a close a form of organisation dating from the Enlightenment, replacing democratic rule with a system of administrative rules and regulations similar to the laws of the Roman Empire. While offering hope of eliminating the causes of the large-scale conflicts which characterised the 20th century, this scenario has distinct downsides: an increased homogenisation of global cultures and people into conformist “interchangeable parts”, a growing sense that while the system works, it lacks a purpose, erosion of social solidarity in favour of insecurity at all levels, pervasive corruption of public officials, and the emergence of diffuse violence which, while less extreme than 20th century wars, is also far more common and difficult to deter. That's a pretty good description of the last decade as I saw it, and an excellent list of things to ponder in the years to come.
An English translation, The End of the Nation-State, is now available; I've not read it.

 Permalink

Robinson, Kim Stanley. Blue Mars. New York: Bantam Books, 1996. ISBN 0-553-57335-7.
This is the third volume in Robinson's Mars Trilogy: the first two volumes are Red Mars and Green Mars (April 2001). The three volumes in the trilogy tell one continuous story and should be read in order; if you start with Green or Blue, you'll be totally lost as to the identities of characters introduced in Red or events which occurred in prior volumes. When I read Red Mars in the mid 1990s, I considered it to be one of the very best science fiction novels I'd ever read, and I've read all of the works of the grand masters. Green Mars didn't quite meet this standard, but was still a superb and thought-provoking read. By contrast, I found Blue Mars a tremendous disappointment—tedious and difficult to finish. It almost seems like Robinson ran out of ideas before filling the contracted number of pages. There are hundreds of pages of essentially plot-free pastoral descriptions of landscapes on terraformed Mars; if you like that kind of stuff, you may enjoy this book, but I prefer stories in which things happen and characters develop and interact in interesting ways, and there's precious little of that here. In part, I think the novel suffers from the inherent difficulty of writing about an epoch in which human technological capability permits doing essentially anything whatsoever—it's difficult to pose challenges which characters have to surmount once they can simply tell their AIs to set the robots to work, then sit around drinking kavajava until the job is done. The politics and economics in these books have never seemed particularly plausible to me, and in Blue Mars they struck me as even more naïve, but perhaps that's just because there's so little else going on. I can't make any sense at all of the immigration and population figures Robinson gives.
On page 338 (mass-market paperback edition) the population of Mars is given as 15 million and Earth's population more than 15 billion in 2129, when Mars agrees to accept “at least ten percent of its population in immigrants every year”. Since Earth pressed for far more immigration while Mars wished to restrict it, presumably this compromise rate is within the capability of the interplanetary transportation system. Now there are two ways to interpret the “ten percent”. If every year Mars accepts 10% of its current population, including immigrants from previous years, the Mars population runs away geometrically, exploding to more than two billion by 2181. But on page 479, set in that year, the population of Mars is given as just 18 million, still a thousandth of Earth's, which has grown to 18 billion. Okay, let's assume the agreement between Earth and Mars meant that Mars was only to accept 10% of its present population as of the date of the agreement, 2129. Well, if that's the case, then you have immigration of 1.5 million per year, which leaves us with a Mars population of 93 million by 2181 (see the spreadsheet I used to perform these calculations for details). And these figures assume that neither the Mars natives nor the immigrants have any children at all, which is contradicted many times in the story. In fact, to get from a population of 15 million in 2129 to only 18 million in 2181 requires a compounded growth rate of less than 0.4%, an unprecedentedly low rate for frontier civilisations even without any immigration at all.
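The two readings of the agreement work out as follows (my own back-of-the-envelope sketch of the arithmetic, ignoring births, as the comparison above does):

```python
def compound_scenario(pop, years, rate=0.10):
    # Reading 1: each year Mars accepts 10% of its *current* population,
    # immigrants included, so the population compounds geometrically.
    for _ in range(years):
        pop *= 1 + rate
    return pop

def flat_scenario(pop0, years, rate=0.10):
    # Reading 2: a fixed 10% of the population as of the 2129 agreement
    # arrives each year.
    return pop0 + rate * pop0 * years

years = 2181 - 2129                     # 52 years
print(compound_scenario(15e6, years))   # over two billion
print(flat_scenario(15e6, years))       # 93 million
```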

 Permalink

Buckley, Reid. USA Today: The Stunning Incoherence of American Civilization. Camden, SC: P.E.N. Press, 2002. ISBN 0-9721000-0-8.

The author, brother of William F. Buckley, is founder of a school of public speaking and author of several books on public speaking and two novels. Here, however, we have Buckley's impassioned, idiosyncratic, and (as far as I can tell) self-published rant against the iniquities of contemporary U.S. morals, politics, and culture. Bottom line: he doesn't like it—the last two sentences are “The supine and swinish American public is the reason why our society has become so vile. We are vile.” This book would have been well served had the author enlisted brother Bill or his editor to red-pencil the manuscript. How the humble apostrophe causes self-published authors to stumble! On page 342 we trip over the “biography of John Quincy Adam's” among numerous other exemplars of proletarian mispunctuation. On page 395, Michael Behe, author of Darwin's Black Box has his name given as “Rehe” (and in the index too). On page 143, he misquotes Alan Guth's Inflationary Universe as saying the grand unification energy is “1016 GeV”, thereby getting it wrong by thirteen orders of magnitude compared to the 10¹⁶ GeV a sharp-eyed proofreader would have caught. All of this, and Buckley's meandering off into anecdotes of his beloved hometown of Camden, South Carolina and philosophical disquisitions distract from the central question posed in the book which is both profound and disturbing: can a self-governing republic survive without a consensus moral code shared by a large majority of its citizens? This is a question stalwarts of Western civilisation need to be asking themselves in this non-judgemental, multi-cultural age, and I wish Buckley had posed it more clearly in this book, which despite the title, has nothing whatsoever to do with that regrettable yet prefixally-eponymous McNewspaper.

 Permalink

February 2004

Ferry, Georgina. A Computer Called LEO. London: Fourth Estate, 2003. ISBN 1-84115-185-8.
I'm somewhat of a computer history buff (see my Babbage and UNIVAC pages), but I knew absolutely nothing about the world's first office computer before reading this delightful book. On November 29, 1951 the first commercial computer application went into production on the LEO computer, a vacuum tube machine with mercury delay line memory custom designed and built by—(UNIVAC? IBM?)—nope: J. Lyons & Co. Ltd. of London, a catering company which operated the Lyons Teashops all over Britain. LEO was based on the design of the Cambridge EDSAC, but with additional memory and modifications for commercial work. Many present-day disasters in computerisation projects could be averted by heeding the lessons of Lyons, who not only designed, built, and programmed the first commercial computer from scratch but understood from the outset that the computer must fit the needs and operations of the business, not the other way around, and managed thereby to succeed on the very first try. LEO remained on the job for Lyons until January 1965. (How many present-day computers will still be running 14 years after they're installed?) A total of 72 LEO II and III computers, derived from the original design, were built, and some remained in service as late as 1981. The LEO Computers Society maintains an excellent Web site with many photographs and historical details.

 Permalink

Alinsky, Saul D. Rules for Radicals. New York: Random House, 1971. ISBN 0-679-72113-4.
Ignore the title. Apart from the last two chapters, which are dated, there is remarkably little ideology here and a wealth of wisdom directly applicable to anybody trying to accomplish something in the real world, entrepreneurs and Open Source software project leaders as well as social and political activists. Alinsky's unrelenting pragmatism and opportunism are a healthy antidote to the compulsive quest for purity which so often ensnares the idealistic in such endeavours.

 Permalink

Heinlein, Robert A. For Us, The Living. New York: Scribner, 2004. ISBN 0-7432-5998-X.
I was ambivalent about reading this book, knowing that Robert and Virginia Heinlein destroyed what they believed to be all copies of the manuscript shortly before the author's death in 1988, and that Virginia Heinlein died in 2003 before being informed of the discovery of a long-lost copy. Hence, neither ever gave their permission that it be published. This is Heinlein's first novel, written in 1938–1939. After rejection by Macmillan and then Random House, he put the manuscript aside in June 1939 and never attempted to publish it subsequently. His first fiction sale, the classic short story “Life-Line”, to John W. Campbell's Astounding Science Fiction later in 1939 launched Heinlein's fifty year writing career. Having read almost every word Heinlein wrote, I decided to go ahead and see how it all began, and I don't regret that decision. Certainly nobody should read this as an introduction to Heinlein—it's clear why it was rejected in 1939—but Heinlein fans will find here, in embryonic form, many of the ideas and themes expressed in Heinlein's subsequent works. It also provides a glimpse at the political radical Heinlein (he'd run unsuccessfully for the California State Assembly in 1938 as a Democrat committed to Upton Sinclair's Social Credit policies), with the libertarian outlook of his later years already beginning to emerge. Much of the book is thinly—often very thinly—disguised lectures on Heinlein's political, social, moral, and economic views, but occasionally you'll see the great storyteller beginning to flex his muscles.

 Permalink

Schulman, J. Neil. Stopping Power. Pahrump, NV: Pulpless.Com, [1994] 1999. ISBN 1-58445-057-6.
The paperback edition is immediately available from the link above. This and most of the author's other works are supposed to be available in electronic form for online purchase and download from his Web site, but the ordering links appear to be broken at the moment. Note that the 1999 paperback contains some material added since the original 1994 hardcover edition.

 Permalink

Jenkins, Roy. Churchill: A Biography. New York: Plume, 2001. ISBN 0-452-28352-3.
This is a splendid biography of Churchill. The author, whose 39 year parliamentary career overlapped 16 of Churchill's almost 64 years in the House of Commons, focuses more on the political aspects of Churchill's career, as opposed to William Manchester's The Last Lion (in two volumes: Visions of Glory and Alone) which delves deeper into the British and world historical context of Churchill's life. Due to illness, Manchester abandoned plans for the third volume of The Last Lion, so his biography regrettably leaves the story in 1940. Jenkins covers Churchill's entire life in one volume (although at 1001 pages including end notes, it could easily have been two) and occasionally assumes familiarity with British history and political figures which may send readers not well versed in twentieth century British history, particularly the Edwardian era, scurrying to other references. Having read both Manchester and Jenkins, I find they complement each other well. If I were going to re-read them, I'd probably start with Manchester.

 Permalink

Szpiro, George G. Kepler's Conjecture. Hoboken, NJ: John Wiley & Sons, 2003. ISBN 0-471-08601-0.
In 1611, Johannes Kepler conjectured that no denser arrangement of spheres existed than the way grocers stack oranges and artillerymen cannonballs. For more than 385 years this conjecture, something “many mathematicians believe, and all physicists know”, defied proof. Over the centuries, many distinguished mathematicians assaulted the problem to no avail. Then, in 1998, Thomas C. Hales, assisted by Samuel P. Ferguson, announced a massive computer proof of Kepler's conjecture in which, to date, no flaw has been found. Who would have imagined that a fundamental theorem in three-dimensional geometry would be proved by reducing it to a linear programming problem? This book sketches the history of Kepler's conjecture and of those who grappled with it over the centuries, and explains, in layman's language, the essentials of the proof. I found the organisation of the book less than ideal. The author works up to Kepler's general conjecture by treating the history of lattice packing and general packing in two dimensions, then the kissing and lattice packing problems in three dimensions, each in a separate chapter. Many of the same people occupied themselves with these problems over a long span of time, so there is quite a bit of duplication among these chapters and one has to make an effort not to lose track of the chronology, which keeps resetting at chapter boundaries. To avoid frightening general readers, the main text interleaves narrative and more technical sections set in a different type font and, in addition, most equations are relegated to appendices at the end of the book. There's also the irritating convention that numerical approximations are, for the most part, given to three or four significant digits without ellipses or any other indication they are not precise values. (The reader is warned of this in the preface, but it still stinks.) Finally, there are a number of factual errors in historical details.
Quibbles aside, this is a worthwhile survey of the history and eventual conquest of one of the most easily stated, difficult to prove, and longest standing problems in mathematics. The proof of Kepler's conjecture and all the programs used in it are available on Thomas C. Hales' home page.
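For the record, the density Kepler conjectured, and Hales proved, optimal—that of the grocer's face-centred cubic stacking—is π/√18, a standard figure I'm adding here rather than one quoted from the book:

```python
import math

# Density of the face-centred cubic ("cannonball") packing: pi / sqrt(18)
density = math.pi / math.sqrt(18)
print(f"{density:.4f}")   # 0.7405
```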

 Permalink

Zubrin, Robert. The Holy Land. Lakewood, CO: Polaris Books, 2003. ISBN 0-9741443-0-4.
Did somebody say science fiction doesn't do hard-hitting social satire any more? Here, Robert Zubrin, best known for his Mars Direct mission design (see The Case for Mars) turns his acid pen (caustic keyboard?) toward the Israeli-Palestinian conflict, with plenty of barbs left over for the absurdities and platitudes of the War on Terrorism (or whatever). This is a novel which will have you laughing out loud while thinking beyond the bumper-sticker slogans mouthed by politicians into the media echo chamber.

 Permalink

Haynes, John Earl and Harvey Klehr. Venona: Decoding Soviet Espionage in America. New Haven, CT: Yale University Press, 1999. ISBN 0-300-08462-5.
Messages encrypted with a one-time pad are absolutely secure unless the adversary obtains a copy of the pad or discovers some non-randomness in the means used to prepare it. Soviet diplomatic and intelligence traffic used one-time pads extensively, avoiding the vulnerabilities of machine ciphers which permitted World War II codebreakers to read German and Japanese traffic. The disadvantage of one-time pads is key distribution: since every message consumes as many groups from the one-time pad as its own length and pads are never reused (hence the name), embassies and agents in the field require a steady supply of new one-time pads, which can be a logistical nightmare in wartime and a risk to covert operations. The German invasion of the Soviet Union in 1941 caused Soviet diplomatic and intelligence traffic to explode in volume, surpassing the ability of Soviet cryptographers to produce and distribute new one-time pads. Apparently believing the risk to be minimal, they reacted by re-using one-time pad pages, shuffling them into a different order and sending them to other posts around the world. Bad idea! In fact, reusing one-time pad pages opened up a crack in security sufficiently wide to permit U.S. cryptanalysts, working from 1943 through 1980, to decode more than five thousand pages (some only partially) of Soviet cables from the wartime era. The existence of this effort, later codenamed Project VENONA, and all the decoded material remained secret until 1995 when it was declassified. The most-requested VENONA decrypts may be viewed on-line at the NSA Web site. (A few months ago, there was a great deal of additional historical information on VENONA at the NSA site, but at this writing the links appear to be broken.) This book has relatively little to say about the cryptanalysis of the VENONA traffic. It is essentially a history of Soviet espionage in the U.S. in the 1930s and 40s as documented by the VENONA decrypts.
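The flaw pad re-use opens can be illustrated in a few lines. (Soviet traffic actually used additive groups of digits rather than XOR on bytes; this sketch of mine just shows why the pad cancels out.)

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(14)              # a one-time pad page... used twice
p1 = b"ATTACK AT DAWN"
p2 = b"REPORT AGENT X"
c1, c2 = xor(p1, pad), xor(p2, pad)

# XORing the two ciphertexts cancels the pad entirely, leaving the XOR
# of the two plaintexts, which yields to statistical attack on language
# patterns and probable words -- the kind of crack VENONA exploited.
assert xor(c1, c2) == xor(p1, p2)
```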
Some readers may be surprised at how little new information is presented here. In essence, VENONA messages completely confirmed what Whittaker Chambers (Witness, September 2003) and Elizabeth Bentley testified to in the late 1940s, and FBI counter-intelligence uncovered. The apparent mystery of why so many who spied for the Soviets escaped prosecution and/or conviction is now explained by the unwillingness of the U.S. government to disclose the existence of VENONA by using material from it in espionage cases. The decades long controversy over the guilt of the Rosenbergs (The Rosenberg File, August 2002) has been definitively resolved by disclosure of VENONA—incontrovertible evidence of their guilt remained secret, out of reach to historians, for fifty years after their crimes. This is a meticulously-documented work of scholarly history, not a page-turning espionage thriller; it is probably best absorbed in small doses rather than one cover to cover gulp.

 Permalink

March 2004

Grisham, John. The King of Torts. New York: Doubleday, 2003. ISBN 0-385-50804-2.
A mass market paperback edition is now available.

 Permalink

Lynn, Richard and Tatu Vanhanen. IQ and the Wealth of Nations. Westport, CT: Praeger, 2002. ISBN 0-275-97510-X.
Kofi Annan, Secretary General of the United Nations, said in April 2000 that intelligence “is one commodity equally distributed among the world's people”. But is this actually the case? Numerous studies of the IQ of the populations of various countries have been performed from the 1930s to the present and with few exceptions, large variations have been found in the mean IQs of countries—more than two standard deviations between the extremes—while different studies of the same population show remarkable consistency, and countries with similar populations in the same region of the world tend to have roughly the same mean IQ. Many social scientists believe that these results are attributable to cultural bias in IQ tests, or argue that IQ tests measure not intelligence, but rather proficiency in taking IQ tests, which various educational systems and environments develop to different degrees. The authors of this book accept the IQ test results at face value and pose the question, “Whatever IQ measures, how accurately does the average IQ of a country's population correlate with its economic success, measured both by per capita income and rate of growth over various historical periods?” From regression studies of 81 countries whose mean population IQ is known and 185 countries where IQ is known or estimated based on neighbouring countries, they find that IQ correlates with economic development better than any other single factor advanced in prior studies. IQ, in conjunction with a market economy and, to a lesser extent, democratic governance “explains” (in the strict sense of the square of the correlation coefficient) more than 50% of the variation in GDP per capita and other measures of economic development (of course, IQ, economic freedom, and democracy may not be independent variables). 
Now, correlation is not causation, but the evidence that IQ stabilises early in childhood and remains largely constant afterward allows one to rule out many potential kinds of influence by economic development on IQ, strengthening the argument for causation. If this is the case, the consequences for economic assistance are profound. For example, providing adequate nutrition during pregnancy and for children, which is known to substantially increase IQ, may not only be the humanitarian thing to do but could potentially promote economic progress more than traditional forms of development assistance. Estimating IQ and economic development for a large collection of disparate countries is a formidable challenge, and this work contains more correction, normalisation, and adjustment factors than a library full of physics research—close to half the book is data tables and source documentation, and non-expert readers cannot be certain that source data might not have been selected which tend to confirm the hypothesis and others excluded. But this is a hypothesis which can be falsified by further research, which would seem well-warranted. Scientists and policy makers must live in the real world and are ill advised to ignore aspects of it which make them uncomfortable. (If these comments move you to recommend Stephen Jay Gould's The Mismeasure of Man, you needn't—I've read it twice before I started keeping this list, and found it well-argued. But you may also want to weigh the points raised in J. Philippe Rushton's critique of Gould's book.)
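To make “explains more than 50% of the variation” concrete: it means the squared correlation coefficient r² of the regression exceeds 0.5. A minimal sketch with invented numbers (not the authors' data):

```python
def pearson_r(xs, ys):
    # Pearson correlation: covariance over the product of std. deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

iq  = [85, 90, 95, 100, 105]        # hypothetical national mean IQs
gdp = [3.0, 6.0, 5.5, 14.0, 20.0]   # hypothetical GDP per capita ($1000s)
r = pearson_r(iq, gdp)
print(r ** 2)   # fraction of variance "explained" by the regression
```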

 Permalink

Lewis, Sinclair. It Can't Happen Here. New York: Signet, [1935] 1993. ISBN 0-451-52582-5.
Just when you need it, this classic goes out of print. Second-hand copies at reasonable prices are available from the link above or through abebooks.com. I wonder to what extent this novel might have motivated Heinlein to write For Us, The Living (February 2004) a few years later. There are interesting parallels between Lewis's authoritarian dystopia and the 1944–1950 dictatorial interregnum in Heinlein's novel. Further, one of the utopian reformers Lewis mocks is Upton Sinclair, of whom Heinlein was a committed follower at the time, devoting much of the latter part of For Us, The Living to an exposition of Sinclair's economic system.

 Permalink

Sacks, David. Language Visible: Unraveling the Mystery of the Alphabet. New York: Broadway Books, 2003. ISBN 0-7679-1172-5.
Whaddya gonna do? The hardcover is out of print and the paperback isn't scheduled for publication until August 2004. The U.K. hardback edition, simply titled The Alphabet, is currently available.

 Permalink

Stöhlker, Klaus J. Adieu la Suisse—Good Morning Switzerland. Le Mont-sur-Lausanne: Éditions LEP, 2003. ISBN 2-606-01086-8.
This is a French translation of the original German edition, which has the same French-and-English title. The French edition can be found in almost any bookshop in la Suisse romande, but I know of no online source.

 Permalink

McGivern, Ed. Fast and Fancy Revolver Shooting. Clinton, NJ: New Win Publishing, [1938] 1975. ISBN 0-8329-0557-7.
This is a facsimile of the 1938 first edition, published to commemorate the centenary of the author's birth in 1874. Earlier facsimile editions of this classic were published in 1945, 1957, and 1965; copies of these as well as the first edition may be found at abebooks.com, but most are substantially more expensive than new copies of the 1975 reprint. Imagine trying to publish a book today which includes advice (pp. 461–462) on shooting targets off an assistant's head!

 Permalink

Jenkins, Dennis R. and Tony R. Landis. Hypersonic: The Story of the North American X-15. North Branch, MN: Specialty Press, 2003. ISBN 1-58007-068-X.
Specialty Press have drastically raised the bar in aviation history publishing. This volume, like the B-36 (August 2003) and XB-70A (September 2003) books mentioned previously here, combines coffee-table book production values, comprehensive historical coverage, and abundant technical details. Virtually absent are the typographical errors, mis-captioned photographs, and poorly reproduced colour photos which too often mar well-intended aviation books from other publishers. In their research, the authors located many more historical photographs than they could include in this book (which has more than 550). The companion X-15 Photo Scrapbook includes 400 additional significant photos, many never before published.

 Permalink

Dyson, Freeman J. Origins of Life. 2nd ed. Cambridge: Cambridge University Press, 1999. ISBN 0-521-62668-4.
The years which followed Freeman Dyson's 1985 Tarner lectures, published in the first edition of Origins of Life that year, saw tremendous progress in molecular biology, including the determination of the complete nucleotide sequences of organisms ranging from E. coli to H. sapiens, and a variety of evidence indicating the importance of Archaea and the deep, hot biosphere to theories of the origin of life. In this extensively revised second edition, Dyson incorporates subsequent work relevant to his double-origin (metabolism first, replication later) hypothesis. It's perhaps indicative of how difficult the problem of the origin of life is that none of the multitude of experiments done in the almost 20 years since Dyson's original lectures has substantially confirmed or denied his theory nor answered any of the explicit questions he posed as challenges to experimenters.

 Permalink

April 2004

Weightman, Gavin. The Frozen-Water Trade. New York: Hyperion, 2003. ISBN 0-7868-8640-4.
Those who scoff at the prospect of mining lunar Helium-3 as fuel for Earth-based fusion power plants might ponder the fact that, starting in 1833, British colonists in India beat the sweltering heat of the subcontinent with a steady, year-round supply of ice cut in the winter from ponds and rivers in Massachusetts and Maine and shipped in the holds of wooden sailing ships—a voyage of some 25,000 kilometres and 130 days. In 1870 alone, 17,000 tons of ice were imported by India in ships sailing from Boston. Frederic Tudor, who first conceived the idea of shipping winter ice, previously considered worthless, to the tropics, was essentially single-handedly responsible for ice and refrigeration becoming a fixture of daily life in Western communities around the world. Tudor found fortune and fame in creating an industry based on a commodity which beforehand simply melted away every spring. No technological breakthrough was required or responsible—this is a classic case of creating a market by filling a need of which customers were previously unaware. In the process, Tudor suffered just about every adversity one can imagine and never gave up, an excellent illustration that the one essential ingredient of entrepreneurial success is the ability to “take a whacking and keep on hacking”.

 Permalink

Olson, Walter K. The Rule of Lawyers. New York: St. Martin's Press, 2003. ISBN 0-312-28085-8.
The author operates the valuable Overlawyered.com Web site. Those who've observed that individuals with a clue are under-represented on juries in the United States will be delighted to read on page 217 of the Copiah County, Mississippi jury which found for the plaintiff and awarded US$75 billion in damages. When asked why, jurors said they'd intended to award “only” US$75 million, but nobody knew how many zeroes to write down for a million, and they'd guessed nine.

 Permalink

Walsh, Jill Paton and Dorothy L. Sayers. A Presumption of Death. New York: St. Martin's Press, 2002. ISBN 0-312-29100-0.
This is an entirely new Lord Peter Wimsey mystery written by Jill Paton Walsh, based upon the “Wimsey Papers”—mock wartime letters among members of the Wimsey family by Dorothy L. Sayers, published in the London Spectator in 1939 and 1940. Although the hardcover edition is 378 pages long, the type is so large that this is almost a novella in length, and the plot is less intricate, it seems to me, than the genuine article. Walsh, who was three years old at the period in which the story is set, did her research well: I thought I'd found half a dozen anachronisms, but on each occasion investigation revealed the error to be mine. But please, RAF pilots do not “bale” out of their Spitfires—they bail out!

 Permalink

Muirden, James. A Rhyming History of Britain: 55 B.C.–A.D. 1966. Illustrated by David Eccles. New York: Walker and Company, 2003. ISBN 0-8027-7680-9.

 Permalink

Rucker, Rudy. Frek and the Elixir. New York: Tor, 2004. ISBN 0-7653-1058-9.
Phrase comments in dialect of Unipusk aliens in novel. Congratulate author's hitting sweet spot combining Heinlein juvenile adventure, Rucker zany imagination, and Joseph Campbell hero myth. Assert suitable for all ages. Direct readers to extensive (145 page) working notes for the book, and book's Web site, with two original oil paintings illustrating scenes. Commend author for attention to detail: two precise dates in the years 3003 and 3004 appear in the story, and the days of the week are correct! Show esteemed author and humble self visiting Unipusk saucer base in July 2002.

 Permalink

Spengler, Oswald. The Decline of the West: An Abridged Edition. Oxford: Oxford University Press, [1918, 1922, 1932, 1959, 1961] 1991. ISBN 0-19-506634-0.
Only rarely do I read abridged editions. I chose this volume simply because it was the only readily-available English translation of the work. In retrospect, I don't think I could have handled much more Spengler, at least in one dose. Even in English, reading Spengler conjures up images of great mountain ranges of polysyllabic German philosophical prose. For example, chapter 21 begins with the following paragraph. “Technique is as old as free-moving life itself. The original relation between a waking-microcosm and its macrocosm—‘Nature’—consists in a mental sensation which rises from mere sense-impressions to sense-judgement, so that already it works critically (that is, separatingly) or, what comes to the same thing, causal-analytically”. In this abridged edition the reader need cope only with a mere 415 pages of such text. It is striking the extent to which today's postmodern nostrums of cultural relativism were anticipated by Spengler.

 Permalink

Lileks, James. The Gallery of Regrettable Food. New York: Crown Publishers, 2001. ISBN 0-609-60782-0.
The author is a syndicated columnist and pioneer blogger. Much of the source material for this book and a wealth of other works in progress are available on the author's Web site.

 Permalink

Verne, Jules. Voyage au centre de la terre. Paris: Gallimard, [1864] 1998. ISBN 2-07-051437-4.
A free electronic edition of this text is available from Project Gutenberg. This classic adventure is endlessly adaptable: you may prefer a translation in English, German, or Spanish. The 1959 movie with James Mason and Pat Boone is a fine flick but substantially departs from Verne's story in many ways: of the three principal characters in the novel, two are rather unsympathetic and the third taciturn in the extreme—while Verne was just having his usual fun with Teutonic and Nordic stereotypes, one can see that this wouldn't work for Hollywood. Rick Wakeman's musical edition is, however, remarkably faithful to the original.

 Permalink

May 2004

Spufford, Francis. Backroom Boys: The Secret Return of the British Boffin. London: Faber and Faber, 2003. ISBN 0-571-21496-7.
It is rare to encounter a book about technology and technologists which even attempts to delve into the messy real-world arena where science, engineering, entrepreneurship, finance, marketing, and government policy intersect, yet it is there, not solely in the technological domain, that the roots of both great successes and calamitous failures lie. Backroom Boys does just this and pulls it off splendidly, covering projects as disparate as the Black Arrow rocket, Concorde, mid 1980s computer games, mobile telephony, and sequencing the human genome. The discussion on pages 99 and 100 of the dynamics of new product development in the software business is as clear and concise a statement as I've seen of the philosophy that's guided my own activities for the past 25 years. While celebrating the technological renaissance of post-industrial Britain, the author retains the characteristic British intellectual's disdain for private enterprise and economic liberty. In chapter 4, he describes Vodafone's development of the mobile phone market: “It produced a blind, unplanned, self-interested search strategy, capitalism's classic method for exploring a new space in the market where profit may be found.” Well…yes…indeed, but that isn't just “capitalism's” classic method, but the very one employed with great success by life on Earth lo these four and a half billion years (see The Genius Within, April 2003). The wheels fall off in chapter 5. Whatever your position may have been in the battle between Celera and the public Human Genome Project, Spufford's collectivist bias and ignorance of economics (simply correcting the noncontroversial errors in basic economics in this chapter would require more pages than it fills) get in the way of telling the story of how the human genome came to be sequenced five years before the original estimated date. A truly repugnant passage on page 173 describes “how science should be done”.
Taxpayer-funded researchers, a fine summer evening, “floated back downstream carousing, with stubs of candle stuck to the prows, … and the voices calling to and fro across the water as the punts drifted home under the overhanging trees in the green, green, night.” Back to the taxpayer-funded lab early next morning, to be sure, collecting their taxpayer-funded salaries doing the work they love to advance their careers. Nary a word here of the cab drivers, sales clerks, construction workers and, yes, managers of biotech start-ups, all taxed to fund this scientific utopia, who lack the money and free time to pass their own summer evenings so sublimely. And on the previous page, the number of cells in the adult body of C. elegans is twice given as 550. Gimme a break—everybody knows there are 959 somatic cells in the adult hermaphrodite, 1031 in the male; he's confusing adults with 558-cell newly-hatched L1 larvæ.

 Permalink

Powell, Jim. FDR's Folly. New York: Crown Forum, 2003. ISBN 0-7615-0165-7.

 Permalink

Vallee, Jacques. The Heart of the Internet. Charlottesville, VA: Hampton Roads Publishing, 2003. ISBN 1-57174-369-3.
The author (yes, that Jacques Vallee) recounts the history of the Internet from an insider's perspective: first as a member of Doug Engelbart's Augmentation group at SRI from 1971, and later as a developer of the pioneering Planet conferencing system at the Institute for the Future and co-founder of the 1976 spin-off InfoMedia. He does an excellent job both of sketching Engelbart's still unrealised vision of computer networks as a means of connecting human minds in new ways, and of describing how it, like any top-down system design, was doomed to fail in the real world populated by idiosyncratic and innovative human beings. He celebrates the organic, unplanned growth of the Internet so far and urges that it be allowed to continue, free of government and commercial constraints. The present-day state of the Internet worries him as it worries me; he eloquently expresses the risk as follows (p. 162): “As a venture capitalist who invests in high tech, I have to worry that the web will be perceived as an increasingly corrupt police state overlying a maze of dark alleys and unsafe practices outside the rule of law. The public and many corporations will be reluctant to embrace a technology fraught with such problems. The Internet economy will continue to grow, but it will do so at a much slower pace than forecast by industry analysts.” This is precisely the scenario I have come to call “the Internet slum”. The description of the present-day Internet and what individuals can do to protect their privacy and defend their freedom in the future is sketchy and not entirely reliable.
For example, on page 178, “And who has time to keep complete backup files anyway?”, which rhetorical question I would answer, “Well, anybody who isn't a complete idiot.” His description of the “Mesh” in chapter 8 is precisely what I've been describing to gales of laughter since 1992 as “Gizmos”—a world in which everything has its own IPv6 address—each button on your VCR, for example—and all connections are networked and may be redefined at will. This is laid out in more detail in the Unicard Ubiquitous section of my 1994 Unicard paper.

 Permalink

Solé, Robert. Le grand voyage de l'obélisque. Paris: Seuil, 2004. ISBN 2-02-039279-8.
No, this is not an Astérix book—it's “obélisque”, not “Obélix”! This is the story of how an obelisk of Ramses II happened to end up in the middle of la Place de la Concorde in Paris. Moving a 22 metre, 220 metric ton chunk of granite from the banks of the Nile to the banks of the Seine in the 1830s was not a simple task—it involved a purpose-built ship, an expedition of more than two and a half years with a crew of 121, twelve of whom died in Egypt from cholera and dysentery, and the combined muscle power of 350 artillerymen in Paris to erect the obelisk where it stands today. One has to be impressed with the ancient Egyptians, who managed much the same more than thirty centuries earlier. The book includes a complete transcription and translation of the hieroglyphic inscriptions—Ramses II must have set the all-time record for effort expended in publishing banal text.

 Permalink

Hitchens, Peter. The Abolition of Liberty. London: Atlantic Books, [2003] 2004. ISBN 1-84354-149-1.
This is a revised edition of the hardcover published in 2003 as A Brief History of Crime. Unlike the police of most other countries (including most of the U.S.), since the founding of the Metropolitan Police in 1829, police in England and Wales focused primarily on the prevention of crime through a regular, visible presence and constant contact with the community, as opposed to responding after the commission of a crime to investigate and apprehend those responsible. Certainly, detection was among the missions of the police, but crime was viewed as a failure of policing, not an inevitable circumstance to which one could only react. Hitchens argues that it is this approach which, for more than a century, made these lands among the safest, most civil, and most free on Earth, with police integrated in the society as uniformed citizens, not a privileged arm of the state set above the people. Starting in the 1960s, all of this began to change, motivated by a mix of utopian visions and the hope of cutting costs. The bobby on the beat was replaced by police in squad cars with sirens and flashing lights, inevitably arriving after a crime was committed and able to do little more than comfort the victims and report yet another crime unlikely to be solved. Predictably, crime in Britain exploded to the upside, with far more police and police spending per capita than before the “reforms” unable to even reduce its rate of growth. The response of the government elite has not been to return to preventive policing, but rather to progressively infringe the fundamental liberties of citizens, trending toward the third world model of a police state with high crime. None of this would have surprised Hayek, who foresaw it all in The Road to Serfdom (May 2002). Theodore Dalrymple's Life at the Bottom (September 2002) provides a view from the streets surrendered to savagery, and the prisons and hospitals occupied by the perpetrators and their victims.
In this edition, Hitchens deleted two chapters from the hardcover which questioned Britain's abolition of capital punishment and fanatic program of victim disarmament (“gun control”). He did so “with some sadness” because “the only way to affect politics in this country is to influence the left”, and these issues are “articles of faith with the modern left”. As “People do not like to be made to think about their faith”, he felt the case better put by their exclusion. I have cited these quotes from pp. xi–xii of the Preface without ellipses but, I believe, fairly.

 Permalink

Cellan-Jones, Rory. Dot.bomb: The Strange Death of Dot.com Britain. London: Aurum Press, [2001] 2003. ISBN 1-85410-952-9.
The dot.com bubble in Britain was shorter and more intense than in the U.S.—the mania didn't really begin until mid-1999 with the public share flotation of Freeserve, then collapsed along with the NASDAQ in the spring of 2000, days after the IPO of evocatively named lastminute.com (a rare survivor). You're probably aware of the much-hyped rise, profligate peak, and ugly demise of boo.com, poster child of the excesses of dot.com Britain, but how about First Tuesday, which almost succeeded in raising US$15 million from two venture funds, putting a valuation of US$62 million on what amounted to a cocktail party? The babe on the back cover beside the author's biography isn't the author (who is male), but British sitcom celeb Joanna Lumley, erstwhile spokesblonde for ephemeral on-line health food peddler Clickmango.com.

 Permalink

Miranda, Eduardo Reck. Composing Music with Computers. Oxford: Focal Press, 2001. ISBN 0-240-51567-6.

 Permalink

June 2004

Hutchinson, Robert. Weapons of Mass Destruction. London: Cassell, 2003. ISBN 0-304-36653-6.
This book provides a history and survey of present-day deployment of nuclear, chemical, and biological weapons. The author, a former journalist with Jane's, writes from a British perspective and discusses the evolution of British nuclear forces in some detail. The focus is very much on nuclear weapons—of the 260 pages of text, a total of 196 are devoted to nuclear weapons and delivery systems. Two chapters at the end cover chemical and biological weapons adequately but less thoroughly. Several glaring technical errors make one worry about the reliability of the information on deployments and policy. The discussion of how fission and fusion weapons function is complete gibberish; if that's what interests you, the Nuclear Weapons Frequently-Asked Questions available on the Nuclear Weapons Archive is the place to go. There is one anecdote I don't recall encountering before. The British had so much difficulty getting their staged implosion thermonuclear weapon to work (this was during the years when the McMahon Act denied the British access to U.S. weapon design information) that they actually deployed a 500 kT pure fission weapon, similar to Ted Taylor's “Super Oralloy Bomb” tested in the Ivy King shot in 1952. The British bomb contained 70 kg of highly enriched uranium, far more than the 52 kg unreflected critical mass of U-235. To keep this contraption from going off accidentally in an aircraft accident, the uranium masses were separated by 450 kg of steel balls (I'll bet, alloyed with boron, but Hutchinson is silent on this detail) which were jettisoned right before the bomb was to be dropped. Unfortunately, once armed, the weapon could not be disarmed, so you had to be awfully certain you intended to drop the bomb before letting the ball bearings out.

 Permalink

Walsh, Jill Paton and Dorothy L. Sayers. Thrones, Dominations. New York: St. Martin's Press, 1998. ISBN 0-312-96830-2.
This is the first of the Sayers/Walsh posthumous collaborations extending the Lord Peter Wimsey / Harriet Vane mysteries beyond Busman's Honeymoon. (The second is A Presumption of Death, April 2004.) A Wimsey insider informs me the splice between Sayers and Walsh occurs at the end of chapter 6. It was undetectable to this Wimsey fan, who found this whodunit delightful.

 Permalink

Jenkins, Dennis R., Mike Moore, and Don Pyeatt. B-36 Photo Scrapbook. North Branch, MN: Specialty Press, 2003. ISBN 1-58007-075-2.
After completing his definitive history of the B-36, Magnesium Overcast (August 2003), Dennis Jenkins wound up with more than 300 historical photographs which didn't fit in the book. This companion volume includes them all, with captions putting each into context. Many of these photos won't make much sense unless you've read Magnesium Overcast, but if you have and still hanker for more humongous bomber shots, here's your book. On page 48 there's a photo of a New York Central train car to which the twin J47 jet pod from a retired B-36 was attached “to see what would happen”. Well, on a 38.5 km section of straight track, it went 295 km/hour. Amazing, the things they did in the U.S. before the safety fascists took over!

 Permalink

Haigh, Gideon. Bad Company: The Strange Cult of the CEO. London: Aurum Press, 2004. ISBN 1-85410-969-3.
In this small and quirky book, Haigh puts his finger precisely on the problem with today's celebrity CEOs. It isn't just that they're paid obscenely out of proportion to their contribution to the company, it's that for the most part they don't know all that much about the company's products, customers, and industry. Instead, skilled only in management, they attempt to analyse and operate the company by examining and manipulating financial aggregates. While this may be an effective way to cut costs and improve short-term operating results through consolidation, outsourcing and offshoring, cutting research and development, and reducing the level of customer service, all these things tend to injure the prospects of the company over the long haul. But CEOs are mostly compensated based on current financial results and share price. With length of tenure at the top becoming ever shorter as executives increasingly job hop among companies, the decisions a CEO makes today may have consequences which manifest themselves only after the stock options are cashed in and his successor is left to sort out the mess. Certainly there are exceptions, usually entrepreneurs who remain at the helm of the companies they've founded, but the nature of the CEO rôle in today's publicly traded company tends to drive such people out of the job, a phenomenon with which I have had some experience. I call the book “quirky” because the author draws examples not just from well known corporate calamities, but also films and works of fiction. He is fond of literary allusions and foreign phrases, which readers are expected to figure out on their own. Still, the message gets across, at least to readers with attention spans longer than the 10 to 30 minute time slices which characterise most CEOs. The ISBN on the copyright page is wrong; I've given the correct one here.

 Permalink

Sarich, Vincent and Frank Miele. Race: The Reality of Human Differences. Boulder, CO: Westview Press, 2004. ISBN 0-8133-4086-1.
This book tackles the puzzle posed by the apparent contradiction between the remarkable genetic homogeneity of humans compared to other species and the fact that physical differences between human races (non-controversial measures such as cranial morphology, height, and body build) actually exceed those between other primate species and subspecies. Vincent Sarich, emeritus Professor of Anthropology at UC Berkeley and pioneer in the development of the “molecular clock”, recounts this scientific adventure and the resulting revolution in the human evolutionary tree and timescale. Miele (editor of Skeptic magazine) and Sarich then argue that the present-day dogma among physical anthropologists and social scientists that “race does not exist”, if taken to its logical conclusion, amounts to rejecting Darwinian evolution, which occurs through variation and selection; variation among groups is thus inevitable, and is recognised as a matter of course in other species. Throughout, the authors stress that variation of characteristics among individual humans greatly exceeds mean racial variation, which makes racial prejudice and discrimination not only morally abhorrent but stupid from the scientific standpoint. At the same time, small differences in the means of a set of overlapping normal distributions cause large changes in group representation in the tails, where the extremes of performance lie. This is why one should be neither surprised nor dismayed to find a “disproportionate” number of Kenyans among cross-country running champions, Polynesians in American professional football, or east Asians in mathematical research. A person who comprehends this basic statistical fact should be able to treat individuals on their own merit without denying the reality of differences among sub-populations of the human species.
Due to the broad overlap among groups, members of every group, if given the opportunity, will be represented at the highest levels of performance in each field, and no individual should feel deterred nor be excluded from aspiring to such achievement due to group membership. For the argument against the biological reality of race, see the Web site for the United States Public Broadcasting Service documentary, Race: The Power of an Illusion. This book attempts to rebut each of the assertions in that documentary.
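The statistical point about means and tails can be made concrete with a few lines of Python; the specific numbers below (two groups with the same 15-point spread, means 5 points apart, and a cutoff three standard deviations above the lower mean) are illustrative assumptions of mine, not figures from the book.

```python
import math

def tail_fraction(mean, sd, threshold):
    """Fraction of a normal(mean, sd) population scoring above threshold."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Two hypothetical groups: identical 15-point spread, means only 5 points apart.
a = tail_fraction(100, 15, 145)   # group A above a cutoff 3 sigma over its mean
b = tail_fraction(105, 15, 145)   # group B, mean shifted by just 1/3 sigma

print(f"A: {a:.5f}  B: {b:.5f}  ratio: {b / a:.1f}")
```

A mean difference of only one-third of a standard deviation, with the two distributions almost completely overlapping, nearly triples representation beyond the cutoff, which is exactly the authors' point about extremes of performance.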

 Permalink

Meyssan, Thierry ed. Le Pentagate. Chatou, France: Editions Carnot, 2002. ISBN 2-912362-77-6.
This book is available online in both Web and PDF editions from the book's Web site. An English translation is available, but only in a print edition, not online.

 Permalink

Hofschröer, Peter. Wellington's Smallest Victory. London: Faber and Faber, 2004. ISBN 0-571-21768-0.
Wellington's victory over Napoléon at Waterloo in 1815 inspired not only Beethoven's worst musical composition, but also a veritable industry of histories, exhibitions, and re-enactments in Britain. The most spectacular of these was the model of the battlefield which William Siborne, career officer and author of two books on military surveying, was commissioned to build in 1830. Siborne was an assiduous researcher; after surveying the battlefield in person, he wrote to most of the surviving officers of the battle, British, Prussian, and French alike, to determine the precise position of their troops at the “crisis of the battle” he had chosen to depict: 19:00 on June 18th, 1815. The responses he received indicated that Wellington's Waterloo Despatch, the after-action report penned the day after the battle, was, shall we say, at substantial variance with the facts, particularly as regards the extent to which Prussian troops contributed to the victory and the time at which Wellington was notified of Napoléon's attack. Siborne stuck with the facts, and his model, first exhibited in London in 1838, showed the Prussian troops fully engaged with the French at the moment the tide of battle turned. Wellington was not amused and, being not only a national hero but a former Prime Minister, was a poor choice of enemy. For the rest of Siborne's life, Wellington waged a war of attrition against Siborne's (accurate) version of the events at Waterloo, with such success that most contemporary histories take Wellington's side, even if it requires believing in spyglasses capable of seeing to the other side of hills. But truth will out. Siborne's companion History of the Waterloo Campaign remains in print 150 years after its publication, and his model of the battlefield (albeit with 40,000 figures of Prussian soldiers removed) may be seen at the National Army Museum in London.

 Permalink

Smith, Edward E. Triplanetary. Baltimore: Old Earth Books, [1948] 1997. ISBN 1-882968-09-3.
Summer's here (though you'd never guess from the thermometer), and the time is right for some light reading, so I've begun my fourth lifetime traverse of Doc Smith's Lensman series, which now, by Klono's gadolinium guts, has been re-issued by Old Earth Books in trade paperback facsimiles of the original Fantasy Press editions, complete with all illustrations. The snarky foreword, where John Clute, co-editor of The Encyclopedia of Science Fiction, shows off his pretentious post-modern vocabulary and scorn for the sensibilities of an author born in 1890, is best skipped.

 Permalink

July 2004

Barrow, John D., Paul C.W. Davies, and Charles L. Harper, Jr., eds. Science and Ultimate Reality. Cambridge: Cambridge University Press, 2004. ISBN 0-521-83113-X.
These are the proceedings of the festschrift held at Princeton in March 2002 in honour of John Archibald Wheeler's 90th year within our light-cone. This volume brings together the all-stars of speculative physics, addressing what Wheeler describes as the “big questions.” You will spend a lot of time working your way through this almost 700 page tome (which is why entries in this reading list will be uncharacteristically sparse this month), but it will be well worth the effort. Here we have Freeman Dyson posing thought-experiments which purport to show limits to the applicability of quantum theory and the uncertainty principle, then we have Max Tegmark on parallel universes, arguing that the most conservative model of cosmology has infinite copies of yourself within the multiverse, each choosing either to read on here or click another link. Hideo Mabuchi's chapter begins with an introductory section which is lyrical prose poetry up to the standard set by Wheeler, and if Shou-Cheng Zhang's final chapter doesn't make you re-think where the bottom of reality really lies, you either didn't get it or have been spending way too much time reading preprints on ArXiv. I don't mean to disparage any of the other contributors by not mentioning them—every chapter of this book is worth reading, then re-reading carefully. This is the collected works of the 21st century equivalent of the savants who attended the Solvay Congresses in the early 20th century. Take your time, reread difficult material as necessary, and look up the references. You'll close this book in awe of what we've learned in the last 20 years, and in wonder of what we'll discover and accomplish in the rest of this century and beyond.

 Permalink

Neisser, Ulric, ed. The Rising Curve: Long-Term Gains in IQ and Related Measures. Washington: American Psychological Association, 1998. ISBN 1-55798-503-0.
One of the most baffling phenomena in the social sciences is the “Flynn Effect”. Political scientist James Flynn was among the first to recognise the magnitude of increasing IQ scores over time and thoroughly document that increase in more than a dozen nations around the world. The size of the effect is nothing less than stunning: on tests of “fluid intelligence” or g (problem-solving ability, as opposed to acquired knowledge, vocabulary, etc.), Flynn's research shows scores rising at least 3 IQ points per decade ever since testing began—as much as one 15 point standard deviation per generation. If you take these figures at face value and believe that IQ measures what we perceive as intelligence in individuals, you arrive at any number of absurdities: our grandparents' generation having a mean IQ of 70 (the threshold of retardation), an expectation that Einstein-level intellect would be 10,000 times more common per capita today than in his birth cohort, and veteran teachers perceiving the sons and daughters of the students they taught at the start of their careers as gifted, much as an IQ 115 student appears beside a classmate with an IQ of 100. Obviously, none of these are the case, and yet the evidence for the Flynn effect is overwhelming—the only reason few outside the psychometric community are aware of it is that makers of IQ tests periodically “re-standardise” their tests (in other words, make them more difficult) in order to keep the mean score at 100. Something is terribly wrong here: either IQ is a bogus measure (as some argue), or it doesn't correlate with real-world intelligence, or some environmental factor is increasing IQ test performance but not potential for achievement or … well, who knows?
These are among the many theories advanced to explain this conundrum, most of which are discussed in this volume, a collection of papers by participants in a 1996 conference at Emory University on the evidence for and possible causes of the Flynn effect, and its consequences for long-term trends in human intelligence. My conclusions from these papers are threefold. First, the Flynn effect is real, having been demonstrated as conclusively as almost any phenomenon in the social sciences. Second, nobody has the slightest idea what is going on—theories abound, but available data are insufficient to exclude any of numerous plausible theories. Third, this is because raw data relating to these questions is sparse and poorly suited to answering the questions with which the Flynn effect confronts us. Almost every chapter laments the shortcomings of the data set on which it was based or exhorts “somebody” to collect data better suited to exploring details of the Flynn effect and its possible causes. If human intelligence is indeed increasing by one standard deviation per generation, this is one of the most significant phenomena presently underway on our planet. If IQ scores are increasing at this rate, but intelligence isn't, then there's something very wrong with IQ tests or something terribly pernicious which is negating the effects of the problem-solving capability they claim to measure. Given the extent to which IQ tests (or their close relatives: achievement tests such as the SAT, GRE, etc.) determine the destiny of individuals, if there's something wrong with these tests, it would be best to find out what's wrong sooner rather than later.
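Since the reductio above rests on back-of-envelope arithmetic, here is a short Python sketch that does the same sums; the 3-points-per-decade rate comes from Flynn's figures as quoted, while the ten-decade span and the IQ 160 cutoff are my assumptions for illustration.

```python
import math

def frac_above(mean, sd, cutoff):
    """Fraction of a normal(mean, sd) population scoring above cutoff."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

SD = 15.0
RATE = 3.0           # IQ points per decade, Flynn's lower-bound estimate
decades_back = 10    # roughly the span since IQ testing began (assumed)

past_mean = 100 - RATE * decades_back   # implied mean of the earlier cohort
now = frac_above(100, SD, 160)          # fraction above IQ 160 today
then = frac_above(past_mean, SD, 160)   # same raw cutoff, earlier cohort

print(f"implied earlier cohort mean: {past_mean:.0f}")
print(f"rarity multiplier at IQ 160: {now / then:,.0f}x")
```

Taken at face value, the drift implies an earlier cohort mean of 70 and a tens-of-thousands-fold change in the rarity of IQ 160 scorers, precisely the sort of absurdity that makes the raw interpretation untenable.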

 Permalink

Wheen, Francis. How Mumbo-Jumbo Conquered the World. London: Fourth Estate, 2004. ISBN 0-00-714096-7.
I picked up this book in an airport bookshop, expecting a survey of contemporary lunacy along the lines of Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds or Martin Gardner's Fads and Fallacies in the Name of Science. Instead, what we have is 312 pages of hateful, sneering political rant indiscriminately sprayed at more or less every target in sight. Mr Wheen doesn't think very much of Ronald Reagan or Margaret Thatcher (whom he likens repeatedly to the Ayatollah Khomeini). Well, that's to be expected, I suppose, in a columnist for the Guardian, but there's no reason they need to be clobbered over and over, for the same things and in almost the same words, every three pages or so throughout this tedious, ill-organised, and repetitive book. Neither does the author particularly fancy Tony Blair, who comes in for the same whack-a-mole treatment. A glance at the index (which is not exhaustive) shows that between them, Blair, Thatcher, and Reagan appear on 85 pages sprinkled evenly throughout the text. In fact, Mr Wheen isn't very keen on almost anybody or anything dating from about 1980 to the present; one senses an all-consuming nostalgia for that resplendent utopia which was Britain in the 1970s. Now, the crusty curmudgeon is a traditional British literary figure, but masters of the genre leaven their scorn with humour and good will which are completely absent here. What comes through instead is simply hate: the world leaders who dismantled failed socialist experiments are not, as a man of the left might argue, misguided but rather Mrs Thatcher's “drooling epigones” (p. 263). For some months, I've been pondering a phenomenon in today's twenty-something generation which I call “hate kiddies.” These are people, indoctrinated in academia by ideologues of the Sixties generation to hate their country, culture, and all of its achievements—supplanting the pride which previous generations felt with an all-consuming guilt.
This seems, in many otherwise gifted and productive people, to metastasise in adulthood into an all-consuming disdain and hate for everything; it is as if the end point of cultural relativism were the belief that everything is evil. I asked an exemplar of this generation once whether he could name any association of five or more people anywhere on Earth which was not evil: nope. Detesting his “evil” country and government, I asked whether he could name any other country which was less evil or even somewhat good: none came to mind. (If you want to get a taste of this foul and poisonous weltanschauung, visit the Slashdot site and read the comments posted for almost any article. This site is not a parody—this is how the young technological elite really think, or rather, can't think.) In Francis Wheen, the hate kiddies have found their elder statesman.

 Permalink

Ronald Reagan Presidential Foundation. Ronald Reagan: An American Hero. New York: Dorling Kindersley, 2001. ISBN 0-7894-7992-3.
This is basically a coffee-table book. There are a multitude of pictures, many you're unlikely to have seen before, but the text is sparse and lightweight. If you're looking for a narrative, try Peggy Noonan's When Character Was King (March 2002).

 Permalink

Sullivan, Scott P. Virtual Apollo. Burlington, Canada: Apogee Books, 2002. ISBN 1-896522-94-7.
Every time I see an Apollo command module in a museum, I find myself marveling, “How did they cram all that stuff into that tiny little spacecraft?”. Think about it—the Apollo command and service modules provided everything three men needed to spend two weeks in space, navigate autonomously from the Earth to the Moon and back, dock with other spacecraft, enter and leave lunar orbit, re-enter the Earth's atmosphere at interplanetary speed, fly to a precision splash-down, then serve as a boat until the Navy arrived. And if that wasn't enough, most of the subsystems were doubly or triply redundant, so even in the event of failure, the ship could still get the crew back home, which it did on every single flight, even the dicey Apollo 13. And this amazing flying machine was designed on drawing boards in an era before computer-aided interactive solid modeling was even a concept. Virtual Apollo uses computer-aided design to help you appreciate the work of genius which was the Apollo spacecraft. The author created more than 200 painstakingly researched and highly detailed solid models of the command and service modules, which were used to produce the renderings in this book. Ever wondered how the Block II outward-opening crew hatch worked? See pages 41–43. How the devil did they make the docking probe removable? Pages 47–49. Regrettably, the attention to detail which went into production of the models and images didn't follow through to the captions and text, which have apparently been spell-checked but never carefully proofread and contain almost a complete set of nerdish stumbles: its/it's, lose/loose, principal/principle, etc. Let's hope these are remedied in a subsequent edition, and especially that the author or somebody equally talented extends this labour of love to include the lunar module as well.

 Permalink

Royce, Kenneth W. Hologram of Liberty. Ignacio, CO: Javelin Press, 1997. ISBN 1-888766-03-4.
The author, who also uses the nom de plume “Boston T. Party”, provides a survey of the tawdry machinations which accompanied the drafting and adoption of the United States Constitution, making the case that the document was deliberately designed to permit arbitrary expansion of federal power, with cosmetic limitations on that power to persuade the states to ratify it. It is striking the extent to which not just vocal anti-federalists like Patrick Henry, but also Thomas Jefferson, anticipated precisely how the federal government would slip its bonds—through judicial power and the creation of debt, both of which were promptly put into effect by John Marshall and Alexander Hamilton, respectively. Writing on this topic seems to have, as an occupational hazard, a tendency to rant. While Royce never ascends to the coruscating rhetoric of Lysander Spooner's No Treason, there is a great deal of bold type here, as well as some rather curious conspiracy theories (which are, in all fairness, presented for the reader's consideration, not endorsed by the author). Oddly, although chapter 11 discusses the 27th amendment (Congressional Pay Limitation)—proposed in 1789 as part of the original Bill of Rights, but not ratified until 1992—it is missing from the text of the Constitution in appendix C.

 Permalink

Malmsten, Ernst, Erik Portanger, and Charles Drazin. Boo Hoo. London: Arrow Books, 2001. ISBN 0-09-941837-1.
In the last few years of the twentieth century, a collective madness seized the investment community, who stumbled over one another to throw money at companies with no sales, profits, assets, or credible plans, simply because they appended “.com” to their name and identified themselves in some way with the Internet. Here's an insider's story of one of the highest fliers, boo.com, which was among the first to fall when sanity began to return in early 2000. Ernst Malmsten, co-founder and CEO of boo, and his co-authors trace its trajectory from birth to bankruptcy. On page 24, Malmsten describes what was to make boo different: “This was still a pretty new idea. Most of the early American internet companies had sprung from the minds of technologists. All they cared about was functionality and cost.” Well, what happens when you start a technology-based business and don't care about functionality and cost? About what you'd expect: boo managed to burn through about US$135 million of other people's money in 18 months, generating total sales of less than US$2 million. A list of subjects about which the founders were clueless includes technology, management, corporate finance, accounting, their target customers, suppliers, and competition. “Market research? That was something Colgate did before it launched a new toothpaste. The internet was something you had to feel in your fingertips.” (page 47). Armed with exquisitely sensitive fingertips and empty heads, they hired the usual “experts” to help them out: J.P. Morgan, Skadden Arps, Leagas Delaney, Hill & Knowlton, Heidrick & Struggles, and the Boston Consulting Group, demonstrating once again that the only way to screw up quicker and more expensively than ignorance alone is to enlist professional help.
But they did have style: every ritzy restaurant, exclusive disco, Concorde day-trip to New York, and lavish party for the staff is chronicled in detail, leaving one to wonder if there was a single adult in the company thinking about how quickly the investors' money was going down the drain. They spent more than US$22 million on advertising and PR before their Web site was working; when it finally did open to the public, its Flash-based home page took dial-up users four minutes to download, and it didn't accept orders at all from Macintosh users. But these are mere matters of “functionality and cost” which obsess nerdy technologists and green-eyeshade entrepreneurs like myself.

 Permalink

August 2004

Halperin, James L. The First Immortal. New York: Del Rey, 1998. ISBN 0-345-42182-5.
As Yogi Berra said, “It's hard to make predictions, especially about the future.” In this novel, the author tackles one of the most daunting challenges in science fiction: the multi-generation saga which spans its publication date. There are really only two ways to approach this problem: show the near future in soft focus, concentrating on characters and avoiding any mention of news and current events, or boldly predict and take your lumps when you inevitably get it wrong. Hey, even if you do, odds are the books will either be on readers' shelves or in the remainder bins by the time reality diverges too far from the story line. Halperin opts for the latter approach. Preachy novels with an agenda have a tendency to sit on my shelf quite a while until I get around to them—in this case six years. (The hardcover I bought in 1998 is out of print, so I've linked to the paperback which remains available.) The agenda here is cryonics, the resurrection myth of the secular humanists, presented in full dogmatic form: vitrification, cryogenic storage of the dead (or their heads, for the budget-conscious), nanotechnological restoration of damage due to freezing, repair of disease damage and genetic defects, reversal of aging, organ and eventually full body cloning, brain state backup and uploading, etc.—the full mambo chicken meme-bag. The book gets just about everything predicted for the years after its publication as wrong as possible: Xanadu-style back-links in Netscape, the Gore administration, etc. Fine—all were reasonable extrapolations when the first draft was written in 1996. My problem is that the further-out stuff seems, if anything, even less plausible than the near term predictions have proved to be. How likely is it that artificial intelligences with a hundred times the power of the human brain will remain subservient, obedient slaves of their creators? 
Or that a society with full-on Drexler nanotechnology and space stations outside the orbit of Pluto would be gripped by mass hysteria upon learning of a rain of comets due a hundred years hence? Or that a quasi-socialist U.N. style World Government would spontaneously devolve freedom to its subjects and reduce tax rates to 9.5%? And doesn't the requirement that individuals brought back from the freezer be sponsored by a living person (and hence remain on ice indefinitely if nobody steps up as a sponsor) represent an immoral inter-generational breach of contract with those who paid to be frozen and brought back to life under circumstances they prescribed? The novel is well-written and presents the views of the cryonicists faithfully and effectively. Still, you're left with the sense of having read an advocacy document where story and characters are subordinate to the pitch.

 Permalink

Novak, David P. DownTime: A Guide to Federal Incarceration. Vancouver, WA: Davrie Communications, 2002. ISBN 0-9710306-0-X.
I read this book in the interest of research, not career planning, although in these days when simply looking askance at some badge-wearing pithecanthropoid thug in a U.S. airport can land you in Club Fed, it's information those travelling to that country might be wise to take on board before getting on board. This is a 170-page letter-size comb-bound book whose camera-ready copy appears to have been printed on a daisy-wheel printer. I bought my copy through Amazon, but the publisher appears to have removed the book from the general distribution channels; you can order it directly from the publisher. My comments are based upon the March 2002 edition. According to the publisher's Web site, the book was completely rewritten in January 2004, which edition I've not seen.

 Permalink

Beckerman, Marty. Generation S.L.U.T.. New York: MTV Books, 2004. ISBN 0-7434-7109-1.
I bought this book based on a recommendation by Hunter S. Thompson. I don't know what the good doctor was smoking—he rarely knows what he's smoking—but this is one messed up, incoherent, choppy, gratuitously obscene, utterly amoral mix of fiction, autobiography, cartoons, newspaper clippings, and statistical factoids seemingly aimed at an audience with an attention span measured in seconds. All together now, “Well, what did you expect from something published by MTV Books?” The “S.L.U.T.” in the title stands for “Sexually Liberated Urban Teens”, and the book purports to be a view from the inside (the author turned 20 while writing the book) of contemporary teenage culture in the United States. One can only consider the word “Liberated” here in a Newspeak sense—the picture painted is of a generation enslaved to hormones and hedonism so banal it brings no pleasure to those who so mindlessly pursue it. The cartoons which break up the fictional thread into blocks of text short enough for MTV zombies are cheaply produced—they re-use a few line drawings of the characters, scaled, mirrored, and with different backgrounds, changing only the text in the balloon. The Addendum by the author is a straight rip-off of Hunter Thompson's style, right down to the signature capitalisation of nouns for emphasis. The reader is bludgeoned with a relentless vulgarity which ultimately leaves one numb (and I say this as a fan of both Thompson and South Park). I found myself saying, again and again, “Teenagers in the U.S. can't possibly be this vapid, dissolute, and depraved, can they? Can they?” Er, maybe so, if this Teenwire site, sponsored by the Planned Parenthood Federation of America, is any indication. (You may be shocked, dismayed, and disgusted by the content of this site. I would not normally link to such material, but seeing as how it's deliberately directed at teenagers, I do so in the interest of showing parents how their kids are being indoctrinated.
Note how the welcome page takes you into the main site even if you don't click “Enter”, and that there is no disclaimer whatsoever regarding the site's suitability for children of any age.)

 Permalink

Carr, Nicholas G. Does IT Matter? Boston: Harvard Business School Press, 2004. ISBN 1-59139-444-9.
This is an expanded version of the author's May 2003 Harvard Business Review paper titled “IT Doesn't Matter”, which sparked a vituperative ongoing debate about the rôle of information technology (IT) in modern business and its potential for further increases in productivity and competitive advantage for companies who aggressively adopt and deploy it. In this book, he provides additional historical context, attempts to clear up common misperceptions of readers of the original article, and responds to its critics. The essence of Carr's argument is that information technology (computer hardware, software, and networks) will follow the same trajectory as other technologies which transformed business in the past: railroads, machine tools, electricity, the telegraph and telephone, and air transport. Each of these technologies combined high risk with the potential for great near-term competitive advantage for their early adopters, but eventually became standardised “commodity inputs” which all participants in the market employ in much the same manner. Each saw a furious initial period of innovation, emergence of standards to permit interoperability (which, at the same time, made suppliers interchangeable and the commodity fungible), followed by a rapid “build-out” of the technological infrastructure, usually accompanied by over-optimistic hype from its boosters and an investment bubble and the inevitable crash. Eventually, the infrastructure is in place, standards have been set, and a consensus reached as to how best to use the technology in each industry, at which point it's unlikely any player in the market will be able to gain advantage over another by, say, finding a clever new way to use railroads, electricity, or telephones. At this point the technology becomes a commodity input to all businesses, and largely disappears off the strategic planning agenda.
Carr believes that with the emergence of low-cost commodity computers adequate for the overwhelming majority of business needs, and the widespread adoption of standard vendor-supplied software such as office suites, enterprise resource planning (ERP), and customer relationship management (CRM) packages, corporate information technology has reached this level of maturity, where senior management should focus on cost-cutting, security, and maintainability rather than seeking competitive advantage through innovation. Increasingly, companies adapt their own operations to fit the ERP software they run, as opposed to customising the software for their particular needs. While such procrusteanism was decried in the IBM mainframe era, today it's touted as deploying “industry best practices” throughout the economy, tidily packaged as a “company in a box”. (Still, one worries about the consequences for innovation.) My reaction to Carr's argument is, “How can anybody find this remotely controversial?” Not only do we have a dozen or so historical examples of the adoption of new technologies, the evidence for the maturity of corporate information technology is there for anybody to see. In fact, in February 1997, I predicted that Microsoft's ability to grow by adding functionality to its products was about to reach the limit, and looking back, it was with Office 97 that customers started to push back, feeling the added “features” (such as the notorious talking paper clip) and initial lack of downward compatibility with earlier versions was for Microsoft's benefit, not their own. How can one view Microsoft's giving back half its cash hoard to shareholders in a special dividend in 2004 (and doubling its regular dividend, along with massive stock buybacks) as anything other than an acknowledgement of this reality? You only give your cash back to the investors (or buy your own stock) when you can't think of anything else to do with it which will generate a better return.
So, if there's to be a “next big thing”, Microsoft do not anticipate it coming from them.

 Permalink

Ryn, Claes G. America the Virtuous. New Brunswick, NJ: Transaction Publishers, 2003. ISBN 0-7658-0219-8.
If you've been following political commentary of the more cerebral kind recently, you may have come across the term “neo-Jacobin” and thought “Whuzzat? I thought those guys went out with the tumbrels and guillotines.” Claes Ryn coined the term “neo-Jacobin” more than a decade ago, and in this book explains the philosophical foundation, historical evolution, and potential consequences of that tendency for the U.S. and other Western societies. A neo-Jacobin is one who believes that present-day Western civilisation is based on abstract principles, knowable through pure reason, which are virtuous, right, and applicable to all societies at all times. This is precisely what the original Jacobins believed, with Jacobins old and new drawing their inspiration from Rousseau and John Locke. The claim of superiority of Western civilisation makes the neo-Jacobin position superficially attractive to conservatives, who find it more congenial than post-modernist vilification of Western civilisation as the source of all evil in the world. But true conservatism, and the philosophy shared by most of the framers of the U.S. Constitution, rejects abstract theories and utopian promises in favour of time-proven solutions which take into account the imperfections of human beings and the institutions they create. As Alexander Hamilton wrote in Federalist No. 6, “Have we not already seen enough of the fallacy and extravagance of those idle theories which have amused us with promises of an exemption from the imperfections, the weaknesses, and the evils incident to society in every shape?” Sadly, we have not, and are unlikely to ever see the end of such theories as long as pointy-heads with no practical experience, but armed with intimidating prose, are able to persuade true believers they've come up with something better than the collective experience of every human who's ever lived on this planet before them.
The French Revolution was the first modern attempt to discard history and remake the world based on rationality, but its lessons failed to deter numerous subsequent attempts, at an enormous cost in human life and misery, the most recently concluded such experiment being Soviet Communism. They all end badly. Ryn believes the United States is embarking on the next such foredoomed adventure, declaring its “universal values” (however much at variance with those of its founders) to be applicable everywhere, and increasingly willing to impose them by the sword “in the interest of the people” where persuasion proves inadequate. Although there is some mention of contemporary political figures, this is not at all a partisan argument, nor does it advocate (nor even present) an alternative agenda. Ryn believes the neo-Jacobin viewpoint so deeply entrenched in both U.S. political parties, media, think tanks, and academia that the choice of a candidate or outcome of an election is unlikely to make much difference. Although the focus is primarily on the U.S. (and rightly so, because only in the U.S. do the neo-Jacobins have access to the military might to impose their will on the rest of the world), precisely the same philosophy can be seen in the ongoing process of “European integration”, where a small group of unelected elite theorists are positioning themselves to dictate the “one best way” hundreds of millions of people in dozens of diverse cultures with thousands of years of history should live their lives. For example, take a look at the hideous draft “constitution” (PDF) for the European Union: such a charter of liberty and democracy that those attempting to put it into effect are doing everything in their power to deprive those who will be its subjects of the chance to vote upon it.
As Michael Müller, Social Democrat member of parliament in Germany said, “Sometimes the electorate has to be protected from making the wrong decisions.” The original Jacobins had their ways, as well.

 Permalink

Winchester, Simon. The Map that Changed the World. New York: HarperCollins, 2001. ISBN 0-06-093180-9.
This is the story of William Smith, the son of an Oxfordshire blacksmith, who, with almost no formal education but keen powers of observation and deduction, essentially single-handedly created the modern science of geology in the last years of the 18th and the beginning of the 19th century, culminating in the 1815 publication of Smith's masterwork: a large-scale map of the stratigraphy of England, Wales, and part of Scotland, which is virtually identical to the most modern geological maps. Although fossil collecting was a passion of the aristocracy in his time, Smith was the first to observe that particular fossil species were always associated with the same stratum of rock and hence, conversely, that rock containing the same population of fossils was the same stratum, wherever it was found. This permitted him to decode the layering of strata and their relative ages, and predict where coal and other minerals were likely to be found, which was a matter of great importance at the dawn of the industrial revolution. In his long life, in addition to inventing modern geology (he coined the word “stratigraphical”), he surveyed mines, built canals, operated a quarry, was the victim of plagiarism, designed a museum, served time in debtor's prison, was denied membership in the newly-formed Geological Society of London due to his humble origins, yet years later was the first recipient of its highest award, the Wollaston Medal, presented to him as the “Father of English Geology”. Smith's work transformed geology from a pastime for fossil collectors and spinners of fanciful theories to a rigorous empirical science and laid the bedrock (if you'll excuse the term) for Darwin and the modern picture of the history of the Earth. The author is very fond of superlatives. While Smith's discoveries, adventures, and misadventures certainly merit them, they get a little tedious after a hundred pages or so.
Winchester seems to have been traumatised by his childhood experiences in a convent boarding-school (chapter 11), and he avails himself of every possible opportunity to express his disdain for religion, the religious, and those (the overwhelming majority of learned people in Smith's time) who believed in the Biblical account of creation and the flood. This is irrelevant to and a distraction from the story. Smith's career marked the very beginning of scientific investigation of natural history; when Smith's great geological map was published in 1815, Charles Darwin was six years old. Smith never suffered any kind of religious persecution or opposition to his work, and several of his colleagues in the dawning days of earth science were clergymen. Simon Winchester is also the author of The Professor and the Madman, the story of the Oxford English Dictionary.

 Permalink

Scott, David and Alexei Leonov with Christine Toomey. Two Sides of the Moon. London: Simon & Schuster, 2004. ISBN 0-7432-3162-7.
Astronaut David Scott flew on the Gemini 8 mission which performed the first docking in space, Apollo 9, the first manned test of the Lunar Module, and commanded the Apollo 15 lunar landing, the first serious scientific exploration of the Moon (earlier Apollo landing missions had far less stay time and, with no lunar rover, limited mobility, and hence were much more “land, grab some rocks, and scoot” exercises). Cosmonaut Alexei Leonov was the first to walk in space on Voskhod 2, led the training of cosmonauts for lunar missions and later the Salyut space station program, and commanded the Soviet side of the Apollo Soyuz Test Project in 1975. Had the Soviet Union won the Moon race, Leonov might well have been first to walk on the Moon. This book recounts the history of the space race as interleaved autobiographies of two participants from contending sides, from their training as fighter pilots ready to kill one another in the skies over Europe in the 1950s to Leonov's handshake in space with an Apollo crew in 1975. This juxtaposition works very well, and writer Christine Toomey (you're not a “ghostwriter” when your name appears on the title page and the principals effusively praise your efforts) does a marvelous job in preserving the engaging conversational style of a one-on-one interview, which is even more of an achievement when one considers that she interviewed Leonov through an interpreter, then wrote his contributions in English which was translated to Russian for Leonov's review, with his comments in Russian translated back to English for incorporation in the text. A U.S. edition is scheduled for publication in October 2004.

 Permalink

Lelièvre, Dominique. L'Empire américain en échec sous l'éclairage de la Chine impériale. Chatou, France: Editions Carnot, 2004. ISBN 2-84855-097-X.
This is a very odd book. About one third of the text is a fairly conventional indictment of the emerging U.S. “virtuous empire” along the lines of America the Virtuous (earlier this month), along with the evils of globalisation, laissez-faire capitalism, cultural imperialism, and the usual scélérats du jour. But the author, who has published three earlier books of Chinese history, anchors his analysis of current events in parallels between the present day United States and the early Ming dynasty in China, particularly the reign of Zhu Di (朱棣), the Emperor Yongle (永樂), A.D. 1403-1424. (Windows users: if you didn't see the Chinese characters in the last sentence and wish to, you'll need to install Chinese language support using the Control Panel / Regional Options / Language Settings item, enabling “Simplified Chinese”. This may require you to load the original Windows install CD, reboot your machine after the installation is complete, and doubtless will differ in detail from one version of Windows to another. It may be a global village, but it can sure take a lot of work to get from one hut to the next.) Similarities certainly exist, some of them striking: both nations had overwhelming naval superiority and command of the seas, believed themselves to be the pinnacle of civilisation, sought large-scale hegemony (from the west coast of Africa to east Asia in the case of China, global for the U.S.), preferred docile vassal states to allies, were willing to intervene militarily to preserve order and their own self-interests, but for the most part renounced colonisation, annexation, territorial expansion, and religious proselytising. Both were tolerant, multi-cultural, multi-racial societies which believed their values universal and applicable to all humanity. Both suffered attacks from Islamic raiders, the Mongols under Tamerlane (Timur) and his successors in the case of Ming China. 
And both even fought unsuccessful wars in what is now Vietnam which ended in ignominious withdrawals. All of this is interesting, but how useful it is in pondering the contemporary situation is problematic, for along with the parallels, there are striking differences in addition to the six centuries of separation in time and all that implies for cultural and technological development including communications, weapons, and forms of government. Ming dynasty China was the archetypal oriental despotism, where the emperor's word was law, and the administrative and military bureaucracy was in the hands of eunuchs. The U.S., on the other hand, seems split right about down the middle regarding its imperial destiny, and many observers of U.S. foreign and military policy believe it suffers a surfeit of balls, not their absence. Fifteenth century China was self-sufficient in everything except horses, and its trade with vassal states consisted of symbolic potlatch-type tribute payments in luxury goods. The U.S., on the other hand, is the world's largest debtor nation, whose economy is dependent not only on an assured supply of imported petroleum, but also a wide variety of manufactured goods, access to cheap offshore labour, and the capital flows which permit financing its chronic trade deficits. I could go on listing fundamental differences which make any argument by analogy between these two nations highly suspect, but I'll close by noting that China's entire career as would-be hegemon began with Yongle and barely outlasted his reign—six of the seven expeditions of the great Ming fleet occurred during his years on the throne. Afterward China turned inward and largely ignored the rest of the world until the Europeans came knocking in the 19th century. Is it likely the U.S. drift toward empire which occupied most of the last century will end so suddenly and permanently? Stranger things have happened, but I wouldn't bet on it.

 Permalink

September 2004

Huntington, Samuel P. Who Are We? New York: Simon & Schuster, 2004. ISBN 0-684-87053-3.
The author, whose 1996 The Clash of Civilisations anticipated the conflicts of the early 21st century, here turns his attention inward toward the national identity of his own society. Huntington (who is, justifiably, accorded such respect by his colleagues that you might think his full name is “Eminent Scholar Samuel P. Huntington”) has written a book few others could have gotten away with without being vilified in academia. His scholarship, lack of partisan agenda, thoroughness, and meticulous documentation make his argument here, that the United States were founded as what he calls an “Anglo-Protestant” culture by their overwhelmingly English Protestant settlers, difficult to refute. In his view, the U.S. were not a “melting pot” of immigrants, but rather a nation where successive waves of immigrants accepted and were assimilated into the pre-existing Anglo-Protestant culture, regardless of, and without renouncing, their ethnic origin and religion. The essentials of this culture—individualism, the work ethic, the English language, English common law, high moral standards, and individual responsibility—are not universals but were what immigrants had to buy into in order to “make it in America”. In fact, as Huntington points out, in the great waves of immigration in the 19th and early 20th centuries, many of those who came to America were self-selected for those qualities before they boarded the boat. All of this has changed, he argues, with the mass immigration which began in the 1960s. For the first time, a large percentage of immigrants share a common language (Spanish) and hail from a common culture (Mexico), with which it is easy to retain contact. At the same time, U.S. national identity has been eroded among the elite (but not the population as a whole) in favour of transnational (U.N., multinational corporation, NGO) and subnational (race, gender) identities. 
So wise is Huntington that I found myself exclaiming every few pages at some throw-away insight I'd never otherwise have had, such as that most of the examples offered up of successful multi-cultural societies (Belgium, Canada, Switzerland) owe their stability to fear of powerful neighbours (p. 159). This book is long on analysis but almost devoid of policy prescriptions. Fair enough: the list of viable options with any probability of being implemented may well be the null set, but even so, it's worthwhile knowing what's coming. While the focus of this book is almost entirely on the U.S., Europeans whose countries are admitting large numbers of difficult to assimilate immigrants will find much to ponder here. One stylistic point—Huntington is as fond of enumerations as even the most fanatic of the French encyclopédistes: on page 27 he indulges in one with forty-eight items and two levels of hierarchy! The enumerations form kind of a basso continuo to the main text.

 Permalink

Holt, John. How Children Fail. rev. ed. Cambridge, MA: Da Capo Press, [1964] 1982. ISBN 0-201-48402-1.
This revised edition of Holt's classic includes the entire text of the 1964 first edition with extensive additional interspersed comments added after almost twenty years of additional experience and reflection. It is difficult to find a book with as much wisdom and as many insights per page as this one. You will be flabbergasted by Holt's forensic investigation of how many fifth graders (in an elite private school for high IQ children) actually think about arithmetic, and how many teachers and parents delude themselves into believing that parroting correct answers has anything to do with understanding or genuine learning. What is so refreshing about Holt is his scientific approach—he eschews theory and dogma in favour of observing what actually goes on in classrooms and inside the heads of students. Some of his insights about how those cunning little rascals game the system to get the right answer without enduring the submission to authority and endless boredom of what passes for education summoned some of the rare fond memories I have of that odious period in my own life. As a person who's spent a lot of time recently thinking about intelligence, problem solving, and learning, I found Holt's insights absolutely fascinating. This book has sold more than a million copies, but I'd have probably never picked it up had it not been recommended by a kind reader using the recommendation form—thank you!

 Permalink

Chesterton, Gilbert K. Heretics. London: John Lane, [1905] 1914. ISBN 0-7661-7476-X.
In this collection of essays, the ever-quotable Chesterton takes issue with prominent contemporaries (including Kipling, G.B. Shaw, and H.G. Wells) and dogma (the cults of progress, science, simple living, among others less remembered almost a century later). There is so much insight and brilliant writing here it's hard to single out a few examples. My favourites include his dismantling of cultural anthropology and folklore in chapter 11, the insight in chapter 16 that elevating science above morality leads inevitably to oligarchy and rule by experts, and the observation in chapter 17, writing of Whistler, that what is called the “artistic temperament” is a property of second-rate artists. The link above is to a 2003 Kessinger Publishing facsimile reprint of the 1914 twelfth edition. The reprint is on letter-size pages, much larger than the original, with each page blown up to fit; consequently, the type is almost annoyingly large. A free electronic edition is available.

 Permalink

Gamow, George. One, Two, Three…Infinity. Mineola, NY: Dover, [1947] 1961. rev. ed. ISBN 0-486-25664-2.
This book, which I first read at around age twelve, rekindled my native interest in mathematics and science which had, by then, been almost entirely extinguished by six years of that intellectual torture called “classroom instruction”. Gamow was an eminent physicist: among other things, he advocated the big bang theory decades before it became fashionable, originated the concept of big bang nucleosynthesis, predicted the cosmic microwave background radiation 16 years before it was discovered, proposed the liquid drop model of the atomic nucleus, worked extensively in the astrophysics of energy production in stars, and even designed a nuclear bomb (“Greenhouse George”), which initiated the first deuterium-tritium fusion reaction here on Earth. But he was also one of the most talented popularisers of science in the twentieth century, with a total of 18 popular science books published between 1939 and 1967, including the Mr Tompkins series, timeless classics which inspired many of the science visualisation projects at this site, in particular C-ship. He was a talented cartoonist as well, and 128 of his delightful pen and ink drawings grace this volume. For a work published in 1947 with relatively minor revisions in the 1961 edition, this book has withstood the test of time remarkably well—Gamow was both wise and lucky in his choice of topics. Certainly, nobody should consider this book a survey of present-day science, but for folks well-grounded in contemporary orthodoxy, it's a delightful period piece providing a glimpse of the scientific world view of almost a half-century ago as explained by a master of the art. This Dover paperback is an unabridged reprint of the 1961 revised edition.

 Permalink

Cowan, Rick and Douglas Century. Takedown. New York: Berkley, 2002. ISBN 0-425-19299-7.
This is the true story of a New York Police Department detective who almost accidentally found himself in a position to infiltrate the highest levels of the New York City garbage cartel, one of the Mafia's fattest and most fiercely guarded cash cows for more than half a century. Cowan's investigation, dubbed “Operation Wasteland”, resulted in the largest organised crime bust in New York history, eliminating the “mob tax” paid by New York businesses which tripled their waste disposal charges compared to other cities. This book was recommended as a real world antidote to the dramatic liberties taken by the writers of The Sopranos. Curiously, I found it confirmed several aspects of The Sopranos I'd dismissed as far-fetched, such as the ability of mobsters to murder and dispose of the bodies of those who cross them with impunity, and the “mad dog” behaviour (think Ralph Cifaretto) of highly ranked top-earner wiseguys.

 Permalink

Stack, Jack with Bo Burlingham. The Great Game of Business. New York: Doubleday, [1992] 1994. ISBN 0-385-47525-X.
When you take a company public in the United States, there's inevitably a session where senior management sits down with the lawyers for the lecture on how, notwithstanding much vaunted constitutional guarantees of free speech, loose lips may land them in Club Fed for “insider trading”. This is where you learn of such things as the “quiet period”, and realise that by trying to cash out investors who risked their life savings on you before any sane person would, you're holding your arms out to be cuffed and do the perp walk should things end badly. When I first encountered this absurd mind-set, my immediate reaction was, “Well, if ‘insider trading’ is forbidden disclosure of secrets, then why not eliminate all secrets entirely? When you're a public company, you essentially publish your general ledger every quarter anyway—why not make it open to everybody in the company and outside on a daily (or whatever your reporting cycle is) basis?” As with most of my ideas, this was greeted with gales of laughter and dismissed as ridiculous. N'importe…right around when I and the other perpetrators were launching Autodesk, Jack Stack and his management team bought out a refurbishment business shed by International Harvester in its death agonies and turned it around into a people-oriented money machine, Springfield Remanufacturing Corporation. My Three Laws of business have always been: “1) Build the best product. 2) No bullshit. 3) Reward the people who do the work.” Reading this book opened my eyes to how I had fallen short in the third item. Stack's “Open Book” management begins by inserting what I'd call item “2a) Inform the people who do the work”. 
By opening the general ledger to all employees (and the analysts, and your competitors: get used to it—they know almost every number to within epsilon anyway if you're a serious player in the market), you give your team—hourly and salaried—labour and management—union and professional—the tools they need to know how they're doing, and where the company stands and how secure their jobs are. This was always what I wanted to do, but I was insufficiently bull-headed to override the advice of “experts” that it would lead to disaster or land me in the slammer. Jack Stack went ahead and did it, and the results his company has achieved stand as an existence proof that opening the books is the key to empowering and rewarding the people who do the work. This guy is a hero of free enterprise: go and do likewise; emulate and grow rich.

 Permalink

Bin Ladin, Carmen. The Veiled Kingdom. London: Virago Press, 2004. ISBN 1-84408-102-8.
Carmen Bin Ladin, a Swiss national with a Swiss father and Iranian mother, married Yeslam Bin Ladin in 1974 and lived in Jeddah, Saudi Arabia from 1976 to 1985. Yeslam Bin Ladin is one of the 54 sons and daughters sired by that randy old goat Sheikh Mohamed Bin Laden on his twenty-two wives including, of course, murderous nutball Osama. (There is no unique transliteration of Arabic into English. Yeslam spells his name “Bin Ladin”, while other members of the clan use “Bin Laden”, the most common spelling in the media. This book uses “Bin Ladin” when referring to Yeslam, Carmen, and their children, and “Bin Laden” when referring to the clan or other members of it.) This autobiography provides a peek, through the eyes of a totally Westernised woman, into the bizarre medieval life of Saudi women and the arcane customs of that regrettable kingdom. The author separated from her husband in 1988 and presently lives in Geneva. The link above is to a U.K. paperback edition. I believe the same book is available in the U.S. under the title Inside the Kingdom: My Life in Saudi Arabia, but at the present time only in hardcover.

 Permalink

Rucker, Rudy. The Lifebox, the Seashell, and the Soul. New York: Thunder's Mouth Press, 2005. ISBN 1-56025-722-9.
I read this book in manuscript form. An online excerpt is available.

 Permalink

Paulos, John Allen. A Mathematician Plays The Stock Market. New York: Basic Books, 2003. ISBN 0-465-05481-1.
Paulos, a mathematics professor and author of several popular books including Innumeracy and A Mathematician Reads the Newspaper, managed to lose a painfully large pile of money (he never says how much) in Worldcom (WCOM) stock in 2000–2002. Other than broadly-based index funds, this was Paulos's first flier in the stock market, and he committed just about every clueless market-newbie blunder in the encyclopedia of Wall Street woe: he bought near the top, on margin, met every margin call and “averaged down” all the way from $47 to below $5 per share, bought out-of-the-money call options on a stock in a multi-year downtrend, never placed stop loss orders or hedged with put options or shorting against the box, based his decisions selectively on positive comments in Internet chat rooms, and utterly failed to diversify (liquidating index funds to further concentrate in a single declining stock). This book came highly recommended, but I found it unsatisfying. Paulos uses his experience in the market as a leitmotif in a wide ranging but rather shallow survey of the mathematics and psychology of markets and investors. Along the way we encounter technical and fundamental analysis, the efficient market hypothesis, compound interest and exponential growth, algorithmic complexity, nonlinear functions and fractals, modern portfolio theory, game theory and the prisoner's dilemma, power laws, financial derivatives, and a variety of card tricks, psychological games, puzzles, and social and economic commentary. Now all of this adds up to only 202 pages, so nothing is treated in much detail—while the explanation of compound interest is almost tedious, many of the deeper mathematical concepts may not be explained sufficiently for readers who don't already understand them. 
The “leitmotif” becomes pretty heavy after the fiftieth time or so the author whacks himself over the head for his foolishness, and it wastes a lot of space which would have been better used discussing the market in greater depth. He dismisses technical analysis purely on the basis of Elliott wave theory, without ever discussing the psychological foundation of many chart patterns as explained in Edwards and Magee; the chapter on fundamental analysis mentions Graham and Dodd only in passing. The author's incessant rambling and short attention span leaves you feeling like you do after a long conversation with Ted Nelson. There is interesting material here, and few if any serious errors, but the result is kind of like English cooking—there's nothing wrong with the ingredients; it's what's done with them that's ultimately bland and distasteful.
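To put some numbers on the averaging-down trap, here's a back-of-the-envelope sketch in Python. The share counts are entirely made up (Paulos never discloses his position sizes); only the $47 starting price and the sub-$5 endpoint come from his account.

```python
# Hypothetical purchases on WCOM's slide from $47 to $5.
# These share counts are illustrative, NOT Paulos's actual trades.
purchases = [(1000, 47.0), (1000, 30.0), (1000, 15.0), (2000, 5.0)]

shares = sum(n for n, _ in purchases)      # total shares held
cost = sum(n * p for n, p in purchases)    # total dollars invested
avg_cost = cost / shares                   # average cost per share
loss = cost - shares * 5.0                 # mark-to-market loss at $5

print(f"average cost ${avg_cost:.2f}, loss ${loss:,.0f}")
```

The seductive part is that each purchase lowers the average cost (here to $20.40), so the position looks ever closer to “break-even”; meanwhile the capital at risk keeps growing, and at $5 this hypothetical position is down $77,000.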

 Permalink

October 2004

Itzkoff, Seymour W. The Decline of Intelligence in America. Westport, CT: Praeger, 1994. ISBN 0-275-95229-0.
This book had the misfortune to come out in the same year as the first edition of The Bell Curve (August 2003), and suffers by comparison. Unlike that deservedly better-known work, Itzkoff presents few statistics to support his claims that dysgenic reproduction is resulting in a decline in intelligence in the U.S. Any assertion of declining intelligence must confront the evidence for the Flynn Effect (see The Rising Curve, July 2004), which seems to indicate IQ scores are rising about 15 points per generation in a long list of countries including the U.S. The author dismisses Flynn's work in a single paragraph as irrelevant to international competition since scores of all major industrialised countries are rising at about the same rate. But if you argue that IQ is a measure of intelligence, as this book does, how can you claim intelligence is falling at the same time IQ scores are rising at a dizzying rate without providing some reason that Flynn's data should be disregarded? There's quite a bit of hand wringing about the social, educational, and industrial prowess of Japan and Germany which sounds rather dated with a decade's hindsight. The second half of the book is a curious collection of policy recommendations, which defy easy classification into a point on the usual political spectrum. Itzkoff advocates economic protectionism, school vouchers, government-led industrial policy, immigration restrictions, abolishing affirmative action, punitive taxation, government incentives for conventional families, curtailment of payments to welfare mothers and possibly mandatory contraception, penalties for companies which export well-paying jobs, and encouragement of inter-racial and -ethnic marriage. I think that if an ADA/MoveOn/NOW liberal were to read this book, their head might explode. Given the political climate in the U.S. and other Western countries, such policies had exactly zero chance of being implemented when he recommended them in 1994, and stand no better chance today.

 Permalink

Appleton, Victor. Tom Swift and His Giant Cannon. McLean, VA: IndyPublish.com, [1913] 2002. ISBN 1-4043-3589-7.
The link above is to a paperback reprint of the original 1913 novel, 16th in the original Tom Swift series, which is in the public domain. I actually read this novel on my PalmOS PDA (which is also my mobile phone, so it's usually right at hand). I always like to have some light reading available which doesn't require a long attention span or intense concentration to pass the time while waiting in line at the post office or other dreary moments one can't program, and early 20th century juvenile pulp fiction on a PDA fills the bill superbly. This novel lasted about a year and a half until I finished it earlier today in the check-out line at the grocery store. The PalmOS version I read was produced as a demo from the Project Gutenberg EText of the novel. This Palm version doesn't seem to be available any more (and was inconvenient, being broken into four parts in order to fit on early PalmPilots with limited memory). For those of you who prefer an electronic edition, I've posted downloadable files of these texts in a variety of formats.

 Permalink

Cabbage, Michael and William Harwood. Comm Check…The Final Flight of Shuttle Columbia. New York: Free Press, 2004. ISBN 0-7432-6091-0.
This is an excellent account for the general reader of the Space Shuttle Columbia STS-107 accident and subsequent investigation. The authors are veteran space reporters: Cabbage for the Orlando Sentinel and Harwood for CBS News. If you've already read the Columbia Accident Investigation Board Report (note that supplementary volumes II through VI are now available), you won't learn anything new about the technical details of the accident and its engineering and organisational causes here, but there's interesting information about the dynamics of the investigation and the individuals involved which you won't find in the formal report. The NASA Implementation Plan for Return to Flight and Beyond mentioned on page 264 is available online.

 Permalink

Hayward, Steven F. The Real Jimmy Carter. Washington: Regnery Publishing, 2004. ISBN 0-89526-090-5.
In the acknowledgements at the end, the author says one of his motivations for writing this book was to acquaint younger readers and older folks who've managed to forget with the reality of Jimmy Carter's presidency. Indeed, unless one lived through it, it's hard to appreciate how Carter's formidable intellect allowed him to quickly grasp the essentials of a situation, absorb vast amounts of detailed information, and then immediately, intuitively leap to the absolutely worst conceivable course of action. It's all here: his race-baiting 1970 campaign for governor of Georgia; the Playboy interview; “ethnic purity”; “I'll never lie to you”; the 111 page list of campaign promises; alienating the Democratic controlled House and Senate before inaugural week was over; stagflation; gas lines; the Moral Equivalent of War (MEOW); turning down the thermostat; spending Christmas with the Shah of Iran, “an island of stability in one of the more troubled areas of the world”; Nicaragua; Afghanistan; “malaise” (which he actually never said, but will be forever associated with his presidency); the cabinet massacre; kissing Brezhnev; “Carter held Hostage”, and more. There is a side-splitting account of the “killer rabbit” episode on page 155. I'd have tried to work in Billy Beer, but I guess you gotta stop somewhere. Carter's post-presidential career, hobnobbing with dictators, loose-cannon freelance diplomacy, and connections with shady Middle East financiers including BCCI, are covered along with his admirable humanitarian work with Habitat for Humanity. 
That this sanctimonious mountebank who The New Republic, hardly a right wing mouthpiece, called “a vain, meddling, amoral American fool” in 1995 after he expressed sympathy for Serbian ethnic cleanser Radovan Karadzic, managed to win the Nobel Peace Prize, only bears out the assessment of Carter made decades earlier by notorious bank robber Willie Sutton, “I've never seen a bigger confidence man in my life, and I've been around some of the best in the business.”

 Permalink

Bell, John S. Speakable and Unspeakable in Quantum Mechanics. Cambridge: Cambridge University Press, [1987] 1993. ISBN 0-521-52338-9.
This volume collects most of Bell's papers on the foundations and interpretation of quantum mechanics including, of course, his discovery of “Bell's inequality”, which showed that no local hidden variable theory can reproduce the statistical results of quantum mechanics, setting the stage for the experimental confirmation by Aspect and others of the fundamental non-locality of quantum physics. Bell's interest in the pilot wave theories of de Broglie and Bohm is reflected in a number of papers, and Bell's exposition of these theories is clearer and more concise than anything I've read by Bohm or Hiley. He goes on to show the strong similarities between the pilot wave approach and the “many-worlds interpretation” of Everett and DeWitt. An added treat is chapter 9, where Bell derives special relativity entirely from Maxwell's equations and the Bohr atom, along the lines of FitzGerald, Larmor, Lorentz, and Poincaré, arriving at the principle of relativity (which Einstein took as a hypothesis) from the previously known laws of physics.
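For readers who'd like to see the conflict in numbers, here's a little sketch of my own (not from the book) using the CHSH form of Bell's inequality: any local hidden variable theory must satisfy |S| ≤ 2 for the combination of correlations below, while quantum mechanics, with singlet-state correlation E(a,b) = −cos(a−b), exceeds that bound at the standard measurement angles.

```python
import math

def E(a, b):
    # Quantum-mechanical spin correlation for the singlet state,
    # measured along directions at angles a and b (radians)
    return -math.cos(a - b)

# Standard CHSH angle choices
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: local hidden variable theories require |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), about 2.828
```

The quantum prediction of 2√2 ≈ 2.83 is what Aspect's experiments measured, violating the local bound of 2.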

 Permalink

Schott, Ben. Schott's Original Miscellany. London: Bloomsbury, 2002. ISBN 1-58234-349-7.
At last—a readily available source one can cite for the definition of the unit “millihelen” (p. 152)!

 Permalink

Schama, Simon. Citizens: A Chronicle of the French Revolution. New York: Vintage Books, 1989. ISBN 0-679-72610-1.
The French Revolution is so universally used as a metaphor in social and political writing that it's refreshing to come across a straight narrative history of what actually happened. The French Revolution is a huge, sprawling story, and this is a big, heavy book about it—more than nine hundred pages, with an enormous cast of characters—in large part because each successive set of new bosses cut off the heads of their predecessors. Schama stresses the continuity of many of the aspects of the Revolution with changes already underway in the latter decades of the ancien régime—Louis XVI comes across as a kind of Enlightenment Gorbachev—attempting to reform a bankrupt system from the top and setting in motion forces which couldn't be controlled. Also striking is how many of the most extreme revolutionaries were well-off before the Revolution and, in particular, the large number of lawyers in their ranks. Far from viewing the Terror as an aberration, Schama argues that from the very start, the summer of 1789, “violence was the motor of the Revolution”. With the benefit of two centuries of hindsight, you almost want to reach back across the years, shake these guys by the shoulders, and say “Can't you see where you're going with this?” But then you realise: this was all happening for the very first time—they had no idea of the inevitable outcome of their idealism! In a mere four years, they invented the entire malevolent machinery of the modern, murderous, totalitarian nation-state, and all with the best intentions, informed by the persuasively posed yet relentlessly wrong reasoning of Rousseau. Those who have since repeated the experiment, with the example of the French Revolution before them as a warning, have no such excuse.

 Permalink

Djavann, Chahdortt. Que pense Allah de l'Europe?. Paris: Gallimard, 2004. ISBN 2-07-077202-0.
The author came of age in revolutionary Iran. After ten years living in Paris, she sees the conflict over the Islamic veil in French society as one in which those she calls “islamists” use the words of the West in ways which mean one thing to westerners and something entirely different to partisans of their own cause. She argues that while freedom of religion is a Western value which cannot be compromised, neither should it be manipulated to subvert the social liberty which is equally a contribution of the West to civilisation. Europe, she believes, is particularly vulnerable to infiltration by those who do not share its values but can employ its traditions and institutions to subvert them. This is not a book length treatment, but rather an essay of 55 pages. For a less personally impassioned but more in-depth view of the situation across the Channel, see Le Londonistan (July 2003).

 Permalink

Lewis, Michael. Moneyball. New York: W. W. Norton, [2003] 2004. ISBN 0-393-32481-8.
Everybody knows there's no faster or more reliable way to make a lot of money than to identify an inefficiency in a market and arbitrage it. (If you didn't know that, consider it free advice and worth everything you paid for it!) Modern financial markets are Hellishly efficient. Millions of players armed with real-time transaction data, massive computing and database resources for data mining, and more math, physics, and economics Ph.D.s than a dozen Ivy League campuses are continuously looking for the slightest discrepancy between price and value, which more or less guarantees that even when one is discovered, it won't last for more than a moment, and that by the time you hear about it, it'll be long gone. It's much easier to find opportunities in slower moving, less intensely scrutinised fields where conventional wisdom and lack of imagination can blind those in the market to lucrative inefficiencies. For example, in the 1980s generic personal computers and graphics adaptors became comparable in performance to special purpose computer aided design (CAD) workstations ten times or more as costly. This created a situation where the entire value-added in CAD was software, not hardware—all the hardware development, manufacturing, and support costs of the existing vendors were simply an inefficiency which cost their customers dearly. Folks who recognised this inefficiency and moved to exploit the opportunity it created were well rewarded, even while their products were still being ridiculed or ignored by “serious vendors”. Opportunities like this don't come around very often, and there's a lot of luck involved in being in the right place at the right time with the skills and resources at hand to exploit one when you do spot it.

But just imagine what you could do in a field mired in tradition, superstition, ignorance, meaningless numbers, a self-perpetuating old boy network, and gross disparities between spending and performance…Major League Baseball, say? Starting in the 1970s and 80s, Bill James and a slowly growing group of statistically knowledgeable and scientifically minded baseball fanatics—outsiders all—began to look beyond conventional statistics and box scores and study what really determines how many runs a team will score and how many games it will win. Their results turned conventional wisdom completely on its head and that, combined with the clubbiness of professional baseball, caused their work to be utterly ignored until Billy Beane became general manager of the Oakland A's in 1997. Beane and his statistics wizard Paul DePodesta were faced with the challenge of building a winning team with a budget for player salaries right at the bottom of the league—they had less to spend on the entire roster than some teams spent on three or four superstar free agents. I've always been fond of the phrase “management by lack of alternatives”, and that's the situation Beane faced. He took on board the wisdom of the fan statisticians and built upon it, to numerically estimate the value in runs—the ultimate currency of baseball—of individual players, and compare that to the cost of acquiring them. He quickly discovered the market in professional baseball players was grossly inefficient—teams were paying millions for players with statistics which contributed little or nothing to runs scored and games won, while players with the numbers that really mattered were languishing in the minors, available for a song.
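Bill James's basic “Runs Created” formula is one of the simplest of these run estimators: an on-base factor times an advancement factor, divided by opportunities. The formula is James's; the two sample batting lines below are entirely made up by me to show the kind of contrast Beane exploited.

```python
def runs_created(hits, walks, total_bases, at_bats):
    # Bill James's basic Runs Created estimator:
    # (times on base) * (bases advanced) / (opportunities)
    return (hits + walks) * total_bases / (at_bats + walks)

# Invented season lines: a patient, high-walk, high-power hitter
# versus a free-swinging batting-average-only hitter
patient = runs_created(hits=150, walks=100, total_bases=280, at_bats=520)
hacker = runs_created(hits=170, walks=25, total_bases=230, at_bats=560)
print(round(patient), round(hacker))
```

In this invented example the free swinger has the higher batting average (.304 versus .288), which is what scouts and sportswriters traditionally prized, yet the estimator credits the patient hitter with far more runs (113 versus 77), because walks and extra bases, not batting average, are what actually put runs on the board.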

The Oakland A's are short for “Athletics”, but under Beane it might as well have been “Arbitrageurs”—trading overvalued stars for cash, draft picks, and undervalued unknowns spotted by the statistical model. Conventional scouting went out the window; the A's roster was full of people who didn't look like baseball players but fit the mathematical profile. Further, Beane changed the way the game was played—if the numbers said stolen bases and sacrifice bunts were a net loss in runs long-term, then the A's didn't do them. The sportswriters and other teams thought it was crazy, but it won ball games: an amazing 103 in 2002 with a total payroll of less than US$42 million. In most other markets or businesses competitors would be tripping over one another to copy the methods which produced such results, but so hidebound and inbred is baseball that so far only two other teams have adopted the Oakland way of winning. Writing on the opening day of the 2004 World Series, it is interesting to observe that one of those two is the Boston Red Sox. I must observe, however, amidst rooting for the scientific method and high fives for budget discipline and number crunching, that the ultimate product of professional baseball is not runs scored, nor games, pennants, or World Series won, but rather entertainment and the revenue it generates from fans, directly or indirectly. One wonders whether this new style of MBAseball run from the front office will ultimately be as enjoyable as the intuitive, risk-taking, seat of the pants game contested from the dugout by a Leo Durocher, Casey Stengel, or Earl Weaver. This superbly written, fascinating book is by the author of the almost indescribably excellent Liar's Poker. The 2004 paperback edition contains an Afterword recounting the “religious war” the original 2003 hardcover ignited. Again, this is a book recommended by a visitor with the recommendation form—thanks, Joe!

 Permalink

Jacobs, Jane. Dark Age Ahead. New York: Random House, 2004. ISBN 1-4000-6232-2.
The reaction of a reader who chooses this book solely based on its title or the dust-jacket blurb is quite likely to be, “Huh?” The first chapter vividly evokes the squalor and mass cultural amnesia which followed the fall of Western Rome, the collapse of the Chinese global exploration and trade in the Ming dynasty, and the extinction of indigenous cultures in North America and elsewhere. Then, suddenly, we find ourselves talking about urban traffic planning, the merits of trolley buses vs. light rail systems, Toronto metropolitan government, accounting scandals, revenue sharing with municipalities, and a host of other issues which, however important, few would rank as high on the list of probable causes of an incipient dark age. These are issues near and dear to the author, who has been writing about them ever since her 1961 classic The Death and Life of Great American Cities (Jacobs was born in 1916 and wrote this book at the age of 87). If you're unfamiliar with her earlier work, the extensive discussion of “city import replacement” in the present volume will go right over your head as she never defines it here. Further, she uses the word “neoconservative” at variance with its usual meaning in the U.S. and Europe. It's only on page 113 (of 175 pages of main text) that we discover this is a uniquely Canadian definition. Fine, she's been a resident of Toronto since 1969, but this book is published in New York and addressed to an audience of “North Americans” (another Canadian usage), so it's unnecessarily confusing. I find little in this book to disagree with, but as a discussion of the genuine risks which face Western civilisation, it's superficial and largely irrelevant.

 Permalink

O'Neill, John E. and Jerome L. Corsi. Unfit for Command. Washington: Regnery Publishing, 2004. ISBN 0-89526-017-4.

 Permalink

November 2004

Barnett, Thomas P. M. The Pentagon's New Map. New York: G.P. Putnam's Sons, 2004. ISBN 0-399-15175-3.
This is one scary book—scary both for the world-view it advocates and the fact that its author is a professor at the U.S. Naval War College and participant in strategic planning at the Pentagon's Office of Force Transformation. His map divides the world into a “Functioning Core” consisting of the players, both established (the U.S., Europe, Japan) and newly arrived (Mexico, Russia, China, India, Brazil, etc.) in the great game of globalisation, and a “Non-Integrating Gap” containing all the rest (most of Africa, Andean South America, the Middle East and Central and Southeast Asia), deemed “disconnected” from globalisation. (The detailed map may be consulted on the author's Web site.) Virtually all U.S. military interventions in the years 1990–2003 occurred in the “Gap” while, he argues, nation-on-nation violence within the Core is a thing of the past and needn't concern strategic planners. In the Gap, however, he believes it is the mission of the U.S. military to enforce “rule-sets”, acting preemptively and with lethal force where necessary to remove regimes which block connectivity of their people with the emerging global system, and a U.S.-led “System Administration” force to carry out the task of nation building when the bombs and boots of “Leviathan” (a term he uses repeatedly—think of it as a Hobbesian choice!) re-embark their transports for the next conflict. There is a rather bizarre chapter, “The Myths We Make”, in which he says that global chaos, dreams of an American empire, and the U.S. as world police are bogus argument-enders employed by “blowhards”, which is immediately followed by a chapter proposing a ten-point plan which includes such items as invading North Korea (2), fomenting revolution in (or invading) Iran (3), invading Colombia (4), putting an end to Wahabi indoctrination in Saudi Arabia (5), co-operating with the Chinese military (6), and expanding the United States by a dozen more states by 2050, including the existing states of Mexico (9). 
This isn't globocop? This isn't empire? And even if it's done with the best of intentions, how probable is it that such a Leviathan with a moral agenda and a “shock and awe” military without peer would not succumb to the imperative of imperium?

 Permalink

Barry, Max. Jennifer Government. New York: Vintage Books, 2003. ISBN 1-4000-3092-7.
When you try to explain personal liberty to under-thirty-fivers indoctrinated in government schools, their general reaction is, “Well, wouldn't the big corporations just take over and you'd end up with a kind of corporate fascism which relegated individuals to the rôle of passive consumers?” Of course, that's what they've been taught is already the case—even as intrusive government hits unprecedented new highs—but then logic was never a strong point of collectivist kiddies. Max Barry has written the rarest of novels—a persuasive libertarian dystopia—what it would look like if the “big corporations” really did take over. In this world, individuals take the surname of their employer, and hence the protagonist, Jennifer, is an agent of what is left of the Government—get it? It is a useful exercise for libertarians to figure out “what's wrong with this picture” and identify why corporations self-size to that of the predominant government power: the smaller the government, the more local the optimal enterprise. This is another excellent recommendation by a visitor to this page.

 Permalink

Miller, John J. and Mark Molesky. Our Oldest Enemy. New York: Doubleday, 2004. ISBN 0-385-51219-8.
In this history of relations between America and France over three centuries—starting in 1704, well before the U.S. existed—the authors argue that the common perception of sympathy and shared interest between the “two great republics” from Lafayette to “Lafayette, we are here” and beyond is not borne out by the facts, and that the recent tension between the U.S. and France over Iraq is consistent with centuries of French scheming in quest of its own, now forfeit, status as a great power. Starting with French-incited and -led Indian raids on British settlements in the 18th century, through the undeclared naval war of 1798–1800, Napoleon's plans to invade New Orleans, Napoleon III's adventures in Mexico, Clemenceau's subverting Wilson's peace plans after being rescued by U.S. troops in World War I, Eisenhower's having to fight his way through Vichy French troops in North Africa in order to get to the Germans, Stalinist intellectuals in the Cold War, Suez, de Gaulle's pulling out of NATO, Chirac's long-term relationship with his “personal friend” Saddam Hussein, through recent perfidy at the U.N., the case is made that, with rare exceptions, France has been the most consistent opponent of the U.S. over all of their shared history. The authors don't hold France and the French in very high esteem, and there are numerous zingers and turns of phrase such as “Time and again in the last two centuries, France has refused to come to grips with its diminished status as a country whose greatest general was a foreigner, whose greatest warrior was a teenage girl, and whose last great military victory came on the plains of Wagram in 1809” (p. 10). The account of Vichy in chapter 9 is rather sketchy and one-dimensional; readers interested in that particular shameful chapter in French history will find more details in Robert Paxton's Vichy France and Marc Ferro's biography, Pétain or the eponymous movie made from it.

 Permalink

Satrapi, Marjane. Persepolis: The Story of a Childhood. New York: Pantheon Books, [2000, 2001] 2003. ISBN 0-375-71457-X.
This story is told in comic strip form, but there's nothing funny about it. Satrapi was a 10 year old girl in Tehran when the revolution overthrew the Shah of Iran. Her well-off family detested the Shah, had several relatives active in leftist opposition movements, and supported the revolution, but were horrified when the mullahs began to turn the clock back to the middle ages. The terror and mass slaughter of the Iran/Iraq war are seen through the eyes of a young girl, along with the paranoia and repression of the Islamic regime. When she was 14, her parents sent her to Vienna to escape Iran; she now lives and works in Paris. Persepolis was originally published in French in two volumes (1, 2). This edition combines the two volumes, with Satrapi's original artwork re-lettered with the English translation.

 Permalink

Hammersley, Ben. Content Syndication with RSS. Sebastopol, CA: O'Reilly, 2003. ISBN 0-596-00383-8.
Sometimes the process of setting standards for the Internet just leaves you wanting to avert your eyes. The RSS standard, used by Web loggers, news sites, and others to provide “feeds” which apprise other sites of updates to their content, is a fine example of what happens when standards go bad. At first, there was the idea that RSS would be fully RDF compliant, but then out came version 0.9 which used RDF incompletely and improperly. Then came 0.91, which stripped out RDF entirely, which was followed by version 1.0, which re-incorporated full support for RDF along with modules and XML namespaces. Two weeks later, along came version 0.92 (I'm not making this up), which extended 0.91 and remained RDF free. Finally, late in 2002, RSS 2.0 arrived, a further extension of 0.92, and not in any way based on 1.0—got that? Further, the different standards don't even agree on what “RSS” stands for; personally, I'd opt for “Ridiculous Standard Setting”. For the poor guy who simply wants to provide feeds to let folks know what's changed on a Web log or site, this is a huge mess, as it is for those who wish to monitor such feeds. This book recounts the tawdry history of RSS, illustrates the various dialects, and provides useful examples for generating and using RSS feeds, as well as an overview of the RSS world, including syndication directories, aggregators, desktop feed reader tools, and Publish and Subscribe architectures.
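For all the standards chaos, the actual mechanics of a feed are mercifully simple: an RSS 2.0 document is just a channel with a handful of metadata elements and a list of items. Here is a minimal sketch in Python, using only the standard library (the site name, URLs, and entry are invented for illustration, and real feeds would add dates, GUIDs, and the like):

```python
import xml.etree.ElementTree as ET

def make_rss(title, link, description, items):
    """Build a minimal RSS 2.0 feed as an XML string.

    `items` is a list of (title, link, description) tuples,
    one per Web log entry or site update.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # Required channel metadata: what the feed is and where it lives.
    for tag, text in (("title", title), ("link", link),
                      ("description", description)):
        ET.SubElement(channel, tag).text = text
    # One <item> per update announced by the feed.
    for item_title, item_link, item_desc in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
        ET.SubElement(item, "description").text = item_desc
    return ET.tostring(rss, encoding="unicode")

feed = make_rss("Example Log", "http://www.example.com/",
                "What's new on the site",
                [("New entry", "http://www.example.com/new",
                  "A fresh update")])
```

An aggregator consuming such a feed does the reverse: parse the XML, walk the `<item>` elements, and compare them against what it saw on the last poll.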

 Permalink

Gleick, James. Isaac Newton. New York: Pantheon Books, 2003. ISBN 0-375-42233-1.
Fitting a satisfying biography of one of the most towering figures in the history of the human intellect into fewer than 200 pages is a formidable undertaking, which James Gleick has accomplished magnificently here. Newton's mathematics and science are well covered, placing each in the context of the “shoulders of Giants” which he said helped him see further, but also his extensive (and little known, prior to the twentieth century) investigations into alchemy, theology, and ancient history. His battles with Hooke, Leibniz, and Flamsteed, his autocratic later years as Master of the Royal Mint and President of the Royal Society, and his ceaseless curiosity and investigation are all chronicled, along with his eccentricity and secretiveness. I'm a little dubious of the discussion on pp. 186–187 where Newton is argued to have anticipated or at least left the door open for relativity, quantum theory, equivalence of mass and energy, and subatomic forces. Newton wrote millions of words on almost every topic imaginable, most for his own use with no intention of publication, few examined by scholars until centuries after his death. From such a body of text, it may be possible to find sentences here and there which “anticipate” almost anything when you know from hindsight what you're looking for. In any case, the achievements of Newton, who not only laid the foundation of modern physical science, invented the mathematics upon which much of it is based, and created the very way we think about and do science, need no embellishment. The text is accompanied by 48 pages of endnotes (the majority citing primary sources) and an 18 page bibliography. A paperback edition is now available.

 Permalink

Babbin, Jed. Inside the Asylum. Washington: Regnery Publishing, 2004. ISBN 0-89526-088-3.
You'll be shocked, shocked, to discover, turning these pages, that the United Nations is an utterly corrupt gang of despots, murderers, and kleptocrats, not just ineffectual against but, in some cases, complicit in supporting terrorism, while sanctimoniously proclaiming the moral equivalence of savagery and civilisation. And that the European Union is a feckless, collectivist, elitist club of effete former and wannabe great powers facing a demographic and economic cataclysm entirely of their own making. But you knew that, didn't you? That's the problem with this thin (less than 150 pages of main text) volume. Most of the people who will read it already know most of what's said here. Those who still believe the U.N. to be “the last, best hope for peace” (and their numbers are, sadly, legion—more than 65% of my neighbours in the Canton of Neuchâtel voted for Switzerland to join the U.N. in the March 2002 referendum) are unlikely to read this book.

 Permalink

Bonner, William with Addison Wiggin. Financial Reckoning Day. Hoboken, NJ: John Wiley & Sons, 2003. ISBN 0-471-44973-3.
William Bonner's Daily Reckoning newsletter was, along with a few others like Downside, a voice of sanity in the bubble markets of the turn of the millennium. I've always found that the best investment analysis looks well beyond the markets to the historical, social, political, moral, technological, and demographic trends which market action ultimately reflects. Bonner and Wiggin provide a global, multi-century tour d'horizon here, and make a convincing case that the boom, bust, and decade-plus “soft depression” which Japan suffered from the 1990s to the present is the prototype of what's in store for the U.S. as the inevitable de-leveraging of the mountain of corporate and consumer debt on which the recent boom was built occurs, with the difference that Japan has the advantage of a high savings rate and large trade surplus, while the U.S. saves nothing and runs enormous trade deficits. The analysis of how Alan Greenspan's evolution from supreme goldbug in Ayn Rand's inner circle to maestro of paper money is completely consistent with his youthful belief in Objectivism is simply delightful. The authors readily admit that markets can do anything, but believe that in the long run, markets generally do what they “ought to”, and suggest an investment strategy for the next decade on that basis.

 Permalink

December 2004

Fallaci, Oriana. La Force de la Raison. Monaco: Éditions du Rocher, 2004. ISBN 2-268-05264-8.
If, fifty years from now, there still are historians permitted to chronicle the civilisation of Western Europe (which, if the trends described in this book persist, may not be the way to bet), Fallaci may be seen as a figure like Churchill in the 1930s, willing to speak the truth about a clear and present danger, notwithstanding the derision and abuse doing so engenders from those who prefer to live the easy life, avoid difficult decisions, and hope things will just get better. In this, and her earlier La rage et l'orgueil (June 2002), Fallaci warns, in stark and uncompromising terms verging occasionally on a rant, of the increasing Islamicisation of Western Europe, and decries the politicians, church figures, and media whose inaction or active efforts aid and abet it. She argues that what is at risk is nothing less than European civilisation itself, which, Islamic figures openly predict among themselves, will eventually be transformed through the inexorable power of demographics and immigration into an Islamic Republic of “Eurabia”. The analysis of the “natural alliance” between the extreme political left and radical Islam is brilliant, and brings to mind L'Islam révolutionnaire (December 2003) by terrorist “Carlos the Jackal” (Ilich Ramírez Sánchez). There is a shameful little piece of paper tipped into the pages of the book by the publisher, who felt no need for a disclaimer when earlier publishing the screed by mass murderer “Carlos”. In language worthy of Pierre Laval, they defend its publication in the interest of presenting a «différent» viewpoint, and ask readers to approach it “critically, in light of the present-day international context” (my translation).

 Permalink

Marasco, Joe. The Software Development Edge. Upper Saddle River, NJ: Addison-Wesley, 2005. ISBN 0-321-32131-6.
I read this book in manuscript form when it was provisionally titled The Psychology of Software Development.

 Permalink

Godwin, Robert ed. Freedom 7: The NASA Mission Reports. Burlington, Ontario, Canada: Apogee Books, 2000. ISBN 1-896522-80-7.
This volume in the superb Apogee NASA Mission Reports series covers Alan Shepard's May 5th, 1961 suborbital flight in Freedom 7, the first U.S. manned space flight. Included are the press kit for the mission, complete transcripts of the post-flight debriefings and in-flight communications, and proceedings of a conference held in June 1961 to report mission results. In addition, the original 1958 request for astronaut volunteers (before it was decided that only military test pilots need apply) is reproduced, along with the press conference introducing the Mercury astronauts, which Tom Wolfe so vividly (and accurately) described in The Right Stuff. A bonus CD-ROM includes the complete in-flight films of the instrument panel and astronaut, a 30 minute NASA documentary about the flight, and the complete NASA official history of Project Mercury, This New Ocean, as a PDF document. There are few if any errors in the transcriptions of the documents. The caption for the photograph of Freedom 7 on the second page of colour plates makes the common error of describing its heat shield as “ablative fiberglass”. In fact, as stated on page 145, suborbital missions used a beryllium heat sink; only orbital capsules were equipped with the ablative shield.

 Permalink

Lundstrom, David E. A Few Good Men from Univac. Cambridge, MA: MIT Press, 1987. ISBN 0-262-12120-4.
The author joined UNIVAC in 1955 and led the testing of the UNIVAC II which, unlike the UNIVAC I, was manufactured in the St. Paul area. (This book uses “Univac” as the name of the company and its computers; in my experience and in all the documents in my collection, the name, originally an acronym for “UNIVersal Automatic Computer”, was always written in all capitals: “UNIVAC”; that is the convention I shall use here.) He then worked on the development of the Navy Tactical Data System (NTDS) shipboard computer, which was later commercialised as the UNIVAC 490 real-time computer. The UNIVAC 1107 also used the NTDS circuit design and I/O architecture. In 1963, like many UNIVAC alumni, Lundstrom crossed the river to join Control Data, where he worked until retiring in 1985. At Control Data he was responsible for peripherals, terminals, and airline reservation system development. It was predictable but sad to observe how Control Data, founded by a group of talented innovators to escape the stifling self-destructive incompetence of UNIVAC management, rapidly built up its own political hierarchy which chased away its own best people, including Seymour Cray. It's as if at a board meeting somebody said, “Hey, we're successful now! Let's build a big office tower and fill it up with idiots and politicians to keep the technical geniuses from getting anything done.” Lundstrom provides an authentic view from the inside of the mainframe computer business over a large part of its history. His observations about why technology transfer usually fails and the destruction wreaked on morale by incessant reorganisations and management shifts in direction are worth pondering. Lundstrom's background is in hardware. In chapter 13, before describing software, he cautions that “Professional programmers are going to disagree violently with what I say.” Well, this professional programmer certainly did, but it's because most of what he goes on to say is simply wrong. 
But that's a small wart on an excellent, insightful, and thoroughly enjoyable book. This book is out of print; used copies are generally available but tend to be expensive—you might want to keep checking over a period of months as occasionally a bargain will come around.

 Permalink

Lileks, James. Interior Desecrations: Hideous Homes from the Horrible '70s. New York: Crown Publishers, 2004. ISBN 1-4000-4640-8.
After turning your taste buds inside out with The Gallery of Regrettable Food (April 2004), Lileks now tackles what passed for home decoration in the 1970s. Seldom will you encounter a book which makes you ask “What were they thinking?” so many times. It makes you wonder which aspects of the current scene will look as weird twenty or thirty years from now. Additional material which came to hand after the book was published may be viewed on the author's Web site.

 Permalink

Bovard, James. The Bush Betrayal. New York: Macmillan, 2004. ISBN 1-4039-6727-X.
Having dissected the depredations of Clinton and Socialist Party A against the liberty of U.S. citizens in Feeling Your Pain (May 2001), Bovard now turns his crypto-libertarian gaze toward the ravages committed by Bush and Socialist Party B in the last four years. Once again, Bovard demonstrates his extraordinary talent in penetrating the fog of government propaganda to see the crystalline absurdity lurking within. On page 88 we discover that under the rules adopted by Colorado pursuant to the “No Child Left Behind Act”, a school with 1000 students which had a mere 179 or fewer homicides per year would not be classified as “persistently dangerous”, permitting parents of the survivors to transfer their children to less target-rich institutions.

On page 187, we encounter this head-scratching poser asked of those who wished to become screeners for the “Transportation Security Administration”:

Question: Why is it important to screen bags for IEDs [Improvised Explosive Devices]?
  1. The IED batteries could leak and damage other passenger bags.
  2. The wires in the IED could cause a short to the aircraft wires.
  3. IEDs can cause loss of lives, property, and aircraft.
  4. The ticking timer could worry other passengers.
I wish I were making this up. The inspector general of the “Homeland Security Department” declined to say how many of the “screeners” who intimidate citizens, feel up women, and confiscate fingernail clippers and putatively dangerous and easily-pocketed jewelry managed to answer this one correctly.

I call Bovard a “crypto-libertarian” because he clearly bases his analysis on libertarian principles, yet rarely observes that any polity with unconstrained government power and sedated sheeple for citizens will end badly, regardless of who wins the elections. As with his earlier books, sources for this work are exhaustively documented in 41 pages of endnotes.

 Permalink

Holmes, W. J. Double-Edged Secrets. Annapolis: U.S. Naval Institute, [1979] 1998. ISBN 1-55750-324-9.
This is the story of U.S. Naval Intelligence in the Pacific theatre during World War II, told by somebody who was there—Holmes served in the inner sanctum of Naval Intelligence at Pearl Harbor from before the Japanese attack in 1941 through the end of the war in 1945. Most accounts of naval intelligence in the war with Japan focus on cryptanalysis and use of the “Ultra” information it yielded from Japanese radio intercepts. Holmes regularly worked with this material, and with the dedicated and sometimes eccentric individuals who produced it, but his focus is broader—on intelligence as a whole, of which cryptanalysis was only a part. The “product” delivered by his shop to warfighters in the fleet was painstakingly gleaned not only from communications intercepts, but also traffic analysis, direction finding, interpretation of aerial and submarine reconnaissance photos, interrogation of prisoners, translations of captured documents, and a multitude of other sources. In preparing for the invasion of Okinawa, naval intelligence tracked down an eighty-year-old seashell expert who provided information on landing beaches from his pre-war collecting expedition there. The total material delivered by intelligence for the Okinawa operation amounted to 127 tons of paper. This book provides an excellent feel for the fog of war, and how difficult it is to discern enemy intentions from the limited and conflicting information at hand. In addition, the difficult judgement calls which must be made between the risk of disclosing sources of information versus getting useful information into the hands of combat forces on a timely basis is a theme throughout the narrative. If you're looking for more of a focus on cryptanalysis and a discussion of the little-known British contribution to codebreaking in the Pacific war, see Michael Smith's The Emperor's Codes (August 2001).

 Permalink

Sharpe, Tom. Wilt in Nowhere. London: Hutchinson, 2004. ISBN 0-09-179965-1.
Tom Sharpe is, in my opinion, the greatest living master of English farce. Combining Wodehouse's sense of the absurd and Waugh's acid-penned black humour, his novels make you almost grateful for the worst day you've ever had, as it's guaranteed to be a sunny stroll through the park compared to what his characters endure. I read most of Sharpe's novels to date in the 1980s, and was delighted to discover he's still going strong, bringing the misadventures of Henry Wilt up to date in this side-splitting book. The “release the hounds” episode in chapter 13 makes me laugh out loud every time I read it. A U.S. edition is scheduled for publication in June 2005. There are numerous references to earlier episodes in the Wilt saga, but this book can be enjoyed without having read them. If you'd like to enjoy them in order, they're Wilt, The Wilt Alternative, Wilt on High, and then the present volume.

 Permalink

Nisbett, Richard E. The Geography of Thought. New York: Free Press, 2003. ISBN 0-7432-5535-6.
It's a safe bet that the vast majority of Westerners who have done business in East Asia (China, Japan, and Korea), and Asians who've done business in the West have come to the same conclusion: “Asians and Westerners think differently.” They may not say as much, at least not to the general public, for fear of being thought intolerant, but they believe it on the evidence of their own experience nonetheless.

Psychologist Richard E. Nisbett and his colleagues in China and Korea have been experimentally investigating the differences in Asian and Western thought processes, and their results are summarised in this enlightening book (with citations of the original research). Their work confirms the conventional wisdom—Westerners view the world through a telephoto lens, applying logic and reductionism to find the “one best way”, while Asians see a wide-angle view, taking into account the context of events and seeking a middle way between extremes and apparent contradictions—with experimental effect sizes which are large, robust, and reliable.

Present-day differences in Asian and Western thought are shown to be entirely consistent with those of ancient Greek and Chinese philosophy, implying that whatever the cause, it is stable over long spans of history. Although experiments with infants provide some evidence for genetic predisposition, Nisbett suspects that a self-reinforcing homeostatic feedback loop between culture, language, and society is responsible for most of the difference in thought processes. The fact that Asian-Americans and Westernised Asians in Hong Kong and Singapore test between Asian and Western extremes provides evidence for this. (The fact that Asians excel at quintessentially Western intellectual endeavours such as abstract mathematics and theoretical science would, it seems to me, exclude most simple-minded explanations based on inherited differences in brain wiring.)

This work casts doubt upon Utopian notions of an End of History in which Western-style free markets and democracy are adopted by all nations and cultures around the globe. To a large extent, such a scenario assumes all people think like Westerners and share the same values, an assumption to which Nisbett's research offers persuasive counter examples. This may be for the best; both Western and Asian styles of thought are shown as predisposing those applying them to distinct, yet equally dangerous, fallacies. Perhaps a synthesis of these (and other) ways of thinking is a sounder foundation for a global society than the Western model alone.

 Permalink

  2005  

January 2005

Lamont, Peter. The Rise of the Indian Rope Trick. New York: Thunder's Mouth Press, 2004. ISBN 1-56025-661-3.
Charmed by a mysterious swami, the end of a rope rises up of its own accord high into the air. A small boy climbs the rope and, upon reaching the top, vanishes. The Indian rope trick: ancient enigma of the subcontinent or 1890 invention by a Chicago newspaper embroiled in a circulation war? Peter Lamont, magician and researcher at the University of Edinburgh, traces the origin and growth of this pervasive legend. Along the way we encounter a cast of characters including Marco Polo; a Chief of the U.S. Secret Service; Madame Blavatsky; Charles Dickens; Colonel Stodare, an Englishman who impersonated a Frenchman performing Indian magic; William H. Seward, Lincoln's Secretary of State; Professor Belzibub; General Earl Haig and his aptly named aide-de-camp, Sergeant Secrett; John Nevil Maskelyne, conjurer, debunker of spiritualism, and inventor of the pay toilet; and a host of others. The author's style is occasionally too clever for its own good, but this is a popular book about the Indian rope trick, not a quantum field theory text after all, so what the heck. I read the U.K. edition.

 Permalink

Orsenna, Erik. La grammaire est une chanson douce. Paris: Poche, 2001. ISBN 2-253-14910-1.
Ten year old Jeanne and her insufferable fourteen year old brother survive a shipwreck and find themselves on an enchanted island where words come alive and grammar escapes the rationalistic prison of Madame Jargonos and her Cartesian colleagues in the black helicopters (nice touch, that) to emerge as the intuitive music of thought and expression. As Jeanne recovers her ability to speak, we discover the joy of forging phrases from the raw material of living words with the tools of grammar. The result of Jeanne's day in the factory on page 129 is a pure delight. The author is a member of l'Académie française.

 Permalink

Crichton, Michael. State of Fear. New York: HarperCollins, 2004. ISBN 0-06-621413-0.
Ever since I read his 2003 Commonwealth Club speech, I've admired Michael Crichton's outspoken defence of rationality against the junk science, elitist politics, and immoral anti-human policies of present-day Big Environmentalism. In State of Fear, he enlists his talent as a techno-thriller writer in the cause, debunking the bogus fear-mongering of the political/legal/media/academic complex which is increasingly turning the United States into a nation of safety-obsessed sheeple, easily manipulated by the elite which constructs the fact-free virtual reality they inhabit. To the extent this book causes people to look behind the green curtain of environmentalism, it will no doubt do a world of good. Scientific integrity is something which matters a great deal to Crichton—when's the last time you read a thriller which included dozens of citations of peer-reviewed scientific papers, charts based on public domain climate data, a list of data sources for independent investigation, a twenty page annotated bibliography, and an explicit statement of the author's point of view on the issues discussed in the novel?

The story is a compelling page-turner, but like other recent Crichton efforts, requires somewhat more suspension of disbelief than I'm comfortable with. I don't disagree with the scientific message—I applaud it—but I found myself less than satisfied with how the thing worked as a thriller. As in Prey (January 2003), the characters often seemed to do things which simply weren't the way real people would actually behave. Is it plausible that a James Bond-like secret agent such as John Kenner would entrust a raid on an eco-terrorist camp to a millionaire's administrative assistant and a lawyer who'd never fired a gun, or that he'd include these two, along with an actor who played a U.S. president on television, sent to spy for the bad guys, on an expedition to avert a horrific terrorist strike? These naïve, well-intentioned, but clueless characters provide convenient foils for Crichton's scientific arguments and come to deliciously appropriate ends, at least in one case, but all the while you can't help thinking they're just story devices who don't really belong there. The villains' grand schemes also make this engineer's reality detector go bzzzt! In each case, they're trying to do something on an unprecedented scale, involving unconfirmed theories and huge uncertainties in real-world data, and counting on it working the very first time, with no prior prototyping or reduced-scale tests. In the real world, heroics wouldn't be necessary—you could just sit back and wait for something to go wrong, as it always does in such circumstances.

 Permalink

Schmitt, Christopher. CSS Cookbook. Sebastopol, CA: O'Reilly, 2004. ISBN 0-596-00576-8.
It's taken a while, but Cascading Style Sheets have finally begun to live up to their promise of separating content from presentation on the Web, allowing a consistent design, specified in a single place and easily modified, to be applied to large collections of documents, and permitting content to be rendered in different ways depending on the media and audience: one style for online reading, another for printed output, an austere presentation for handheld devices, large type for readers with impaired vision, and a text-only format tailored for screen reader programs used by the blind. This book provides an overview of CSS solutions for common Web design problems, with sample code and screen shots illustrating what can be accomplished. It doesn't purport to be a comprehensive reference—you'll want to have Eric Meyer's Cascading Style Sheets: The Definitive Guide at hand as you develop your own CSS solutions, but Schmitt's book is valuable in showing how common problems can be solved in ways which aren't obvious from reading the specification or a reference book. Particularly useful for the real-world Web designer are Schmitt's discussion of which CSS features work and don't work in various popular browsers and suggestions of work-arounds to maximise the cross-platform portability of pages.

Many of the examples in this book are more or less obvious, and embody techniques with which folks who've rolled their own Movable Type style sheets will be familiar, but every chapter has one or more gems which caused this designer of minimalist Web pages to slap his forehead and exclaim, “I didn't know you could do that!” Chapter 9, which presents a collection of brutal hacks, many involving exploiting parsing bugs, for working around browser incompatibilities, may induce nausea in those who cherish standards compliance or worry about the consequences of millions of pages on the Web containing ticking time bombs which will cause them to fall flat on their faces when various browser bugs are fixed. One glimpses here the business model of the Web site designer who gets paid when the customer is happy with how the site looks in Exploder and views remediation of incompatibilities down the road as a source of recurring revenue. Still, if you develop and maintain Web sites at the HTML level, there are many ideas here which can lead to more effective Web pages, and encourage you to dig deeper into the details of CSS.

 Permalink

Weinberg, Steven. Facing Up. Cambridge, MA: Harvard University Press, 2001. ISBN 0-674-01120-1.
This is a collection of non-technical essays written between 1985 and 2000 by Nobel Prize winning physicist Steven Weinberg. Many discuss the “science wars”—the assault by postmodern academics on the claim that modern science is discovering objective truth (well, duh), but many other topics are explored, including string theory, Zionism, Alan Sokal's hoax at the expense of the unwitting (and witless) editors of Social Text, Thomas Kuhn's views on scientific revolutions, science and religion, and the comparative analysis of utopias. Weinberg applies a few basic principles to most things he discusses—I counted six separate defences of reductionism in modern science, most couched in precisely the same terms. You may find this book more enjoyable a chapter at a time over an extended period rather than in one big cover-to-cover gulp.

 Permalink

Appleton, Victor. Tom Swift and His Motor-Cycle. Bedford, MA: Applewood Books, [1910] 1992. ISBN 1-55709-175-7.
Here's where it all began—the first episode of the original Tom Swift saga. Here we encounter Tom, his father Barton Swift, Mrs. Baggert, Ned Newton, Eradicate Sampson and his mule Boomerang, Wakefield “bless my hatband” Damon, Happy Harry, and the rest of the regulars for the first time. In this first outing, Appleton is still finding his voice: a good deal of the narration occurs as Tom's thinking or talking out loud, and there are way too many references to Tom as “our hero” for the cynical modern reader. But it's a rip-snorting, thoroughly enjoyable yarn, and the best point of departure to explore the world of Tom Swift and American boyhood in the golden years before the tragically misnamed Great War. I read the electronic edition of this novel published in the Tom Swift and His Pocket Library collection at this site on my PalmOS PDA. I've posted an updated electronic edition which corrects a few typographical and formatting errors I noted whilst reading the novel.

 Permalink

Allitt, Patrick. I'm the Teacher, You're the Student. Philadelphia: University of Pennsylvania Press, 2005. ISBN 0-8122-1887-6.
This delightfully written and enlightening book provides a look inside a present-day university classroom. The author, a professor of history at Emory University in Atlanta, presents a diary of a one semester course in U.S. history from 1877 to the present. Descriptions of summer programs at Oxford and experiences as a student of Spanish in Salamanca, Spain (the description of the difficulty of learning a foreign language as an adult [pp. 65–69] is as good as any I've read) provide additional insight into the life of a professor. I wish I'd had a teacher explain the craft of expository writing as elegantly as Allitt in his “standard speech” (p. 82). The sorry state of undergraduate prose is sketched in stark detail, with amusing howlers like, “Many did not survive the harsh journey west, but still they trekked on.” Although an introductory class, students were a mix of all four undergraduate years; one doesn't get the sense that the graduating seniors thought or wrote any more clearly than the freshmen. Along the way, Allitt provides a refresher course in the historical period covered by the class. You might enjoy answering the factual questions from the final examination on pp. 215–218 before and after reading the book and comparing your scores (answers are on p. 237—respect the honour code and don't peek!). The darker side of the educational scene is discussed candidly: plagiarism in the age of the Internet; clueless, lazy, and deceitful students; and the endless spiral of grade inflation. What grade would you give to students who, after a semester in an introductory undergraduate course, “have no aptitude for history, no appreciation for the connection between events, no sense of how a historical situation changes over time, [who] don't want to do the necessary hard work, … skimp on the reading, and can't write to save their lives” (p. 219)—certainly an F? Well, actually, no: “Most of them will get B− and a few really hard cases will come in with Cs”. 
And, refuting the notion that high mean grade point averages at elite schools simply reflect the quality of the student body and their work, about a quarter of Allitt's class are these intellectual bottom feeders he so vividly evokes.

 Permalink

Sinclair, Upton. Mental Radio. Charlottesville, VA: Hampton Roads, [1930, 1962] 2001. ISBN 1-57174-235-2.
Upton Sinclair, self-described (p. 8) “Socialist ‘muckraker’” is best known for his novels such as The Jungle (which put a generation off eating sausage), Oil!, and The Moneychangers, and his social activism. His 1934 run for Governor of California was supported by young firebrand Robert A. Heinlein, whose 1938-1939 “lost first novel” For Us, The Living (February 2004) was in large part a polemic for Sinclair's “Social Credit” platform.

Here, however, the focus is on the human mind, in particular the remarkable experiments in telepathy and clairvoyance performed in the late 1920s with his wife, Mary Craig Sinclair. The experiments consisted of attempts to mentally transmit or perceive the content of previously drawn images. Some experiments were done with the “sender” and “receiver” separated by more than 40 kilometres, while others involved Sinclair drawing images in one room with the door closed, while his wife attempted to receive them in a different room. Many of the results are simply astonishing, so much so that given the informal conditions of the testing, many sceptics (especially present-day CSICOPs who argue that any form of cheating or sensory information transfer, whether deliberate or subconscious, which cannot be definitively excluded must be assumed to have occurred), will immediately discard them as flawed. But the Sinclair experiments took place just as formal research in parapsychology was getting underway—J.B. Rhine's Parapsychology Laboratory at Duke University was not founded until 1935—five years after the publication of Mental Radio, with the support of William McDougall, chairman of the Duke psychology department who, in 1930, himself performed experiments with Mary Craig Sinclair and wrote the introduction to the present volume.

This book is a reprint of the 1962 edition, which includes a retrospective foreword by Upton Sinclair, the analysis of the Sinclair experiments by Walter Franklin Prince published in the Bulletin of the Boston Society for Psychic Research in 1932, and a preface by Albert Einstein.

 Permalink

Smith, L. Neil. Ceres. Unpublished manuscript, January 2005.
I read this book in manuscript form; I'll add the ISBN when it is published. An online plot summary is available.

 Permalink

Landis, Tony R. and Dennis R. Jenkins. X-15 Photo Scrapbook. North Branch, MN: Specialty Press, 2003. ISBN 1-58007-074-4.

This companion to Hypersonic: The Story of the North American X-15 (March 2004) contains more than 400 photos, 50 in colour, which didn't make the cut for the main volume, as well as some which came to hand only after its publication. There's nothing really startling, but if you can't get enough of this beautiful flying machine, here's another hefty dose of well-captioned period photos, many never before published. The two page spread on pp. 58–59 is interesting. It's a North American Aviation presentation from 1962 on how the X-15 could be used for various advanced propulsion research programs, including ramjets, variable cycle turboramjets, scramjets, and liquid air cycle engines (LACE) burning LH2 and air liquefied on board. More than forty years later, these remain “advanced propulsion” concepts, with scant progress to show for the intervening decades. None of the X-15 propulsion research programs were ever flown.

 Permalink

Smith, L. Neil and Scott Bieser. The Probability Broach: The Graphic Novel. Round Rock, TX: Big Head Press, 2004. ISBN 0-9743814-1-1.
What a tremendous idea! Here is L. Neil Smith's classic libertarian science fiction novel, Prometheus Award winning The Probability Broach, transformed into a comic book—er—graphic novel—with story by Smith and artwork by Scott Bieser. The artwork and use of colour are delightful—particularly how Win Bear's home world is rendered in drab collectivist grey and the North American Confederacy in vibrant hues. Lucy Kropotkin looks precisely as I'd imagined her. Be sure to look at all the detail and fine print in the large multi-panel spreads. After enjoying a couple of hours back in the Confederacy, why not order copies to give to all the kids in the family who've never thought about what it would be like to live in a world where free individuals entirely owned their own lives?

 Permalink

February 2005

Kurlansky, Mark. Salt: A World History. New York: Penguin Books, 2002. ISBN 0-14-200161-9.
You may think this a dry topic, but the history of salt is a microcosm of the history of human civilisation. Carnivorous animals and human tribes of hunters get all the salt they need from the meat they eat. But as soon as humans adopted a sedentary agricultural lifestyle and domesticated animals, they and their livestock had an urgent need for salt—a cow requires ten times as much salt as a human. The collection and production of salt was a prerequisite for human settlements and, as an essential commodity required by every individual, the first to be taxed and regulated by that chronic affliction of civilisation, government. Salt taxes supported the Chinese empire for almost two millennia, the Venetian and Genoese trading empires and the Hanseatic League, precipitated the French Revolution and India's struggle for independence from the British empire. Salt was a strategic commodity in the Roman Empire: most Roman cities were built near saltworks, and the words “salary” and “soldier” are both derived from the Latin word for salt. This and much more is covered in this fascinating look at human civilisation through the crystals of a tasty and essential inorganic compound composed of two poisonous elements. Recipes for salty specialities of cultures around the world and across the centuries are included, along with recommendations for surviving that “surprisingly pleasant” Swedish speciality surströmming (p. 139): “The only remaining problem is how to get the smell out of the house…”.

 Permalink

Kopparapu, Chandra. Load Balancing Servers, Firewalls, and Caches. New York: John Wiley & Sons, 2002. ISBN 0-471-41550-2.
Don't even think about deploying a server farm or geographically dispersed mirror sites without reading this authoritative book. The Internet has become such a mountain of interconnected kludges that something as conceptually simple as spreading Web and other Internet traffic across a collection of independent servers or sites in the interest of increased performance and fault tolerance becomes a matter of enormous subtlety and hideous complexity. Most of the problems come from the need for “session persistence”: when a new user arrives at your site, you can direct them to any available server based on whatever load balancing algorithm you choose, but if the user's interaction with the server involves dynamically generated content produced by the server (for example, images generated by Earth and Moon Viewer, or items the user places in their shopping cart at a commerce site), subsequent requests by the user must be directed to the same server, as only it contains the state of the user's session.

(Some load balancer vendors will try to persuade you that session persistence is a design flaw in your Web applications which you should eliminate by making them stateless or by using a common storage pool shared by all the servers. Don't believe this. I defy you to figure out how an application as simple as Earth and Moon Viewer, which does nothing more complicated than returning a custom Web page which contains a dynamically generated embedded image, can be made stateless. And shared backing store [for example, Network Attached Storage servers] has its own scalability and fault tolerance challenges.)

Almost any simple scheme you can come up with to get around the session persistence problem will be torpedoed by one or more of the kludges and hacks through which a user's packet traverses between client and server: NAT, firewalls, proxy servers, content caches, etc. Consider what at first appears to be a foolproof scheme (albeit sub-optimal for load distribution): simply hash the client's IP address into a set of bins, one for each server, and direct the packets accordingly. Certainly, that would work, right? Wrong: huge ISPs such as AOL and EarthLink have farms of proxy servers between their customers and the sites they contact, and these proxy servers are themselves load balanced in a non-persistent manner. So even two TCP connections from the same browser retrieving, say, the text and an image from a single Web page, may arrive at your site apparently originating from different IP addresses!
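
The IP-hash scheme just described can be sketched in a few lines; here is a minimal illustration (server names invented) of both the mechanism and why proxy farms defeat it:

```python
import hashlib

SERVERS = ["web1", "web2", "web3", "web4"]  # hypothetical server farm

def server_for(client_ip: str) -> str:
    """Hash the client's source IP address into one bin per server."""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# The same source IP always lands on the same server...
assert server_for("198.51.100.7") == server_for("198.51.100.7")

# ...but two requests from one browser, relayed through different members
# of an ISP's non-persistent proxy farm, arrive from different source IPs
# and may hash into different bins, breaking session persistence.
print(server_for("198.51.100.7"), server_for("198.51.100.8"))
```

Since the hash is deterministic only in the source address, any middlebox which rewrites that address—proxy farms, NAT pools—can scatter one session across servers, which is why real load balancers must track sessions by other means (cookies, SSL session IDs, and the like).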

This and dozens of other gotchas and ways to work around them are described in detail in this valuable book, which is entirely vendor-neutral, except for occasionally mentioning products to illustrate different kinds of architectures. It's a lot better to slap your forehead every few pages as you discover something else you didn't think of which will sabotage your best-laid plans than pull your hair out later after putting a clever and costly scheme into production and discovering that it doesn't work. When I started reading this book, I had no idea how I was going to solve the load balancing problem for the Fourmilab site, and now I know precisely how I'm going to proceed. This isn't a book you read for entertainment, but if you need to know this stuff, it's a great place to learn it.

 Permalink

Smith, Edward E. First Lensman. Baltimore: Old Earth Books, [1950] 1997. ISBN 1-882968-10-7.
There's no better way to escape for a brief respite from the world of session persistence, subnet masks, stateful fallover, gratuitous ARP packets, and the like than some coruscating, actinic space opera, and nobody does it better than the guy who invented it, Doc Smith. About every decade I re-read the Lensman series, of which this is the second of six volumes (seven if you count Masters of the Vortex) and never cease to be amazed at Smith's talent for thinking big—really big. I began this fourth expedition through the Lensman saga with the first installment, Triplanetary, in June 2004. Old Earth Books are to be commended for this reprint, which is a facsimile of the original 1950 Fantasy Press edition including all the illustrations.

 Permalink

Roosevelt, Theodore. The Rough Riders. Philadelphia: Pavilion Press, [1899] 2004. ISBN 1-4145-0492-6.
This is probably, by present-day standards, the most politically incorrect book ever written by a United States President. The fact that it was published and became a best-seller before his election as Vice President in 1900 and President in 1904 indicates how different the world was in the age in which Theodore Roosevelt lived and helped define. T.R. was no chicken-hawk. After advocating war with Spain as assistant secretary of the Navy in the McKinley administration, as war approached, he left his desk job in Washington to raise a volunteer regiment from the rough and ready horse- and riflemen of his beloved Wild West, along with a number of his fellow Ivy Leaguers hungry for a piece of the action. This book chronicles his adventures in raising, equipping, and training the regiment, and its combat exploits in Cuba in 1898. The prose is pure T.R. passionate purple; it was rumoured that when the book was originally typeset the publisher had to send out for more copies of the upper-case letter “I”. Almost every page contains some remark or other which would end the career of what passes for politicians in today's pale, emasculated world. What an age. What a man! The bloodthirsty warrior who wrote this book would go on to win the Nobel Peace Prize in 1906 for brokering an end to the war between Russia and Japan.

This paperback edition from Pavilion Press is a sorry thing physically. The text reads like something that's been OCR scanned and never spelling checked or proofread—on p. 171, for example, “antagonists” is printed as “antagon1sts”, and this is one of many such errors. There's no excuse for this at all, since both an electronic text edition of The Rough Riders, freely available from Project Gutenberg, and an on-line edition are free of these flaws. The cover photo of T.R. on his horse is a blow-up of a low-resolution JPEG image with obvious pixels and compression artefacts.

Roosevelt's report to his commanding general (pp. 163–170) detailing the logistical and administrative screwups in the campaign is an excellent illustration of the maxim that the one area in which government far surpasses the capabilities of free enterprise is in the making of messes.

 Permalink

Sullivan, Scott P. Virtual LM. Burlington, Canada: Apogee Books, 2004. ISBN 1-894959-14-0.
I closed my comments about the author's earlier Virtual Apollo (July 2004) expressing my hope he would extend the project to the Lunar Module (LM). Well, here it is! These books are based on intricate computer solid models created by Sullivan from extensive research, then rendered to show how subsystems fit into the tightly-packed and weight-constrained spacecraft. The differences between the initial “H mission” modules (Apollo 9–14) and the extended stay “J mission” landers of Apollo 15–17 are shown in comparison renderings. In addition, the Lunar Roving Vehicle (moon buggy) used on the J missions is dissected in the same manner as the LM, along with the life support backpack worn by astronauts on the lunar surface. Nothing about the Lunar Module was simple, and no gory detail is overlooked in this book—there are eight pages (40–47) devoted to the door of the scientific equipment bay and the Rube Goldberg-like mechanism used to open it.

Sadly, like Virtual Apollo, this modeling and rendering labour of love is marred by numerous typographical errors in text and captions. From the point where I started counting, I noted 25, which is an unenviable accomplishment in a 250 page book which is mostly pictures. A companion CD-ROM includes the Apollo Operations Handbook, Lunar Module flight documents from Apollo 14–16, and photographs of the LM simulator and test article.

 Permalink

Kuhns, Elizabeth. The Habit. New York: Doubleday, 2003. ISBN 0-385-50588-4.
For decades I've been interested in and worried about how well-intentioned “modernisations” might interrupt the chain of transmission of information and experience between generations and damage, potentially mortally, the very institutions modernisers were attempting to adapt to changing circumstances. Perhaps my concern with this somewhat gloomy topic stems from having endured both “new math” in high school and “new chemistry” in college, in both cases having to later re-learn the subject matter in the traditional way which enables one to, you know, actually solve problems.

Now that the radicals left over from the boomer generation are teachers and professors, we're into the second or third generation of a feedback cycle in which students either never learn the history of their own cultures or are taught contempt and hatred for it. The dearth of young people in the United States and U.K. who know how to think and have the factual framework from which to reason (or are aware what they don't know and how to find it out) is such that I worry about a runaway collapse of Western civilisation there. The very fact that it's impolitic to even raise such an issue in most of academia today only highlights how dire the situation is. (In continental Europe the cultural and educational situation is nowhere near as bad, but given that the population is aging and dying out it hardly matters. I read a prediction a couple of weeks ago that, absent immigration or change in fertility, the population of Switzerland, now more than seven million, could fall to about one million before the end of this century, and much the same situation obtains elsewhere in Europe. There is no precedent in human history for this kind of population collapse unprovoked by disaster, disease, or war.)

When pondering “macro, macro” issues like this, it's often useful to identify a micro-model to serve as a canary in the mineshaft for large-scale problems ahead. In 1965, the Second Vatican Council promulgated a top to bottom modernisation of the Roman Catholic Church. In that same year, there were around 180,000 Catholic nuns in the U.S.—an all time historical high—whose lifestyle, strongly steeped in tradition, began to immediately change in many ways far beyond the clothes they wore. Increasingly, orders opted for invisibility, blending into the secular community. The result: an almost immediate collapse in their numbers, which has continued to the present day (graph). Today, there are only about 70,000 left, and with a mean age of 69, their numbers are sure to erode further in the future. Now, it's impossible to separate the consequences of modernisation of tradition from those of social changes in society at large, but it gives one pause to see an institution which, as this book vividly describes, has tenaciously survived two millennia of rising and falling empires, war, plague, persecution, inquisition, famine, migration, reformation and counter-reformation, disappearing like a puff of smoke within the space of one human lifetime. It makes you wonder about how resilient other, far more recent, components of our culture may be in the face of changes which discard the experience and wisdom of the past.

A paperback edition is scheduled for publication in April 2005.

 Permalink

Purdy, Gregor N. Linux iptables Pocket Reference. Sebastopol, CA: O'Reilly, 2004. ISBN 0-596-00569-5.
Sure, you could just read the manual pages, but when your site is under attack and you're the “first responder”, this little book is just what you want in your sweaty fingers. It's also a handy reference to the fields in IP, TCP, UDP, and ICMP packets, which can be useful in interpreting packet dumps. Although intended as a reference, it's well worth taking the time (less than an hour) to read cover to cover. There are a number of very nice facilities in iptables/Netfilter which permit responding to common attacks. For example, the iplimit match allows blocking traffic from the bozone layer (yes, you—I know who you are and I know where you live) which ties up all of your HTTP server processes by connecting to them and then letting them time out or, slightly more sophisticated, feeding characters of a request every 20 seconds or so to keep it alive. The solution is:
    /sbin/iptables -A INPUT -p tcp --syn --dport 80 -m iplimit \
    	--iplimit-above 20 --iplimit-mask 32 -j REJECT
Anybody who tries to open more than 20 connections will get whacked on each additional SYN packet. You can see whether this rule is affecting too many legitimate connections with the status query:
    /sbin/iptables -L -v
Geekly reading, to be sure, but just the thing if you're responsible for defending an Internet server or site from malefactors in the Internet Slum.

 Permalink

Satrapi, Marjane. Persepolis 2: The Story of a Return. New York: Pantheon Books, [2002, 2003] 2004. ISBN 0-375-42288-9.
Having escaped from Iran in the middle of Iran/Iraq war to secular, decadent Austria, Marjane Satrapi picks up her comic book autobiography with the culture shock of encountering the amoral West. It ends badly. She returns to Tehran in search of her culture, and finds she doesn't fit there either, eventually abandoning a failed marriage to escape to the West, where she has since prospered as an author and illustrator. This intensely personal narrative brings home both why the West is hated in much of the world, and why, at the same time, so many people dream of escaping the tyranny of dull conformity for the light of liberty and reason in the West. Like Persepolis: The Story of a Childhood (November 2004), this is a re-lettered English translation of the original French edition published in two volumes: (3, 4).

 Permalink

Bragg, Melvyn. The Adventure of English. London: Sceptre, 2003. ISBN 0-340-82993-1.
How did a language spoken by 150,000 or so Germanic warriors who invaded the British Isles in the fifth century A.D. become the closest thing so far to a global language, dominating the worlds of science and commerce which so define the modern age? Melvyn Bragg, who earlier produced a television series (which I haven't seen) with the same name for the British ITV network, follows the same outline in this history of English. The tremendous contingency in the evolution of a language is much to be seen here: had Shakespeare, Dr. Johnson, or William Tyndale (who first translated the Bible into English and paid with his life for having done so) died in infancy, how would we speak today, and in what culture would we live? The assembly of the enormous vocabulary of English by devouring words from dozens of other languages is well documented, as well as the differentiation of British English into distinct American, Caribbean, Australian, South African, Indian, and other variants which enrich the mother tongue with both vocabulary and grammar. Fair dinkum, innit man?

As English has grown by accretion, it has also cast out a multitude of words into the “Obs.” bin of the OED, many in the “Inkhorn Controversy” in the 16th century. What a loss! The more words, the richer the language, and I hereby urge we reinstate “abstergify”, last cited in the OED in 1612, defined as the verb “To cleanse”. I propose this word to mean “to clean up, æsthetically, without any change in function”. For example, “I spent all day abstergifying the configuration files for the Web server”.

The mystery of why such an ill-structured language with an almost anti-phonetic spelling should have become so widespread is discussed here only on the margin, often in apologetic terms invoking the guilt of slavery and colonialism. (But speakers of other languages pioneered these institutions, so why didn't they triumph?) Bragg suggests, almost in passing, what I think is very significant. The very irregularity of English permits it to assimilate the vocabulary of every language it encounters. In Greek, Latin, Spanish, or French, there are rules about the form of verbs and the endings of nouns and agreement of adjectives which cannot accommodate words from fundamentally different languages. But in English, there are no rules whatsoever—bring your own vocabulary—there's room for everybody and every word. Come on in, it's great—the more the better!

A U.S. edition is now available, but as of this date only in hardcover.

 Permalink

March 2005

Smith, Edward E. Galactic Patrol. Baltimore: Old Earth Books, [1937-1938, 1950] 1998. ISBN 1-882968-11-5.
Although this is the third volume of the Lensman series, it was written first; Triplanetary (June 2004) and First Lensman (February 2005) are “prequels”, written more than a decade after Galactic Patrol ran in serial form in Astounding Science Fiction beginning in September 1937. This was before John W. Campbell, Jr. assumed the editor's chair, the event usually considered to mark the beginning of the Golden Age of science fiction. This volume is a facsimile of the illustrated 1950 Fantasy Press edition, which was revised somewhat by the author from the original magazine version.

While I enjoy the earlier books, and read them in order in this fourth lifetime trip through the saga, Galactic Patrol is where the story really takes off for me. If you're new to Doc Smith, you might want to begin here to experience space opera at its best, then go back and read the two slower-paced prior installments afterward. Having been written first, this novel is completely self-contained; everything introduced in the earlier books is fully explained when it appears here.

 Permalink

Hayek, Friedrich A. The Fatal Conceit. Chicago: University of Chicago Press, 1988. ISBN 0-226-32066-9.
The idiosyncratic, if not downright eccentric, synthesis of evolutionary epistemology, spontaneous emergence of order in self-organising systems, free markets as a communication channel and feedback mechanism, and individual liberty within a non-coercive web of cultural traditions which informs my scribblings here and elsewhere is the product of several decades of pondering these matters, digesting dozens of books by almost as many authors, and discussions with brilliant and original thinkers it has been my privilege to encounter over the years.

If, however, you want it all now, here it is, in less than 160 pages of the pellucid reasoning and prose for which Hayek is famed, ready to be flashed into your brain's philosophical firmware in a few hours' pleasant reading. This book sat on my shelf for more than a decade before I picked it up a couple of days ago and devoured it, exclaiming “Yes!”, “Bingo!”, and “Precisely!” every few pages. The book is subtitled “The Errors of Socialism”, which I believe both misstates and unnecessarily restricts the scope of the actual content, for the errors of socialism are shared by a multitude of other rationalistic doctrines (including the cult of design in software development) which, either conceived before biological evolution was understood, or by those who didn't understand evolution or preferred the outlook of Aristotle and Plato for aesthetic reasons (“evolution is so messy, and there's no rational plan to it”), assume, as those before Darwin and those who reject his discoveries today, that the presence of apparent purpose implies the action of rational design. Hayek argues (and to my mind demonstrates) that the extended order of human interaction: ethics, morality, division of labour, trade, markets, diffusion of information, and a multitude of other components of civilisation fall between biological instinct and reason, poles which many philosophers consider a dichotomy.

This middle ground, the foundation of civilisation, is the product of cultural evolution, in which reason plays a part only in variation, and selection occurs just as brutally and effectively as in biological evolution. (Cultural and biological evolution are not identical, of course; in particular, the inheritance of acquired traits is central in the development of cultures, yet absent in biology.)

The “Fatal Conceit” of the title is the belief among intellectuals and social engineers, mistaking the traditions and institutions of human civilisation for products of reason instead of evolution, that they can themselves design, on a clean sheet of paper as it were, a one-size-fits-all eternal replacement which will work better than the product of an ongoing evolutionary process involving billions of individuals over millennia, exploring a myriad of alternatives to find what works best. The failure to grasp the limits of reason compared to evolution explains why the perfectly consistent and often tragic failures of utopian top-down schemes never deters intellectuals from championing new (or often old, already discredited) ones. Did I say I liked this book?

 Permalink

Penrose, Roger. The Road to Reality. New York: Alfred A. Knopf, 2005. ISBN 0-679-45443-8.
This is simply a monumental piece of work. I can't think of any comparable book published in the last century, or any work with such an ambitious goal which pulls it off so well. In this book, Roger Penrose presents the essentials of fundamental physics as understood at the turn of the century to the intelligent layman in the way working theoretical physicists comprehend them. Starting with the Pythagorean theorem, the reader climbs the ladder of mathematical abstraction to master complex numbers, logarithms, real and complex number calculus, Fourier decomposition, hyperfunctions, quaternions and octonions, manifolds and calculus on manifolds, symmetry groups, fibre bundles and connections, transfinite numbers, spacetime, Hamiltonians and Lagrangians, Clifford and Grassmann algebras, tensor calculus, and the rest of the mathematical armamentarium of the theoretical physicist. And that's before we get to the physics, where classical mechanics and electrodynamics, special and general relativity, quantum mechanics, and the standard models of particle physics and cosmology are presented in the elegant and economical notation into which the reader has been initiated in the earlier chapters.

Authors of popular science books are cautioned that each equation they include (except, perhaps, E=mc²) will halve the sales of their book. Penrose laughs in the face of such fears. In this “big damned fat square book” of 1050 pages of main text, there's an average of one equation per page which, according to conventional wisdom, should reduce readership by a factor of 2^−1050, or about 8.3×10^−317, so the single copy printed would have to be shared among the 10^80 elementary particles in the universe over an extremely long time. But, according to the Amazon sales ranking as of today, this book is number 71 in sales—go figure.
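For the curious, the joke's arithmetic is easy to verify; here's a quick sketch in Python using the figures quoted above:

```python
from math import log10

# Conventional wisdom: each equation halves a popular book's sales.
equations = 1050                    # roughly one per page of main text
log_factor = -equations * log10(2)  # log10 of 2^-1050

print(f"readership reduced by a factor of 10^{log_factor:.2f}")
# → readership reduced by a factor of 10^-316.08  (about 8.3e-317)
```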

Don't deceive yourself; in committing to read this book you are making a substantial investment of time and brain power to master the underlying mathematical concepts and their application to physical theories. If you've noticed my reading being lighter than usual recently, both in terms of number of books and their intellectual level, it's because I've been chewing through this tome for the last two and a half months and it's occupied my cerebral capacity to the exclusion of other works. But I do not regret for a second the time I've spent reading this work and working the exercises, and I will probably make a second pass through it in a couple of years to reinforce the mathematical toolset into my aging neurons. As an engineer whose formal instruction in mathematics ended with differential equations, I found chapters 12–15 to be the “hump”—after making it through them (assuming you've mastered their content), the rest of the book is much more physical and accessible. There's kind of a phase transition between the first part of the book and chapters 28–34. In the latter part of the book, Penrose gives free rein to his own view of fundamental physics, introducing his objective reduction of the quantum state function (OR) by gravity, twistor theory, and a deconstruction of string theory which may induce apoplexy in researchers engaged in that programme. But when discussing speculative theories, he takes pains to identify his own view when it differs from the consensus, and to caution the reader where his own scepticism is at variance with a widely accepted theory (such as cosmological inflation).

If you really want to understand contemporary physics at the level of professional practitioners, I cannot recommend this book too highly. After you've mastered this material, you should be able to read research reports in the General Relativity and Quantum Cosmology preprint archives like the folks who write and read them. Imagine if, instead of two or three hundred taxpayer funded specialists, four or five thousand self-educated people impassioned with figuring out how nature does it contributed every day to our unscrewing of the inscrutable. Why, they'll say it's a movement. And that's exactly what it will be.

 Permalink

Bear, Greg. Moving Mars. New York: Tor, 1993. ISBN 0-8125-2480-2.
I received an electronic edition of this novel several years ago as part of a bundle when I purchased a reader program for my PalmOS PDA, and only got around to reading it in odd moments over the last few months. I've really enjoyed some of Greg Bear's recent work, such as 1999's Darwin's Radio, so I was rather surprised to find this story disappointing. However, that's just my opinion, and clearly at variance with the majority of science fiction authors and fans, for this book won the 1994 Nebula and Science Fiction Chronicle awards for best novel and was nominated for the Hugo, Locus, and Campbell awards that year. The electronic edition I read remains available.

 Permalink

Fort, Adrian. Prof: The Life of Frederick Lindemann. London: Jonathan Cape, 2003. ISBN 0-224-06317-0.
Frederick Lindemann is best known as Winston Churchill's scientific advisor in the years prior to and during World War II. He was the central figure in what Churchill called the “Wizard War”, including the development and deployment of radar, antisubmarine warfare technologies, the proximity fuze, area bombing techniques, and nuclear weapons research (which was well underway in Britain before the Manhattan Project began in the U.S.). Lindemann's talents were so great and his range of interests so broad that if he had settled into the cloistered life of an Oxford don after his appointment as Professor of Experimental Philosophy and chief of the Clarendon Laboratory in 1919, he would still be remembered for his scientific work in quantum mechanics, X-ray spectra, cryogenics, photoelectric photometry in astronomy, and isotope separation, as well as for restoring Oxford's reputation in the natural sciences, which over the previous half century “had sunk almost to zero” in Lindemann's words.

Educated in Germany, he spoke German and French like a native. He helped organise the first historic Solvay Conference in 1911, which brought together the pioneers of the relativity and quantum revolutions in physics. There he met Einstein, beginning a life-long friendship. Lindemann was a world class tennis champion and expert golfer and squash player, as well as a virtuoso on the piano. Although a lifetime bachelor, he was known as a ladies' man and never lacked female companionship.

In World War I Lindemann tackled the problem of spin recovery in aircraft, then thought to be impossible (this in an era when pilots were not issued parachutes!). To collect data and test his theories, he learned to fly and deliberately induced spins in some of the most notoriously dangerous aircraft types and confirmed his recovery procedure by putting his own life on the line. The procedure he developed is still taught to pilots today.

With his close contacts in Germany, Lindemann was instrumental in arranging and funding the emigration of Jewish and other endangered scientists after Hitler took power in 1933. The scientists he enabled to escape not only helped bring Oxford into the first rank of research universities; many also ended up contributing to the British and U.S. atomic projects and other war research. About the only thing he ever failed at was his run for Parliament in 1937, yet his influence as confidant and advisor to Churchill vastly exceeded that of a Tory back bencher. With the outbreak of war in 1939, he joined Churchill at the Admiralty, where he organised and ran the Statistical Branch, which applied what is now called Operations Research to the conduct of the war, which rôle he expanded as chief of “S Department” after Churchill became Prime Minister in May 1940. Many of the wartime “minutes” quoted in Churchill's The Second World War were drafted by Lindemann and sent out verbatim over Churchill's signature, sometimes with the addition “Action this day”. Lindemann finally sat in Parliament, in the House of Lords, after being made Lord Cherwell in 1941; he joined the Cabinet in 1942 and became a Privy Counsellor in 1943.

After the war, Lindemann returned to Oxford, continuing to champion scientific research, taking leave to serve in Churchill's cabinet from 1951–1953, where he almost single-handedly and successfully fought the floating of the pound and advocated the establishment of an Atomic Energy Authority, on which he served for the rest of his life.

There's an atavistic tendency when writing history to focus exclusively on the person at the top, as if we still lived in the age of warrior kings, neglecting those who obtain and filter the information and develop the policies upon which the exalted leader must ultimately decide. (This is as common, or more so, in the business press where the cult of the CEO is well entrenched.) This biography, of somebody many people have never heard of, shows that the one essential skill a leader must have is choosing the right people to listen to and paying attention to what they say.

A paperback edition is now available.

 Permalink

Lebeau, Caroline. Les nouvelles preuves sur l'assassinat de J. F. Kennedy. Monaco: Éditions du Rocher, 2003. ISBN 2-268-04915-9.
If you don't live in Europe, you may not be fully aware just how deranged the Looney Left can be in their hatred of Western civilisation, individual liberty, and the United States in particular. This book, from the same publisher who included a weasel-word disclaimer in each copy of Oriana Fallaci's La Force de la Raison (December 2004), bears, on its cover, in 42 point white type on a red background, the subtitle «Le clan Bush est-il coupable?»—“Is the Bush clan guilty?” This book was prominently displayed in French language bookstores in 2004. The rambling narrative and tangled illogic finally pile up to give an impression reminiscent of the JFK assassination headline in The Onion's Our Dumb Century: “Kennedy Slain by CIA, Mafia, Castro, Teamsters, Freemasons”. Lebeau declines to implicate the Masons, but fleshes out the list, adding multinational corporations, defence contractors, the Pentagon, Khrushchev, anti-Castro Cuban exiles, a cabal within the Italian army (I'm not making this up—see pp. 167–168), H.L. Hunt, Richard Nixon, J. Edgar Hoover, the mayor of Dallas … and the Bush family, inter alia. George W. Bush, who was 17 years old at the time, is not accused of being a part of the «énorme complot» (“enormous conspiracy”), but his father is, based essentially on the deduction: “Kennedy was killed in Dallas. Dallas is in Texas. George H. W. Bush lived in Texas at the time—guilty, guilty, guilty!”

“Independent investigative journalist” Lebeau is so meticulous in her “investigations” that she confuses JFK's older brother's first and middle names, misspells Nixon's middle name, calls the Warren Report the product of a Republican administration, confuses electoral votes with Senate seats, consistently misspells “grassy knoll”, thinks a “dum-dum” bullet is explosive, that Gerald Ford was an ex-FBI agent, and confuses H. L. Hunt and E. Howard Hunt on the authority of “journalist” Mumia Abu-Jamal, not noting that he is a convicted cop killer. Her studies in economics permit her to calculate (p. 175) that out of a total cost of 80 billion dollars, the Vietnam war yielded total profits to the military-industrial complex and bankers of 220 trillion dollars, which is about two centuries' worth of the U.S. gross national product as of 1970. Some of the illustrations in the book appear to have been photographed off a television screen, and many of the original documents reproduced are partially or entirely illegible.

 Permalink

Pickover, Clifford A. The Loom of God. New York: Perseus Books, 1997. ISBN 0-306-45411-4.
Clifford Pickover has more than enough imagination for a hundred regular people. An enormously prolific author, his work includes technical books on computing and scientific visualisation, science fiction, and popular works on mathematics and a wide variety of scientific topics. This book explores the boundary between mathematics and religion, including Pythagorean cults, Stonehenge, cave paintings from 20,000 years ago which may be the first numbers, the Kabbalah, the quipu of the Incas, numerology, eschatology, and real-world doomsday scenarios, along with a wide variety of puzzles in number theory, geometry, and other mathematical topics. One of the many fascinating unsolved problems he discusses is the “integer brick”, which seems to be more often referred to as the “perfect cuboid”: can you find a three-dimensional rectangular parallelepiped in which all the edges and face and space diagonals are integers? Computer searches have shown that no cuboid with a smallest edge less than 1,281,000,000 satisfies this requirement but, who knows, you may find it in just a few more compute cycles! (I'll pass on this one, after spending three years of computer time pursuing another unicorn of recreational mathematics.) As with Pickover's other popular books, this one includes source code for programs to explore topics raised in the text, explanation of the science and history behind the science fiction narrative, and extensive literature citations for those interested in digging deeper.
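If you'd like to see why the problem is so stubborn, here's a minimal brute-force search in Python (my sketch, not Pickover's code) for “Euler bricks”: cuboids whose edges and three face diagonals are all integers. A perfect cuboid would additionally need an integer space diagonal, and the search below finds none:

```python
from math import isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

def euler_bricks(limit):
    """Find all a < b < c < limit with all three face diagonals
    integral.  The final flag records whether the space diagonal
    is also integral, i.e. whether the brick is a perfect cuboid."""
    bricks = []
    for a in range(1, limit):
        for b in range(a + 1, limit):
            if not is_square(a*a + b*b):
                continue            # prune: (a, b) must be a Pythagorean pair
            for c in range(b + 1, limit):
                if is_square(a*a + c*c) and is_square(b*b + c*c):
                    bricks.append((a, b, c, is_square(a*a + b*b + c*c)))
    return bricks

print(euler_bricks(250))
# → [(44, 117, 240, False)]: the smallest Euler brick, whose space
#   diagonal is the non-integral square root of 73225
```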

 Permalink

April 2005

Lewis, Bernard. What Went Wrong? New York: Perennial, 2002. ISBN 0-06-051605-4.
Bernard Lewis is the preeminent Western historian of Islam and the Middle East. In his long career, he has written more than twenty volumes (the list includes those currently in print) on the subject. In this book he discusses the causes of the centuries-long decline of Islamic civilisation from a once preeminent empire and culture to the present day. The hardcover edition was in press when the September 2001 terrorist attacks took place. So thoroughly does Lewis cover the subject matter that a three page Afterword added in October 2002 suffices to discuss their causes and consequences. This is an excellent place for anybody interested in the “clash of civilisations” to discover the historical context of Islam's confrontation with modernity. Lewis writes with a wit which is so dry you can easily miss it if you aren't looking. For example, “Even when the Ottoman Turks were advancing into southeastern Europe, they were always able to buy much needed equipment for their fleets and armies from Christian European suppliers, to recruit European experts, and even to obtain financial cover from Christian European banks. What is nowadays known as ‘constructive engagement’ has a long history.” (p. 13).

 Permalink

Scalzi, John. Old Man's War. New York: Tor, 2005. ISBN 0-7653-0940-8.
I don't read a lot of contemporary science fiction, but the review by Glenn Reynolds and those of other bloggers he cited on Instapundit motivated me to do the almost unthinkable—buy a just-out science fiction first novel in hardback—and I'm glad I did. It's been a long time since I last devoured a three hundred page novel in less than 36 hours in three big gulps, but this is that kind of page-turner. It will inevitably be compared to Heinlein's Starship Troopers. Remarkably, it stands up well beside the work of the Master, and also explores the kinds of questions of human identity which run through much of Heinlein's later work. The story is in no way derivative, however; this is a thoroughly original work, and even more significant for being the author's first novel in print. Here's a writer to watch.

 Permalink

Healy, Gene, ed. Go Directly to Jail. Washington: Cato Institute, 2004. ISBN 1-930865-63-5.
Once upon a time, when somebody in the U.S. got carried away and started blowing something out of proportion, people would chide them, “Don't make a federal case out of it.” For most of U.S. history, “federal cases”—criminal prosecutions by the federal government—were a big deal because they were about big things: treason, piracy, counterfeiting, bribery of federal officials, and offences against the law of nations. With the exception of crimes committed in areas of exclusive federal jurisdiction such as the District of Columbia, Indian reservations, territories, and military bases, all other criminal matters were the concern of the states. Well, times have changed. From the 17 original federal crimes defined by Congress in 1790, the list of federal criminal offences has exploded to more than 4,000 today, occupying 27,000 pages of the U.S. Code, the vast majority added since 1960. But it's worse than that—many of these “crimes” consist of violations of federal regulations, which are promulgated by executive agencies without approval by Congress, constantly changing, often vague and conflicting, and sprawling through three hundred thousand or so pages of the Code of Federal Regulations.

This creates a legal environment in which the ordinary citizen or, for that matter, even a professional expert in an area of regulation cannot know for certain what is legal and what is not. And since these are criminal penalties and prosecutors have broad discretion in charging violators, running afoul of an obscure regulation can lead not just to a fine but serious downtime at Club Fed, such as the seafood dealers facing eight years in the pen for selling lobster tails which violated no U.S. law. And don't talk back to the Eagle—a maintenance supervisor who refused to plead guilty to having a work crew bury some waste paint cans found himself indicted on 43 federal criminal counts (United States v. Carr, 880 F.2d 1550 (1989)). Stir in enforcement programs which are self-funded by the penalties and asset seizures they generate, and you have a recipe for entrepreneurial prosecution at the expense of liberty.

This collection of essays is a frightening look at criminalisation run amok, trampling common law principles such as protection against self-incrimination, unlawful search and seizure, and double jeopardy, plus a watering down of the rules of evidence, standard of proof, and need to prove both criminal intent (mens rea) and a criminal act (actus reus). You may also be amazed and appalled at how the traditional discretion accorded trial judges in sentencing has been replaced by what amount to a “spreadsheet of damnation” of 258 cells which, for example, ranks possession of 150 grams of crack cocaine a more serious offence than second-degree murder (p. 137). Each essay concludes with a set of suggestions as to how the trend can be turned around and something resembling the rule of law re-established, but that's not the way to bet. Once the ball of tyranny starts to roll, even in the early stage of the soft tyranny of implied intimidation, it gains momentum all by itself. I suppose we should at least be glad they aren't torturing people. Oh, right….

 Permalink

Sinclair, Upton. The Jungle. Tucson, AZ: See Sharp Press, [1905] 2003. ISBN 1-884365-30-2.
A century ago, in 1905, the socialist weekly The Appeal to Reason began to run Upton Sinclair's novel The Jungle in serial form. The editors of the paper had commissioned the work, giving the author $500 to investigate the Chicago meat packing industry and conditions of its immigrant workers. After lengthy negotiations, Macmillan rejected the novel, and Sinclair took the book to Doubleday, which published it in 1906. The book became an immediate bestseller, has remained in print ever since, spurred the passage of the federal Pure Food and Drug Act in the very year of its publication, and launched Sinclair's career as the foremost American muckraker. The book edition published in 1906 was cut substantially from the original serial in The Appeal to Reason, which remained out of print until 1988 and the 2003 publication of this slightly different version based upon a subsequent serialisation in another socialist periodical.

Five chapters and about one third of the text of the original edition presented here were cut in the 1906 Doubleday version, which is considered the canonical text. This volume contains an introduction written by a professor of American Literature at that august institution of higher learning, the Pittsburg State University of Pittsburg, Kansas, which inarticulately thrashes about trying to gin up a conspiracy theory behind the elisions and changes in the book edition. The only problem with this theory is, as is so often the case with postmodern analyses by Literature professors (even those who are not “anti-corporate, feminist” novelists), the facts. It's hard to make a case for “censorship”, when the changes to the text were made by the author himself, who insisted over the rest of his long and hugely successful career that the changes were not significant to the message of the book. Given that The Appeal to Reason, which had funded the project, stopped running the novel two thirds of the way through due to reader complaints demanding news instead of fiction, one could argue persuasively that cutting one third was responding to reader feedback from an audience highly receptive to the subject matter. Besides, what does it mean to “censor” a work of fiction, anyway?

One often encounters mentions of The Jungle which suggest those making them aren't aware it's a novel as opposed to factual reportage, which probably indicates the writer hasn't read the book, or only encountered excerpts years ago in some college course. While there's no doubt the horrors Sinclair describes are genuine, he uses the story of the protagonist, Jurgis Rudkus, as a Pilgrim's Progress to illustrate them, often with implausible coincidences and other story devices to tell the tale. Chapters 32 through the conclusion are rather jarring. What was up until that point a gritty tale of life on the streets and in the stockyards of Chicago suddenly mutates into a thinly disguised socialist polemic written in highfalutin English which would almost certainly go right past an uneducated immigrant just a few years off the boat; it reminded me of nothing so much as John Galt's speech near the end of Atlas Shrugged. It does, however, provide insight into the utopian socialism of the early 1900s which, notwithstanding many present-day treatments, was directed as much against government corruption as the depredations of big business.

 Permalink

Orsenna, Erik. Les Chevaliers du Subjonctif. Paris: Stock, 2004. ISBN 2-234-05698-5.
Two years have passed since Jeanne and her brother Thomas were marooned on the enchanted island of words in La grammaire est une chanson douce (January 2005). In this sequel, Jeanne takes to the air in a glider with a diminutive cartographer to map the Archipelago of Conjugation and search for her brother who has vanished. Jeanne's luck with voyages hasn't changed—the glider crashes on the Island of the Subjunctives, where Jeanne encounters its strange inhabitants, guardians of the verbs which speak of what may be, or may not—the mode of dreams and love (for what is love if not hope and doubt?), the domain of the subjunctive. To employ a subjunctive survival from old French, oft-spoken but rarely thought of as such, « Vive le subjonctif ! ».

The author has been a member of the French Conseil d'État since 1985, has written more than a dozen works of fiction and nonfiction, is an accomplished sailor and president of the Centre de la mer, and was elected to l'Académie française in 1998. For additional information, visit his beautiful and creatively designed Web site, where you will find a map of the Archipelago of Conjugation and the first chapter of the book in both text and audio editions.

Can you spot the perspective error made by the artist on the front cover? (Hint: the same goof occurs in the opening title sequence of Star Trek: Voyager.)

 Permalink

Pais, Abraham. The Genius of Science. Oxford: Oxford University Press, 2000. ISBN 0-19-850614-7.
In this volume Abraham Pais, distinguished physicist and author of Subtle Is the Lord, the definitive scientific biography of Einstein, presents a “portrait gallery” of eminent twentieth century physicists, including Bohr, Dirac, Pauli, von Neumann, Rabi, and others. If you skip the introduction, you may be puzzled at some of the omissions: Heisenberg, Fermi, and Feynman, among others. Pais wanted to look behind the physics to the physicist, and thus restricted his biographies to scientists he personally knew; those not included simply didn't cross his career path sufficiently to permit sketching them in adequate detail. Many of the chapters were originally written for publication in other venues and revised for this book; consequently the balance of scientific and personal biography varies substantially among them, as does the length of the pieces: the chapter on Victor Weisskopf, adapted from an honorary degree presentation, is a mere two and a half pages, while that on George Eugene Uhlenbeck, based on a lecture from a memorial symposium, is 33 pages long. The scientific focus is very much on quantum theory and particle physics, and the collected biographies provide an excellent view of the extent to which researchers groped in the dark before discovering phenomena which, presented in a modern textbook, seem obvious in retrospect. One wonders whether the mysteries of present-day physics will seem as straightforward a century from now.

 Permalink

Rand, Ayn. We the Living. New York: Signet, [1936] 1959. ISBN 0-451-18784-9.
This is Ayn Rand's first novel, which she described as “as near to an autobiography as I will ever write”. It is a dark story of life in the Soviet Union in 1925, a year after the death of Lenin and a year before Ayn Rand's own emigration to the United States from St. Petersburg / Petrograd / Leningrad, the city in which the story is set. Originally published in 1936, this edition was revised by Rand in 1958, shortly after finishing Atlas Shrugged. Somehow, I had never gotten around to reading this novel before, and was surprised to discover that the characters were, in many ways, more complex and believable and the story less preachy than her later work. Despite the supposedly diametrically opposed societies in which they are set and the ideologies of their authors, this story and Upton Sinclair's The Jungle bear remarkable similarities and are worth reading together for an appreciation of how horribly things can go wrong in any society in which, regardless of labels, ideals, and lofty rhetoric, people do not truly own their own lives.

 Permalink

Goscinny, René and Albert Uderzo. Astérix chez les Helvètes. Paris: Hachette, [1970] 2004. ISBN 2-01-210016-3.

 Permalink

May 2005

Dembski, William A. No Free Lunch. Lanham, MD: Rowan & Littlefield, 2002. ISBN 0-7425-1297-5.
It seems to be the rule that the softer the science, the more rigid and vociferously enforced the dogma. Physicists, confident of what they do know and cognisant of how much they still don't, have no problems with speculative theories of parallel universes, wormholes and time machines, and inconstant physical constants. But express the slightest scepticism about Darwinian evolution being the one, completely correct, absolutely established beyond a shadow of a doubt, comprehensive and exclusive explanation for the emergence of complexity and diversity in life on Earth, and outraged biologists run to the courts, the legislature, and the media to suppress the heresy, accusing those who dare to doubt their dogma as being benighted opponents of science seeking to impose a “theocracy”. Funny, I thought science progressed by putting theories to the test, and that all theories were provisional, subject to falsification by experimental evidence or replacement by a more comprehensive theory which explains additional phenomena and/or requires fewer arbitrary assumptions.

In this book, mathematician and philosopher William A. Dembski attempts to lay the mathematical and logical foundation for inferring the presence of intelligent design in biology. Note that “intelligent design” needn't imply divine or supernatural intervention—the “directed panspermia” theory of the origin of life proposed by co-discoverer of the structure of DNA and Nobel Prize winner Francis Crick is a theory of intelligent design which invokes no deity, and my perpetually unfinished work The Rube Goldberg Variations and the science fiction story upon which it is based involve searches for evidence of design in scientific data, not in scripture.

You certainly won't find any theology here. What you will find is logical and mathematical arguments which sometimes ascend (or descend, if you wish) into prose like (p. 153), “Thus, if P characterizes the probability of E0 occurring and f characterizes the physical process that led from E0 to E1, then Pf^−1 characterizes the probability of E1 occurring and P(E0) ≤ Pf^−1(E1) since f(E0) = E1 and thus E0 ⊂ f^−1(E1).” OK, I did cherry-pick that sentence from a particularly technical section which the author advises readers to skip if they're willing to accept the less formal argument already presented. Technical arguments are well-supplemented by analogies and examples throughout the text.

Dembski argues that what he terms “complex specified information” is conclusive evidence for the presence of design. Complexity (the Shannon information measure) is insufficient—all possible outcomes of flipping a coin 100 times in a row are equally probable—but presented with a sequence of all heads, all tails, alternating heads and tails, or a pattern in which heads occurred only for prime numbered flips, the evidence for design (in this case, cheating or an unfair coin) would be considered overwhelming. Complex information is considered specified if it is compressible in the sense of Chaitin-Kolmogorov-Solomonoff algorithmic information theory, which measures the randomness of a bit string by the length of the shortest computer program which could produce it. The overwhelming majority of 100 bit strings cannot be expressed more compactly than simply by listing the bits; the examples given above, however, are all highly compressible. This is the kind of measure, albeit not rigorously computed, which SETI researchers would use to identify a signal as of intelligent origin, which courts apply in intellectual property cases to decide whether similarity is accidental or deliberate copying, and archaeologists use to determine whether an artefact is of natural or human origin. Only when one starts asking these kinds of questions about biology and the origin of life does controversy erupt!
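Kolmogorov complexity is uncomputable in general, but a general-purpose compressor gives a crude, practical upper bound. Here's a toy illustration in Python (my example, not Dembski's): the patterned coin-flip sequences mentioned above compress drastically, while a typical random sequence does not:

```python
import random
import zlib

def compressed_size(data):
    """Crude upper bound on algorithmic complexity via zlib."""
    return len(zlib.compress(data, 9))

random.seed(1)  # fixed seed so the run is reproducible
random_bytes = bytes(random.getrandbits(8) for _ in range(1000))
all_heads    = b"\xff" * 1000   # HHHH...
alternating  = b"\xaa" * 1000   # HTHT... (10101010 in each byte)

for name, data in [("random", random_bytes),
                   ("all heads", all_heads),
                   ("alternating", alternating)]:
    print(f"{name:11s}: {compressed_size(data)} bytes from {len(data)}")
```

The random sequence is complex but not specified (it doesn't compress); the patterned sequences are both complex and specified, and are the sort of strings which would raise the suspicion of cheating, or of an intelligent sender.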

Chapter 3 proposes a “Law of Conservation of Information” which, if you accept it, would appear to rule out the generation of additional complex specified information by the process of Darwinian evolution. This would mean that while evolution can and does account for the development of resistance to antibiotics in bacteria and pesticides in insects, modification of colouration and pattern due to changes in environment, and all the other well-confirmed cases of the Darwinian mechanism, the innovation of entirely novel and irreducibly complex (see chapter 5) mechanisms such as the bacterial flagellum requires some external input of the complex specified information they embody. Well, maybe…but one should remember that conservation laws in science, unlike invariants in mathematics, are empirical observations which can be falsified by a single counter-example. Niels Bohr, for example, prior to its explanation due to the neutrino, theorised that the energy spectrum of nuclear beta decay could be due to a violation of conservation of energy, and his theory was taken seriously until ruled out by experiment.

Let's suppose, for the sake of argument, that Darwinian evolution does explain the emergence of all the complexity of the Earth's biosphere, starting with a single primordial replicating lifeform. Then one still must explain how that replicator came to be in the first place (since Darwinian evolution cannot work on non-replicating organisms), and where the information embodied in its molecular structure came from. The smallest present-day bacterial genomes belong to symbiotic or parasitic species, and are in the neighbourhood of 500,000 base pairs, or roughly 1 megabit of information. Even granting that the ancestral organism might have been much smaller and simpler, it is difficult to imagine a replicator capable of Darwinian evolution with an information content 1000 times smaller than these bacteria. Yet randomly assembling even 500 bits of precisely specified information seems to be beyond the capacity of the universe we inhabit. If you imagine every one of the approximately 10^80 elementary particles in the universe trying combinations every Planck interval, 10^45 times every second, it would still take about a billion times the present age of the universe to randomly discover a 500 bit pattern. Of course, there are doubtless many patterns which would work, but when you consider how conservative all the assumptions are which go into this estimate, and reflect upon the evidence that life seemed to appear on Earth just about as early as environmental conditions permitted it to exist, it's pretty clear that glib claims that evolution explains everything and there are just a few details to be sorted out are arm-waving at best and propaganda at worst, and that it's far too early to exclude any plausible theory which could explain the mystery of the origin of life.

Although there are many points in this book with which you may take issue, and it does not claim in any way to provide answers, it is valuable in understanding just how difficult the problem is and how many holes exist in other, more accepted, explanations. A clear challenge posed to purely naturalistic explanations of the origin of terrestrial life is to suggest a prebiotic mechanism which can assemble adequate specified information (say, 500 bits as the absolute minimum) to serve as a primordial replicator from the materials available on the early Earth in the time between the final catastrophic bombardment and the first evidence for early life.

 Permalink

Suprynowicz, Vin. The Black Arrow. Las Vegas: Mountain Media, 2005. ISBN 0-9762516-0-4.
For more than a decade, Vin Suprynowicz's columns in the Las Vegas Review-Journal (collected in Send In The Waco Killers and The Ballad of Carl Drega) have chronicled the closing circle of individual freedom in the United States. You may find these books difficult to finish, not due to any fault in the writing, which is superb, but because reading of the treatment of citizens at the hands of a government as ignorant as it is imperious makes your blood boil. Here, however, in his first venture into fiction, the author has written a book which is difficult to put down.

The year is 2030, and every complacent person who asked rhetorically, “How much worse can it get?” has seen the question answered beyond their worst nightmares. What's left of the United States is fighting to put down the secessionist mountain states of New Columbia, and in the cities of the East, people are subject to random searches by jackbooted Lightning Squads, when they aren't shooting up clandestine nursery schools operated by anarchist parents who refuse to deliver their children into government indoctrination. This is the kind of situation which cries out for a superhero and, lo and behold, onto the stage steps The Black Arrow and his deadly serious but fun-loving band to set things right through the time-tested strategy of killing the bastards. The Black Arrow has a lot in common with Batman—actually maybe a tad too much. Like Batman, he's a rich and resourceful man with a mission (but no super powers), he operates in New York City, which is called “Gotham” in the novel, and he has a secret lair in a cavern deep beneath the city.

There is a modicum of libertarian background and philosophy, but it never gets in the way of the story. There is enough explicit violence and copulation for an R rated movie—kids and those with fragile sensibilities should give this one a miss. Some of the verbal imagery in the story is so vivid you can almost see it erupting from the page—this would make a tremendous comic book adaptation or screenplay for an alternative universe Hollywood where stories of liberty were welcome.

 Permalink

Appleton, Victor. Tom Swift and His Motor-Boat. McLean, VA: IndyPublish.com, [1910] 2005. ISBN 1-4142-4253-0.
This is the second installment in the Tom Swift saga. These early volumes are more in the genre of juvenile adventure than the science fiction which emerges later in the series. I read the electronic edition of this novel published in the Tom Swift and His Pocket Library collection at this site on my PalmOS PDA. I've posted an updated electronic edition which corrects typographical and formatting errors I noted in reading the novel.

 Permalink

Herken, Gregg. Brotherhood of the Bomb. New York: Henry Holt, 2002. ISBN 0-8050-6589-X.
What more's to be said about the tangled threads of science, politics, ego, power, and history that bound together the lives of Ernest O. Lawrence, J. Robert Oppenheimer, and Edward Teller from the origin of the Manhattan Project through the postwar controversies over nuclear policy and the development of thermonuclear weapons? In fact, a great deal, as declassification of FBI files, including wiretap transcripts, release of decrypted Venona intercepts of Soviet espionage cable traffic, and documents from Moscow archives opened to researchers since the collapse of the Soviet Union have provided a wealth of original source material illuminating previously dark corners of the epoch.

Gregg Herken, a senior historian and curator at the National Air and Space Museum, draws upon these resources to explore the accomplishments, conflicts, and controversies surrounding Lawrence, Oppenheimer, and Teller, and the cold war era they played such a large part in defining. The focus is almost entirely on the period in which the three were active in weapons development and policy—there is little discussion of their prior scientific work, nor of Teller's subsequent decades on the public stage. This is a serious academic history, with almost 100 pages of source citations and bibliography, but the story is presented in an engaging manner which leaves the reader with a sense of the personalities involved, not just their views and actions. The author writes with no discernible ideological bias, and I noted only one insignificant technical goof.

 Permalink

Entine, Jon. Taboo. New York: PublicAffairs, 2000. ISBN 1-58648-026-X.

A certain segment of the dogma-based community of postmodern academics and their hangers-on seems to have no difficulty whatsoever believing that Darwinian evolution explains every aspect of the origin and diversification of life on Earth while, at the same time, denying that genetics—the mechanism which underlies evolution—plays any part in differentiating groups of humans. Doublethink is easy if you never think at all. Among those to whom evidence matters, here's a pretty astonishing fact to ponder. In the last four Olympic games prior to the publication of this book in the year 2000, there were thirty-two finalists in the men's 100-metre sprint. All thirty-two were of West African descent, a region which accounts for just 8% of the world's population. If finalists in this event were randomly chosen from the entire global population, the probability of this concentration occurring by chance is 0.08³² or about 8×10⁻³⁶, which is significant at the level of more than twelve standard deviations. The hardest of results in the flintiest of sciences—null tests of conservation laws and the like—are rarely significant above 7 to 8 standard deviations.
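That probability figure is straightforward to check with Python's standard library; the conversion to standard deviations treats it as a one-sided tail probability of a normal distribution (the 8% population share and 32 finalists are the book's numbers):

```python
from statistics import NormalDist

p_share = 0.08        # share of world population of West African descent
finalists = 32        # men's 100-metre finalists over four Olympic games

# Chance that all 32 independently drawn finalists come from that 8%
p_all = p_share ** finalists

# Express that tail probability as a z-score ("standard deviations")
z = -NormalDist().inv_cdf(p_all)
print(f"p = {p_all:.1e}, roughly {z:.1f} standard deviations")
```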

Now one can certainly imagine any number of cultural and other non-genetic factors which predispose those with West African ancestry toward world-class performance in sprinting, but twelve standard deviations? The fact that running is something all humans do without being taught, and that training for running doesn't require any complicated or expensive equipment (as opposed to sports such as swimming, high-diving, rowing, or equestrian events), and that champions of West African ancestry hail from countries around the world, should suggest a genetic component to all but the most blinkered of blank slaters.

Taboo explores the reality of racial differences in performance in various sports, and the long and often sordid entangled histories of race and sports, including the tawdry story of race science and eugenics, over-reaction to which has made most discussion of human biodiversity, as the title of the book says, taboo. The equally forbidden subject of inherent differences in male and female athletic performance is delved into as well, with a look at the hormone-dripping “babes from Berlin” manufactured by the cruel and exploitive East German sports machine before the collapse of that dismal and unlamented tyranny.

Those who know some statistics will have no difficulty understanding what's going on here—the graph on page 255 tells the whole story. I wish the book had gone into a little more depth about the phenomenon of a slight shift in the mean performance of a group—much smaller than individual variation—causing a huge difference in the number of group members found in the extreme tail of a normal distribution. Another valuable, albeit speculative, insight is that if one supposes that there are genes which confer advantage to competitors in certain athletic events, then given the intense winnowing process world-class athletes pass through before they reach the starting line at the Olympics, it is plausible all of them at that level possess every favourable gene, and that the winner is determined by training, will to win, strategy, individual differences, and luck, just as one assumed before genetics got mixed up in the matter. It's just that if you don't have the genes (just as if your legs aren't long enough to be a runner), you don't get anywhere near that level of competition.
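The tail phenomenon is easy to demonstrate numerically. In this sketch (the half-standard-deviation shift and the cutoffs are illustrative assumptions, not figures from the book), a modest shift in a group's mean multiplies that group's representation beyond each cutoff, and the multiplier grows as the cutoff moves further into the tail:

```python
from statistics import NormalDist

# Illustrative assumptions: one group's mean performance is shifted by half
# a standard deviation; individual spread is identical in both groups.
shift = 0.5
base, shifted = NormalDist(0, 1), NormalDist(shift, 1)

def overrep(cutoff):
    """How many times over-represented the shifted group is beyond cutoff."""
    return (1 - shifted.cdf(cutoff)) / (1 - base.cdf(cutoff))

ratios = {c: overrep(c) for c in (2, 3, 4, 5)}
for c, r in ratios.items():
    print(f"beyond {c} sigma: {r:.1f}x over-represented")
```

A shift far smaller than individual variation thus dominates the extreme tail, which is the point the graph on page 255 makes.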

Unless research in these areas is suppressed due to an ill-considered political agenda, it is likely that the key genetic components of athletic performance will be identified in the next couple of decades. Will this mean that world-class athletic competition can be replaced by DNA tests? Of course not—it's just that one factor in the feedback loop of genetic endowment, cultural reinforcement of activities in which group members excel, and the individual striving for excellence which makes competitors into champions will be better understood.

 Permalink

Sharansky, Natan with Ron Dermer. The Case for Democracy. New York: PublicAffairs, 2004. ISBN 1-58648-261-0.
Every now and then you come across a book which cuts through the fog of contemporary political discourse with pure clarity of thought. Well of course, the programmer peanut gallery shouts in unison, Sharansky was a computer scientist before becoming a Soviet dissident and political prisoner, then Israeli politician! In this book Sharansky draws a line of unambiguous binary distinction between “free societies” and “fear societies”. In a free society, you can walk into the town square and express your views without fear of arrest, imprisonment, or physical harm (p. 41); in a “fear society”, you can't—it's that simple. Note that, as Sharansky is quick to observe, this counts as free societies those without a trace of democracy, with dirigiste economies, and which discriminate against minorities and women—yet which permit those who live there to protest these and other shortcomings without fear of recrimination. A society which he deems “free” may not be just, but a society which doesn't pass this most basic test of freedom is always unjust.

From this viewpoint, every compromise with fear societies and their tyrants in the interest of “stability” and “geopolitics” is always ill-considered, not just in terms of the human rights of those who live there, but in the self-interest of all free people. Fear societies require an enemy, internal or external, to unite their victims behind the tyrant, and history shows how fickle the affections of dictators can be when self-interest is at stake.

The disastrous example of funding Arafat's ugly dictatorship over the Palestinian people is dissected in detail, but the message is applicable everywhere diplomats argue for a “stable partner” over the inherent human right of people to own their own lives and govern themselves. Sharansky is forthright in saying it's better to face a democratically elected fanatic opponent than a dictator “we can do business with”, because ultimately the democratic regime will converge on meeting the needs of its citizens, while the dictator will focus on feathering his own nest at the expense of those he exploits.

If you're puzzled about which side to back in all the myriad conflicts around the globe, you could do a lot worse than simply picking the side which comes out best in Sharansky's “town square test”. Certainly, the world would be a better place if the diplomats who prattle on about “complexity” and realpolitik were hit over the head with the wisdom of an author who spent 13 years in Siberian labour camps rather than compromise his liberty.

 Permalink

Brookhiser, Richard. Founding Father. New York: Free Press, 1996. ISBN 0-684-83142-2.
This thin (less than 200 pages of main text) volume is an enlightening biography of George Washington. It is very much a moral biography in the tradition of Plutarch's Lives; the focus is on Washington's life in the public arena and the events in his life which formed his extraordinary character. Reading Washington's prose, one might assume that he, like many other framers of the U.S. Constitution, had an extensive education in the classics, but in fact his formal education ended at age 15, when he became an apprentice surveyor—among U.S. presidents, only Andrew Johnson had less formal schooling. Washington's intelligence and voracious reading—his library numbered more than 900 books at his death—made him the intellectual peer of his just sprouting Ivy League contemporaries. One historical footnote I'd never before encountered is the tremendous luck the young U.S. republic had in escaping the risk of dynasty—among the first five U.S. presidents, only John Adams had a son who survived to adulthood (and his eldest son, John Quincy Adams, became the sixth president).

 Permalink

Levin, Mark R. Men in Black. Washington: Regnery Publishing, 2005. ISBN 0-89526-050-6.
Let's see—suppose we wanted to set up a system of self-government—a novus ordo seclorum as it were—which would be immune to the assorted slippery slopes which delivered so many other such noble experiments into the jaws of tyranny, and some dude shows up and suggests, “Hey, what you really need is a branch of government composed of non-elected people with lifetime tenure, unable to be removed from office except for the most egregious criminal conduct, granted powers supreme above the legislative and executive branches, and able to define and expand the scope of their own powers without constraint.”

What's wrong with this picture? Well, it's pretty obvious that it's a recipe for an imperial judiciary, as one currently finds ascendant in the United States. Men in Black, while focusing on recent abuses of judicial power, demonstrates that there's nothing new about judges usurping the prerogatives of democratically elected branches of government—in fact, the pernicious consequences of “judicial activism” are as old as America, winked at by each generation of politicians as long as it advanced their own agenda more rapidly than the ballot box permitted, ignoring (as politicians are inclined to do, never looking beyond the next election) that when the ideological pendulum inevitably swings back the other way, judges may thwart the will of elected representatives in the other direction for a generation or more.

But none of this is remotely new. Robert Yates, a delegate to the Constitutional Convention who came to oppose the ratification of that regrettable document, wrote in 1788:

They will give the sense of every article of the constitution, that may from time to time come before them. And in their decisions they will not confine themselves to any fixed or established rules, but will determine, according to what appears to them, the reason and spirit of the constitution. The opinions of the supreme court, whatever they may be, will have the force of law; because there is no power provided in the constitution, that can correct their errors, or controul [sic] their adjudications. From this court there is no appeal.
The fact that politicians are at loggerheads over the selection of judges has little or nothing to do with ideology and everything to do with judges having usurped powers explicitly reserved for representatives accountable to their constituents in regular elections.

How to fix it? Well, I proposed my own humble solution here not so long ago, and the author of this book suggests twelve-year terms for Supreme Court judges, staggered so that a term expires every three years. Given how far the unchallenged assertion of judicial supremacy has gone, a constitutional remedy in the form of a legislative override of judicial decisions (with the same super-majority as required to override an executive veto) might also be in order.

 Permalink

June 2005

Rolfe, Fr. Hadrian the Seventh. New York: New York Review Books, [1904] 2001. ISBN 0-940322-62-5.
This is a masterpiece of eccentricity. The author, whose full name is Frederick William Serafino Austin Lewis Mary Rolfe, deliberately abbreviated his name to “Fr.” not just in the interest of concision, but so it might be mistaken for “Father” and the book deemed the work of a Catholic priest. (Rolfe also used the name “Baron Corvo” and affected a coat of arms with a raven.) Having twice himself failed in aspirations to the priesthood, in this novel the protagonist, transparently based upon the author, finds himself, through a sequence of events straining even the omnipotence of the Holy Spirit, vaulted from the humble estate of debt-ridden English hack writer directly to the papacy, taking the name Hadrian the Seventh in honour of Hadrian IV, the first, last, and only English pope to date.

Installed on the throne of Saint Peter, Hadrian quickly moves to remedy the discrepancies his erstwhile humble life has caused him to perceive between the mission of the Church and the policies of its hierarchy. Dodging intrigue from all sides, and wielding his intellect, wit, and cunning along with papal authority, he quickly becomes what now would be called a “media pope” and a major influence on the world political stage, which he remakes along lines which, however alien and ironic they may seem today, might have been better than what actually happened a decade after this novel was published in 1904.

Rolfe, like Hadrian, is an “artificer in verbal expression”, and his neologisms and eccentric spelling (“saxificous head of the Medoysa”) and Greek and Latin phrases—rarely translated—sprinkle the text. Rolfe/Hadrian doesn't think too highly of the Irish, the French, Socialists, the press, and churchmen who believe their mission is building cathedrals and accumulating treasure rather than saving souls, and he skewers these and other targets on every occasion—if such barbs irritate you, you will find plenty here at which to take offence. The prose is simply beautiful, and thought provoking as well as funny. The international politics of a century ago figures in the story, and if you're not familiar with that now rather obscure era, you may wish to refresh your memory as to principal players and stakes in the Great Game of that epoch.

 Permalink

Appleton, Victor. Tom Swift and His Airship. Bedford, MA: Applewood Books, [1910] 1992. ISBN 1-55709-177-3.
Following his adventures on land and lake, in this third volume of the Tom Swift series, our hero takes to the air in his hybrid dirigible/airplane, the Red Cloud. (When this book was written, within a decade of the Wright Brothers' first flight, “airship” referred to any flying craft, lighter or heavier than air.) Along the way he survives a forest fire, thunderstorm, flying bullets, false accusation of a crime, and an irritable schoolmarm not amused by having an airship crash into her girls' school, and solves the crime, bags the perpetrators, and clears his good name. Bless my seltzer bottle—never get on the wrong side of Mr. Wakefield Damon!

Apart from the arm-waving about new inventions which is the prerogative of the science fiction writer, Victor Appleton is generally quite careful about the technical details—All American Boys in the early 20th century knew their machinery and would be all over a scribbler who didn't understand how a carburetor worked! Here, however, he misunderstands lighter than air flight. He describes the Red Cloud as supported by a rigid aluminium gas container filled with “a secret gas, made partly of hydrogen, being very light and powerful”. But since the only thing that matters in generating lift is the weight of the air displaced compared to the weight of the gas displacing it, and since hydrogen is the lightest of elements (can't have fewer than one proton, mate!), then any mixture of hydrogen with anything else would have less lift than hydrogen alone. (You might mix hydrogen with helium to obtain a nonflammable gas lighter than pure helium—something suggested by Arthur C. Clarke a few years ago—but here Tom's secret gas is claimed to have more lift than hydrogen, and the question of flammability is never raised. Also, the gas is produced on demand by a “gas generator”. That rules out helium as a component, as it is far too noble to form compounds.) Later, Tom increases the lift on the ship by raising the pressure in the gas cells: “when an increased pressure of the vapor was used the ship was almost as buoyant as before” (chapter 21). But increasing the pressure of any gas in a fixed volume cell reduces the lift, as it increases the weight of the gas within without displacing any additional air. One could make this work by assuming a gas cell with a flexible bladder which permitted the volume occupied by the lift gas to expand and contract as desired, the rest being filled with ambient air, but even then the pressure of the lift gas would not increase, but simply stay the same as atmospheric pressure as more air was displaced. 
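The lift argument is just Archimedes' principle plus the ideal gas law: at a given temperature and pressure, a gas's density is proportional to its molar mass, so mixing anything heavier into hydrogen can only reduce the lift per cubic metre. A sketch (sea-level pressure and a round temperature are assumed):

```python
# Net buoyant lift per cubic metre, ideal-gas approximation.
R = 8.314          # J/(mol*K), gas constant
P = 101325.0       # Pa, sea-level pressure
T = 288.0          # K, about 15 degrees C
M_AIR, M_H2, M_HE = 28.97, 2.016, 4.003   # molar masses, g/mol

def density(molar_mass):
    """Gas density in kg/m^3 from rho = P*M/(R*T)."""
    return P * (molar_mass / 1000.0) / (R * T)

def lift(molar_mass):
    """Lift in kg per cubic metre: weight of air displaced minus gas weight."""
    return density(M_AIR) - density(molar_mass)

mix = 0.5 * M_H2 + 0.5 * M_HE     # 50/50 hydrogen/helium by mole fraction
print(f"H2 {lift(M_H2):.2f}  mix {lift(mix):.2f}  He {lift(M_HE):.2f} kg/m^3")
```

The same relation shows the second error: raising the pressure in a fixed-volume cell only adds gas mass without displacing any more air, so lift falls rather than rises.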
Feel free to berate me for belabouring such a minor technical quibble in a 95-year-old story, but I figure that Tom Swift fans probably, like myself, enjoy working out this kind of stuff. The fact that this is the only such item I noticed is a testament to the extent to which Appleton sweated the details.

I read the electronic edition of this novel published in the Tom Swift and His Pocket Library collection at this site on my PalmOS PDA in random moments of downtime over a month or so. I've posted an updated electronic edition which corrects typographical errors I spotted while reading the yarn.

 Permalink

Stenhoff, Mark. Ball Lightning. New York: Kluwer Academic / Plenum Publishers, 1999. ISBN 0-306-46150-1.
Reports of ball lightning—glowing spheres of light which persist for some number of seconds, usually associated with cloud-to-ground lightning strikes during thunderstorms—date back to the classical Greeks. Since 1838, when physicist and astronomer Dominique Arago published a survey of twenty reports of ball lightning, a long list of scientists, many eminent, have tried their hands at crafting a theory which might explain such an odd phenomenon yet, at the start of the twenty-first century, ball lightning remains, as Arago said in 1854, “One of the most inexplicable problems of physics today.”

Well, actually, ball lightning only poses problems to the physics of yesterday and today if it, you know, exists, and the evidence that it does is rather weak, as this book demonstrates. (Its author does come down in favour of the existence of ball lightning, and wrote the 1976 Nature paper which helped launch the modern study of the phenomenon.) As of the date this book was published, not a single unambiguous photograph, movie, or video recording of ball lightning was known to exist, and most of the “classic” photographs illustrated in chapter 9 are obvious fakes created by camera motion and double exposure. When dealing with reports by observers unacquainted with the relevant phenomena, it is also difficult to sort out genuine ball lightning (if such exists) from well-documented and understood effects such as corona discharges (St. Elmo's fire) and that perennial favourite of UFO debunkers, ignis fatuus or swamp gas, and to distinguish damage attributed to the passage of ball lightning or its explosive dissipation from damage produced by conventional lightning strikes. See the author's re-casting of a lightning strike to a house which he personally investigated into “ball lightning language” on pp. 105–106 for an example of how such reports can originate.

Still, after sorting out the mis-identifications, hoaxes, and other dross, a body of reports remains, some by expert observers of atmospheric phenomena, which have a consistency not to be found, for example, in UFO reports. A number of observations of ball lightning within metallic aircraft fuselages are almost identical and pose a formidable challenge to most models. The absence of unambiguous evidence has not in any way deterred the theoretical enterprise, and chapters 11–13 survey models based on, among other mechanisms, heated air, self-confining plasma vortices and spheroids, radial charge separation, chemical reactions and combustion, microwave excitation of metastable molecules of atmospheric gases, nuclear fusion and the production of unstable isotopes of oxygen and nitrogen, focusing of cosmic rays, antimatter meteorites, and microscopic black holes. One does not get the sense of this converging upon a consensus. Among the dubious theories, there are some odd claims of experimental results such as the production of self-sustaining plasma balls by placing a short burning candle in a kitchen microwave oven (didn't work for me, anyway—if you must try it yourself, please use common sense and be careful), and reports of producing ball lightning sustained by fusion of deuterium in atmospheric water vapour by short circuiting a 200 tonne submarine accumulator battery. (Don't try this one at home, kids!)

The book concludes with the hope that with increasing interest in ball lightning, as evidenced by conferences such as the International Symposia on Ball Lightning, and additional effort in collecting and investigating reports, this centuries-old puzzle may be resolved within this decade. I'm not so sure—the UFO precedent does not incline one to optimism. For those motivated to pursue the matter further, a bibliography of more than 75 pages and 2400 citations is included.

 Permalink

Barlow, Connie. The Ghosts of Evolution. New York: Basic Books, 2000. ISBN 0-465-00552-7.
Ponder the pit of the avocado; no, actually ponder it—hold it in your hand and get a sense of how big and heavy it is. Now consider that due to its toughness, slick surface, and being laced with toxins, it was meant to be swallowed whole and deposited far from the tree in the dung of the animal who gulped down the entire fruit, pit and all. Just imagine the size of the gullet (and internal tubing) that requires—what on Earth, or more precisely, given the avocado's range, what in the Americas served to disperse these seeds prior to the arrival of humans some 13,000 years ago?

The Western Hemisphere was, in fact, prior to the great extinction at the end of the Pleistocene (coincident with the arrival of humans across the land bridge with Asia, and probably the result of their intensive hunting), home to a rich collection of megafauna: mammoths and mastodons, enormous ground sloths, camels, the original horses, and an armadillo as large as a bear, now all gone. Plants with fruit which doesn't seem to make any sense—which rots beneath the tree and isn't dispersed by any extant creature—may be the orphaned ecological partners of extinct species with which they co-evolved. Plants, particularly perennials and those which can reproduce clonally, evolve much more slowly than mammal and bird species, and may survive, albeit in a limited or spotty range, through secondary dispersers of their seeds (seed hoarders and predators, water, and gravity) long after the animal vectors their seeds evolved to employ have departed the scene. That is the fascinating premise of this book, which examines how enigmatic, apparently nonsensical fruit such as the osage orange, Kentucky coffee tree, honey locust, ginkgo, desert gourd, and others may be, figuratively, ripening their fruit every year waiting for the passing mastodon or megatherium which never arrives, some surviving because they are attractive, useful, and/or tasty to the talking apes who killed off the megafauna.

All of this is very interesting, and along the way one learns a great deal about the co-evolution of plants and their seed dispersal partners and predators—an endless arms race involving armour, chemical warfare (selective toxins and deterrents in pulp and seeds), stealth, and co-optation (burrs which hitch a ride on the fur of animals). However, this 250 page volume is basically an 85 page essay struggling to get out of the rambling, repetitious, self-indulgent, pretentious prose and unbridled speculations of the author, which results in a literary bolus as difficult to masticate as the seed pods of some of the plants described therein. This book desperately needed the attention of an editor ready to wield the red pencil and Basic Books, generally a quality publisher of popularisations of science, dropped the ball (or, perhaps I should say, spit out the seed) here. The organisation of the text is atrocious—we encounter the same material over and over, frequently see technical terms such as indehiscent used four or five times before they are first defined, only to then endure a half-dozen subsequent definitions of the same word (a brief glossary of botanical terms would be a great improvement), and on occasions botanical jargon is used apparently because it rolls so majestically off the tongue or lends authority to the account—which authority is sorely lacking. While there is serious science and well-documented, peer-reviewed evidence for anachronism in certain fruits, Barlow uses the concept as a launching pad for wild speculation in which any apparent lack of perfect adaptation between a plant and its present-day environment is taken as evidence for an extinct ecological partner.

One of many examples is the suggestion on p. 164 that the fact that the American holly tree produces spiny leaves well above the level of any current browser (deer here, not Internet Exploder or Netscrape!) is evidence it evolved to defend itself against much larger herbivores. Well, maybe, but it may just be that a tree lacks the means to precisely measure the distance from the ground, and those which err on the side of safety are more likely to survive. The discussion of evolution throughout is laced with teleological and anthropomorphic metaphors which will induce teeth-grinding among Darwinists audible across a large lecture hall.

At the start of chapter 8, vertebrate paleontologist Richard Tedford is quoted as saying, “Frankly, this is not really science. You haven't got a way of testing any of this. It's more metaphysics.”—amen. The author tests the toxicity of ginkgo seeds by feeding them to squirrels in a park in New York City (“All the world seems in tune, on a spring afternoon…”), and the attractiveness of maggot-ridden overripe pawpaw fruit by leaving it outside her New Mexico trailer for frequent visitor Mrs. Foxie (you can't make up stuff like this) and, in the morning, it was gone! I recall a similar experiment from childhood involving milk, cookies, and flying reindeer; she does, admittedly, acknowledge that skunks or raccoons might have been responsible. There's an extended discourse on the possible merits of eating dirt, especially for pregnant women, then in the very next chapter the suggestion that the honey locust has “devolved” into the swamp locust, accompanied by an end note observing that a professional botanist expert in the genus considers this nonsense.

Don't get me wrong, there's plenty of interesting material here, and much to think about in the complex intertwined evolution of animals and plants, but this is a topic which deserves a more disciplined author and a better book.

 Permalink

Mack, John E. Abduction. New York: Ballantine Books, [1994] 1995. ISBN 0-345-39300-7.
I started this book, as I recall, sometime around 1998, having picked it up to get a taste for the “original material” after reading C.D.B. Bryan's excellent Close Encounters of the Fourth Kind, describing an MIT conference on the alien abduction phenomenon. I made it most of the way through Abduction on the first attempt, but ran out of patience and steam about 100 pages from the finish line while reading the material “recovered” from “experiencer” Carlos, which is the literary equivalent of a Vulcan mind meld with a custard pudding. A mercifully brief excerpt with Mack's interpolations in parentheses goes as follows (p. 355).
Their bodies go from being the little white creatures they are to light. But when they become light, they first become like cores of light, like molten light. The appearance (of the core of light) is one of solidity. They change colors and a haze is projected around the (interior core which is centralized; surrounding this core in an immediate environment is a denser, tighter) haze (than its outer peripheries). The eyes are the last to go (as one perceives the process of the creatures disappearing into the light), and then they just kind of disappear or are absorbed into this. … We are or exist through our flesh, and they are or exist through whatever it is they are.
Got that? If not, there is much, much more along these lines in the extended babblings of this and a dozen other abductees, developed during the author's therapy sessions with them. Now, de mortuis nihil nisi bonum (Mack was killed in a traffic accident in 2004); having won a Pulitzer Prize for his biography of T.E. Lawrence, in addition to a career as a professor of psychiatry at the Harvard Medical School and founder of the psychiatry department at Cambridge Hospital, Mack had credentials which incline one to hear him out, however odd the message may seem to be.

One's mind, however, eventually summons up Thomas Jefferson's (possibly apocryphal) remark upon hearing of two Yale professors who investigated a meteor fall in Connecticut and pronounced it genuine, “Gentlemen, I would rather believe that two Yankee professors would lie than believe that stones fall from heaven.” Well, nobody's accusing Professor Mack of lying, but the leap from the oh-wow, New Age accounts elicited by hypnotic regression and presented here, to the conclusion that they are the result of a genuine phenomenon of some kind, possibly contact with “another plane of reality”, is an awfully big one, and simply wading through the source material proved more than I could stomach on my first attempt. So, the book went back on the unfinished shelf, where it continued to glare at me balefully until a few days ago when, looking for something to read, I exclaimed, “Hey, if I can make it through The Ghosts of Evolution, surely I can finish this one!” So I did, picking up from the bookmark I left where my first assault on the summit petered out.

In small enough doses, much of this material can be quite funny. This paperback edition includes two appendices added to address issues raised after the publication of the original hardcover. In the first of these (p. 390), Mack argues that the presence of a genuine phenomenon of some kind is strongly supported by “…the reports of the experiencers themselves. Although varied in some respects, these are so densely consistent as to defy conventional psychiatric explanations.” Then, a mere three pages later, we are informed:

The aliens themselves seem able to change or disguise their form, and, as noted, may appear initially to the abductees as various kinds of animals, or even as ordinary human beings, as in Peter's case. But their shape-shifting abilities extend to their vehicles and to the environments they present to the abductees, which include, in this sample, a string of motorcycles (Dave), a forest and conference room (Catherine), images of Jesus in white robes (Jerry), and a soaring cathedral-like structure with stained glass windows (Sheila). One young woman, not written about in this book, recalled at age seven seeing a fifteen-foot kangaroo in a park, which turned out to be a small spacecraft.
Now that's “densely consistent”! One is also struck by how insipidly banal are the messages the supposed aliens deliver, which usually amount to New Age cerebral suds like “All is one”, “Treat the Earth kindly”, and the rest of the stuff which appeals to those who are into these kinds of things in the first place. Occam's razor seems to glide much more smoothly over the supposition that we are dealing with seriously delusional people endowed with vivid imaginations than that these are “transformational” messages sent by superior beings to avert “planetary destruction” by “for-profit business corporations” (p. 365, Mack's words, not those of an abductee). Fifteen-foot kangaroo? Well, anyway, now this book can hop onto the dubious shelf in the basement and stop making me feel guilty! For a sceptical view of the abduction phenomenon, see Philip J. Klass's UFO Abductions: A Dangerous Game.

 Permalink

Job, Macarthur. Air Disaster, Vol. 3. Fyshwick, Australia: Aerospace Publications, 1998. ISBN 1-875671-34-X.
In the early 1970s I worked for a company that sold remote batch computing services on UNIVAC mainframes. Our management visited Boeing headquarters in Seattle to pitch for some of their business (unlikely, as Boeing had their own computer service bureau at the time, but you never know unless you try). Part of the presentation focused on how reliable our service was, averaging better than 99.5% uptime. The Boeing data processing manager didn't seem too impressed with this. He asked, “When you came up here from San Francisco, did you fly on one of our airplanes?” “As a matter of fact, we did,” answered the president of our company. The Boeing guy then asked, “Well, how would you feel if I told you Boeing airplanes only crash about once every two hundred flights?” The meeting moved on to other topics; we never did get any business from Boeing.

Engineering is an art we learn from failure, and the aviation safety community is the gold standard when it comes to getting to the probable cause of a complicated disaster and defining achievable steps to prevent it from recurring. There is much for practitioners of other branches of engineering to admire and learn from looking over the shoulders of their colleagues in air accident investigation, and Macarthur Job's superb Air Disaster series, of which this is the third volume (Vol. 1, Vol. 2), provides precisely such a viewpoint. Starting from the official accident reports, author Job and illustrator Matthew Tesch recreate the circumstances which led to each accident and the sometimes tortuous process through which investigators established what actually happened. The presentation is not remotely sensationalistic, yet much more readable than the dry prose of most official accident reports. If detail is required, Job and Tesch do not shrink from providing it; four pages of text and a detailed full page diagram on page 45 of this volume explain far more about the latching mechanism of the 747 cargo door than many people might think there is to know, but since you can't otherwise understand how the door of a United 747 outbound from Honolulu could have separated in flight, it's all there.

Reading the three volumes, which cover the jet age from the de Havilland Comet through the mid 1990s, provides an interesting view of the way in which assiduous investigation of anomalies and incremental fixes have made an inherently risky activity so safe that some these days seem more concerned with fingernail clippers than engine failure or mid-air collisions. Many of the accidents in the first two volumes were due to the machine breaking in some way or another, and one by one, they have basically been fixed to the extent that in this volume, the only hardware-related accident is the 747 cargo door failure (in which nine passengers died, but 345 passengers and crew survived). The other dozen are problems due to the weather, human factors, and what computer folks call “user interface”—literally so in several cases of mode confusion and mismanagement of the increasingly automated flight decks of the latest generation of airliners. Anybody designing interfaces in which the user is expected to have a correct mental model of the operation of a complex, partially opaque system will find many lessons here, some learnt at tragic cost in an environment where the stakes are high and the margin of error small.

 Permalink

Hawks, Tony. Round Ireland with a Fridge. London: Ebury Press, 1998. ISBN 0-09-186777-0.
The author describes himself as “not, by nature” either a drinking or a betting man. Ireland, however, can have a way of changing those particular aspects of one's nature, and so it was that after a night about which little else was recalled, our hero found himself having made a hundred pound bet that he could hitch-hike entirely around the Republic of Ireland in one calendar month, accompanied the entire way by a refrigerator. A man, at a certain stage in his life, needs a goal, even if it is, as this epic quest was described by an Irish radio host, “A totally purposeless idea, but a damn fine one.” And the result is this very funny book. Think about it; almost every fridge lives a life circumscribed by a corner of a kitchen—door opens—light goes on—door closes—light goes out (except when the vegetables are having one of their wild parties in the crisper—sssshhh—mustn't let the homeowner catch on). How singular and rare it is for a fridge to experience the freedom of the open road, to go surfing in the Atlantic (chapter 10), to be baptised with a Gaelic name that means “freedom”, blessed by a Benedictine nun (chapter 14), be guest of honour at perhaps the first-ever fridge party at an Irish pub (chapter 21), and make a triumphal entry into Dublin amid an army of well-wishers consisting entirely of the author pulling it on a trolley, a radio reporter carrying a mop and an ice cube tray, and an elderly bagpiper (chapter 23). Tony Hawks points out one disadvantage of his profession I'd never thought of before. When one of those bizarre things with which his life and mine are filled comes to pass, and you're trying to explain something like, “No, you see there were squirrels loose in the passenger cabin of the 747”, and you're asked the inevitable, “What are you, a comedian?”, he has to answer, “Well, actually, as a matter of fact, I am.”

A U.S. edition is now available.

 Permalink

July 2005

Sloane, Eric. Diary of an Early American Boy. Mineola, NY: Dover, [1962] 2004. ISBN 0-486-43666-7.
In 1805, fifteen-year-old Noah Blake kept a diary of his life on a farm in New England. More than a century and a half later, artist, author, and collector of early American tools Eric Sloane discovered the diary and used it as the point of departure for this look at frontier life when the frontier was still in Connecticut. Young Noah was clearly maturing into a fine specimen of the taciturn Yankee farmer—much of the diary reads like:
21: A sour, foggy Sunday.
22: Heavy downpour, but good for the crops.
23: Second day of rain. Father went to work under cover at the mill.
24: Clear day. Worked in the fields. Some of the corn has washed away.
The laconic diary entries are spun into a fictionalised but plausible story of farm life focusing on the self-reliant lifestyle and the tools and techniques upon which it was founded. Noah Blake was atypical in being an only child at a time when large families were the norm; Sloane takes advantage of this in showing Noah learning all aspects of farm life directly from his father. The numerous detailed illustrations provide a delightful glimpse into the world of two centuries ago and an appreciation for the hard work and multitude of skills it took to make a living from the land in those days.

 Permalink

Faverjon, Philippe. Les mensonges de la Seconde Guerre mondiale. Paris: Perrin, 2004. ISBN 2-262-01949-5.
“In wartime,” said Winston Churchill, “truth is so precious that she should always be attended by a bodyguard of lies.” This book examines lies, big and small, variously motivated, made by the principal combatants in World War II, from the fabricated attack on a German radio station used as a pretext to launch the invasion of Poland which ignited the conflict, to conspiracy theories about the Yalta conference which sketched the map of postwar Europe as the war drew to a close. The nature of the lies discussed in the various chapters differs greatly—some are propaganda addressed to other countries, others intended to deceive domestic populations; some are strategic disinformation, while still others are delusions readily accepted by audiences who preferred them to the facts. Although most chapters end with a paragraph which sets the stage for the next, each is essentially a stand-alone essay which can be read on its own, and the book can be browsed in any order. The author is either (take your pick) scrupulous in his attention to historical accuracy or, (if you prefer) almost entirely in agreement with my own viewpoint on these matters. There is no “big message”, philosophical or otherwise, here, nor any partisan agenda—this is simply a catalogue of deception in wartime based on well-documented historical examples which, translated into the context of current events, can aid in critical analysis of conventional wisdom and mass stampede media coverage of present-day conflicts.

 Permalink

Sowell, Thomas. Black Rednecks and White Liberals. San Francisco: Encounter Books, 2005. ISBN 1-59403-086-3.
One of the most pernicious calumnies directed at black intellectuals in the United States is that they are “not authentic”—that by speaking standard English, assimilating into the predominant culture, and seeing learning and hard work as the way to get ahead, they have somehow abandoned their roots in the ghetto culture. In the title essay in this collection, Thomas Sowell demonstrates persuasively that this so-called “black culture” owes its origins, in fact, not to anything blacks brought with them from Africa or developed in times of slavery, but rather to a white culture which immigrants to the American South from marginal rural regions of Britain imported and perpetuated long after it had died out in the mother country. Members of this culture were called “rednecks” and “crackers” in Britain long before they arrived in America, and they proceeded to install this dysfunctional culture in much of the rural South. Blacks arriving from Africa, stripped of their own culture, were immersed into this milieu, and predictably absorbed the central values and characteristics of the white redneck culture, right down to patterns of speech which can be traced back to the Scotland, Wales, and Ulster of the 17th century. Interestingly, free blacks in the North never adopted this culture, and were often well integrated into the community until the massive northward migration of redneck blacks (and whites) from the South spawned racial prejudice against all blacks. While only 1/3 of U.S. whites lived in the South, 90% of blacks did, and hence the redneck culture, which was strongly diluted as southern whites came to the northern cities, was transplanted whole as blacks arrived in the north and were concentrated in ghetto communities.

What makes this more than an anthropological and historical footnote is, that as Sowell describes, the redneck culture does not work very well—travellers in the areas of Britain it once dominated and in the early American South described the gratuitous violence, indolence, disdain for learning, and a host of other characteristics still manifest in the ghetto culture today. This culture is alien to the blacks whom it mostly now afflicts, and is nothing to be proud of. Scotland, for example, largely eradicated the redneck culture, and became known for learning and enterprise; it is this example, Sowell suggests, that blacks could profitably follow, rather than clinging to a bogus culture which was in fact brought to the U.S. by those who enslaved their ancestors.

Although the title essay is the most controversial and will doubtless generate the bulk of commentary, it is in fact only 62 pages in this book of 372 pages. The other essays discuss the experience of “middleman minorities” such as the Jews, Armenians in the Ottoman Empire, Lebanese in Africa, overseas Chinese, etc.; the actual global history of slavery, as a phenomenon in which people of all races, continents, and cultures have been both slaves and slaveowners; the history of ethnic German communities around the globe and whether the Nazi era was rooted in the German culture or an aberration; and forgotten success stories in black education in the century prior to the civil rights struggles of the mid 20th century. The book concludes with a chapter on how contemporary “visions” and agendas can warp the perception of history, discarding facts which don't fit and obscuring lessons from the past which can be vital in deciding what works and what doesn't in the real world. As with much of Sowell's work, there are extensive end notes (more than 60 pages, with 289 notes on the title essay alone) which contain substantial “meat” along with source citations; they're well worth reading over after the essays.

 Permalink

Hickam, Homer H., Jr. Rocket Boys. New York: Doubleday, 1998. ISBN 0-385-33321-8.
The author came of age in southern West Virginia during the dawn of the space age. Inspired by science fiction and the sight of Sputnik gliding through the patch of night sky between the mountains which surrounded his coal mining town, he and a group of close friends decided to build their own rockets. Counselled by the author's mother, “Don't blow yourself up”, they managed not only to avoid that downside of rocketry (although Mom's garden fence was not so lucky), but succeeded in building and launching more than thirty rockets powered by, as they progressed, first black powder, then melted saltpetre and sugar (“rocket candy”), and finally “zincoshine”, a mixture of powdered zinc and sulphur bound by 200 proof West Virginia mountain moonshine, which propelled their final rocket almost six miles into the sky. Their efforts won them the Gold and Silver award at the National Science Fair in 1960, and a ticket out of coal country for the author, who went on to a career as a NASA engineer. This is a memoir by a member of the last generation when the U.S. was still free enough for boys to be boys, and boys with dreams were encouraged to make them come true. This book will bring back fond memories for any member of that generation, and inspire envy among those who postdate that golden age.

This book served as the basis for the 1999 film October Sky, which I have not seen.

 Permalink

Posner, Gerald L. Secrets of the Kingdom. New York: Random House, 2005. ISBN 1-4000-6291-8.
Most of this short book (196 pages of main text) is a straightforward recounting of the history of Saudi Arabia from its founding as a unified kingdom in 1932 under Ibn Saud, and of the petroleum-dominated relationship between the United States and the kingdom up to the present, based almost entirely upon secondary sources. Chapter 10, buried amidst the narrative and barely connected to the rest, and based on the author's conversations with an unnamed Mossad (Israeli intelligence) officer and an unidentified person claiming to be an eyewitness, describes a secret scheme called “Petroleum Scorched Earth” (“Petro SE”) which, it is claimed, was discovered by NSA intercepts of Saudi communications which were shared with the Mossad and then leaked to the author.

The claim is that the Saudis have rigged all of their petroleum infrastructure so that it can be destroyed from a central point should an invader be about to seize it, or the House of Saud fall due to an internal revolution. Oil and gas production facilities tend to be spread out over large areas and have been proven quite resilient—the damage done to Kuwait's infrastructure during the first Gulf War was extensive, yet reparable in a relatively short time, and the actual petroleum reserves are buried deep in the Earth and are essentially indestructible—if a well is destroyed, you simply sink another well; it costs money, but you make it back as soon as the oil starts flowing again. Refineries and storage facilities are more easily destroyed, but the real long-term wealth (and what an invader or revolutionary movement would covet most) lies deep in the ground. Besides, most of Saudi Arabia's export income comes from unrefined products (in the first ten months of 2004, 96% of Saudi Arabia's oil exports to the U.S. were crude), so even if all the refineries were destroyed (which is difficult—refineries are big and spread out over a large area) and took a long time to rebuild, the core of the export economy would be up and running as soon as the wells were pumping and pipelines and oil terminals were repaired.

So, it is claimed, the Saudis have mined their key facilities with radiation dispersal devices (RDDs), “dirty bombs” composed of Semtex plastic explosive mixed with radioactive isotopes of cesium, rubidium (huh?), and/or strontium which, when exploded, will disperse the radioactive material over a broad area, which (p. 127) “could render large swaths of their own country uninhabitable for years”. What's that? Do I hear some giggling from the back of the room from you guys with the nuclear bomb effects computers? Well, gosh, where shall we begin?

Let us commence by plinking an easy target, the rubidium. Metallic rubidium burns quite nicely in air, which makes it easy to disperse, but radioactively it's a dud. Natural rubidium contains about 28% of the radioactive isotope rubidium-87, but with a half-life of about 50 billion years, it's only slightly more radioactive than dirt when dispersed over any substantial area. The longest-lived artificially created isotope is rubidium-83 with a half-life of only 86 days, which means that once dispersed, you'd only have to wait a few months for it to decay away. In any case, something which decays so quickly is useless for mining facilities, since you'd need to constantly produce fresh batches of the isotope (in an IAEA inspected reactor?) and install it in the bombs. So, at least the rubidium part of this story is nonsense; how about the rest?
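To put rough numbers on why rubidium-87 is a radiological dud, one can compare specific activities: the decays per second per gram of a pure isotope scale as ln 2 divided by the half-life, times the number of atoms per gram. This is a back-of-the-envelope sketch using rounded half-lives and molar masses (the figures below are approximations, not precise nuclear data):

```python
import math

AVOGADRO = 6.022e23   # atoms per mole
YEAR_S = 3.156e7      # seconds per year

def specific_activity(half_life_years, molar_mass_g):
    """Approximate activity in becquerels (decays/second) per gram of pure isotope."""
    decay_constant = math.log(2) / (half_life_years * YEAR_S)
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_constant * atoms_per_gram

rb87 = specific_activity(4.9e10, 87)    # rubidium-87: ~49 billion year half-life
cs137 = specific_activity(30.1, 137)    # cesium-137: ~30 year half-life

print(f"Rb-87:  {rb87:.2e} Bq/g")      # on the order of a few kilobecquerels per gram
print(f"Cs-137: {cs137:.2e} Bq/g")     # on the order of terabecquerels per gram
print(f"ratio:  {cs137 / rb87:.1e}")
```

Gram for gram, cesium-137 comes out roughly a billion times more active than rubidium-87, which is why the former is dirty-bomb material and the latter is, as noted above, barely distinguishable from dirt.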

Cesium-137 and strontium-90 both have half-lives of about 30 years and are readily taken up and stored in the human body, so they are suitable candidates for a dirty bomb. But while a dirty bomb is a credible threat for contaminating high-value, densely populated city centres in countries whose populations are wusses about radiation, a sprawling oil field or petrochemical complex is another thing entirely. The Federation of American Scientists report, “Dirty Bombs: Response to a Threat”, estimates that in the case of a cobalt-salted dirty bomb, residents who lived continuously in the contaminated area for forty years after the detonation would have a one in ten chance of death from cancer induced by the radiation. With the model cesium bomb, five city blocks would be contaminated at a level which would create a one in a thousand chance of cancer for residents.

But this is nothing! To get a little perspective on this, according to the U.S. Centers for Disease Control's Leading Causes of Death Reports, people in the United States never exposed to a dirty bomb have a 22.8% probability of dying of cancer. While the one in ten chance created by the cobalt dirty bomb is a substantial increase in this existing risk, that's the risk for people who live for forty years in the contaminated area. Working in a contaminated oil field is quite different. First of all, it's a lot easier to decontaminate steel infrastructure and open desert than a city, and oil field workers can be issued protective gear to reduce their exposure to the remaining radiation. In any case, they'd only be in the contaminated area for the work day, then return to a clean area at the end of the shift. You could restrict hiring to people 45 years and older, pay a hazard premium, and limit their contract to either a time period (say two years) or based on integrated radiation dose. Since radiation-induced cancers usually take a long time to develop, older workers are likely to die of some other cause before the effects of radiation get to them. (This sounds callous, but it's been worked out in detail in studies of post nuclear war decontamination. The rules change when you're digging out of a hole.)

Next, there is this dumb-as-a-bag-of-dirt statement on p. 127:

Saudi engineers calculated that the soil particulates beneath the surface of most of their three hundred known reserves are so fine that radioactive releases there would permit the contamination to spread widely through the soil subsurface, carrying the radioactivity far under the ground and into the unpumped oil. This gave Petro SE the added benefit of ensuring that even if a new power in the Kingdom could rebuild the surface infrastructure, the oil reserves themselves might be unusable for years.
Hey, you guys in the back—enough with the belly laughs! Did any of the editors at Random House think to work out, even if you stipulated that radioactive contamination could somehow migrate from the surface down through hundreds to thousands of metres of rock (how, due to the abundant rain?), just how much radioactive contaminant you'd have to mix with the estimated two hundred and sixty billion barrels of crude oil in the Saudi reserves to render it dangerously radioactive? In any case, even if you could magically transport the radioactive material into the oil bearing strata and supernaturally mix it with the oil, it would be easy to separate during the refining process.

Finally, there's the question of why, if the Saudis have gone to all the trouble to rig their oil facilities to self-destruct, it has remained a secret waiting to be revealed in this book. From a practical standpoint, almost all of the workers in the Saudi oil fields are foreigners. Certainly some of them would be aware of such a massive effort and, upon retirement, say something about it which the news media would pick up. But even if the secret could be kept, we're faced with the same question of deterrence which arose in the conclusion of Dr. Strangelove with the Soviet doomsday machine—it's idiotic to build a doomsday machine and keep it a secret! Its only purpose is to deter a potential attack, and if attackers don't know there's a doomsday machine, they won't be deterred. Precisely the same logic applies to the putative Saudi self-destruct button.

Now none of this argumentation proves in any way that the Saudis haven't rigged their oil fields to blow up and scatter radioactive material on the debris, just that it would be a phenomenally stupid thing for them to try to do. But then, there are plenty of precedents for the Saudis doing dumb things—they have squandered the greatest fortune in the history of the human race and, while sitting on a quarter of all the world's oil, seen their per capita GDP erode to fall between that of Poland and Latvia. If, indeed, they have done something so stupid as this scorched earth scheme, let us hope they manage the succession to the throne, looming in the near future, in a far more intelligent fashion.

 Permalink

Aagaard, Finn. Aagaard's Africa. Washington: National Rifle Association, 1991. ISBN 0-935998-62-4.
The author was born in Kenya in 1932 and lived there until 1977 when, after Kenya's ban on game hunting destroyed his livelihood as a safari guide, he emigrated to the United States, where he died in April 2000. This book recounts his life in Kenya, from boyhood through his career as a professional hunter and guide. If you find the thought of hunting African wildlife repellent, this is not the book for you. It does provide a fine look at Africa and its animals by a man who clearly cherished the land and the beasts which roam it, and viewed the responsible hunter as an integral part of a sustainable environment. A little forensic astronomy allows us to determine the day on which the kudu hunt described on page 124 took place. Aagaard writes, “There was a total eclipse of the sun that afternoon, but it seemed a minor event to us. Laird and I will always remember that day as ‘The Day We Shot The Kudu’.” Checking the canon of 20th century solar eclipses shows that the only total solar eclipse crossing Kenya during the years when Aagaard was hunting there was on June 30th, 1973: a once-in-a-lifetime spectacle with seven minutes of totality. So, the kudu hunt had to be that morning. To this amateur astronomer, no total solar eclipse is a minor event, and the one I saw in Africa will forever remain a major event in my life. A solar eclipse with seven minutes of totality is something I shall never live to see (the next occurring on June 25th, 2150), so I would have loved to have seen the last and would never have deemed it a “minor event”, but then I've never shot a kudu the morning of an eclipse!

This book is out of print and used copies, at this writing, are offered at outrageous prices. I bought this book directly from the NRA more than a decade ago—books sometimes sit on my shelf a long time before I read them. I wouldn't pay more than about USD 25 for a used copy.

 Permalink

Lefevre, Edwin. Reminiscences of a Stock Operator. New York: John Wiley & Sons, [1923] 1994. ISBN 0-471-05970-6.
This stock market classic is a thinly fictionalised biography of the exploits of the legendary speculator Jesse Livermore, written in the form of an autobiography of “Larry Livingston”. (In 1940, shortly before his death, Livermore claimed that he had actually written the book himself, with writer Edwin Lefevre acting as editor and front-man; I know of no independent confirmation of this claim.) In any case, there are few books you can read which contain so much market wisdom packed into 300 pages of entertaining narrative. The book was published in 1923, and covers Livermore/Livingston's career from his start in the bucket shops of Boston to a millionaire market mover as the great 1920s bull market was just beginning to take off.

Trading was Livermore's life; he ended up making and losing four multi-million dollar fortunes, and was blamed for every major market crash from 1917 through the year of his death, 1940. Here is a picture of the original wild and woolly Wall Street—before the SEC, Glass-Steagall, restrictions on insider trading, and all the other party-pooping innovations of later years. Prior to 1913, there were not even any taxes on stock market profits. Market manipulation was considered (chapter 19) “no more than common merchandising processes”, and if the public gets fleeced, well, that's what they're there for! If you think today's financial futures, options, derivatives, and hedge funds are speculative, check out the description of late 19th century “bucket shops”: off-track betting parlours for stocks, which actually made no transactions in the market at all. Some things never change, however, and anybody who read chapter 23 about media hyping of stocks in the early decades of the last century would have been well cautioned against the “perma-bull” babblers who sucked the public into the dot-com bubble near the top.

 Permalink

August 2005

Barks, Carl. Back to the Klondike. Prescott, AZ: Gladstone, [1953] 1987. ISBN 0-944599-02-8.
When this comic was originally published in 1953, the editors considered Barks's rendition of the barroom fight and Scrooge McDuck's argument with his old flame Glittering Goldie a bit too violent for the intended audience and cut those panels from the first edition. They are restored here, except for four lost panels which have been replaced by a half-page pencil drawing of the fight scene by Barks, inked and coloured in his style for this edition. Ironically, this is one of the first Scrooge comics which shows the heart of gold (hey, he can afford it!) inside the prickly skinflint.

 Permalink

York, Byron. The Vast Left Wing Conspiracy. New York: Crown Forum, 2005. ISBN 1-4000-8238-2.
The 2004 presidential election in the United States was heralded as the coming of age of “new media”: Internet-based activism such as MoveOn, targeted voter contact like America Coming Together, political Weblogs, the Air America talk radio network, and politically-motivated films such as Michael Moore's Fahrenheit 9/11 and Robert Greenwald's Uncovered and Outfoxed. Yet, in the end, despite impressive (in fact unprecedented) fund-raising, membership numbers, and audience figures, the thoroughly conventional Bush campaign won the election, performing better in essentially every way compared to the 2000 results. This book explores what went wrong with the “new politics” revolution, and contains lessons that go well beyond the domain of politics and the borders of the United States.

The many-to-many mass medium which is the Internet provides a means for those with common interests to find one another, organise, and communicate unconstrained by time and distance. MoveOn, for example, managed to sign up 2.5 million members, and this huge number and giddy rate of growth persuaded those involved that they had tapped into a majority which could be mobilised to not only win, but as one of the MoveOn founders said not long before the election, “Yeah, we're going to win by a landslide” (p. 45). But while 2.5 million members is an impressive number, it is quite small compared to the approximately 120 million people who voted in the presidential election. That electorate is made up of about 15 million hard-core liberals and about the same number of uncompromising conservatives. The remaining 90 million are about evenly divided in leaning one direction or another, but are open to persuasion.

The Internet and the other new media appear to have provided a way for committed believers to connect with one another, ending up in an echo chamber where they came to believe that everybody shared their views. The approximately USD 200 million that went into these efforts was spent, in effect, preaching to the choir—reaching people whose minds were already made up. Outreach to swing voters was ineffective because if you're in a community which believes that anybody who disagrees is insane or brainwashed, it's difficult to persuade the undecided. Also, the closed communication loop of believers pushes rhetoric to the extremes, which alienates those in the middle.

Although the innovations in the 2004 campaign had negligible electoral success, they did shift the political landscape away from traditional party organisations to an auxiliary media-savvy network funded by wealthy donors. The consequences of this will doubtless influence U.S. politics in the future. The author, White House correspondent for National Review, writes from a conservative standpoint but had excellent access to the organisations about which he writes in the run-up to the election and provides an inside view of the new politics in the making. You have to take the author's research on faith, however, as there is not a single source citation in the book. The book's title was inspired by a 2001 Slate article, “Wanted: A Vast Left-Wing Conspiracy”; there is no suggestion of the existence of a conspiracy in a legal sense.

 Permalink

Rucker, Rudy. Mathematicians in Love. New York: Tor, 2006. ISBN 0-7653-1584-X.
I read this book in manuscript form; the manuscript was dated 2005-07-28. Now that Tor have issued a hardcover edition, I've added its ISBN to this item. Notes and reviews are available on Rudy's Weblog.

 Permalink

Smith, L. Neil. The Lando Calrissian Adventures. New York: Del Rey, [1983] 1994. ISBN 0-345-39110-1.
This volume collects together the three Lando Calrissian short novels: Lando Calrissian and the Mindharp of Sharu, Lando Calrissian and the Flamewind of Oseon, and Lando Calrissian and the StarCave of ThonBoka, originally published separately in 1983 and now out of print (but readily available second-hand). All three novels together are just 409 mass market paperback pages. I wouldn't usually bother with an item of Star Wars merchandising, but as these yarns were written by one of my favourite science fiction authors, exalted cosmic libertarian L. Neil Smith, I was curious to see what he'd make of a character created by the Lucas organisation. It's pretty good, especially as a gentle introduction for younger readers who might be more inclined to read a story with a Star Wars hook than the more purely libertarian (although no more difficult to read) The Probability Broach (now available in a comic book edition!) or Pallas.

The three novels, which form a continuous story arc and are best read in order, are set in the period after Lando has won the Millennium Falcon in a card game but before he encounters Han Solo and loses the ship to him the same way. Lando is the only character in the Star Wars canon who appears here; if the name of the protagonist and ship were changed, one would scarcely guess the setting was the Star Wars universe, although parts of the “back-story” are filled in here and there, such as how a self-described interstellar gambler and con artiste came to be an expert starship pilot, why the steerable quad-guns on the Falcon “recoil” when they fire like World War II ack-ack guns, and how Lando laid his hands on enough money to “buy an entire city” (p. 408).

Lando's companion in all the adventures is the droid Vuffi Raa, also won in a card game, who is a full-fledged character and far more intriguing than any of the droids in the Star Wars movies. Unlike the stilted and mechanical robots of the films, Vuffi Raa is a highly dextrous starfish-like creature, whose five fractal-branching tentacles can detach and work independently, and who has human-level intelligence, a mysterious past (uncovered as the story progresses), and ethical conflicts between his built-in pacifism and moral obligation to his friends when they are threatened. (The cover art is hideous; Vuffi Raa, an elegant and lithe creature in the story, is shown as something like a squared-off R2-D2 with steel dreadlocks.) Now that computer graphics permits bringing to film any character the mind can imagine, Vuffi Raa would make a marvelous addition to a movie: for once, a robot fully as capable as a human without being even remotely humanoid.

The first novel is more or less straightforward storytelling, while the second and third put somewhat more of a libertarian edge on things. StarCave of ThonBoka does an excellent job of demonstrating how a large organisation built on fear and coercion, regardless how formidably armed, is vulnerable to those who think and act for themselves. This is a theme which fits perfectly with the Star Wars movies which occur in this era, but cannot be more than hinted at within the constraints of a screenplay.

 Permalink

Smith, Edward E. Gray Lensman. Baltimore: Old Earth Books, [1939-1940, 1951] 1998. ISBN 1-882968-12-3.
This is the fourth volume of the Lensman series, following Triplanetary (June 2004), First Lensman (February 2005), and Galactic Patrol (March 2005). Gray Lensman ran in serial form in Astounding Science Fiction from October 1939 through January 1940. This book is a facsimile of the illustrated 1951 Fantasy Press edition, which was revised somewhat from the original magazine serial.

Gray Lensman is one of the most glittering nuggets of the Golden Age of science fiction. In this story, Doc Smith completely redefined the standard for thinking big and created an arena for the conflict between civilisation and chaos that's larger than a galaxy. This single novel has more leaps of the imagination than some other authors content themselves with in their entire careers. Here we encounter the “primary projector”: a weapon which can only be used when no enemy can possibly survive or others observe because the mere knowledge that it exists may compromise its secret (this, in a story written more than a decade before the first hydrogen bomb); the “negasphere”: an object which, while described as based on antimatter, is remarkably similar to a black hole (first described by J.R. Oppenheimer and H. Snyder in 1939, the same year the serial began to run in Astounding); the hyper-spatial tube (like a traversable wormhole); the Grand Fleet (composed of one million combat units); the Z9M9Z Directrix command ship, with its “tank” display 700 feet wide by 80 feet thick able to show the tactical situation in an entire galaxy at once; directed planetary impact weapons; a multi-galactic crime syndicate; insects and worms as allies of the good guys; organ regeneration; and more. Once you've experienced the Doc Smith universe, the Star Wars Empire may feel small and antiquated.

This edition contains two Forewords: the author's original, intended to bring readers who haven't read the earlier books up to speed, and a snarky postmodern excretion by John Clute which is best skipped. If you're reading the Lensman series for the first time (this is my fourth), it's best to start either at the beginning with Triplanetary, or with Galactic Patrol, which was written first and stands on its own, not depending on any of the material introduced in the first two “prequel” volumes.

 Permalink

Jordan, Bill [William Henry]. No Second Place Winner. Concord, NH: Police Bookshelf, [1965] 1989. ISBN 0-936279-09-5.
This thin (114 page) book is one of the all-time classics of gunfighting, written by a man whose long career in the U.S. Border Patrol in an era when the U.S. actually defended its southern border schooled him in the essentials of bringing armed hostilities to an end as quickly and effectively as possible while minimising risk to the lawman. Although there are few pages and many pictures, in a way that's part of the message: there's nothing particularly complicated about winning a gunfight; it's a matter of skill acquired by patient practice until one can perform reliably under the enormous stress of a life-or-death situation. All of the refinements and complexity of “combat shooting” competitions are a fine game, the author argues, but have little to do with real-world situations where a peace officer has no alternative to employing deadly force.

The author stresses repeatedly that one shouldn't attempt to learn the fast draw or double action hip shooting techniques he teaches before having completely mastered single action aimed fire at bullseye targets, and advocates extensive dry-fire practice and training with wax or plastic primer-only practice loads before attempting the fast draw with live ammunition, “unless you wish to develop the three-toed limp of the typical Hollywood ‘gunslinger’” (p. 61). Jordan considers the double action revolver the only suitable weapon for a law officer, but remember that this book was written forty years ago, before the advent of today's light and reliable semiautomatics with effective factory combat loads. Still, the focus is on delivering the first shot to the malefactor's centre of gravity before he pulls the trigger, so magazine capacity and speedy reloading aren't as high priorities as they may be with today's increasingly militarised police.

This book is out of print, but used copies are readily available.

 Permalink

September 2005

Stevenson, David. 1914–1918: The History of the First World War. London: Allen Lane, 2004. ISBN 0-14-026817-0.
I have long believed that World War I was the absolutely pivotal event of the twentieth century, and that understanding its causes and consequences is essential to comprehending subsequent history. Here is an excellent single-volume history of the war for those interested in this tragic and often-neglected epoch of modern history. The author, a professor of International History at the London School of Economics, attempts to balance all aspects of the war: politics, economics, culture, ideology, demographics, and technology, as well as the actual military history of the conflict. The result is a thick (727 page) book which is somewhat heavy going, best read and digested over a period of time rather than in one frontal assault (I read the book over a period of about four months). Those looking for a detailed military history won't find it here; while there is a thorough discussion of grand strategy and evolving war aims, and of the horrific conditions of the largely static trench warfare which characterised most of the war, there is little or no tactical description of individual battles.

The high-level integrated view of the war (and subsequent peacemaking and its undoing) is excellent for understanding the place of the war in modern history. It was World War I which, more than any other event, brought the leviathan modern nation state to its malign maturity: mass conscription, direct taxation, fiat currency, massive public debt, propaganda aimed at citizens, manipulation of the news, rationing, wage and price controls, political intrusion into the economy, and attacks on noncombatant civilians. All of these horrors, which were to characterise the balance of the last century and continue to poison the present, appeared in full force in all the powers involved in World War I. Further, the redrawing of borders which occurred following the liquidation of the German, Austro-Hungarian, and Ottoman empires sowed the seeds of subsequent conflicts, some still underway almost a century later: Yugoslavia, Rwanda, Palestine, and Iraq, to name a few.

The U.S. edition, titled Cataclysm: The First World War as Political Tragedy, is now available in paperback.

 Permalink

Appleton, Victor. Tom Swift and His Submarine Boat. McLean, VA: IndyPublish.com, [1910] 2002. ISBN 1-4043-3567-6.
As usual, I read the electronic edition of this novel published in the Tom Swift and His Pocket Library collection at this site on my PalmOS PDA in random moments of downtime over a couple of months. I've posted an updated electronic edition which corrects typographical errors I noted whilst reading the book, the fourth installment in the original Tom Swift saga.

It's delightful to read a book which uses the word “filibuster” in its original sense: “to take part in a private military action in a foreign country” but somewhat disconcerting to encounter Brazilians speaking Spanish! The diving suits which allow full mobility on the abyssal plain two miles beneath the ocean surface remain as science-fictional as when this novel was written almost a century ago.

 Permalink

Guy, Richard K. Unsolved Problems in Number Theory. 3rd ed. New York: Springer, 2004. ISBN 0-387-20860-7.
Your hard-working and overheated CPU chip does not want you to buy this book! Collected here are hundreds of thorny problems, puzzles, and conjectures, many of which, even if you lack the cerebral horsepower to tackle a formal proof, are candidates for computational searches for solutions or counterexamples (and, indeed, a substantial number of problems posed in the first and second editions have been so resolved, some with quite modest computation by today's standards). In the 18th century, Leonhard Euler conjectured that there was no nontrivial solution to the equation:
a⁵ + b⁵ + c⁵ + d⁵ = e⁵
The problem remained open until 1966 when Lander and Parkin found the counterexample:
27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵
Does the equation:
a⁶ + b⁶ + c⁶ + d⁶ + e⁶ = f⁶
have a nontrivial integer solution? Ladies and gentlemen, start your (analytical) engines! (Problem D1.) There is a large collection of mathematical curiosities here, including a series which grows so slowly it is proportional to the inverse of the Ackermann function (E20), and a conjecture (E16) regarding the esoteric “3x + 1” problem, about which Paul Erdös said, “Mathematics may not be ready for such problems.” The 196 palindrome problem which caused me to burn up three years of computer time some fifteen years ago closes the book (F32). Many (but not all) of the problems to which computer attacks are applicable indicate the status of searches as of 2003, giving you some idea what you're getting into should you be inclined to launch your own.
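
For those tempted to fire up their own analytical engines, the flavour of these computational attacks is easy to convey in a few lines of Python (a sketch only; the serious searches use far cleverer methods and vastly larger ranges than anything shown here):

```python
# Verify the Lander and Parkin (1966) counterexample to Euler's conjecture.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# The 196 palindrome problem: the conjecture is that repeatedly adding a
# number to its digit reversal, starting from 196, never yields a palindrome.
def reverse_and_add(n):
    return n + int(str(n)[::-1])

def is_palindrome(n):
    return str(n) == str(n)[::-1]

n = 196
for _ in range(1000):            # a thousand steps; real searches go vastly further
    n = reverse_and_add(n)
    assert not is_palindrome(n)  # so far, so conjectured
```

The first step gives 196 + 691 = 887, not a palindrome, and no palindrome has ever been found no matter how far the iteration is pushed.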

For a book devoted to one of the most finicky topics in pure mathematics, there are a dismaying number of typographical errors, and not just in the descriptive text. Even some of the LaTeX macros used to typeset the book are bungled, with “@”-form \index entries appearing explicitly in the text. Many of the errors would have been caught by a spelling checker, and there are a number of rather obvious typesetting errors in equations. As the book contains an abundance of “magic numbers” related to the various problems which may figure in computer searches, I would make a point to independently confirm their accuracy before launching any extensive computing project.

 Permalink

Woods, Thomas E., Jr. The Politically Incorrect Guide to American History. Washington: Regnery Publishing, 2004. ISBN 0-89526-047-6.
You know you're getting old when events you lived through start showing up in history textbooks! Upon reaching that milestone (hey, it beats the alternative), you'll inevitably have the same insight which occurs whenever you see media coverage of an event at which you were personally present or read a popular account of a topic which you understand in depth—“Hey, it wasn't like that at all!”…and then you begin to wonder about all the coverage of things about which you don't have direct knowledge.

This short book (246 pages of widely-leaded text with broad margins and numerous sidebars and boxed quotations, asides, and recommendations for further reading) provides a useful antidote to the version of U.S. history currently taught in government brainwashing institutions, written from a libertarian/conservative standpoint. Those who have made an effort to educate themselves on the topics discussed will find little here they haven't already encountered, but those whose only knowledge of U.S. history comes from contemporary textbooks will encounter many eye-opening “stubborn facts” along with source citations to independently verify them (the excellent bibliography is ten pages long).

The topics covered appear to have been selected based on the degree to which the present-day collectivist academic party line is at variance with the facts (although, as Woods points out, in many cases historians specialising in given areas themselves diverge from textbook accounts). This means that while “hot spots” such as the causes of the Civil War, the events leading to U.S. entry in World War I, and the reasons for the Great Depression and the rôle of New Deal programs in ending it are discussed, many others are omitted entirely; the book is suitable as a corrective for those who know an outline of U.S. history but not as an introduction for those college graduates who believe that FDR defeated Santa Anna at the Little Big Horn.

 Permalink

Smith, George O. Venus Equilateral. New York: Del Rey, [1942-1945, 1947, 1976] 1980. ISBN 0-345-28953-6.
During World War II the author worked on one of the most outrageous (and successful) electrical engineering projects of all time—a vacuum tube radio set manufactured in the tens of thousands, designed to be fired from an artillery piece, withstanding an initial acceleration of 20,000 gravities and spinning at 500 revolutions per second—the radio proximity fuze. To relax, he wrote the Venus Equilateral stories, published in Astounding Science Fiction and collected in this volume along with a retrospective written in 1973 for an anthology in memory of long-time Astounding/Analog editor John W. Campbell, Jr.

If you like your science fiction hard, this is about as geeky as it gets:

“The nice thing about this betatron,” said Channing, “is the fact that it can and does run both ends on the same supply. The current and voltage phases are correct so that we do not require two supplies which operate in a carefully balanced condition. The cyclotron is one of the other kinds; though the one supply is strictly D.C., the strength of the field must be controlled separately from the supply to the oscillator that runs the D plates. You're sitting on a fence, juggling knobs and stuff all the time you are bombarding with a cyc.” (From “Recoil”, p. 95)
Notwithstanding such passages, and how quaint an interplanetary radio relay station based on vacuum tubes with a staff of 2700 may seem to modern readers, these are human stories which are, on occasions, breathtaking in their imagination and modernity. The account of the impact of an “efficiency expert” on a technology-based operation in “QRM—Interplanetary” is as trenchant (and funny) as anything in Dilbert. The pernicious effect of abusive patent litigation on innovation, the economics of a technological singularity created by what amounts to a nanotechnological assembler, and the risk of identity theft, are the themes of other stories which it's difficult to imagine having been written half a century ago, along with timeless insights into engineering. One, in particular, from “Firing Line” (p. 259) so struck me when I read it thirty-odd years ago that it has remained in my mind ever since as one of the principal differences between the engineer and the tinkerer, “They know one simple rule about the universe. That rule is that if anything works once, it may be made to work again.” The tinkerer is afraid to touch something once it mysteriously starts to work; an engineer is eager to tear it apart and figure out why. I found the account of the end of Venus Equilateral in “Mad Holiday” disturbing when I first read it, but now see it as a celebration of technological obsolescence as an integral part of progress, to be welcomed, and the occasion for a blow-out party, not long faces and melancholy.

Arthur C. Clarke, who contributes the introduction to this collection, read these stories while engaged in his own war work, in copies of Astounding sent from America by Willy Ley, and acknowledges that these tales of communication relays in space may have played a part in his coming up with that idea.

This book is out of print, but inexpensive used copies are readily available.

 Permalink

Ronson, Jon. The Men Who Stare at Goats. London: Picador, 2004. ISBN 0-330-37548-2.
I'm not quite sure what to make of this book. If you take everything at face value, you're asked to believe that U.S. Army Intelligence harbours a New Age “pentacle in the Pentagon” cabal bent on transforming Special Forces soldiers into “warrior monks” who can walk through walls, become invisible, and kill goats (and presumably the enemy, even if they are not goats) just by staring at them. These wannabe paranormal super-soldiers are responsible for the cruel and inhuman torture of prisoners in Iraq by playing the Barney the Purple Dinosaur song and all-girl Fleetwood Mac covers around the clock, are implicated in the Waco massacre, the Abu Ghraib prison scandal, and the Heaven's Gate suicides, and have “re-activated” Uri Geller in the War on Terror.

Now, stipulating that “military intelligence” is an oxymoron, this still seems altogether too zany to be entirely credible. Lack of imagination is another well-known military characteristic, and all of this seems to be so far outside the box that it's in another universe entirely, say one summoned up by a writer predisposed to anti-American conspiracy theories, endowed with an over-active imagination, who's spent way too much time watching X-Files reruns. Anyway, that's what one would like to believe, since it's rather disturbing to contemplate living in a world in which the last remaining superpower is so disconnected from reality that its Army believes it can field soldiers with…super powers. But, as much as I'd like to dismiss this story as fantasy, I cannot entirely do so. Here's my problem: one of the central figures in the narrative is a certain Colonel John Alexander. Now I happen to know from independent and direct personal contacts that Colonel Alexander is a real person, that he is substantially as described in the book, and is involved in things every bit as weird as those with which he is associated here. So maybe all the rest is made up, but the one data point I can confirm checks out. Maybe it's time to start equipping our evil mutant attack goat legions with Ray-Ban shades! For an earlier, better sourced look at the Pentagon's first foray into psychic spying, see Jim Schnabel's 1997 Remote Viewers.

A U.S. edition is now available, but presently only in hardcover; a U.S. paperback edition is scheduled for April 2006.

 Permalink

October 2005

Sloane, Eric. The Cracker Barrel. Mineola, NY: Dover, [1967] 2005. ISBN 0-486-44101-6.
In the 1960s, artist and antiquarian Eric Sloane wrote a syndicated column, many of the best installments of which are collected in this volume. This is an excellent book for browsing in random order in the odd moment, but like the contents of the eponymous barrel, it's hard to stop after just one, so you may devour the whole thing at one sitting. Hey, at least it isn't fattening!

The column format allowed Sloane to address a variety of topics which didn't permit book-length treatment. There are gems here about word origins, what was good and not so good about “the good old days”, tools and techniques (the “variable wrench” is pure genius), art and the business of being an artist, and much more. Each column is illustrated with one of Sloane's marvelous line drawings. Praise be to Dover for putting this classic back into print where it belongs.

 Permalink

Foden, Giles. Mimi and Toutou Go Forth. London: Penguin, 2004. ISBN 0-14-100984-5.
Only a perfect idiot would undertake to transport two forty-foot mahogany motorboats from London to Cape Town and then onward to Lake Tanganyika by ship, rail, steam tractor, and teams of oxen, there to challenge German dominance of the lake during World War I by attempting to sink a ship three times the length and seven times the displacement of the fragile craft. Fortunately, the Admiralty found just the man in Geoffrey Basil Spicer-Simson, in 1915 the oldest Lieutenant Commander in the Royal Navy, his ascent through the ranks having been retarded due to his proclivity for sinking British ships. Spicer-Simson was an inveterate raconteur of tall tales and insufferable know-it-all (on the ship bound for South Africa he was heard lecturing the Astronomer Royal of Cape Town on the southern constellations), and was eccentric in about as many ways as can be packed into a single human frame. Still, he and his motley team, despite innumerable misadventures (many self-inflicted), got the job done, sinking the ship they had been sent to destroy and capturing another German vessel, the first German warship ever captured by the Royal Navy. Afterward, Spicer-Simson rather blotted his copybook by declining to engage first a German fort and then a warship, both later found to have been “armed” only with wooden dummy guns. His exploits caused him to be worshipped as a god by the Holo-holo tribe, who fashioned clay effigies of him, but rather less impressed the Admiralty who, despite awarding him the DSO, re-assigned him upon his return to the routine desk job he had before the adventure. HMS Mimi and Toutou were the boats under Spicer-Simson's command, soon joined by the captured German ship which was rechristened HMS Fifi. The events described herein (very loosely) inspired C. S. Forester's 1935 novel The African Queen and the 1951 Bogart/Hepburn film.

A U.S. edition is now available, titled Mimi and Toutou's Big Adventure, but at present only in hardcover. A U.S. paperback is scheduled for March 2006.

 Permalink

Radosh, Ronald and Allis Radosh. Red Star over Hollywood. San Francisco: Encounter Books, 2005. ISBN 1-893554-96-1.
The Hollywood blacklist has become one of the most mythic elements of the mid-20th century Red scare. Like most myths, especially those involving tinseltown, it has been re-scripted into a struggle of good (falsely-accused artists defending free speech) versus evil (paranoid witch hunters bent on censorship) at the expense of a large part of the detail and complexity of the actual events. In this book, drawing upon contemporary sources, recently released documents from the FBI and House Committee on Un-American Activities (HUAC), and interviews with surviving participants in the events, the authors patiently assemble the story of what really happened, which is substantially different than the stories retailed by partisans of the respective sides. The evolution of those who joined the Communist Party out of idealism, were repelled by its totalitarian attempts to control their creative work and/or the cynicism of its support for the 1939–1941 Nazi/Soviet pact, yet who risked their careers to save those of others by refusing to name other Party members, is evocatively sketched, along with the agenda of HUAC, which FBI documents now reveal actually had lists of party members before the hearings began, and were thus grandstanding to gain publicity and intimidate the studios into firing those who would not deny Communist affiliations. History isn't as tidy as myth: the accusers were perfectly correct in claiming that a substantial number of prominent Hollywood figures were members of the Communist Party, and the accused were perfectly correct in their claim that apart from a few egregious exceptions, Soviet and pro-communist propaganda was not inserted into Hollywood films. A mystery about one of those exceptions, the 1943 Warner Brothers film Mission to Moscow, which defended the Moscow show trials, is cleared up here. 
I've always wondered why, since many of the Red-baiting films of the 1950s are cult classics, this exemplar of the ideological inverse (released, after all, when the U.S. and Soviet Union were allies in World War II) has never made it to video. Well, apparently those who currently own the rights are sufficiently embarrassed by it that apart from one of the rare prints being run on television, the only place you can see it is at the film library of the Museum of Modern Art in New York or in the archive of the University of Wisconsin. Ronald Radosh is author of Commies (July 2001) and co-author of The Rosenberg File (August 2002).

 Permalink

Kurzweil, Ray. The Singularity Is Near. New York: Viking, 2005. ISBN 0-670-03384-7.
What happens if Moore's Law—the annual doubling of computing power at constant cost—just keeps on going? In this book, inventor, entrepreneur, and futurist Ray Kurzweil extrapolates the long-term faster than exponential growth (the exponent is itself growing exponentially) in computing power to the point where the computational capacity of the human brain is available for about US$1000 (around 2020, he estimates), reverse engineering and emulation of human brain structure permits machine intelligence indistinguishable from that of humans as defined by the Turing test (around 2030), and the subsequent (and he believes inevitable) runaway growth in artificial intelligence leading to a technological singularity around 2045 when US$1000 will purchase computing power comparable to that of all presently-existing human brains and the new intelligence created in that single year will be a billion times greater than that of the entire intellectual heritage of human civilisation prior to that date. He argues that the inhabitants of this brave new world, having transcended biological computation in favour of nanotechnological substrates “trillions of trillions of times more capable” will remain human, having preserved their essential identity and evolutionary heritage across this leap to Godlike intellectual powers. Then what? One might as well have asked an ant to speculate on what newly-evolved hominids would end up accomplishing, as the gap between ourselves and these super cyborgs (some of the precursors of which the author argues are alive today) is probably greater than between arthropod and anthropoid.

Throughout this tour de force of boundless technological optimism, one is impressed by the author's adamantine intellectual integrity. This is not an advocacy document—in fact, Kurzweil's view is that the events he envisions are essentially inevitable given the technological, economic, and moral (curing disease and alleviating suffering) dynamics driving them. Potential roadblocks are discussed candidly, along with the existential risks posed by the genetics, nanotechnology, and robotics (GNR) revolutions which will set the stage for the singularity. A chapter is devoted to responding to critics of various aspects of the argument, in which opposing views are treated with respect.

I'm not going to expound further in great detail. I suspect a majority of people who read these comments will, in all likelihood, read the book themselves (if they haven't already) and make up their own minds about it. If you are at all interested in the evolution of technology in this century and its consequences for the humans who are creating it, this is certainly a book you should read. The balance of these remarks discuss various matters which came to mind as I read the book; they may not make much sense unless you've read it (You are going to read it, aren't you?), but may highlight things to reflect upon as you do.

  • Switching off the simulation. Page 404 raises a somewhat arcane risk I've pondered at some length. Suppose our entire universe is a simulation run on some super-intelligent being's computer. (What's the purpose of the universe? It's a science fair project!) What should we do to avoid having the simulation turned off, which would be bad? Presumably, the most likely reason to stop the simulation is that it's become boring. Going through a technological singularity, either from the inside or from the outside looking in, certainly doesn't sound boring, so Kurzweil argues that working toward the singularity protects us, if we be simulated, from having our plug pulled. Well, maybe, but suppose the explosion in computing power accessible to the simulated beings (us) at the singularity exceeds that available to run the simulation? (This is plausible, since post-singularity computing rapidly approaches its ultimate physical limits.) Then one imagines some super-kid running top to figure out what's slowing down the First Superbeing Shooter game he's running and killing the CPU hog process. There are also things we can do which might increase the risk of the simulation's being switched off. Consider, as I've proposed, precision fundamental physics experiments aimed at detecting round-off errors in the simulation (manifested, for example, as small violations of conservation laws). Once the beings in the simulation twig to the fact that they're in a simulation and that their reality is no more accurate than double precision floating point, what's the point to letting it run?
  • A hundred bits per atom? In the description of the computational capacity of a rock (p. 131), the calculation assumes that 100 bits of memory can be encoded in each atom of a disordered medium. I don't get it; even reliably storing a single bit per atom is difficult to envision. Using the “precise position, spin, and quantum state” of a large ensemble of atoms as mentioned on p. 134 seems highly dubious.
  • Luddites. The risk from anti-technology backlash is discussed in some detail. (“Ned Ludd” himself joins in some of the trans-temporal dialogues.) One can imagine the next generation of anti-globalist demonstrators taking to the streets to protest the “evil corporations conspiring to make us all rich and immortal”.
  • Fundamentalism. Another risk is posed by fundamentalism, not so much of the religious variety, but rather fundamentalist humanists who perceive the migration of humans to non-biological substrates (at first by augmentation, later by uploading) as repellent to their biological conception of humanity. One is inclined, along with the author, simply to wait until these folks get old enough to need a hip replacement, pacemaker, or cerebral implant to reverse a degenerative disease to motivate them to recalibrate their definition of “purely biological”. Still, I'm far from the first to observe that Singularitarianism (chapter 7) itself has some things in common with religious fundamentalism. In particular, it requires faith in rationality (which, as Karl Popper observed, cannot be rationally justified), and that the intentions of super-intelligent beings, as Godlike in their powers compared to humans as we are to Saccharomyces cerevisiae, will be benign and that they will receive us into eternal life and bliss. Haven't I heard this somewhere before? The main difference is that the Singularitarian doesn't just aspire to Heaven, but to Godhood Itself. One downside of this may be that God gets quite irate.
  • Vanity. I usually try to avoid the “Washington read” (picking up a book and flipping immediately to the index to see if I'm in it), but I happened to notice in passing I made this one, for a minor citation in footnote 47 to chapter 2.
  • Spindle cells. The material about “spindle cells” on pp. 191–194 is absolutely fascinating. These are very large, deeply and widely interconnected neurons which are found only in humans and a few great apes. Humans have about 80,000 spindle cells, while gorillas have 16,000, bonobos 2,100 and chimpanzees 1,800. If you're intrigued by what makes humans human, this looks like a promising place to start.
  • Speculative physics. The author shares my interest in physics verging on the fringe, and, turning the pages of this book, we come across such topics as possible ways to exceed the speed of light, black hole ultimate computers, stable wormholes and closed timelike curves (a.k.a. time machines), baby universes, cold fusion, and more. Now, none of these things is in any way relevant to nor necessary for the advent of the singularity, which requires only well-understood mainstream physics. The speculative topics enter primarily in discussions of the ultimate limits on a post-singularity civilisation and the implications for the destiny of intelligence in the universe. In a way they may distract from the argument, since a reader might be inclined to dismiss the singularity as yet another woolly speculation, which it isn't.
  • Source citations. The end notes contain many citations of articles in Wired, which I consider an entertainment medium rather than a reliable source of technological information. There are also references to articles in Wikipedia, where any idiot can modify anything any time they feel like it. I would not consider any information from these sources reliable unless independently verified from more scholarly publications.
  • “You apes wanna live forever?” Kurzweil doesn't just anticipate the singularity, he hopes to personally experience it, to which end (p. 211) he ingests “250 supplements (pills) a day and … a half-dozen intravenous therapies each week”. Setting aside the shots, just envision two hundred and fifty pills each and every day! That's 1,750 pills a week or, if you're awake sixteen hours a day, an average of more than 15 pills per waking hour, or one pill about every four minutes (one presumes they are swallowed in batches, not spaced out, which would make for a somewhat odd social life). Between the year 2000 and the estimated arrival of human-level artificial intelligence in 2030, he will swallow in excess of two and a half million pills, which makes one wonder what the probability of choking to death on any individual pill might be. He remarks, “Although my program may seem extreme, it is actually conservative—and optimal (based on my current knowledge).” Well, okay, but I'd worry about a “strategy for preventing heart disease [which] is to adopt ten different heart-disease-prevention therapies that attack each of the known risk factors” running into unanticipated interactions, given how everything in biology tends to connect to everything else. There is little discussion of the alternative approach to immortality with which many nanotechnologists of the mambo chicken persuasion are enamoured, which involves severing the heads of recently deceased individuals and freezing them in liquid nitrogen in sure and certain hope of the resurrection unto eternal life.
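The pill arithmetic above is easy to verify; here is a quick back-of-the-envelope check in Python, using only the figures quoted in the review (250 pills a day, sixteen waking hours, the years 2000 through 2030):

```python
# Check the pill arithmetic: all input figures are from the review itself.
pills_per_day = 250
per_week = pills_per_day * 7                      # pills per week
per_waking_hour = pills_per_day / 16              # sixteen waking hours a day
minutes_between = (16 * 60) / pills_per_day       # spacing if evenly spread
total_2000_to_2030 = pills_per_day * 30 * 365.25  # thirty years of the regimen

print(per_week)                   # 1750
print(per_waking_hour)            # 15.625 — more than 15 per waking hour
print(minutes_between)            # 3.84 — about one every four minutes
print(round(total_2000_to_2030))  # 2739375 — in excess of 2.5 million
```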

 Permalink

Goscinny, René and Albert Uderzo. Le ciel lui tombe sur la tête. Paris: Albert René, 2005. ISBN 2-86497-170-4.
Credit me with some restraint—I waited ten whole days after volume 33 of the Astérix saga appeared before devouring it in one sitting. If it isn't sufficiently obvious from the author's remark at the end of the album, note that planet “Tadsylwien” is an anagram of “Walt Disney”. The diffuse reflection of the countryside in the spherical spaceship on p. 8 is magnificently done.
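The anagram is easy to confirm programmatically; here is a small check of my own (nothing from the album itself):

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """True if the two strings use exactly the same letters,
    ignoring case and spaces."""
    def normalise(s):
        return Counter(s.lower().replace(" ", ""))
    return normalise(a) == normalise(b)

print(is_anagram("Tadsylwien", "Walt Disney"))  # True
```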

 Permalink

Paul, Pamela. Pornified. New York: Times Books, 2005. ISBN 0-8050-7745-6.
If you've been on the receiving end of Internet junk mail as I've been until I discovered a few technical tricks (here and here) which, along with Annoyance Filter, have essentially eliminated spam from my mailbox, you're probably aware that the popular culture of the Internet is, to a substantial extent, about pornography and that this marvelous global packet switching medium is largely a means for delivering pornography both to those who seek it and those who find it, unsolicited, in their electronic mailboxes or popping up on their screens.

The explosive growth of pornography has been integrally linked with the emergence of new media. In 1973, there were fewer than a thousand pornographic movie theatres in the U.S. (p. 54). Building on the first exponential growth curve driven by home video, the Internet is bringing pornography to everybody connected and reducing the cost asymptotically to zero. On “peer to peer” networks such as Kazaa, 73% of all movie searches are for pornography and 24% of image searches are for child pornography (p. 60).

It's one thing to talk about free speech, but another to ask what the consequences might be of this explosion of consumption of material which is largely directed at men, and which not only objectifies but increasingly, as the standard of “edginess” ratchets upward, degrades women and supplants the complexity of adult human relationships with the fantasy instant gratification of “adult entertainment”.

Mark Schwartz, clinical director of the Masters and Johnson Clinic in St. Louis, hardly a puritanical institution, says (p. 142) “Pornography is having a dramatic effect on relationships at many different levels and in many different ways—and nobody outside the sexual behavior field and the psychiatric community is talking about it.” This book, by Time magazine contributor Pamela Paul, talks about it, interviewing both professionals surveying the landscape and individuals affected in various ways by the wave of pornography sweeping over developed countries connected to the Internet. Paul quotes Judith Coché, a clinical psychologist who teaches at the University of Pennsylvania and has 25 years' experience in therapy practice, as saying (p. 180), “We have an epidemic on our hands. The growth of pornography and its impact on young people is really, really dangerous. And the most dangerous part is that we don't even realize what's happening.”

Ironically, part of this is due to the overwhelming evidence of the pernicious consequences of excessive consumption of pornography and its tendency to progress into addictive behaviour from the Zillman and Bryant studies and others, which have made academic ethics committees reluctant to approve follow-up studies involving human subjects (p. 90). Would you vote, based on the evidence in hand, for a double blind study of the effects of tobacco or heroin on previously unexposed subjects?

In effect, with the technologically-mediated collapse of the social strictures against pornography, we've embarked upon a huge, entirely unplanned, social and cultural experiment unprecedented in human history. This book will make people on both sides of the debate question their assumptions; the author, while clearly appalled by the effects of pornography on many of the people she interviews, is forthright in her opposition to censorship. Even if you have no interest in pornography nor strong opinions for or against it, there's little doubt that the ever-growing intrusiveness and deviance of pornography on the Internet will be a “wedge issue” in the coming battle over the secure Internet, so the message of this book, unwelcome as it may be, should be something which everybody interested in preserving both our open society and the fragile culture which sustains it ponders at some length.

 Permalink

November 2005

Popper, Karl R. The Open Society and Its Enemies. Vol. 2: Hegel and Marx. London: Routledge, [1945, 1962, 1966, 1995] 2003. ISBN 0-415-27842-2.
After tracing the Platonic origins of utopian schemes of top-down social engineering in Volume 1 (December 2003), Popper now turns to the best-known modern exemplars of the genre, Hegel and Marx, starting out by showing Aristotle's contribution to Hegel's philosophy. Popper considers Hegel a complete charlatan and his work a blizzard of obfuscation intended to dull the mind to such an extent that it can believe that the Prussian monarchy (which paid the salaries of Hegel and his acolytes) was the epitome of human freedom. For a work of serious philosophical criticism (there are more than a hundred pages of end notes in small type), Popper is forthrightly acerbic and often quite funny in his treatment of Hegel, whom he disposes of in only 55 pages of this book of 470. (Popper's contemporary, Wittgenstein, gets much the same treatment. See note 51 to chapter 11, for example, in which he calls the Tractatus “reinforced dogmatism that opens wide the door to the enemy, deeply significant metaphysical nonsense…”. One begins to comprehend what possessed Wittgenstein, a year after the publication of this book, to brandish a fireplace poker at Popper.)

Readers who think of Popper as an icon of libertarianism may be surprised at his remarkably positive treatment of Marx, of whom he writes (chapter 13), “Science progresses through trial and error. Marx tried, and although he erred in his main doctrines, he did not try in vain. He opened and sharpened our eyes in many ways. A return to pre-Marxian social science is inconceivable. All modern writers are indebted to Marx, even if they do not know it. … One cannot do justice to Marx without recognizing his sincerity. His open-mindedness, his sense of facts, his distrust of verbiage, and especially of moralizing verbiage, made him one of the world's most influential fighters against hypocrisy and pharisaism. He had a burning desire to help the oppressed, and was fully conscious of the need for proving himself in deeds, and not only in words.”

To be sure, this encomium is the prelude to a detailed critique of Marx's deterministic theory of history and dubious economic notions, but unlike Hegel, Marx is given credit for trying to make sense of phenomena which few others even attempted to study scientifically. Many of the flaws in Marx's work, Popper argues, may be attributed to Marx having imbibed too deeply and uncritically the work of Hegel, and the crimes committed in the name of Marxism the result of those treating his work as received dogma, as opposed to a theory subject to refutation, as Marx himself would have viewed it.

Also surprising is his condemnation, with almost Marxist vehemence, of nineteenth century “unrestrained capitalism”, and enthusiasm for government intervention in the economy and the emergence of the modern welfare state (chapter 20 in particular). One must observe, with the advantage of sixty years hindsight, that F.A. Hayek's less sanguine contemporary perspective in The Road to Serfdom (May 2002) has proved more prophetic. Of particular interest is Popper's advocacy of “piecemeal social engineering”, as opposed to grand top-down systems such as “scientific socialism”, as the genuinely scientific method of improving society, permitting incremental progress by experiments on the margin which are subject to falsification by their results, in the same manner Popper argues the physical sciences function in The Logic of Scientific Discovery.

Permit me to make a few remarks about the physical properties of this book. The paperback seems to have a spine made of triple-reinforced neutronium, and cannot be induced to lie flat by any of the usual stratagems. In fact, when reading the book, one must either use two hands to hold it open or else wedge it open with three fingers against the spine in order to read complete lines of text. This is tiring, particularly since the book is also quite heavy. If you happen to doze off whilst reading (which I'll confess happened a few times during some of the more intricate philosophical arguments), the thing will pop out of your hand, snap shut like a bear trap, and fly off in some random direction—Zzzzzz … CLACK … thud! I don't know what the problem is with the binding—I have any number of O'Reilly paperbacks about the same size and shape which lie flat without the need for any extreme measures. The text is set in a type font in which the distinction between roman and italic type is very subtle—sometimes I had to take off my glasses (I am nearsighted) and eyeball the text close-up to see if a word was actually emphasised, and that runs the risk of a bloody nose if your thumb should slip and the thing snap shut.

A U.S. edition of this volume is now back in print; for a while only Volume 1 was available from Princeton University Press. The U.K. edition of Volume 1 from Routledge remains available.

 Permalink

Spencer, Robert. The Politically Incorrect Guide to Islam (and the Crusades). Washington: Regnery Publishing, 2005. ISBN 0-89526-013-1.
This book has the worthy goal of providing a brief, accessible antidote to the airbrushed version of Islam dispensed by its apologists and echoed by the mass media, and the relentlessly anti-Western account of the Crusades indoctrinated in the history curricula of government schools. Regrettably, the attempt falls short of the mark. The tone throughout is polemical—you don't feel like you're reading about history, religion, and culture so much as that the author is trying to persuade you to adopt his negative view of Islam, with historical facts and citations from original sources trotted out as debating points. This runs the risk of the reader suspecting the author of having cherry-picked source material, omitting that which argues the other way. I didn't find the author guilty of this, but the result is that this book is only likely to persuade those who already agree with its thesis before picking it up, which makes one wonder what's the point.

Spencer writes from an overtly Christian perspective, with parallel “Muhammad vs. Jesus” quotes in each chapter, and statements like, “If Godfrey of Bouillon, Richard the Lionhearted, and countless others hadn't risked their lives to uphold the honor of Christ and His Church thousands of miles from home, the jihadists would almost certainly have swept across Europe much sooner” (p. 160). Now, there's nothing wrong with comparing aspects of Islam to other religions to counter “moral equivalence” arguments which claim that every religion is equally guilty of intolerance, oppression, and incitement to violence, but the near-exclusive focus on Christianity is likely to be off-putting to secular readers and adherents of other religions who are just as threatened by militant, expansionist Islamic fundamentalism as Christians.

The text is poorly proofread; in several block quotations, words are run together without spaces, three times in as many lines on page 110. In the quote from John Wesley on p. 188, the whole meaning is lost when the phrase “cities razed from the foundation” is written with “raised” instead of “razed”.

The author's earlier Islam Unveiled (February 2003) is similarly flawed in tone and perspective. Had I noticed that this book was by the same author, I wouldn't have read it. It's more to read, but the combination of Ibn Warraq's Why I Am Not a Muslim (February 2002) and Paul Fregosi's Jihad in the West (July 2002) will leave you with a much better understanding of the issues than this disappointing effort.

 Permalink

Thorpe, Peter. Why Literature Is Bad for You. Chicago: Nelson-Hall Publishers, 1980. ISBN 0-88229-745-7.
Techies like myself often have little patience with students of the humanities, particularly those argumentative types ill-informed about anything outside their speciality often found around university campuses. After escaping from an encounter with one of these creatures, a common reaction is to shrug one's shoulders and mutter “English majors…”. I'd always assumed it was a selection effect: a career which involves reading made-up stories and then arguing vociferously about small details in them just naturally appeals to dopey people whom those more engaged in the real world inevitably find tedious and irritating. But here's a book written by a professor of English Literature who argues that immersion in the humanities manufactures such people, wrecking the minds and often the lives of those who would have otherwise made well-balanced and successful accountants, scientists, physicians, engineers, or members of other productive professions.

This is either one of the most astonishing exemplars of academic apostasy ever written, or such a dry satire (which, it should be noted, is one of the author's fields of professional interest) that it slips beneath the radar of almost everybody who reads it. Peter Thorpe was a tenured (to be sure, otherwise this book would have been career suicide) associate professor of English at the University of Colorado when, around 1980, he went through what must have been a king-Hell existential mid-life crisis and penned this book which, for all its heresies, didn't wreck his career: here's a recent biography.

In any case, the message is incendiary. A professor of English Literature steps up to the podium to argue that intensive exposure to the Great Books which undergraduate and graduate students in English and their professors consider their “day job” is highly destructive to their psyches, as can be observed by the dysfunctional behaviour manifest in the denizens of a university department of humanities. So dubious is Thorpe that such departments have anything to do with human values, that he consistently encloses “humanities” in scare quotes.

Rather than attempting to recapitulate the arguments of this short and immensely entertaining polemic, I will simply cite the titles of the five parts and list the ways in which Thorpe deems the study of literature pernicious in each.

  1. Seven Types of Immaturity
    “Outgrowing” loved ones; addiction to and fomenting crises; refusal to co-operate deemed a virtue; fatalism as an excuse; self-centredness instead of self-knowledge; lust for revenge; hatred and disrespect for elders and authority.
  2. Seven Avenues to Unawareness
    Imputing “motivation” where it doesn't exist; pigeonholing people into categories; projecting one's own feelings onto others; replacement of one's own feelings with those of others; encouragement of laziness—it's easier to read than to do; excessive tolerance for incompetence; encouraging hostility and aggression.
  3. Five Avenues to Unhappiness
    Clinical or borderline paranoia, obsession with the past, materialism or irrational anti-materialism, expectation of gratitude when none is due, and being so worry-prone as to risk stomach ulcers (lighten up—this book was published two years before the discovery of H. pylori).
  4. Four Ways to Decrease Our Mental Powers
    Misuse of opinion, faulty and false memories, dishonest use of evidence, and belief that ideas do not have consequences.
  5. Four Ways of Failing to Communicate
    Distorting the language, writing poorly, gossiping and invading the privacy of others, and advocating or tolerating censorship.

That's a pretty damning bill of particulars, isn't it? Most of these indictments of the rôle of literature in inducing these dysfunctions are illustrated by fictionalised anecdotes based on individuals the author has encountered in English departments during his career. Some of the stories and arguments for how devotion to literature is the root cause of the pathology of the people who study it seem a tad over the top to this engineer, but then I haven't spent my whole adult life in an English Lit. department! The writing is entertaining and the author remains true to his profession in invoking a multitude of literary allusions to bolster his points. Whatever, it's comforting to believe that when you took advantage of Cliff's Notes to survive those soporific equation-free requirements for graduation you weren't (entirely) being lazy but also protecting your sanity and moral compass!

The book is out of print, but used copies are readily available and inexpensive. Special thanks to the visitor who recommended this book.

 Permalink

Hitchens, Peter. The Abolition of Britain. 2nd. ed. San Francisco: Encounter Books, 2002. ISBN 1-893554-39-2.
History records many examples of the collapse of once great and long-established cultures. Usually, such events are the consequence of military defeat, occupation or colonisation by a foreign power, violent revolution and its totalitarian aftermath, natural disasters, or other dramatic and destructive events. In this book, Peter Hitchens chronicles the collapse, within the span of a single human lifetime (bracketed by the funerals of Winston Churchill in 1965 and Princess Diana in 1997), of the culture which made Britain British, and maintained domestic peace in England and Wales since 1685 (and Scotland since Culloden in 1746) while the Continent was repeatedly convulsed by war and revolution. The collapse in Britain, however, occurred following victory in a global conflict in which, at the start, Britain stood alone against tyranny and barbarism, and although rooted in a time of postwar privation, demotion from great power status, and loss of empire, ran its course as the nation experienced unprecedented and broadly-based prosperity.

Hitchens argues that the British cultural collapse was almost entirely the result of well-intentioned “reform” and “modernisation” knocking out the highly evolved and subtly interconnected pillars which supported the culture, set in motion, perhaps, by immersion in American culture during World War II (see chapter 16—this argument seems rather dubious to me, since many of the postwar changes in Britain also occurred in the U.S., but afterward), and reinforced and accelerated by television broadcasting, the perils of which were prophetically sketched by T.S. Eliot in 1950 (p. 128). When the pillars of a culture (historical memory, national identity and pride, religion and morality, family, language, community, landscape and architecture, decency, and education) are dislodged, even slightly, what ensues is much like the “controlled implosion” demolition of a building, with the Hobbesian forces of “every man for himself” taking the place of gravity in pulling down the structure and creating the essential preconditions for the replacement of bottom-up self-government by self-reliant citizens with authoritarian rule by an élite, such as Tony Blair's ambition of U.S.-style presidential power and, the leviathan to which the road to serfdom leads, the emerging anti-democratic Continental super-state.

This U.S. second edition includes notes which explain British terms and personalities unlikely to be familiar to readers abroad, a preface addressed to American readers, and an afterword discussing the 2001 general election and subsequent events.

 Permalink

Gingerich, Owen. The Book Nobody Read. New York: Penguin Books, 2004. ISBN 0-14-303476-6.
There is something about astronomy which seems to invite obsession. Otherwise, why would intelligent and seemingly rational people expend vast amounts of time and effort to compile catalogues of hundreds of thousands of stars, precisely measure the positions of planets over periods of decades, travel to the ends of the Earth to observe solar eclipses, get up before the crack of noon to see a rare transit of Mercury or Venus, or burn up months of computer time finding every planetary transit in a quarter million year interval around the present? Obsession it may be, but it's also fascinating and fun, and astronomy has profited enormously from the labours of those so obsessed, whether on a mountain top in the dead of winter, or carrying out lengthy calculations when tables of logarithms were the only computational tool available.

This book chronicles one man's magnificent thirty-year obsession. Spurred by Arthur Koestler's The Sleepwalkers, which portrayed Copernicus as a villain and his magnum opus De revolutionibus as “the book that nobody read” (“an all time worst seller”), and by the discovery of an obviously carefully read and heavily annotated first edition in the library of the Royal Observatory in Edinburgh, Scotland, the author, an astrophysicist and Harvard professor of the history of science, found himself inexorably drawn into a quest to track down and examine every extant copy of the first (Nuremberg, 1543) and second (Basel, 1566) editions of De revolutionibus to see whether and where readers had annotated them and so determine how widely the book, of which about a thousand copies were printed in these editions—typical for scientific works at the time—was read. Unlike today, when we've been educated that writing in a book is desecration, readers in the 16th and 17th centuries often made extensive annotations to their books, even assigning students and apprentices the task of copying annotations by other learned readers into their copies.

Along the way Gingerich found himself driving down an abandoned autobahn in the no man's land between East and West Germany, testifying in the criminal trial of a book rustler, discovering the theft of copies which librarians were unaware were missing, tracking down the provenance of pages in “sophisticated” (in the original sense of the word) copies assembled from two or more incomplete originals, attending the auction at Sotheby's of a first edition with a dubious last leaf which sold for US$750,000 (the author, no impecunious naïf in the rare book game, owns two copies of the second edition himself), and discovering the fate of many less celebrated books from that era (toilet paper). De revolutionibus has survived the vicissitudes of the centuries quite well—out of about 1000 original copies of the first and second editions, approximately six hundred exemplars remain.

Aside from the adventures of the Great Copernicus Chase, there is a great deal of information about Copernicus and the revolution he discovered and sparked which dispels many widely-believed bogus notions such as:

  • Copernicus was a hero of secular science against religious fundamentalism. Wrong!   Copernicus was a deeply religious doctor of church law, canon of the Roman Catholic Varmian Cathedral in Poland. He dedicated the book to Pope Paul III.
  • Prior to Copernicus, astronomers relying on Ptolemy's geocentric system kept adding epicycles on epicycles to try to approximate the orbits of the planets. Wrong!   This makes for a great story, but there is no evidence whatsoever for “epicycles on epicycles”. The authoritative planetary ephemerides in use in the age of Copernicus were calculated using the original Ptolemaic system without additional refinements, and there are no known examples of systems with additional epicycles.
  • Copernicus banished epicycles from astronomy. Wrong!   The Copernican system, in fact, included thirty-four epicycles! Because Copernicus believed that all planetary motion was based on circles, just like Ptolemy he required epicycles to approximate motion which wasn't known to be actually elliptical prior to Kepler. In fact, the Copernican system was no more accurate in predicting planetary positions than that of Ptolemy, and ephemerides computed from it were no better.
  • The Roman Catholic Church was appalled by Copernicus's suggestion that the Earth was not the centre of the cosmos and immediately banned his book. Wrong!   The first edition of De revolutionibus was published in 1543. It wasn't until 1616, more than seventy years later, that the book was placed on the Index Librorum Prohibitorum, and in 1620 it was permitted as long as ten specific modifications were made. Outside Italy, few copies even in Catholic countries were censored according to these instructions. In Spain, usually thought of as a hotbed of the Inquisition, the book was never placed on the Index at all. Galileo's personal copy has the forbidden passages marked in boxes and lined through, permitting the original text to be read. There is no evidence of any copy having been destroyed on the orders of the Church, and the Vatican library has three copies of both the first and second editions.

Obviously, if you're as interested as I in eccentric topics like positional astronomy, rare books, the evolution of modern science, and the surprisingly rapid and efficient diffusion of knowledge more than five centuries before the Internet, this is a book you're probably going to read if you haven't already. The only flaw is that the colour plates (at least in the UK paperback edition I read) are terribly reproduced—they all look like nobody bothered to focus the copy camera when the separations were made; plates 4b, 6, and 7a through 7f, which show annotations in various copies, are completely useless because they're so fuzzy the annotations can barely be read, if at all.

 Permalink

December 2005

Malanga, Steven. The New New Left. Chicago: Ivan R. Dee, 2005. ISBN 1-56663-644-2.
This thin book (or long essay—the main text is less than 150 pages) argues that urban politics in the United States has largely been captured by an iron triangle of “tax eaters”: unionised public employees, staff of government-funded social and health services, and elected officials drawn largely from the first two groups and put into office by those groups' power to raise campaign funds, get out the vote, and involve themselves directly in campaigns out of raw self-interest: unlike private sector voters, they are hiring their own bosses.

Unlike traditional big-city progressive politics or the New Left of the 1960s, which were ideologically driven and motivated by a genuine desire to improve the lot of the disadvantaged (even if many of their policy prescriptions proved to be counterproductive in practice), this “new new left” puts its own well-being squarely at the top of the agenda: increasing salaries, defeating attempts to privatise government services, expanding taxpayer-funded programs, and forcing unionisation and regulation onto the private sector through schemes such as “living wage” mandates. The author fears that the steady growth in the political muscle of public sector unions may be approaching or have reached a tipping point: although not yet a numerical majority, their organised clout gives them the power to elect politicians beholden to them, however costly to the productive sector or ultimately disastrous for their cities. Those cities' taxpayers and businesses may choose to vote with their feet for places where they are viewed as valuable members of the community rather than cash cows to be looted.

Chapter 5 dismantles Richard Florida's crackpot “Creative Class” theory, which argues that by taxing remaining workers and businesses even more heavily and spending the proceeds on art, culture, “diversity”, bike paths, and all the other stuff believed to attract the golden children of the dot.com bubble, rust belt cities already devastated by urban socialism can be reborn. Post dot.bomb, such notions are more worthy of a belly laugh than thorough refutation, but if it's counter-examples and statistics you seek, they're here.

The last three chapters focus almost entirely on New York City. I suppose this isn't surprising, both because New York is often at the cutting edge in urban trends in the U.S., and also because the author is a senior fellow at the Manhattan Institute and a contributing editor to its City Journal, where most of this material originally appeared.

 Permalink

Godwin, Robert ed. Friendship 7: The NASA Mission Reports. Burlington, Ontario, Canada: Apogee Books, 1999. ISBN 1-896522-60-2.
This installment in the Apogee NASA Mission Reports series contains original pre- and post-flight documents describing the first United States manned orbital flight, piloted by John Glenn on February 20th, 1962, including a complete transcript of the air-to-ground communications from launch through splashdown. An excerpt from Glenn's postflight debriefing describing his observations from space, including the “fireflies” seen at orbital sunrise, is included, along with a scientific evaluation which, in retrospect, seems to have gotten everything just about right. Glenn's own 13-page report on the flight is among the documents, as is backup pilot Scott Carpenter's report on training for the mission, in which he describes the “extinctospectropolariscope-occulogyrogravoadaptometer”, abbreviated “V-Meter” in order to fit into the spacecraft (p. 110). A companion CD-ROM includes a one-hour NASA film about the mission, with flight day footage from the tracking stations around the globe, and film from the pilot observation camera synchronised with recorded radio communications. An unintentionally funny introduction by the editor (complete with two idiot “it's”-es on consecutive lines) attempts to defend Glenn's 1998 political junket / P.R. stunt aboard socialist space ship Discovery. “If NASA is going to conduct gerontology experiments in orbit, who is more eminently qualified….” Well, a false premise does imply anything, but if NASA were at all genuinely interested in geezers in space independent of political payback, why didn't they also fly John Young, only nine years Glenn's junior, who walked on the Moon, commanded the first flight of the space shuttle, was Chief of the Astronaut Office for ten years, and was a NASA astronaut continuously from 1962 until his retirement in 2004, yet was never given a flight assignment after 1983?
Glenn's competence and courage need no embellishment—and the contrast between the NASA in the days of his first flight and that of his second could not be more stark.

 Permalink

Bockris, John O'M. The New Paradigm. College Station, TX: D&M Enterprises, 2005. ISBN 0-9767444-0-6.
As the nineteenth century gave way to the twentieth, the triumphs of classical science were everywhere apparent: Newton's theories of mechanics and gravitation, Maxwell's electrodynamics, the atomic theory of chemistry, Darwin's evolution, Mendel's genetics, and the prospect of formalising all of mathematics from a small set of logical axioms. Certainly, there were a few little details awaiting explanation: the curious failure to detect ether drift in the Michelson-Morley experiment, the pesky anomalous precession of the perihelion of the planet Mercury, the seeming contradiction between the equipartition of energy and the actual spectrum of black body radiation, the mysterious patterns in the spectral lines of elements, and the source of the Sun's energy, but these seemed matters the next generation of scientists could resolve by building on the firm foundation laid by the last. Few would have imagined that these curiosities would spark a thirty-year revolution in physics which would show the former foundations of science to be valid only in the limits of slow velocities, weak fields, and macroscopic objects.

At the start of the twenty-first century, in the very centennial of Einstein's annus mirabilis, it is only natural to enquire how firm are the foundations of present-day science, and survey the “little details and anomalies” which might point toward scientific revolutions in this century. That is the ambitious goal of this book, whose author's long career in physical chemistry began in 1945 with a Ph.D. from Imperial College, London, and spanned more than forty years as a full professor at the University of Pennsylvania, Flinders University in Australia, and Texas A&M University, where he was Distinguished Professor of Energy and Environmental Chemistry, with more than 700 papers and twenty books to his credit. And it is at this goal that Professor Bockris utterly, unconditionally, and irredeemably fails. By the evidence of the present volume, the author, notwithstanding his distinguished credentials and long career, is a complete idiot.

That's not to say you won't learn some things by reading this book. For example, what do physicists Hendrik Lorentz, Werner Heisenberg, Hannes Alfvén, Albert A. Michelson, and Lord Rayleigh; chemist Amedeo Avogadro; astronomers Chandra Wickramasinghe, Benik Markarian, and Martin Rees; the Weyerhaeuser Company; the Doberman Pinscher dog breed; Renaissance artist Michelangelo; Cepheid variable stars; Nazi propagandist Joseph Goebbels; the Menninger Foundation and the Cavendish Laboratory; evolutionary biologist Richard Dawkins; religious figures Saint Ignatius of Antioch, Bishop Berkeley, and Teilhard de Chardin; parapsychologists York Dobyns and Brenda Dunne; anomalist William R. Corliss; and Centreville, Maryland; Manila in the Philippines; and the Galapagos Islands all have in common?

The “Shaking Pillars of the Paradigm” about which the author expresses sentiments ranging from doubt to disdain in chapter 3 include mathematics (where he considers irrational roots, non-commutative multiplication of quaternions, and the theory of limits among flaws indicative of the “break down” of mathematical foundations [p. 71]), Darwinian evolution, special relativity, what he refers to as “The So-Called General Theory of Relativity”, of whose content he has only the vaguest notion yet is certain is dead wrong, quantum theory (see p. 120 for a totally bungled explanation of Schrödinger's cat in which he seems to think the result depends upon a decision made by the cat), the big bang (which he deems “preposterus” on p. 138) and the Doppler interpretation of redshifts, and naturalistic theories of the origin of life. Chapter 4 begins with the claim that “There is no physical model which can tell us why [electrostatic] attraction and repulsion occur” (p. 163).

And what are those stubborn facts in which the author does believe, or at least argues merit the attention of science, pointing the way to a new foundation for science in this century? Well, that would be: UFOs and alien landings; Kirlian photography; homeopathy and Jacques Benveniste's “imprinting of water”; crop circles; Qi Gong masters remotely changing the half-life of radioactive substances; the Maharishi Effect and “Vedic Physics”; “cold fusion” and the transmutation of base metals into gold (on both of which the author published while at Texas A&M); telepathy, clairvoyance, and precognition; apparitions, poltergeists, haunting, demonic possession, channelling, and appearances of the Blessed Virgin Mary; out of body and near-death experiences; survival after death, communication through mediums including physical manifestations, and reincarnation; and psychokinesis, faith and “anomalous” healing (including the “psychic surgeons” of the Philippines), and astrology. The only apparent criterion for the author's endorsement of a phenomenon appears to be its rejection by mainstream science.

Now, many works of crank science can be quite funny, and entirely worth reading for their amusement value. Sadly, this book is so poorly written it cannot be enjoyed even on that level. In the introduction to this reading list I mention that I don't include books which I didn't finish, but that since I've been keeping the list I've never abandoned a book partway through. Well, my record remains intact, but this one sorely tempted me. The style, if you can call it that, is such that one finds it difficult to believe English is the author's mother tongue, let alone that his doctorate is from a British university at a time when language skills were valued. The prose is often almost physically painful to read. Here is an example, from footnote 37 on page 117—but you can find similar examples on almost any page; I've chosen this one because it is, in addition, almost completely irrelevant to the text it annotates.

Here, it is relevant to describe a corridor meeting with a mature colleague - keen on Quantum Mechanical calculations, - who had not the friends to give him good grades in his grant applications and thus could not employ students to work with him. I commiserated on his situation, - a professor in a science department without grant money. How can you publish I blurted out, rather tactlessly. “Ah, but I have Lili” he said (I've changed his wife's name). I knew Lili, a pleasant European woman interested in obscure religions. She had a high school education but no university training. “But” … I began to expostulate. “It's ok, ok”, said my colleague. “Well, we buy the programs to calculate bond strengths, put it in the computer and I tell Lili the quantities and she writes down the answer the computer gives. Then, we write a paper.” The program referred to is one which solves the Schrödinger equation and provides energy values, e.g., for bond strength in chemical compounds.
Now sit back, close your eyes, and imagine five hundred pages of this; in spelling, grammar, accuracy, logic, and command of the subject matter it reads like a textbook-length Slashdot post. Several recurrent characteristics are manifest in this excerpt. The author repeatedly, though not consistently, capitalises Important Words within Sentences; he uses hyphens where em-dashes are intended, and seems to have invented his own punctuation sign: a comma followed by a hyphen, which is used interchangeably with commas and em-dashes. The punctuation gives the impression that somebody glanced at the manuscript and told the author, “There aren't enough commas in it”, whereupon he went through and added three or four thousand in completely random locations, however inane. There is an inordinate fondness for “e.g.”, “i.e.”, and “cf.”, and they are used in ways which make one suspect the author isn't completely clear on their meaning or the distinctions among them. And regarding the footnote quoted above, did I mention that the author's wife is named “Lily”, and hails from Austria?

Further evidence of the attention to detail and respect for the reader can be found in chapter 3, where most of the source citations in the last thirty pages are incorrect, and in the blank cross-references scattered throughout the text. Not only is it obvious the book has not been fact checked, nor even proofread; it has never even been spelling checked—common words are misspelled all over. Bockris never manages the Slashdot hallmark of misspelling “the”, but on page 475 he misspells “to” as “ot”. Throughout you get the sense that what you're reading is not so much a considered scientific exposition and argument, but rather the raw unedited output of a keystroke capturing program running on the author's computer.

Some readers may take me to task for being too harsh in these remarks, noting that the book was self-published by the author at age 82. (How do I know it was self-published? Because my copy came with the order from Amazon to the publisher to ship it to their warehouse folded inside, and the publisher's address in this document is directly linked to the author.) Well, call me unkind, but permit me to observe that the price of US$34.95, on the very high end for a five-hundred-page paperback, includes no quality discount based on the author's age, nor is there a disclaimer on the front or back cover that the author might not be firing on all cylinders. Certainly, an eminent retired professor ought to be able to call on former colleagues and/or students to review a manuscript which is certain to become an important part of his intellectual legacy, especially as it attempts to expound a new paradigm for science. Even the most cursory editing to remove needless and tedious repetition could knock 100 pages off this book (and eliminating the misinformation and nonsense could probably slim it down to about ten). The vast majority of citations are to secondary sources, many popular science or new age books.

Beyond these drawbacks, Bockris, like many cranks, seems compelled to personally attack Einstein, claiming his work was derivative, hinting at plagiarism, arguing that its significance is less than its reputation implies, and relating an unsourced story claiming Einstein was a poor husband and father (and even if he were, what does that have to do with the correctness and importance of his scientific contributions?). In chapter 2, he rants upon environmental and economic issues, calls for a universal dole (p. 34) for those who do not work (while on p. 436 he decries the effects of just such a dole on Australian youth), and calls (p. 57) for censorship of music, compulsory population limitation, and government mandated instruction in philosophy and religion along with promotion of religious practice. Unlike many radical environmentalists of the fascist persuasion, he candidly observes (p. 58) that some of these measures “could not [be] achieved under the present conditions of democracy”. So, while repeatedly inveighing against the corruption of government-funded science, he advocates what amounts to totalitarian government—by scientists.

 Permalink

Krakauer, Jon. Under the Banner of Heaven. New York: Anchor Books, [2003] 2004. ISBN 1-4000-3280-6.
This book uses the true-crime narrative of a brutal 1984 double murder committed by two Mormon fundamentalist brothers as the point of departure to explore the origin and sometimes violent early history of the Mormon faith, the evolution of Mormonism into a major mainstream religion, and the culture of present-day fundamentalist schismatic sects which continue to practice polygamy within a strictly hierarchical male-dominated society, and believe in personal revelation from God. (It should be noted that these sects, although referring to themselves as Mormon, have nothing whatsoever to do with the mainstream Church of Jesus Christ of Latter-day Saints, which excommunicates leaders of such sects and their followers, and has officially renounced the practice of polygamy since the Woodruff Manifesto of 1890. The “Mormon fundamentalist” sects believe themselves to be the true exemplars of the religion founded by Joseph Smith and reject the legitimacy of the mainstream church.)

Mormonism is almost unique among present-day large (more than 11 million members, about half in the United States) religions in having been established recently (1830) in a modern, broadly literate society, so its history is, for better or for worse, among the best historically documented of all religions. This can, of course, pose problems to any religion which claims absolute truth for its revealed messages, as the history of factionalism and schisms in Mormonism vividly demonstrates. The historical parallels between Islam and Mormonism are discussed briefly, and are well worth pondering: both were founded by new revelations building upon the Bible, both incorporated male domination and plural marriage at the outset, both were persecuted by the existing political and religious establishment, fled to a new haven in the desert, and developed in an environment of existential threats and violent responses. One shouldn't get carried away with such analogies—in particular Mormons never indulged in territorial conquest nor conversion at swordpoint. Further, the Mormon doctrine of continued revelation allows the religion to adapt as society evolves: discarding polygamy and, more recently, admitting black men to the priesthood (which, in the Mormon church, comprises virtually all adult male members).

Obviously, intertwining the story of the premeditated murder of a young mother and her infant committed by people who believed they were carrying out a divine revelation, with the history of a religion whose present-day believers often perceive themselves as moral exemplars in a decadent secular society is bound to be incendiary, and the reaction of the official Mormon church to the publication of the book was predictably negative. This paperback edition includes an appendix which reprints a review of a pre-publication draft of the original hardcover edition by senior church official Richard E. Turley, Jr., along with the author's response which acknowledges some factual errors noted by Turley (and corrected in this edition) while disputing his claim that the book “presents a decidedly one-sided and negative view of Mormon history” (p. 346). While the book is enlightening on each of the topics it treats, it does seem to me that it may try to do too much in too few pages. The history of the Mormon church, exploration of the present-day fundamentalist polygamous colonies in the western U.S., Canada, and Mexico, and the story of how the Lafferty brothers went from zealotry to murder and their apprehension and trials are all topics deserving of book-length treatment; combining them in a single volume invites claims that the violent acts of a few aberrant (and arguably insane) individuals are being used to slander a church of which they were not even members at the time of their crime.

All of the Mormon scriptures cited in the book are available on-line. Thanks to the reader who recommended this book; I'd never have otherwise discovered it.

 Permalink

Lileks, James. Mommy Knows Worst. New York: Three Rivers Press, 2005. ISBN 1-4000-8228-5.
Why did we baby boomers end up so doggone weird? Maybe it's thanks to all the “scientific” advice our parents received from “experts” who seemed convinced that, despite millennia of ever-growing human population, new parents didn't have the slightest clue what to do with babies and small children. James Lileks, who is emerging as one of the most talented and prolific humorists of this young century, collects some of the very best/worst of such advice in this volume, along with his side-splitting comments, as in the earlier volumes on food and interior decoration. Flip the pages and learn, as our parents did, why babies should be turned regularly as they broil in the Sun (pp. 36–42), why doping little snookums with opiates to make the bloody squaller shut up is a bad idea (pp. 44–48), why everything should be boiled, except for those things which should be double boiled (pp. 26, 58–59, 65–68), plus the perfect solution for baby's ears that stick out like air scoops (pp. 32–33). This collection is laugh-out-loud funny from cover to cover; if you're looking for more in this vein, be sure to visit The Institute of Official Cheer and other features on the author's Web site, which now includes a weekly audio broadcast.

 Permalink

Truss, Lynne. Talk to the Hand. London: Profile Books, 2005. ISBN 1-86197-933-9.
Following the runaway success of Eats, Shoots & Leaves (January 2004), one might have expected the author to follow up with another book on grammar, but instead in this outing she opted to confront the “utter bloody rudeness of everyday life”. Not long ago I might have considered these topics unrelated, but after the publication in July 2005 of Strike Out, and the subsequent discussion it engendered, I've come to realise that slapdash spelling and grammar are, as explained on page 23 here, simply one aspect of the rudeness which affronts us from all sides. As Bernard Pivot observed, “[spelling] remains a politeness one owes to our language, and a politeness one owes to those to whom one writes.”

In this book Truss parses rudeness into six categories, and explores how modern technology and society have nearly erased the distinctions between private and public spaces, encouraging or at least reducing the opprobrium of violating what were once universally shared social norms. (Imagine, for example, how shocking it would have seemed in 1965 to overhear the kind of intensely personal or confidential business conversation between two fellow passengers on a train which it is now entirely routine to hear one side of as somebody obliviously chatters into their mobile phone.)

Chapter 2, “Why am I the One Doing This?”, is 23 pages of pure wisdom for designers of business systems, customer relations managers, and designers of user interfaces for automated systems; it perfectly expresses the rage which justifiably overcomes people who feel themselves victimised for the convenience and/or profit of the counterparty in a transaction which is supposedly of mutual benefit. This is a trend which, in my opinion (particularly in computer user interface design), has been going in the wrong direction since I began to rant about it almost twenty years ago.

A U.S. edition is also available.

 Permalink

  2006  

January 2006

Hayward, Steven F. Greatness. New York: Crown Forum, 2005. ISBN 0-307-23715-X.
This book, subtitled “Reagan, Churchill, and the Making of Extraordinary Leaders”, examines the parallels between the lives and careers of these two superficially very different men, in search of the roots of their greatness. Both were underestimated and disdained by their contemporaries (something historical distance has caused many to forget in the case of Churchill, a fact of which Hayward usefully reminds the reader), and both were considered too old for the challenges facing them when they arrived at the summit of power.

The beginning of the Cold War was effectively proclaimed by Churchill's 1946 “Iron Curtain” speech in Fulton, Missouri, and its end foretold by Reagan's “Tear Down this Wall” speech at the Berlin wall in 1987. (Both speeches are worth reading in their entirety, as they have much more to say about the world of their times than the sound bites from them you usually hear.) Interestingly, both speeches were greeted with scorn, and much of Reagan's staff considered it fantasy to imagine and an embarrassment to suggest the Berlin wall falling in the foreseeable future.

Only one chapter of the book is devoted to the Cold War; the bulk explores the experiences which formed the character of these men, their self-education in the art of statecraft, their remarkably similar evolution from youthful liberalism in domestic policy to stalwart confrontation of external threats, and their ability to talk over the heads of the political class directly to the population and instill their own optimism when so many saw only disaster and decline ahead for their societies. Unlike the vast majority of their contemporaries, neither Churchill nor Reagan considered Communism as something permanent—both believed it would eventually collapse due to its own, shall we say, internal contradictions. This short book provides an excellent insight into how they came to that prophetic conclusion.

 Permalink

Bolchover, David. The Living Dead. Chichester, England: Capstone Publishing, 2005. ISBN 1-84112-656-X.
If you've ever worked in a large office, you may have occasionally found yourself musing, “Sure, I work hard enough, but what do all those other people do all day?” In this book, David Bolchover, whose personal work experience in two large U.K. insurance companies caused him to ask this question, investigates and comes to the conclusion, “Not very much”. Quoting statistics such as the fact that 70% of Internet pornography site accesses are during the 9 to 5 work day, and that fully one third of mid-week visitors at a large U.K. theme park are employees who called in sick at work, the author discovers that it is remarkably easy to hold down a white collar job in many large organisations while doing essentially no productive work at all—simply showing up every day and collecting paychecks. While the Internet has greatly expanded the scope of goofing off on the job (type “bored at work” into Google and you'll get in excess of sixteen million hits), it comes in addition to traditional alternatives to work and is often easier to measure. The author estimates that as many as 20% of the employees in large offices contribute essentially nothing to their employer's business—these are the “living dead” of the title. Not only are the employers of these people getting nothing for their salaries, even more tragically, the living dead themselves are wasting their entire working careers and a huge portion of their lives in numbing boredom devoid of the satisfaction of doing something worthwhile.

In large office environments, there is often so little direct visibility of productivity that somebody who either cannot do the work or simply prefers not to can fall into the cracks for an extended period of time—perhaps until retirement. The present office work environment can be thought of as a holdover from the factory jobs of the industrial revolution, but while it is immediately apparent if a machine operator or production line worker does nothing, this may not be evident for office work. (One of the reasons outsourcing may work well for companies is that it forces them to quantify the value of the contracted work, and the outsourcing companies are motivated to better measure the productivity of their staff since they represent a profit centre, as opposed to a cost centre for the company which outsources.)

Back during my blessedly brief career in the management of an organisation which grew beyond the experience base of those who founded it, I found that the only way I could get a sense for what was actually going on in the company, as opposed to what one heard in meetings and read in memoranda, was what I called “Lieutenant Columbo” management—walking around with a little notepad, sitting down with people all over the company, and asking them to explain what they really did—not what their job title said or what their department was supposed to accomplish, but how they actually spent the working day, which was often quite different from what you might have guessed. Another enlightening experience for senior management is to spend a day jacked in to the company switchboard, listening (only) to a sample of the calls coming in from the outside world. I guarantee that anybody who does this for a full working day will end up with pages of notes about things they had no idea were going on. (The same goes for product developers, who should regularly eavesdrop on customer support calls.) But as organisations become huge, the distance between management and where the work is actually done becomes so great that expedients like this cannot bridge the gap: hence the legions of living dead.

The insights in this book extend to why so many business books (some seeming like they were generated by the PowerPoint Content Wizard) are awful and what that says about the CEOs who read them, why mumbo-jumbo like “going forward, we need to grow the buy-in for leveraging our core competencies” passes for wisdom in the business world (while somebody who said something like that at the dinner table would, and should, invite a hail of cutlery and vegetables), and why so many middle managers (the indispensable NCOs of the corporate army) are so hideously bad.

I fear the author may be too sanguine about the prospects of devolving the office into a world of home-working contractors, all entrepreneurial and self-motivated. I wish that world could come into being, and I sincerely hope it does, but one worries that the inner-directed people who prosper in such an environment are the ones who are already productive even in the stultifying environment of today's office. Perhaps a “middle way” such as Jack Stack's Great Game of Business (September 2004), combined with the devolving of corporate monoliths into clusters of smaller organisations as suggested in this book may point the way to dezombifying the workplace.

If you follow this list, you know how few “business books” I read—as this book so eloquently explains, most are hideous. This is one which will open your eyes and make you think.

 Permalink

Ronson, Jon. Them: Adventures with Extremists. New York: Simon & Schuster, 2002. ISBN 0-7432-3321-2.
Journalist and filmmaker Jon Ronson, intrigued by political and religious extremists in modern Western societies, decided to try to get inside their heads by hanging out with a variety of them as they went about their day to day lives on the fringe. Despite his being Jewish, a frequent contributor to the leftist Guardian newspaper, and often thought of as primarily a humorist, he found himself welcomed into the inner circle of characters as diverse as U.K. Muslim fundamentalist Omar Bakri, Randy Weaver and his daughter Rachel, Colonel Bo Gritz, whom he visits while helping to rebuild the Branch Davidian church at Waco, a Grand Wizard of the Ku Klux Klan attempting to remake the image of that organisation with the aid of self-help books, and Dr. Ian Paisley on a missionary visit to Cameroon (where he learns why it's a poor idea to order the “porcupine” in the restaurant when visiting that country).

Ronson is surprised to discover that, as incompatible as the doctrines of these characters may be, they are nearly unanimous in believing the world is secretly ruled by a conspiracy of globalist plutocrats who plot their schemes in shadowy venues such as the Bilderberg conferences and the Bohemian Grove in northern California. So, the author decides to check this out for himself. He stalks the secretive Bilderberg meeting to a luxury hotel in Portugal and discovers to his dismay that the Bilderberg Group stalks back, and that the British Embassy can't help you when they're on your tail. Then, he gatecrashes the bizarre owl god ritual in the Bohemian Grove through the clever expedient of walking in right through the main gate.

The narrative is entertaining throughout, and generally sympathetic to the extremists he encounters, who mostly come across as sincere (if deluded), and running small-time operations on a limited budget. After becoming embroiled in a controversy during a tour of Canada by David Icke, who claims the world is run by a cabal of twelve-foot-tall shape-shifting reptilians and was accused of anti-Semitic hate speech on the grounds that these were “code words” for a Zionist conspiracy, the author ends up concluding that sometimes a twelve-foot-tall alien lizard is just an alien lizard.


Anderson, Brian C. South Park Conservatives. Washington: Regnery Publishing, 2005. ISBN 0-89526-019-0.
Who would have imagined that the advent of “new media”—not just the Internet, but also AM radio after having been freed of the shackles of the “fairness doctrine”, cable television, with its proliferation of channels and the advent of “narrowcasting”, along with the venerable old media of stand-up comedy, cartoon series, and square old books would end up being dominated by conservatives and libertarians? Certainly not the greybeards atop the media pyramid who believed they set the agenda for public discourse and are now aghast to discover that the “people power” they always gave lip service to means just that—the people, not they, actually have the power, and there's nothing they can do to get it back into their own hands.

This book chronicles the conservative new media revolution of the past decade. There's nothing about the new media in themselves which has made it a conservative revolution—it's simply that it occurred in a society in which, at the outset, the media were dominated by an élite in thrall to a collectivist ideology which had little or no traction outside the imperial districts from which they declaimed, while the audience they were haranguing had entirely different beliefs and, when they found media which spoke to them, immediately started to listen and tuned out the well-groomed, dulcet-voiced, insipid propagandists of the conventional wisdom.

One need only glance at the cratering audience figures for the old media (left-wing urban newspapers, television network news, and “mainstream” news-magazines) to see the extent to which they are being shunned. The audience abandoning them is discovering the new media: Web sites, blogs, cable news, and talk radio, which, if one follows a broad enough selection, give a sense of what is actually going on in the world, as opposed to what the editors of the New York Times and the Washington Post decide merits appearing on the front page.

Of course, the new media aren't perfect, but they are diverse—which is doubtless why collectivist partisans of coercive consensus so detest them. Some conservatives may be dismayed by the vulgarity of “South Park” (I'll confess; I'm a big fan), but we partisans of civilisation would be well advised to party down together under a broad and expansive tent. Otherwise, the bastards might kill Kenny with a rocket widget ball.


Dalrymple, Theodore. Our Culture, What's Left of It. Chicago: Ivan R. Dee, 2005. ISBN 1-56663-643-4.
Theodore Dalrymple is the nom de plume of Anthony Daniels, a British physician and psychiatrist who, until his recent retirement, practised in a prison medical ward and public hospital in Birmingham, England. In his early career, he travelled widely, visiting such earthly paradises as North Korea, Afghanistan, Cuba, Zimbabwe (when it was still Rhodesia), and Tanzania, where he acquired an acute sense of the social prerequisites for the individual disempowerment which characterises the third world. This experience superbly equipped him to diagnose the same maladies in the city centres of contemporary Britain; he is arguably the most perceptive and certainly among the most eloquent contemporary observers of that society.

This book is a collection of his columns from City Journal, most dating from 2001 through 2004, about equally divided between “Arts and Letters” and “Society and Politics”. There are gems in both sections: you'll want to re-read Macbeth after reading Dalrymple on the nature of evil and the need for boundaries if humans are not to act inhumanly. Among the chapters of social commentary are a prophetic essay which almost precisely forecast the recent violence in France three years before it happened, one of the clearest statements of the inherent problems of Islam in adapting to modernity, and a persuasive argument against drug legalisation by somebody who spent almost his entire career treating the victims of both illegal drugs and the drug war. Dalrymple has decided to conclude his medical career in down-spiralling urban Britain and move to rural France where, notwithstanding its problems, people still know how to live. Thankfully, he will continue his writing.

Many of these essays can be found on-line at the City Journal site; I've linked to those I cited in the last paragraph. I find that writing this fine is best enjoyed away from the computer, as ink on paper in a serene time, but it's great that one can now read material on-line to decide whether it's worth springing for the book.


Young, Michael. The Rise of the Meritocracy. New Brunswick, NJ: Transaction Publishers, [1958] 1994. ISBN 1-56000-704-4.
The word “meritocracy” has become so commonplace in discussions of modern competitive organisations and societies that you may be surprised to learn the word did not exist before 1958—a year after Sputnik—when the publication of this most curious book introduced the word and concept into the English language. This is one of the oddest works of serious social commentary ever written—so odd, in fact, that its author despaired of its ever seeing print after the manuscript was rejected by eleven publishers before finally appearing, whereupon it was quickly republished by Penguin and has been in print ever since, selling hundreds of thousands of copies and being translated into seven languages.

Even though the author was a quintessential “policy wonk” (he wrote the first postwar manifesto for the British Labour Party, founded the Open University and the Consumer Association, and sat in the House of Lords as Lord Young of Dartington), this is a work of…what shall we call it…utopia? dystopia? future history? alternative history? satire? ironic social commentary? science fiction?…beats me. It has also perplexed many others, including one of the publishers who rejected it on the grounds that “they never published Ph.D. theses”, without observing that the book is cast as a thesis written in the year 2034! Young's dry irony and understated humour have gone right past many readers, especially those unacquainted with English satire, moving them to outrage, as if George Orwell were thought to be advocating Big Brother. (I am well attuned to this phenomenon, having experienced it myself with the Unicard and Digital Imprimatur papers; no matter how obvious you make the irony, somebody, usually in what passes for universities these days, will take it seriously and explode in rage and vituperation.)

The meritocracy of this book is nothing like what politicians and business leaders mean when they parrot the word today (one hopes, anyway)! In the future envisioned here, psychology and the social sciences advance to the point that it becomes possible to determine the IQ of individuals at a young age, and this IQ, combined with the motivation and effort of the person, is an almost perfect predictor of their potential achievement in intellectual work. Given this, Britain is seen evolving from a class system based on heredity and inherited wealth to a caste system sorted by intelligence, with the high-intelligence élite “streamed” through special state schools with their peers, while the lesser endowed are directed toward manual labour, and those on the sorry side of the bell curve find employment as personal servants to the élite, preserving the latter's precious time for the life of the mind and the leisure and recreation it requires.

And yet the meritocracy is a thoroughly socialist society: the crème de la crème become the wise civil servants who direct the deployment of scarce human and financial capital to the needs of the nation in a highly-competitive global environment. Inheritance of wealth has been completely abolished, existing accumulations of wealth confiscated by “capital levies”, and all salaries made equal (although the élite, naturally, benefit from a wide variety of employer-provided perquisites—so is it always, even in merito-egalitopias). The benevolent state provides special schools for the intelligent progeny of working class parents, to rescue them from the intellectual damage their dull families might do, and prepare them for their shining destiny, while at the same time it provides sports, recreation, and entertainment to amuse the mentally modest masses when they finish their daily (yet satisfying, to dullards such as they) toil.

Young's meritocracy is a society where equality of opportunity has completely triumphed: test scores trump breeding, money, connections, seniority, ethnicity, accent, religion, and all of the other ways in which earlier societies sorted people into classes. The result, inevitably, is drastic inequality of results—but, hey, everybody gets paid the same, so it's cool, right? Well, for a while anyway…. As anybody who isn't afraid to look at the data knows perfectly well, there is a strong hereditary component to intelligence. Sorting people into social classes by intelligence will, over the generations, cause the mean intelligence of the largely non-interbreeding classes to drift apart (although there will be regression to the mean among outliers on each side, mobility among the classes due to individual variation will preserve or widen the gap). After a few generations this will result, despite perfect social mobility in theory, in a segregated caste system almost as rigid as that of England at the apogee of aristocracy. Just because “the masses” actually are benighted in this society doesn't mean they can't cause a lot of trouble, especially if incited by rabble-rousing bored women from the élite class. (I warned you this book will enrage those who don't see the irony.) Toward the end of the book, this conflict is building toward a crisis. Anybody who can guess the ending ought to be writing satirical future history themselves.

Actually, I wonder how many of those who missed the satire didn't actually finish the book or simply judged it by the title. It is difficult to read a passage like this one on p. 134 and mistake it for anything else.

Contrast the present — think how different was a meeting in the 2020s of the National Joint Council, which has been retained for form's sake. On the one side sit the I.Q.s of 140, on the other the I.Q.s of 99. On the one side the intellectual magnates of our day, on the other honest, horny-handed workmen more at home with dusters than documents. On the one side the solid confidence born of hard-won achievement; on the other the consciousness of a just inferiority.
Seriously, anybody who doesn't see the satire in this must be none too Swift. Although the book is cast as a retrospective from 2038, and there are passing references to atomic stations, home entertainment centres, school trips to the Moon, and the like, technologically the world seems very much like that of the 1950s. There is one truly frightening innovation, however. On p. 110, discussing the shrinking job market for shop attendants, we're told, “The large shop with its more economical use of staff had supplanted many smaller ones, the speedy spread of self-service in something like its modern form had reduced the number of assistants needed, and piped distribution of milk, tea, and beer was extending rapidly.” To anybody with personal experience with British plumbing and English beer, the mere thought of the latter being delivered through the former is enough to induce dystopic shivers of 1984 magnitude.

Looking backward from almost fifty years on, this book can be read as an alternative history of the last half-century. In the eyes of many with a libertarian or conservative inclination, just when the centuries-long battle against privilege and prejudice was finally being won, in the 1950s and early 60s when Young's book appeared, the dream of equal opportunity so eloquently embodied in Dr. Martin Luther King's “I Have a Dream” speech began to evaporate in favour of equality of results (by forced levelling and dumbing down if that's what it took), group identity and entitlements, and the creation of a permanently dependent underclass from which escape was virtually impossible. The best works of alternative history are those which change just one thing in the past and then let the ripples spread outward over the years. You can read this story as a possible future in which equal opportunity really did completely triumph over egalitarianism in the sixties. For those who assume that would have been an unqualifiedly good thing, here is a cautionary tale well worth some serious reflexion.


February 2006

Randall, Lisa. Warped Passages. New York: Ecco, 2005. ISBN 0-06-053108-8.
The author is one of the most prominent theoretical physicists working today, known primarily for her work on multi-dimensional “braneworld” models of particle physics and gravitation. With Raman Sundrum, she created the Randall-Sundrum models, the papers describing which are among the most highly cited in contemporary physics. In this book, aimed at a popular audience, she explores the revolution in theoretical physics which extra dimensional models have sparked since 1999, finally uniting string theorists, model builders, and experimenters in the expectation of finding signatures of new physics when the Large Hadron Collider (LHC) comes on stream at CERN in 2007.

The excitement among physicists is palpable: there is now reason to believe that the unification of all the forces of physics, including gravity, may not lie forever out of reach at the Planck energy, but somewhere in the TeV range—which will be accessible at the LHC. This book attempts to communicate that excitement to the intelligent layman and, sadly, falls somewhat short of the mark. The problem, in a nutshell, is that while the author is a formidable physicist, she is not, at least at this point in her career, a particularly talented populariser of science. In this book she has undertaken an extremely ambitious task, since laying the groundwork for braneworld models requires recapitulating most of twentieth century physics, including special and general relativity, quantum mechanics, particle physics and the standard model, and the rudiments of string theory. All of this results in a 500 page volume where we don't really get to the new stuff until about page 300. Now, this problem is generic to physics popularisations, but many others have handled it much better; Randall seems compelled to invent an off-the-wall analogy for every single technical item she describes, even when the description itself would be crystal clear to a reader encountering the material for the first time. You almost start to cringe—after every paragraph or two about actual physics, you know there's one coming about water sprinklers, ducks on a pond, bureaucrats shuffling paper, artists mixing paint, drivers and speed traps, and a host of others. There are also far too few illustrations in the chapters describing relativity and quantum mechanics; Isaac Asimov used to consider it a matter of pride to explain things in words rather than using a diagram, but Randall is (as yet) neither the wordsmith nor the explainer that Asimov was, but then who is?

There is a lot to like here, and I know of no other popular source which so clearly explains what may be discovered when the LHC fires up next year. Readers familiar with modern physics might check this book out of the library or borrow a copy from a friend and start reading at chapter 15, or maybe chapter 12 if you aren't up on the hierarchy problem in the standard model. This is a book which could have greatly benefited from a co-author with experience in science popularisation: Randall's technical writing (for example, her chapter in the Wheeler 90th birthday festschrift) is a model of clarity and concision; perhaps with more experience she'll get a better handle on communicating to a general audience.


Warraq, Ibn [pseud.] ed. Leaving Islam. Amherst, NY: Prometheus Books, 2003. ISBN 1-59102-068-9.
Multiculturalists and ardent secularists may contend “all organised religions are the same”, but among all major world religions only Islam prescribes the death penalty for apostasy, which makes these accounts by former Muslims of the reasons for and experience of their abandoning Islam more than just stories of religious doubt. (There is some dispute as to whether the Koran requires death for apostates, or only threatens punishment in the afterlife. Some prominent Islamic authorities, however, interpret surat II:217 and IX:11,12 as requiring death for apostates. Numerous aḥadīth are unambiguous on the point, for example Bukhārī book 84, number 57 quotes Mohammed saying, “Whoever changed his Islamic religion, then kill him”, which doesn't leave a lot of room for interpretation, nor do authoritative manuals of Islamic law such as Reliance of the Traveller, which prescribes (o8.1) “When a person who has reached puberty and is sane voluntarily apostasizes from Islam, he deserves to be killed”. The first hundred pages of Leaving Islam explore the theory and practice of Islamic apostasy in both ancient and modern times.)

The balance of the book consists of personal accounts by apostates, both those born into Islam and converts who came to regret their embrace of what Salman Rushdie has called “that least huggable of faiths”. These testaments range from the tragic (chapter 15) to the philosophical (chapter 29) and the ironically humorous (chapter 37). One common thread which runs through the stories of many apostates is that while they were taught as children to “read” the Koran, what this actually meant was learning enough Arabic script and pronunciation to be able to recite the Arabic text, but without having any idea what it meant. (Very few of the contributors to this book speak Arabic as their mother tongue, and it is claimed [p. 400] that even native Arabic speakers can barely understand the classical Arabic of the Koran, but I don't know the extent to which this is true. In any case, only about 15% of Muslims are native Arabic speakers.) In many of the narratives, disaffection with Islam either began, or was strongly reinforced, when they read the Koran in translation and discovered that the “real Islam” they had imagined as idealistic and benign was, on the evidence of what is regarded as the word of God, nothing of the sort. It is interesting that, unlike the Roman Catholic church before the Reformation, which attempted to prevent non-clergy from reading the Bible for themselves, Islam encourages believers to study the Koran and Ḥadīth, both in the original Arabic and in translation (see for example this official Saudi site). It is ironic that just such study of scripture seems to encourage apostasy, but perhaps this is the case only for those already so predisposed.

Eighty pages of appendices include quotations from the Koran and Ḥadīth illustrating the darker side of Islam and a bibliography of books and list of Web sites critical of Islam. The editor is author of Why I Am Not a Muslim (February 2002), editor of What the Koran Really Says (April 2003), and founder of the Institute for the Secularisation of Islamic Society.


Gurstelle, William. Adventures from the Technology Underground. New York: Clarkson Potter, 2006. ISBN 1-4000-5082-0.
This thoroughly delightful book invites the reader into a subculture of adults who devote their free time, disposable income, and considerable brainpower to defying Mr. Wizard's sage injunction, “Don't try this yourself at home”. The author begins with a handy litmus test to decide whether you're a candidate for the Technology Underground. If you think flying cars are a silly gag from The Jetsons, you don't make the cut. If, on the other hand, you not only think flying cars are perfectly reasonable but can barely comprehend why there isn't already one, ideally with orbital capability, in your own garage right now—it's the bleepin' twenty-first century, fervent snakes—then you “get it” and will have no difficulty understanding what motivates folks to build high powered rockets, giant Tesla coils, flamethrowers, hypersonic rail guns, hundred foot long pumpkin-firing cannons, and trebuchets (if you really want to make your car fly, it's just the ticket, but the operative word is “fly”, not “land”). In a world where basement tinkering and “that looks about right” amateur engineering has been largely supplanted by virtual and vicarious experiences mediated by computers, there remains the visceral attraction of heavy metal, high voltage, volatile chemicals, high velocities, and things that go bang, whoosh, zap, splat, and occasionally kaboom.

A technical section explains the theory and operation of the principal engine of entertainment in each chapter. The author does not shrink from using equations where useful to clarify design trade-offs; flying car fans aren't going to be intimidated by the occasional resonant transformer equation! The principles of operation of the various machines are illustrated by line drawings, but there isn't a single photo in the book, which is a real shame. Three story tall diesel-powered centrifugal pumpkin hurling machines, a four story 130 kW Tesla coil, and a calliope with a voice consisting of seventeen pulsejets are something one would like to see as well as read about, however artfully described.


Mullane, Mike. Riding Rockets. New York: Scribner, 2006. ISBN 0-7432-7682-5.
Mike Mullane joined NASA in 1978, one of the first group of astronauts recruited specifically for the space shuttle program. An Air Force veteran of 134 combat missions in Vietnam as back-seater in the RF-4C reconnaissance version of the Phantom fighter (imperfect eyesight disqualified him from pilot training), he joined NASA as a mission specialist and eventually flew on three shuttle missions: STS-41D in 1984, STS-27 in 1988, and STS-36 in 1990, the latter two classified Department of Defense missions for which he was twice awarded the National Intelligence Medal of Achievement. (Receipt of this medal was, at the time, itself a secret, but was declassified after the collapse of the Soviet Union. The work for which the medals were awarded remains secret to this day.)

As a mission specialist, Mullane never maneuvered the shuttle in space nor landed it on Earth, nor did he perform a spacewalk, mark any significant “first” in space exploration, or establish any records apart from being part of the crew of STS-36, which flew the highest inclination (62°) orbit of any human spaceflight so far. What he has done here is write one of the most enlightening, enthralling, and brutally honest astronaut memoirs ever published, far and away the best describing the shuttle era. All of the realities of NASA in the 1980s which were airbrushed out by Public Affairs Officers with the complicity of an astronaut corps who knew that to speak to an outsider about what was really going on would mean they'd never get another flight assignment are dealt with head-on: the dysfunctional, intimidation- and uncertainty-based management culture, the gap between what astronauts knew about the danger and unreliability of the shuttle and what NASA was telling Congress and the public, the conflict between battle-hardened military astronauts and perpetual student post-docs recruited as scientist-astronauts, the shameless toadying to politicians, and the perennial over-promising of shuttle capabilities and consequent corner-cutting and workforce exhaustion. (Those of a libertarian bent might wish they could warp back in time, shake the author by the shoulders, and remind him, “Hey dude, you're working for a government agency!”)

The realities of flying a space shuttle mission are described without any of the sugar-coating or veiled references common in other astronaut accounts, and always with a sense of humour. The deep-seated dread of strapping into an experimental vehicle with four million pounds of explosive fuel and no crew escape system is discussed candidly, along with the fact that, while universally shared by astronauts, it was, of course, never hinted to outsiders, even passengers on the shuttle who were told it was a kind of very fast, high-flying airliner. Even if the shuttle doesn't kill you, there's still the toilet to deal with, and any curiosity you've had about that particular apparatus will not outlast your finishing this book (the on-orbit gross-out prank on p. 179 may be too much even for “South Park”). Barfing in space and the curious and little-discussed effects of microgravity on the male and female anatomy which may someday contribute mightily to the popularity of orbital tourism are discussed in graphic detail. A glossary of NASA jargon and acronyms is included but there is no index, which would be a valuable addition.


Kelleher, Colm A. and George Knapp. Hunt for the Skinwalker. New York: Paraview Pocket Books, 2005. ISBN 1-4165-0521-0.
Memo to file: if you're one of those high-strung people prone to be rattled by the occasional bulletproof wolf, flying refrigerator, disappearing/reappearing interdimensional gateway, lumbering giant humanoid, dog-incinerating luminous orb, teleporting bull, and bloodlessly eviscerated cow, don't buy a ranch, even if it's a terrific bargain, whose very mention makes American Indians in the neighbourhood go “woo-woo” and slowly back away from you. That's what Terry Sherman (“Tom Gorman” in this book) and family did in 1994, walking into, if you believe their story, a seething nexus of the paranormal so weird and intense that Chris Carter could have saved a fortune by turning the “X-Files” into a reality show about their life. The Shermans found living with things which don't just go bump in the night but also slaughter their prize livestock and working dogs so disturbing they jumped at the opportunity to unload the place in 1996, when the National Institute for Discovery Science (NIDS), a private foundation investigating the paranormal funded by real estate tycoon and inflatable space station entrepreneur Robert Bigelow, offered to buy them out in order to establish a systematic on-site investigation of the phenomena. (The NIDS Web site does not appear to have been updated since late 2004; I don't know if the organisation is still in existence or active.)

This book, co-authored by the biochemist who headed the field team investigating the phenomena and the television news reporter who covered the story, describes events on the ranch both before and during the scientific investigation. As is usual in such accounts, all the really weird stuff happened before the scientists arrived on the scene with their cameras, night vision scopes, radiation meters, spectrometers, and magnetometers (why is it always magnetometers, anyway?) and set up shop in their “command and control centre” (a.k.a. trailer—summoning to mind the VW bus “mobile command post” in The Lone Gunmen). Afterward, there was only the rare nocturnal light, mind-controlling black-on-black flying object, and transdimensional tunnel sighting (is an orange pulsating luminous orb which disgorges fierce four hundred pound monsters a “jackal lantern”?), none, of course, captured on film or video, nor registered on any other instrument.

All this observation and investigation serves as the launch pad for eighty pages of speculation about causes, natural and supernatural, including the military, shape-shifting Navajo witches, extraterrestrials, invaders from other dimensions, hallucination-inducing shamanism, bigfoot, and a muddled epilogue which illustrates why biochemists and television newsmen should seek the advice of a physicist before writing about speculative concepts in modern physics. The conclusion is, unsurprisingly: “inconclusive.”

Suppose, for a moment, that all of this stuff really did happen, more or less as described. (Granted, that is a pretty big hypothetical, but then the family who first experienced the weirdness never seems to have sought publicity or profit from their experiences, and this book is the first commercial exploitation of the events, coming more than ten years after they began.) What could possibly be going on? Allow me to humbly suggest that the tongue-in-cheek hypothesis advanced in my 1997 paper Flying Saucers Explained, combined with some kind of recurring “branestorm” opening and closing interdimensional gates in the vicinity, might explain many of the otherwise enigmatic, seemingly unrelated, and nonsensical phenomena reported in this and other paranormal “hot spots”.


March 2006

Ferrigno, Robert. Prayers for the Assassin. New York: Scribner, 2006. ISBN 0-7432-7289-7.
The year is 2040. The former United States have fissioned into the coast-to-coast Islamic Republic in the north and the Bible Belt from Texas eastward to the Atlantic, with the anything-goes Nevada Free State acting as a broker between them, pressure relief valve, and window to the outside world. The collapse of the old decadent order was triggered by the nuclear destruction of New York and Washington, and the radioactive poisoning of Mecca by a dirty bomb in 2015, confessed to by an agent of the Mossad, who revealed a plot to set the Islamic world and the West against one another. In the aftermath, a wave of Islamic conversion swept the West, led by the glitterati and opinion leaders, with hold-outs fleeing to the Bible Belt, which co-exists with the Islamic Republic in a state of low intensity warfare. China has become the world's sole superpower, with Russia, reaping the benefit of refugees from overrun Israel, the high-technology centre.

This novel is set in the Islamic Republic, largely in the capital of Seattle (no surprise—even pre-transition, that's where the airheads seem to accrete, and whence bad ideas and flawed technologies seep out to despoil the heartland). The society sketched is believably rich and ambiguous: Muslims are divided into “modern”, “moderate”, and “fundamentalist” communities which more or less co-exist, like the secular, religious, and orthodox communities in present-day Israel. Many Catholics have remained in the Islamic Republic, reduced to dhimmitude and limited in their career aspirations, but largely left alone as long as they keep to themselves. The Southwest, with its largely Catholic hispanic population, is a zone of relative personal liberty within the Islamic Republic, much like Kish Island in Iran. Power in the Islamic Republic, as in Iran, is under constant contention among national security, religious police, the military, fanatic “fedayeen”, and civil authority, whose scheming against one another leaves cracks in which the clever can find a modicum of freedom.

But the historical events upon which the Islamic Republic is founded may not be what they seem, and the protagonists, the adopted but estranged son and daughter of the shadowy head of state security, must untangle decades of intrigue and misdirection to find the truth and make it public. There are some thoughtful and authentic touches in the world sketched in this novel: San Francisco has become a hotbed of extremist fundamentalism, which might seem odd until you reflect that moonbat collectivism and environmentalism share much of the same desire to make the individual submit to externally imposed virtue which suffuses radical Islam. Properly packaged and marketed, Islam can be highly attractive to disillusioned leftists, as the conversion of Carlos “the Jackal” from fanatic Marxist to “revolutionary Islam” demonstrates.

There are a few goofs. Authors who include nuclear weapons in their stories really ought to seek the advice of somebody who knows about them, or at least research them in the Nuclear Weapons FAQ. The “fissionable fuel rods from a new Tajik reactor…made from a rare isotope, supposedly much more powerful than plutonium” on p. 212, purportedly used to fabricate a five megaton bomb, is the purest nonsense in just about every way imaginable. First of all, there are no isotopes, rare or otherwise, which are better than highly enriched uranium (HEU) or plutonium for fission weapons. Second, there's no way you could possibly make a five megaton fission bomb, regardless of the isotope you used—to get such a yield you'd need so much fission fuel that it would be much more than a critical mass and predetonate, which would ruin your whole day. The highest yield fission bomb ever built was Ted Taylor's Mk 18F Super Oralloy Bomb (SOB), which contained about four critical masses of U-235, and depended upon the very low neutron background of HEU to permit implosion assembly before predetonation. The SOB had a yield of about 500 kt; with all the short half-life junk in fuel rods, there's no way you could possibly approach that yield, not to speak of something ten times as great. If you need high yield, tritium boosting or a full-fledged two-stage Teller-Ulam fusion design is the only way to go. The author also shares the common misconception in thrillers that radiation is something like an infectious disease which permanently contaminates everything it touches. Unfortunately, this fallacy plays a significant part in the story.
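The arithmetic behind this objection is easy to check. Here is a back-of-envelope sketch in Python; the 17.6 kt/kg and 52 kg figures are standard textbook values for U-235 (complete-fission energy yield and approximate bare-sphere critical mass of weapons-grade HEU), not anything taken from the novel:

```python
# Back-of-envelope check: why a 5 Mt pure-fission bomb is implausible.
KT_PER_KG = 17.6        # yield from complete fission of 1 kg of U-235, in kt TNT
CRIT_MASS_KG = 52.0     # approximate bare-sphere critical mass of weapons-grade HEU

claimed_yield_kt = 5000.0                 # the novel's "five megaton" bomb
fuel_kg = claimed_yield_kt / KT_PER_KG    # mass that must fission *completely*
crit_masses = fuel_kg / CRIT_MASS_KG

print(f"{fuel_kg:.0f} kg of U-235 would have to fission completely")
print(f"that is {crit_masses:.1f} bare critical masses, before any efficiency loss")
```

Since real weapons fission only a fraction of their fuel, the actual pit would have to be several times larger still, and there is no way to assemble dozens of critical masses before predetonation.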

Still, this is a well-crafted page-turner which, like the best alternative history, is not only entertaining but will make you think. The blogosphere has been chattering about this book (that's where I came across it), and they're justified in recommending it. The Web site for the book, complete with Flash animation and an annoying sound track, includes background information and the author's own blog with links to various reviews.

 Permalink

Susskind, Leonard. The Cosmic Landscape. New York: Little, Brown, 2006. ISBN 0-316-15579-9.
Leonard Susskind (and, independently, Yoichiro Nambu) co-discovered the original hadronic string theory in 1969. He has been a prominent contributor to a wide variety of topics in theoretical physics over his long career, and is a talented explainer of abstract theoretical concepts to the general reader. This book communicates both the physics and cosmology of the “string landscape” (a term he coined in 2003) revolution which has swiftly become the consensus among string theorists, as well as the intellectual excitement of those exploring this new frontier.

The book is subtitled “String Theory and the Illusion of Intelligent Design”, which may be better marketing copy—controversy sells—than descriptive of the contents. There is very little explicit discussion of intelligent design in the book at all except in the first and last pages, and what is meant by “intelligent design” is not what the reader might expect: not design arguments about the origin and evolution of life, but rather the apparent fine-tuning of the physical constants of our universe, the cosmological constant in particular, without which life as we know it (and, in many cases, not just life but even atoms, stars, and galaxies) could not exist. Susskind is eloquent in describing why the discovery that the cosmological constant, which virtually every theoretical physicist would have bet had to be precisely zero, is (apparently) a tiny positive number, seemingly fine-tuned to one hundred and twenty decimal places, “hit us like the proverbial ton of bricks” (p. 185)—here was a number which not only did theory suggest should be 120 orders of magnitude greater, but which, had it been slightly larger than its minuscule value, would have precluded structure formation (and hence life) in the universe. One can imagine some as-yet-undiscovered mathematical explanation of why a value is precisely zero (and, indeed, physicists did: it's called supersymmetry, and searching for evidence of it is one of the reasons they're spending billions in taxpayer funds to build the Large Hadron Collider), but when you come across a dial set with the almost ridiculous precision of 120 decimal places, and it's a requirement for our own existence, thoughts of a benevolent Creator tend to creep into the mind of even the most doctrinaire scientific secularist.
This is how the appearance of “intelligent design” (as the author defines it) threatens to get into the act, and the book is an exposition of the argument string theorists and cosmologists have developed to contend that such apparent design is entirely an illusion.

The very title of the book, then, invites us to contrast two theories of the origin of the universe: “intelligent design” and the “string landscape”. So, let's accept that challenge and plunge right in, shall we? First of all, permit me to observe that despite frequent claims to the contrary, including some in this book, intelligent design need not presuppose a supernatural being operating outside the laws of science and/or inaccessible to discovery through scientific investigation. The origin of life on Earth due to deliberate seeding with engineered organisms by intelligent extraterrestrials is a theory of intelligent design which has no supernatural component, evidence of which may be discovered by science in the future, and which is sufficiently plausible to have persuaded Francis Crick, co-discoverer of the structure of DNA, that it was the most likely explanation. If you observe a watch, you're entitled to infer the existence of a watchmaker, but there's no reason to believe he's a magician, just a craftsman.

If we're to compare these theories, let us begin by stating them both succinctly:

Theory 1: Intelligent Design.   An intelligent being created the universe and chose the initial conditions and physical laws so as to permit the existence of beings like ourselves.

Theory 2: String Landscape.   The laws of physics and initial conditions of the universe are chosen at random from among 10^500 possibilities, only a vanishingly small fraction of which (probably no more than one in 10^120) can support life. The universe we observe, which is infinite in extent and may contain regions where the laws of physics differ, is one of an infinite number of causally disconnected “pocket universes” which spontaneously form from quantum fluctuations in the vacuum of parent universes, a process which has been occurring for an infinite time in the past and will continue in the future, time without end. Each of these pocket universes which, together, make up the “megaverse”, has its own randomly selected laws of physics, and hence the overwhelming majority are sterile. We find ourselves in one of the tiny fraction of hospitable universes because if we weren't in such an exceptionally rare universe, we wouldn't exist to make the observation. Since there are an infinite number of universes, however, every possibility not only occurs, but occurs an infinite number of times, so not only are there an infinite number of inhabited universes, there are an infinite number identical to ours, including an infinity of identical copies of yourself wondering if this paragraph will ever end. Not only does the megaverse spawn an infinity of universes, each universe itself splits into two copies every time a quantum measurement occurs. Our own universe will eventually spawn a bubble which will destroy all life within it, probably not for a long, long time, but you never know. Evidence for all of the other universes is hidden behind a cosmic horizon and may remain forever inaccessible to observation.

Paging Friar Ockham! If unnecessarily multiplying hypotheses are stubble indicating a fuzzy theory, it's pretty clear which of these is in need of the razor! Further, while one can imagine scientific investigation discovering evidence for Theory 1, almost all of the mechanisms which underlie Theory 2 remain, barring some conceptual breakthrough equivalent to looking inside a black hole, forever hidden from science by an impenetrable horizon through which no causal influence can propagate. So severe is this problem that chapter 9 of the book is devoted to the question of how far theoretical physics can go in the total absence of experimental evidence. What's more, unlike virtually every theory in the history of science, which attempted to describe the world we observe as accurately and uniquely as possible, Theory 2 predicts every conceivable universe and says, hey, since we do, after all, inhabit a conceivable universe, it's consistent with the theory. To one accustomed to the crystalline inevitability of Newtonian gravitation, general relativity, quantum electrodynamics, or the laws of thermodynamics, this seems by comparison like a California blonde saying “whatever”—the cosmology of despair.

Scientists will, of course, immediately rush to attack Theory 1, arguing that a being such as the one it posits would necessarily be “indistinguishable from magic”, capable of explaining anything, and hence unfalsifiable and beyond the purview of science. (Although note that on pp. 192–197 Susskind argues that Popperian falsifiability should not be a rigid requirement for a theory to be deemed scientific. See Lee Smolin's Scientific Alternatives to the Anthropic Principle for the argument against the string landscape theory on the grounds of falsifiability, and the 2004 Smolin/Susskind debate for a more detailed discussion of this question.) But let us look more deeply at the attributes of what might be called the First Cause of Theory 2. It not only permeates all of our universe, potentially spawning a bubble which may destroy it and replace it with something different, it pervades the abstract landscape of all possible universes, populating them with an infinity of independent and diverse universes over an eternity of time: omnipresent in spacetime. When a universe is created, all the parameters which govern its ultimate evolution (under the probabilistic laws of quantum mechanics, to be sure) are fixed at the moment of creation: omnipotent to create any possibility, perhaps even varying the mathematical structures underlying the laws of physics. As a budded-off universe evolves, whether a sterile formless void or teeming with intelligent life, no information is ever lost in its quantum evolution, not even down a black hole or across a cosmic horizon, and every quantum event splits the universe and preserves all possible outcomes. The ensemble of universes is thus omniscient of all its contents. Throw in intelligent and benevolent, and you've got the typical deity, and since you can't observe the parallel universes where the action takes place, you pretty much have to take it on faith. Where have we heard that before?

Lest I be accused of taking a cheap shot at string theory, or advocating a deistic view of the universe, consider the following creation story which, after John A. Wheeler, I shall call “Creation without the Creator”. Many extrapolations of continued exponential growth in computing power envision a technological singularity in which super-intelligent computers designing their own successors rapidly approach the ultimate physical limits on computation. Such computers would be sufficiently powerful to run highly faithful simulations of complex worlds, including intelligent beings living within them which need not be aware they were inhabiting a simulation, but thought they were living at the “top level”, who eventually passed through their own technological singularity, created their own simulated universes, populated them with intelligent beings who, in turn,…world without end. Of course, each level of simulation imposes a speed penalty (though, perhaps not much in the case of quantum computation), but it's not apparent to the inhabitants of the simulation since their own perceived time scale is in units of the “clock rate” of the simulation.

If an intelligent civilisation develops to the point where it can build these simulated universes, will it do so? Of course it will—just look at the fascination crude video game simulations have for people today. Now imagine a simulation as rich as reality and unpredictable as tomorrow, actually creating an inhabited universe—who could resist? As unlimited computing power becomes commonplace, kids will create innovative universes and evolve them for billions of simulated years for science fair projects. Call the mean number of simulated universes created by intelligent civilisations in a given universe (whether top-level or itself simulated) the branching factor. If this is greater than one, and there is a single top-level non-simulated universe, then it will be outnumbered by simulated universes which grow exponentially in numbers with the depth of the simulation. Hence, by the Copernican principle, or principle of mediocrity, we should expect to find ourselves in a simulated universe, since they vastly outnumber the single top-level one, which would be an exceptional place in the ensemble of real and simulated universes. Now here's the point: if, as we should expect from this argument, we do live in a simulated universe, then our universe is the product of intelligent design and Theory 1 is an absolutely correct description of its origin.
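The counting argument here can be made concrete with a few lines of Python; a minimal sketch, in which the branching factors tried are arbitrary illustrative values:

```python
def top_level_fraction(b: float, depth: int) -> float:
    """Fraction of all universes that is the single top-level one, if every
    universe (real or simulated) spawns b simulations, down to `depth` levels."""
    total = sum(b ** k for k in range(depth + 1))   # 1 + b + b^2 + ... + b^depth
    return 1.0 / total

# With any branching factor above one, the top level is quickly swamped:
for b in (1.1, 2.0, 10.0):
    print(f"b = {b:4}: the top level is 1 universe in {1 / top_level_fraction(b, 10):,.0f}")
```

Even a branching factor barely above one makes the single unsimulated universe a small minority after a modest number of generations, which is all the mediocrity argument needs.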

Suppose this is the case: we're inside a simulation designed by a freckle-faced superkid for extra credit in her fifth grade science class. Is this something we could discover, or must it, like so many aspects of Theory 2, be forever hidden from our scientific investigation? Surprisingly, this variety of Theory 1 is quite amenable to experiment: neither revelation nor faith is required. What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.

Might we expect surprises as we subject our simulated universe to ever more precise scrutiny, perhaps even astonishing the being which programmed it with our cunning and deviousness (as the author of any software package has experienced at the hands of real-world users)? Who knows, we might run into round-off errors which “hit us like a ton of bricks”! Suppose there were some quantity, say, that was supposed to be exactly zero but, if you went and actually measured the geometry way out there near the edge and crunched the numbers, you found out it differed from zero in the 120th decimal place. Why, you might be as shocked as the naïve Perl programmer who ran the program “printf("%.18f", 0.2)” and was aghast when it printed “0.200000000000000011” until somebody explained that with 53 bits of significand in IEEE double precision floating point, you only get about 16 decimal digits (log10 2^53) of precision. So, what does a round-off in the 120th digit imply? Not Theory 2, with its infinite number of infinitely reproducing infinite universes, but simply that our Theory 1 intelligent designer used 400-bit numbers (log2 10^120) in the simulation and didn't count on our noticing—remember you heard it here first, and if pointing this out causes the simulation to be turned off, sorry about that, folks!
Surprises from future experiments which would be suggestive (though not probative) that we're in a simulated universe would include failure to find any experimental signature of quantum gravity (general relativity could be classical in the simulation, since potential conflicts with quantum mechanics would be hidden behind event horizons in the present-day universe, and extrapolating backward to the big bang would be meaningless if the simulation were started at a later stage, say at the time of big bang nucleosynthesis), and discovery of limits on the ability to superpose wave functions for quantum computation which could result from limited precision in the simulation as opposed to the continuous complex values assumed by quantum mechanics. An interesting theoretical program would be to investigate feasible experiments which, by magnifying physical effects similar to proposed searches for quantum gravity signals, would detect round-off errors of magnitude comparable to the cosmological constant.
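The Perl anecdote translates directly into Python, along with the precision arithmetic behind it; a minimal sketch (nothing here is specific to the book):

```python
import math

# A double cannot represent 0.2 exactly; the error surfaces past digit 16:
print(f"{0.2:.18f}")          # -> 0.200000000000000011

# 53 bits of significand correspond to about 16 decimal digits:
print(math.log10(2 ** 53))    # roughly 15.95

# and encoding a quantity to 120 decimal places needs about 400 bits:
print(math.log2(10 ** 120))   # roughly 398.6
```

The same calculation run in any IEEE-754 language gives the same answer, since the “round-off” is a property of the 53-bit format, not of Perl.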

But seriously, this is an excellent book and anybody who's interested in the strange direction in which the string theorists are veering these days ought to read it; it's well-written, authoritative, reasonably fair to opposing viewpoints (although I'm surprised the author didn't address the background spacetime criticism of string theory raised so eloquently by Lee Smolin), and provides a roadmap of how string theory may develop in the coming years. The only nagging question you're left with after finishing the book is whether after thirty years of theorising which comes to the conclusion that everything is predicted and nothing can be observed, it's about science any more.

 Permalink

Freeh, Louis J. with Howard Means. My FBI. New York: St. Martin's Press, 2005. ISBN 0-312-32189-9.
This may be one of the most sanctimonious and self-congratulatory books ever written by a major U.S. public figure who is not Jimmy Carter. Not only is the book titled “My FBI” (gee, I always thought it was supposed to belong to the U.S. taxpayers who pay the G-men's salaries and buy the ammunition they expend), in the preface, where the author explains why he reversed his original decision not to write a memoir of his time at the FBI, he uses the words “I”, “me”, “my”, and “myself” a total of 91 times in four pages.

Only about half of the book covers Freeh's 1993–2001 tenure as FBI director; the rest is a straightforward autohagiography of his years as an altar boy, Eagle Scout, idealistic but apolitical law student during the turbulent early 1970s, FBI agent, crusading anti-Mafia federal prosecutor in New York City, and hard-working U.S. district judge, before being appointed to the FBI job by Bill Clinton, who promised him independence and freedom from political interference in the work of the Bureau. Little did Freeh expect, when accepting the job, that he would spend much of his time in the coming years investigating the Clintons and their cronies. The tawdry and occasionally bizarre stories of those events as seen from the FBI fill a chapter and set the background for the tense relations between the White House and FBI on other matters such as terrorism and counter-intelligence. The Oklahoma City and Saudi Arabian Khobar Towers bombings, the Atlanta Olympics bomb, the identification and arrest of Unabomber Ted Kaczynski, and the discovery of long-term Soviet mole Robert Hanssen in the FBI all occurred on Freeh's watch; he provides a view of these events and the governmental turf battles they engendered from the perspective of the big office in the Hoover Building, but there's little or no new information about the events themselves. Freeh resigned the FBI directorship in June 2001, and September 11th of that year was his first day at his new job. (What do you do after nine years running the FBI? Go to work for a credit card company!) In a final chapter, he provides a largely exculpatory account of the FBI's involvement in counter-terrorism and what might have been done to prevent such terrorist strikes. He directly attacks Richard A. Clarke and his book Against All Enemies as a self-aggrandising account by a minor player including some outright fabrications.

Freeh's book provides a peek into the mind of a self-consciously virtuous top cop—if only those foolish politicians and their paranoid constituents would sign over the last shreds of their liberties and privacy (on p. 304 he explicitly pitches for key escrow and back doors in encryption products, arguing “there's no need for this technology to be any more intrusive than a wiretap on a phone line”—indeed!), the righteous and incorruptible enforcers of the law and impartial arbiters of justice could make their lives ever so much safer and fret-free. And perhaps if the human beings in possession of those awesome powers were, in fact, as righteous as Mr. Freeh seems to believe himself to be, then there would be nothing to worry about. But evidence suggests cause for concern. On the next to last page of the book, p. 324, near the end of six pages of acknowledgements set in small type with narrow leading (didn't think we'd read that far, Mr. Freeh?), we find the author naming, as an exemplar of one of the “courageous and honorable men who serve us”, who “deserve the nation's praise and lasting gratitude”, one Lon Horiuchi, the FBI sniper who shot and killed Vicki Weaver (who was accused of no crime) while she was holding her baby in her hands during the Ruby Ridge siege in August of 1992. Horiuchi later pled the Fifth Amendment in testimony before the U.S. Senate Judiciary Committee in 1995, ten years prior to Freeh's commendation of him here.

 Permalink

O'Brien, Flann [Brian O'Nolan]. The Dalkey Archive. Normal, IL: Dalkey Archive Press, [1964] 1993. ISBN 1-56478-172-0.
What a fine book to be reading on Saint Patrick's Day! Flann O'Brien (a nom de plume of Brian O'Nolan, who also wrote under the name Myles na gCopaleen, among others) is considered one of the greatest Irish authors of humor and satire in the twentieth century; James Joyce called him “A real writer, with the true comic spirit.” In addition to his novels, he wrote short stories, plays, and a multitude of newspaper columns in both the Irish and English languages. The Dalkey Archive is a story of mind-bending fantasy and linguistic acrobatics yet so accessible it sucks the reader into its alternative reality almost unsuspecting. A substantial part of the material is recycled from The Third Policeman (January 2004) which, although completed in 1940, the author despaired of ever seeing published (it was eventually published posthumously in 1967). Both novels are works of surreal fantasy, but The Dalkey Archive is more conventionally structured and easier to get into, much as John Brunner's The Jagged Orbit stands in relation to his own earlier and more experimental Stand on Zanzibar.

The mad scientist De Selby, who appears offstage and in extensive and highly eccentric footnotes in The Third Policeman, is a key character here, joined by Saint Augustine and James Joyce. The master of malaprop, Sergeant Fottrell and his curious “mollycule” theory about people and bicycles is here as well, providing a stolid counterpoint to De Selby's relativistic pneumatic theology and diabolical designs. It takes a special kind of genius to pack this much weirdness into only two hundred pages. If you're interested in O'Brien's curious career, this biography is an excellent starting point which contains no spoilers for any of his fiction.

 Permalink

Reynolds, Glenn. An Army of Davids. Nashville: Nelson Current, 2006. ISBN 1-59555-054-2.
In this book, law professor and über blogger (InstaPundit.com) Glenn Reynolds explores how present and near-future technology is empowering individuals at the comparative expense of large organisations in fields as diverse as retailing, music and motion picture production, national security, news gathering, opinion journalism, and, looking further out, nanotechnology and desktop manufacturing, human longevity and augmentation, and space exploration and development (including Project Orion [pp. 228–233]—now there's a garage start-up I'd love to work on!). Individual empowerment is, like the technology which creates it, morally neutral: good people can do more good, and bad people can wreak more havoc. Reynolds is relentlessly optimistic, and I believe justifiably so; good people outnumber bad people by a large majority, and in a society which encourages them to be “a pack, not a herd” (the title of chapter 5), they will have the means in their hands to act as a societal immune system against hyper-empowered malefactors far more effective than heavy-handed top-down repression and fear-motivated technological relinquishment.

Anybody who's seeking “the next big thing” couldn't find a better place to start than this book. Chapters 2, 3 and 7, taken together, provide a roadmap for the devolution of work from downtown office towers to individual entrepreneurs working at home and in whatever environments attract them, and the emergence of “horizontal knowledge”, supplanting the top-down one-to-many model of the legacy media. There are probably a dozen ideas for start-ups with the potential of eBay and Amazon lurking in these chapters if you read them with the right kind of eyes. If the business and social model of the twenty-first century indeed comes to resemble that of the eighteenth, all of those self-reliant independent people are going to need lots of products and services they will find indispensable just as soon as somebody manages to think of them. Discovering and meeting these needs will pay well.

The “every person an entrepreneur” world sketched here raises the same concerns I expressed in regard to David Bolchover's The Living Dead (January 2006): this will be a wonderful world, indeed, for the intelligent and self-motivated people who will prosper once liberated from corporate cubicle indenture. But not everybody is like that: in particular, those people tend to be found on the right side of the bell curve, and for every one on the right, there's one equally far to the left. We have already made entire categories of employment for individuals with average or below-average intelligence redundant. In the eighteenth century, there were many ways in which such people could lead productive and fulfilling lives; what will they do in the twenty-first? Further, ever since Bismarck, government schools have been manufacturing worker-bees with little initiative, and essentially no concept of personal autonomy. As I write this, the élite of French youth is rioting over a proposal to remove what amounts to a guarantee of lifetime employment in a first job. How will people so thoroughly indoctrinated in collectivism fare in an individualist renaissance? As a law professor, the author spends much of his professional life in the company of high-intelligence, strongly-motivated students, many of whom contemplate an entrepreneurial career and in any case expect to be judged on their merits in a fiercely competitive environment. One wonders if his optimism might be tempered were he to spend comparable time with denizens of, say, the school of education. But the fact that there will be problems in the future shouldn't make us fear it—heaven knows there are problems enough in the present, and the last century was kind of a colossal monument to disaster and tragedy; whatever the future holds, the prescription of more freedom, more information, greater wealth and health, and less coercion presented here is certain to make it a better place to live.

The individualist future envisioned here has much in common with that foreseen in the 1970s by Timothy Leary, who coined the acronym “SMI²LE” for “Space Migration, Intelligence Increase, Life Extension”. The I² is alluded to in chapter 12 as part of the merging of human and machine intelligence in the singularity, but mightn't it make sense, as Leary advocated, to supplement longevity research with investigation of the nature of human intelligence and near-term means to increase it? Realising the promise and avoiding the risks of the demanding technologies of the future are going to require both intelligence and wisdom; shifting the entire bell curve to the right, combined with the wisdom of longer lives, may be key in achieving the much to be desired future foreseen here.

InstaPundit visitors will be familiar with the writing style, which consists of relatively brief discussion of a multitude of topics, each with one or more references for those who wish to “read the whole thing” in more depth. One drawback of the print medium is that although many of these citations are Web pages, to get there you have to type in lengthy URLs for each one. An on-line edition of the end notes with all the on-line references as clickable links would be a great service to readers.

 Permalink

Buckley, Christopher. Florence of Arabia. New York: Random House, 2004. ISBN 0-8129-7226-0.
This is a very funny novel, and thought-provoking as well. Some speak of a “clash of civilisations” or “culture war” between the Western and Islamic worlds, but with few exceptions the battle has been waged inadvertently by the West, through diffusion of its culture through mass media and globalised business, and indirectly by Islam, through immigration without assimilation into Western countries. Suppose the West were to say, “OK, you want a culture war? Here's a culture war!” and target one of fundamentalist Islam's greatest vulnerabilities: its subjugation and oppression of women?

In this story, the stuck-on-savage petroleum superpower Royal Kingdom of Wasabia cuts off one head too many when they execute a woman who had been befriended by Foreign Service staffer Florence Farfaletti, herself an escapee from trophy wife status in the desert kingdom, who hammers out a fifty-page proposal titled “Female Emancipation as a Means of Achieving Long-Term Political Stability in the Near East” and, undiplomatically vaulting over heaven knows how many levels of bureaucrats and pay grades, bungs it into the Secretary of State's in-box. Bold initiatives of this kind are not in keeping with what State does best, which is nothing, but Florence's plan comes to the attention of the mysterious “Uncle Sam” who appears to have unlimited financial resources at his command and the Washington connections to make just about anything happen.

This sets things in motion, and soon Florence and her team, including a good ol' boy ex-CIA killer, a Foreign Service officer who detests travel, and a public relations wizard so amoral his slime almost qualifies him for OPEC membership, are set up in the Emirate of Matar, “Switzerland of the Gulf”, famed for its duty-free shopping, offshore pleasure domes at “Infidel Land”, and laid-back approach to Islam by clergy so well-compensated for their tolerance they're nicknamed “moolahs”. The mission? To launch TVMatar, a satellite network targeting Arab women, headed by the wife of the Emir, who was a British TV presenter before marrying the randy royal.

TVMatar's programming is, shall we say, highly innovative, and before long things are bubbling on both sides of the Wasabi/Matar border, with intrigue afoot on all sides, including Machiavellian misdirection by those masters of perfidy, the French. And, of course (p. 113), “This is the Middle East! … Don't you understand that since the start of time, startin' with the Garden of Eden, nothing has ever gone right here?” Indeed, before long, a great many things go all pear-shaped, with attendant action, suspense, laughs, and occasional tragedy. As befits a comic novel, in the end all is resolved, but many are the twists and turns to get there which will keep you turning pages, and there are delightful turns of phrase throughout, from CIA headquarters christened the “George Bush Center for Intelligence” in the prologue to Shem, the Camel Royal…but I mustn't spoil that for you.

This is a delightful read, laugh out loud funny, and enjoyable purely on that level. But in a world where mobs riot, burn embassies, and murder people over cartoons, while pusillanimous European politicians cower before barbarism and contemplate constraining liberties their ancestors bequeathed to humanity in the Enlightenment, one cannot help but muse, “OK, you want a culture war?”

 Permalink

Larson, Erik. The Devil in the White City. New York: Vintage Books, 2003. ISBN 0-375-72560-1.
It's conventional wisdom in the publishing business that you never want a book to “fall into the crack” between two categories: booksellers won't know where to shelve it, promotional campaigns have to convey a complicated mixed message, and you run the risk of irritating readers who bought it solely for one of the two topics. Here we have a book which evokes the best and the worst of the Gilded Age of the 1890s in Chicago by interleaving the contemporary stories of the 1893 World's Columbian Exposition and the depraved series of murders committed just a few miles from the fairgrounds by the archetypal American psychopathic serial killer, the chillingly diabolical Dr. H. H. Holmes (the principal alias among many used by a man whose given name was Herman Webster Mudgett; his doctorate was a legitimate medical degree from the University of Michigan). Architectural and industrial history and true crime are two genres you might think wouldn't mix, but in the hands of the author they result in a compelling narrative which I found as difficult to put down as any book I have read in the last several years. For once, this is not just my eccentric opinion; at this writing the book has been on The New York Times Best-Seller list for more than two consecutive years and won the Edgar Award for Best Fact Crime in 2004. As I rarely frequent best-seller lists, it went right under my radar. Special thanks to the visitor to this page who recommended I read it!

Boosters saw the Columbian Exposition not so much as a commemoration of the 400th anniversary of the arrival of Columbus in the New World but as a brash announcement of the arrival of the United States on the world stage as a major industrial, commercial, financial, and military power. They viewed the 1889 Exposition Universelle in Paris (for which the Eiffel Tower was built) as a throwing down of the gauntlet by the Old World, and vowed to assert the preeminence of the New by topping the French and “out-Eiffeling Eiffel”. The choice of site, left to Congress, became a bitterly contested struggle among partisans of New York, Washington, and Chicago, with the latter seeing its victory as marking its own arrival as a peer of the Eastern cities which looked with disdain upon what Chicagoans considered the most dynamic city in the nation.

Charged with building the Exposition, a city in itself, from scratch on barren, wind-swept, marshy land was architect Daniel H. Burnham, he who said, “Make no little plans; they have no magic to stir men's blood.” He made no little plans. The exposition was to have more than 200 buildings in a consistent neo-classical style, all in white, including the largest enclosed space ever constructed. While the electric light was still a novelty, the fair was to be illuminated by the first large-scale application of alternating current. Edison's kinetoscope amazed visitors with moving pictures, and a theatre presented live music played by an orchestra in New York and sent over telephone wires to Chicago. Nikola Tesla amazed fairgoers with huge bolts of electrical fire, and a giant wheel built by a man named George Washington Gale Ferris lifted more than two thousand people at once into the sky to look down upon the fair like gods. One of the army of workers who built the fair was a carpenter named Elias Disney, who later regaled his sons Roy and Walt with tales of the magic city; they must have listened attentively.

The construction of the fair in such a short time seemed miraculous to onlookers (and even more so to those accustomed to how long it takes to get anything built a century later), but the list of disasters, obstacles, obstructions, and outright sabotage which Burnham and his team had to overcome was so monumental you'd have almost thought I was involved in the project! (Although if you've ever set up a trade show booth in Chicago, you've probably gotten a taste of it.) A total of 27.5 million people visited the fair between May and October of 1893, and this in a country whose total population (1890 census) was just 62.6 million. Perhaps even more astonishing to those acquainted with comparable present-day undertakings, the exposition was profitable and retired all of its bank debt.

While the enchanted fair was rising on the shore of Lake Michigan and enthralling visitors from around the world, in a gloomy city-block-sized building not far away, Dr. H. H. Holmes was using his almost preternatural powers to charm the young, attractive, and unattached women who flocked to Chicago from the countryside in search of careers and excitement. He offered them the former in various capacities in the businesses, some legitimate and others bogus, in his “castle”, and the latter in his own person, until he killed them, disposed of their bodies, and in some cases sold their skeletons to medical schools. Were the entire macabre history of Holmes not thoroughly documented in court proceedings, investigators' reports, and reputable contemporary news items, he might seem to be a character from an over-the-top Gothic novel, like Jack the Ripper. But wait—Jack the Ripper was real too. However, Jack the Ripper is only believed to have killed five women; Holmes is known for certain to have killed nine men, women, and children. He confessed to killing 27 in all, but this was the third of three mutually inconsistent confessions all at variance with documented facts (some of those he named in the third confession turned up alive). Estimates ran as high as two hundred, but that seems implausible. In any case, he was a monster the likes of which no American imagined inhabited their cities until his crimes were uncovered. Remarkably, and of interest to libertarians who advocate the replacement of state power by insurance-like private mechanisms, Holmes never even came under suspicion by any government law enforcement agency during the entire time he committed his murder spree, nor did any of his other scams (running out on debts, forging promissory notes, selling bogus remedies) attract the attention of the law. His undoing came when he attempted insurance fraud (one of his favourite activities) and ended up with Nemesis-like private detective Frank Geyer on his trail.
Geyer, through tireless tracking and the expenditure of large quantities of shoe leather, got the goods on Holmes, who met his end on the gallows in May of 1896. His jailers considered him charming.

I picked this book up expecting an historical recounting of a rather distant and obscure era. Was I ever wrong—I finished the whole thing in two and a half days; the story is that fascinating and the writing that good. More than 25 pages of source citations and bibliography are included, but this is not a dry work of history; it reads like a novel. In places, the author has invented descriptions of events for which no eyewitness account exists; he says that in doing this, his goal is to create a plausible narrative as a prosecutor does at a trial. Most such passages are identified in the end notes and justifications given for the inferences made therein. The descriptions of the Exposition cry out for many more illustrations than are included: there isn't even a picture of the Ferris wheel! If you read this book, you'll probably want to order the Dover Photographic Record of the Fair—I did.

 Permalink

April 2006

Levitt, Steven D. and Stephen J. Dubner. Freakonomics. New York: William Morrow, 2005. ISBN 0-06-073132-X.
Finally—a book about one of my favourite pastimes: mining real-world data sets for interesting correlations and searching for evidence of causality—and it's gone and become a best-seller! Steven Levitt is a University of Chicago economics professor who excels in asking questions others never think to pose, such as, “If dealing crack is so profitable, why do most drug dealers live with their mothers?” and “Why do real estate agents leave their own houses on the market longer than the houses of their clients?”, then crunches the numbers to answer them, often with fascinating results. Co-author Stephen Dubner, who has written about Levitt's work for The New York Times Magazine, explains Levitt's methodologies in plain language that won't scare away readers inclined to be intimidated by terms such as “multiple regression analysis” and “confidence level”.

Topics include the correlation between the legalisation of abortion and a drop in the crime rate, cheating in sumo wrestling in Japan, tournament dynamics in advancement to managerial positions in the crack cocaine trade, Superman versus the Ku Klux Klan, the generation-long trajectory of baby names from prestigious to down-market, and the effects of campaign spending on the outcome of elections. In each case there are surprises in store, and sufficient background to understand where the results came from and the process by which they were obtained. The Internet has been a godsend for this kind of research: a wealth of public domain data in more or less machine-readable form awaits analysis by anybody curious about how it might fit together to explain something. This book is an excellent way to get your own mind asking such questions.
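Anybody inclined to try this kind of armchair data mining can get started with a few lines of Python. Here is a minimal sketch of the basic tool, the Pearson correlation coefficient; the two series are invented purely for illustration (a real analysis would load, say, annual crime or spending figures from a public data set), and of course a strong correlation is where the questions begin, not evidence of causation.

```python
# Minimal correlation-hunting sketch in the Freakonomics spirit.
# The two series below are invented for illustration only; a real
# analysis would pull figures from public data sets.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two hypothetical annual series: a policy measure and an outcome.
policy  = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0]
outcome = [8.0, 7.6, 7.1, 6.3, 5.2, 4.0]

r = pearson(policy, outcome)
print(f"r = {r:.3f}")  # strongly negative here, but correlation is not causation
```

The interesting work, as Levitt's examples show, lies in finding natural experiments and controls that let you argue from a correlation like this toward an actual causal mechanism.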

My only quibble with the book is the title: “Freakonomics: A Rogue Economist Explores the Hidden Side of Everything.” The only thing freaky about Levitt's work is that so few other professional economists are using the tools of their profession to ask and answer such interesting and important questions. And as to “rogue economist”, that's a rather odd term for somebody with degrees from Harvard and MIT, who is a full professor in one of the most prestigious departments of economics in the United States, recipient of the Clark Medal for best American economist under forty, and author of dozens of academic publications in the leading journals. But book titles, after all, are marketing tools, and the way this book is selling, I guess the title is doing its job quite well, thank you. A web site devoted to the book contains more information, including New York Times columns in which the authors present further analyses.

 Permalink

Verne, Jules. Voyage à reculons en Angleterre et en Écosse. Paris: Le Cherche Midi, 1989. ISBN 2-86274-147-7.
As a child, Jules Verne was fascinated by the stories of his ancestor who came to France from exotic Scotland to serve as an archer in the guard of Louis XI. Verne's attraction to Scotland was reinforced by his life-long love of the novels of Sir Walter Scott, and when in 1859, at age 31, he had a chance to visit that enchanting ancestral land, he jumped at the opportunity. This novel is a thinly fictionalised account of his “backwards voyage” to Scotland and England. “Backwards” («à reculons») because he and his travelling companion began their trip from Paris into the North by heading South to Bordeaux, where they had arranged economical passage on a ship bound for Liverpool, then on to Edinburgh, Glasgow, and then back by way of London and Dieppe—the opposite direction from most Parisian tourists. The theme of “backwards” surfaces regularly in the narrative, most amusingly on p. 110 where they find themselves advancing to the rear after having inadvertently wandered onto a nude beach.

So prolific was Jules Verne that more than a century and a half after he began his writing career, new manuscripts keep turning up among his voluminous papers. In the last two decades, Paris au XXe siècle, the original un-mangled version of La chasse au météore (October 2002), and the present volume have finally made their way into print. Verne transformed the account of his own trip into a fictionalised travel narrative of a kind quite common in the 19th century but rarely encountered today. The fictional form gave him freedom to add humour, accentuate detail, and highlight aspects of the country and culture he was visiting without crossing the line into that other venerable literary genre, the travel tall tale. One suspects that the pub brawl in chapter 16 is an example of such embroidery, along with the remarkable steam powered contraption on p. 159 which prefigured Mrs. Tweedy's infernal machine in Chicken Run. The description of the weather, however, seems entirely authentic. Verne offered the manuscript to Hetzel, who published most of his work, but it was rejected and remained forgotten until it was discovered in a cache of Verne papers acquired by the city of Nantes in 1981. This 1989 edition is its first appearance in print, and includes six pages of notes on the history of the work and its significance in Verne's œuvre, notes on changes in the manuscript made by Verne, and a facsimile manuscript page.

What is remarkable in reading this novel is the extent to which it is a fully-developed “template” for Verne's subsequent Voyages extraordinaires: here we have an excitable and naïve voyager (think Michel Ardan or Passepartout) paired with a more stolid and knowledgeable companion (Barbicane or Phileas Fogg), the encyclopedist's exultation in enumeration, fascination with all forms of locomotion, and fun with language and dialect (particularly poor Jacques who beats the Dickens out of the language of Shakespeare). Often, when reading the early works of writers, you sense them “finding their voice”—not here. Verne is in full form, the master of his language and the art of story-telling, and fully ready, a few years later, with just a change of topic, to invent science fiction. This is not “major Verne”, and you certainly wouldn't want to start with this work, but if you've read most of Verne and are interested in how it all began, this is a genuine treat.

This book is out of print. If you can't locate a used copy at a reasonable price at the Amazon link above, try abebooks.com. For comparison with copies offered for sale, the cover price in 1989 was FRF 95, which is about €14.50 at the final fixed rate.

 Permalink

Wright, Evan. Generation Kill. New York: Berkley Caliber, 2004. ISBN 0-425-20040-X.
The author was an “embedded journalist” with Second Platoon, Bravo Company of the U.S. First Marine Reconnaissance Battalion from a week before the invasion of Iraq in March of 2003 through the entire active combat phase and subsequent garrison duty in Baghdad until the end of April. This book is an expanded edition of his National Magazine Award-winning reportage in Rolling Stone. Recon Marines are the elite component of the U.S. Marine Corps—like Army Special Forces or Navy SEALs; there are only about a thousand Recon Marines in the entire 180,000-strong Corps. In the invasion of Iraq, First Recon was used—some say misused—as the point of the spear, often the lead unit in their section of the conflict, essentially inviting ambushes by advancing into suspected hostile terrain.

Wright accompanied the troops 24/7 throughout their mission, sharing their limited rations, sleeping in the same “Ranger graves”, and risking enemy fire, incoming mortar rounds, and misdirected friendly artillery and airstrikes alongside the Marines. This is 100% grunt-level boots on the ground reportage in the tradition of Ernie Pyle and Bill Mauldin, and superbly done. If you're looking for grand strategy or “what it all means”, you won't find any of that: only the confusing and often appalling face of war as seen through the eyes of the young men sent to fight it. The impression you're left with of the troops (and recall, these are elite warriors of a military branch itself considered elite) is one of apolitical professionalism. You don't get the slightest sense they're motivated by patriotism or a belief they're defending their country or its principles; they're there to do their job, however messy and distasteful. One suspects you'd have heard much the same from the Roman legionnaires who occupied this land almost nineteen centuries ago.

The platoon's brief stay in post-conquest Baghdad provides some insight into why war-fighters, however they excel at breaking stuff and killing people, are as ill-suited to the tasks of nation building, restoring civil order, and promoting self-government as a chainsaw is for watchmaking. One begins to understand how it can be that three years after declaring victory in Iraq, a military power which was able to conquer the entire country in less than two weeks has yet to assert effective control over its capital city.

 Permalink

Bannier, Pierre. Pleins feux sur… Columbo. Paris: Horizon illimité, 2005. ISBN 2-84787-141-1.
It seems like the most implausible formula for a successful television series: no violence, no sex, no car chases, a one-eyed hero who is the antithesis of glamorous, detests guns, and drives a beat-up Peugeot 403. In almost every episode the viewer knows “whodunit” before the detective appears on the screen, and in most cases the story doesn't revolve around his discovery of the perpetrator, but rather obtaining evidence to prove their guilt, the latter done without derring-do or scientific wizardry, but rather endless, often seemingly aimless dialogue between the killer and the tenacious inspector. Yet “Columbo”, which rarely deviated from this formula, worked so well it ran (including pilot episodes) for thirty-five years in two separate series (1968–1978 and 1989–1994) and subsequent telefilm specials through 2003 (a complete episode guide is available online).

Columbo, as much a morality play about persistence and cunning triumphing over the wealthy, powerful, and famous as it is a mystery (creators of the series Richard Levinson and William Link said the character was inspired by Porfiry Petrovich in Dostoyevsky's Crime and Punishment and G. K. Chesterton's Father Brown mysteries), translates well into almost any language and culture. This book provides the French perspective on the phénomène Columbo. In addition to a comprehensive history of the character and series (did you know that the character which became Columbo first appeared in a story in Alfred Hitchcock's Mystery Magazine in 1960, or that Peter Falk was neither the first nor the second, but the third actor to portray Columbo?), details specific to l'Hexagone abound: a profile of Serge Sauvion, the actor who does the uncanny French doublage of Peter Falk's voice in the series, Marc Gallier, the “French Columbo”, and the stage adaptation in 2005 of Une femme de trop (based on the original stage play by Levinson and Link which became the pilot of the television series) starring Pascal Brunner. This being a French take on popular culture, there is even a chapter (pp. 74–77) providing a Marxish analysis of class conflict in Columbo! A complete episode guide with both original English and French titles and profiles of prominent guest villains rounds out the book.

For a hardcover, glossy paper, coffee table book, many of the colour pictures are hideously reproduced; they look like they were blown up from thumbnail images found on the Internet with pixel artefacts so prominent that in some cases you can barely make out what the picture is supposed to be. Other illustrations desperately need the hue, saturation, and contrast adjustment you'd expect to be routine pre-press steps for a publication of this type and price range. There are also a number of errors in transcribing English words in the text—sadly, this is not uncommon in French publications; even Jules Verne did it.

 Permalink

Kurlansky, Mark. 1968 : The Year That Rocked the World. New York: Random House, 2004. ISBN 0-345-45582-7.
In the hands of an author who can make an entire book about Salt (February 2005) fascinating, the epochal year of 1968 abounds with people, events, and cultural phenomena which make for a compelling narrative. Many watershed events in history (wars, inventions, plagues, geographical discoveries, natural disasters, economic booms and busts) have causes which are reasonably easy to determine. But 1968, like the wave of revolutions which swept Europe in 1848 (January 2002), seems to have been driven by a zeitgeist—a spirit in the air which independently inspired people to act in a common way.

The nearly simultaneous “youthquake” which shook societies as widespread and diverse as France, Poland, Mexico, Czechoslovakia, Spain, and the United States, and manifested itself in radical social movements: antiwar, feminism, black power, anti-authoritarianism, psychedelic instant enlightenment, revolutionary and subversive music, and the emergence of the “the whole world is watching” wired planetary culture of live satellite television, all of which continue to reverberate today, seemed so co-ordinated that politicians from Charles de Gaulle to Mexican el presidente Díaz Ordaz to Leonid Brezhnev were convinced it must be the result of deliberate subversion by their enemies, and were motivated to repressive actions which, in the short term, only fed the fire. In fact, most of the leaders of the various youth movements (to the extent they can be called “leaders”—in those individualistic and anarchistic days, most disdained the title) had never met, and knew about the actions of one another only from what they saw on television. Radicals in the U.S. were largely unaware of the student movement in Mexico before it exploded into televised violence in October.

However the leaders of 1968 may have viewed themselves, in retrospect they were for the most part fascinating, intelligent, well-educated, motivated by a desire to make the world a better place, and optimistic that they could—nothing like the dour, hateful, contemptuous, intolerant, and historically and culturally ignorant people one so often finds today in collectivist movements which believe themselves descended from those of 1968. Consider Mark Rudd's famous letter to Grayson Kirk, president of Columbia University, which ended with the memorable sentence, “I'll use the words of LeRoi Jones, whom I'm sure you don't like a whole lot: ‘Up against the wall, mother****er, this is a stick-up.’” (p. 197), a sentence which shocked his contemporaries with the (quoted) profanity, but strikes readers today mostly for the grammatically correct use of “whom”. Who among present-day radicals has the eloquence of Mario Savio's “There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part, you can't even tacitly take part, and you've got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you've got to make it stop” (p. 92), yet the politeness to remove his shoes to avoid damaging the paint before jumping on a police car to address a crowd? In the days of the Free Speech Movement, who would have imagined some of those student radicals, tenured professors four decades later, enacting campus speech codes and enforcing an intellectual monoculture on their own students?

It is remarkable to read on p. 149 how the French soixante-huitards were “dazzled” by their German contemporaries: “We went there and they had their banners and signs and their security forces and everything with militaristic tactics. It was new to me and the other French.” One suspects they weren't paying attention when their parents spoke of the spring of 1940! Some things haven't changed: when New Left leaders from ten countries finally had the opportunity to meet one another at a conference sponsored by the London School of Economics and the BBC (p. 353), the Americans dismissed the Europeans as all talk and no action, while the Europeans mocked the U.S. radicals' propensity for charging into battle without thinking through why, what the goal was supposed to be, or how it was to be achieved.

In the introduction, the author declares his sympathy for the radical movements of 1968 and says “fairness is possible but true objectivity is not”. And, indeed, the book is written from the phrasebook of the leftist legacy media: good guys are “progressives” and “activists”, while bad guys are “right wingers”, “bigots”, or “reactionaries”. (What's “progressive” ought to depend on your idea of progress. Was SNCC's expulsion of all its white members [p. 96] on racial grounds progress?) I do not recall a single observation which would be considered outside the box on the editorial page of the New York Times. While the book provides a thorough recounting of the events and acquaintance with the principal personalities involved, for me it failed to evoke the “anything goes”, “everything is possible” spirit of those days—maybe you just had to have been there. The summation is useful for correcting false memories of 1968, which ended with both Dubček and de Gaulle still in power; the only major world leader defeated in 1968 was Lyndon Johnson, and he was succeeded by Nixon. A “whatever became of” or “where are they now” section would be a useful addition; such information, when it's given, is scattered all over the text.

One wonders whether, in our increasingly interconnected world, something like 1968 could happen again. Certainly, that's the dream of greying radicals nostalgic for their days of glory and young firebrands regretful for having been born too late. Perhaps better channels of communication and the collapse of monolithic political structures have resulted in change becoming an incremental process which adapts to the evolving public consensus before a mass movement has time to develop. It could simply be that the major battles of “liberation” have all been won, and the next major conflict will be incited by those who wish to roll those liberties back. Or maybe it's just that we're still trying to digest the consequences of 1968 and far from ready for another round.

 Permalink

Smith, Edward E. Second Stage Lensmen. Baltimore: Old Earth Books, [1941–1942, 1953] 1998. ISBN 1-882968-13-1.
This is the fifth installment of the Lensman series, following Triplanetary (June 2004), First Lensman (February 2005), Galactic Patrol (March 2005), and Gray Lensman (August 2005). Second Stage Lensmen ran in serial form in Astounding Science Fiction from November 1941 through February 1942. This book is a facsimile of the illustrated 1953 Fantasy Press edition, which was revised from the original magazine serial.

The only thing I found disappointing when rereading this book in my fourth lifetime expedition through the Lensman saga is knowing there's only one volume of the main story remaining—but what a yarn that is. In Second Stage Lensmen, Doc Smith more overtly adopts the voice of “historian of civilisation” and from time to time departs from straight story-telling to describe off-stage action, discuss his “source material”, and grouse about Galactic Patrol secrecy depriving him of important documents. Still, there's enough rays and shields space opera action for three or four normal novels, although the focus increasingly shifts from super-weapons and shoot-em-ups to mental combat, indirection, and espionage.

It's here we first meet Nadreck, one of the most fascinating of Doc Smith's creations: a poison-breathing cryogenic being who extends into the fourth dimension and considers cowardice and sloth among his greatest virtues. His mind, however, like Kinnison's, honed to second stage Lensman capability by Mentor of Arisia, is both powerful and subtle, and Nadreck is a master of boring from within without the villains even suspecting his presence. He gets the job done, despite never being satisfied with his “pitifully imperfect” performance. I've known programmers like that.

Some mystery and thriller writers complain of how difficult the invention of mobile phones has made their craft. While it used to be easy for characters to be out of touch and operating with incomplete and conflicting information, now the reader immediately asks, “Why didn't she just pick up the phone and ask?” But in the Lensman universe, both the good guys and (to a lesser extent) the blackguards have instantaneous, mind-to-mind high bandwidth communication on an intergalactic scale, and such is Doc Smith's mastery of his craft that it neither reduces the suspense nor strains the plot, and he makes it look almost effortless.

Writing in an age where realistic women of any kind were rare in science fiction, Smith was known for his strong female characters—on p. 151 he observes, “Indeed, it has been argued that sexual equality is the most important criterion of that which we know as Civilization”—no postmodern multi-culti crapola here! Some critics carped that his women characters were so strong and resourceful they were just male heroes without the square jaws and broad shoulders. So here, probably in part just to show he can do it, we have Illona of Lonabar, a five-sigma airhead bimbo (albeit with black hair, not blonde), and the mind-murdering matriarchy of Lyrane, who have selectively bred their males to be sub-sentient dwarves with no function other than reproduction.

The author's inexhaustible imagination manages to keep these stories up to date, even more than half a century on. While the earlier volumes stressed what would decades later be called low-observable or stealth technology, in this outing he anticipates today's hot Pentagon buzzword, “network-centric warfare”: the grand battles here are won not by better weapons or numbers, but by the unique and top secret information technology of the Z9M9Z Directrix command vessel. The bizarre excursion into “Nth-space” may have seemed over the top to readers in the 1940s, but today it's reminiscent of another valley in the cosmic landscape of string theory.

Although there is a fifteen page foreword by the author which recaps the story to date, you don't really want to start with this volume: there's just too much background and context you'll have missed. It's best either to start at the beginning with Triplanetary or, if you'd rather defer the two slower-paced “prequels”, with Volume 3, Galactic Patrol, which was the first written and can stand alone.

 Permalink

May 2006

Bonner, William and Addison Wiggin. Empire of Debt. Hoboken, NJ: John Wiley & Sons, 2006. ISBN 0-471-73902-2.
To make any sense in the long term, an investment strategy needs to be informed by a “macro macro” view of the global economic landscape and the grand-scale trends which shape it, as well as a fine sense for nonsense: the bubbles, manias, and unsustainable situations which seduce otherwise sane investors into doing crazy things which will inevitably end badly, although nobody can ever be sure precisely when. This is the perspective the authors provide in this wise, entertaining, and often laugh-out-loud funny book. If you're looking for tips on what stocks or funds to buy or sell, look elsewhere; the focus here is on the emergence in the twentieth century of the United States as a global economic and military hegemon, and the bizarre economic foundations of this most curious empire. The analysis of the current scene is grounded in a historical survey of empires and a recounting of how the United States became one.

The business of empire has been conducted more or less the same way all around the globe over millennia. An imperial power provides a more or less peaceful zone to vassal states, a large, reasonably open market in which they can buy and sell their goods, safe transport for goods and people within the imperial limes, and a common currency, system of weights and measures, and other lubricants of efficient commerce. In return, vassal states finance the empire through tribute: either explicit, or indirectly through taxes, tariffs, troop levies, and other imperial exactions. Now, history is littered with the wreckage of empires (more than fifty are listed on p. 49), which have failed in the time-proven ways, but this kind of traditional empire at least has the advantage that it is profitable—the imperial power is compensated for its services (whether welcome or appreciated by the subjects or not) by the tribute it collects from them, which may be invested in further expanding the empire.

The American empire, however, is unique in all of human history for being funded not by tribute but by debt. The emergence of the U.S. dollar as the global reserve currency, severed from the gold standard or any other measure of actual value, has permitted the U.S. to build a global military presence and domestic consumer society by borrowing the funds from other countries (notably, at the present time, China and Japan), who benefit (at least in the commercial sense) from the empire. Unlike tribute, the debt remains on the balance sheet as an exponentially growing liability which must eventually either be repaid or repudiated. In this environment, international trade has become a system in which (p. 221) “One nation buys things that it cannot afford and doesn't need with money it doesn't have. Another sells on credit to people who already cannot pay and then builds more factories to increase output.” Nobody knows how long the game can go on, but when it ends, it is certain to end badly.

An empire which has largely ceased to produce stuff for its citizens, whose principal export has become paper money (to the tune of about two billion dollars per day at this writing), will inevitably succumb to speculative binges. No sooner had the dot.com mania of the late 1990s collapsed than the residential real estate bubble began to inflate, with houses bought with interest-only mortgages considered “investments” which are “flipped” in a matter of months, and equity extracted by further assumption of debt used to fund current consumption. This contemporary collective delusion is well documented, with perspectives on how it may end.

The entire book is written in an “always on” ironic style, with a fine sense for the absurdities which are taken for wisdom and the charlatans and nincompoops who peddle them to the general public in the legacy media. Some may consider the authors' approach as insufficiently serious for a discussion of an oncoming global financial train wreck but, as they note on p. 76, “There is nothing quite so amusing as watching another man make a fool of himself. That is what makes history so entertaining.” Once you get your head out of the 24 hour news cycle and the political blogs and take the long view, the economic and geopolitical folly chronicled here is intensely entertaining, and the understanding of it imparted in this book is valuable in developing a strategy to avoid its inevitable tragic consequences.

 Permalink

Stephenson, Neal. Cryptonomicon. New York: Perennial, 1999. ISBN 0-380-78862-4.
I've found that I rarely enjoy, and consequently am disinclined to pick up, these huge, fat, square works of fiction cranked out by contemporary super scribblers such as Tom Clancy, Stephen King, and J.K. Rowling. In each case, the author started out and made their name crafting intricately constructed, tightly plotted page-turners, but later on succumbed to a kind of mid-career spread which yields flabby doorstop novels that give you hand cramps if you read them in bed and contain more filler than thriller. My hypothesis is that when a talented author is getting started, their initial books receive the close attention of a professional editor and benefit from the discipline imposed by an individual whose job is to flense the flab from a manuscript. But when an author becomes highly successful—a “property” who can be relied upon to crank out best-seller after best-seller—it becomes harder for an editor to restrain an author's proclivity to bloat and bloviation. (This is not to say that all authors are so prone, but some certainly are.) I mean, how would you feel giving Tom Clancy advice on the art of crafting thrillers, even though Executive Orders could easily have been cut by a third and would probably have been a better novel at half the size?

This is why, despite my having tremendously enjoyed his earlier Snow Crash and The Diamond Age, Neal Stephenson's Cryptonomicon sat on my shelf for almost four years before I decided to take it with me on a trip and give it a try. Hey, even later Tom Clancy can be enjoyed as “airplane” books as long as they fit in your carry-on bag! While ageing on the shelf, this book was one of the most frequently recommended by visitors to this page, and friends to whom I mentioned my hesitation to dive into the book unanimously said, “You really ought to read it.” Well, I've finished it, so now I'm in a position to tell you, “You really ought to read it.” This is simply one of the best modern novels I have read in years.

The book is thick, but that's because the story is deep and sprawling and requires a large canvas. Stretching over six decades and three generations, and melding genres as disparate as military history, cryptography, mathematics and computing, business and economics, international finance, privacy and individualism versus the snooper state and intrusive taxation, personal eccentricity and humour, telecommunications policy and technology, civil and military engineering, computers and programming, the hacker and cypherpunk culture, and personal empowerment as a way of avoiding repetition of the tragedies of the twentieth century, the story defies classification into any neat category. It is not science fiction, because all of the technologies exist (or plausibly could have existed—well, maybe not the Galvanick Lucipher [p. 234; all page citations are to the trade paperback edition linked above. I'd usually cite by chapter, but they aren't numbered and there is no table of contents]—in the epoch in which they appear). Some call it a “techno thriller”, but it isn't really a compelling page-turner in that sense; this is a book you want to savour over a period of time, watching the story lines evolve and weave together over the decades, and thinking about the ideas which underlie the plot line.

The breadth of the topics which figure in this story requires encyclopedic knowledge, which the author demonstrates while making it look effortless, never like he's showing off. Stephenson writes with the kind of universal expertise for which Isaac Asimov was famed, but he's a better writer than the Good Doctor, and that's saying something. Every few pages you come across a gem such as the following (p. 207), which is the funniest paragraph I've read in many a year.

He was born Graf Heinrich Karl Wilhelm Otto Friedrich von Übersetzenseehafenstadt, but changed his name to Nigel St. John Gloamthorpby, a.k.a. Lord Woadmire, in 1914. In his photograph, he looks every inch a von Übersetzenseehafenstadt, and he is free of the cranial geometry problem so evident in the older portraits. Lord Woadmire is not related to the original ducal line of Qwghlm, the Moore family (Anglicized from the Qwghlmian clan name Mnyhrrgh) which had been terminated in 1888 by a spectacularly improbable combination of schistosomiasis, suicide, long-festering Crimean war wounds, ball lightning, flawed cannon, falls from horses, improperly canned oysters, and rogue waves.
On p. 352 we find one of the most lucid and concise explanations I've ever read of why it is far more difficult to escape the grasp of now-obsolete technologies than most technologists may wish.
(This is simply because the old technology is universally understood by those who need to understand it, and it works well, and all kinds of electronic and software technology has been built and tested to work within that framework, and why mess with success, especially when your profit margins are so small that they can only be detected by using techniques from quantum mechanics, and any glitches vis-à-vis compatibility with old stuff will send your company straight into the toilet.)
In two sentences on p. 564, he lays out the essentials of the original concept for Autodesk, which I failed to convey (providentially, in retrospect) to almost every venture capitalist in Silicon Valley in thousands more words and endless, tedious meetings.
“ … But whenever a business plan first makes contact with the actual market—the real world—suddenly all kinds of stuff becomes clear. You may have envisioned half a dozen potential markets for your product, but as soon as you open your doors, one just explodes from the pack and becomes so instantly important that good business sense dictates that you abandon the others and concentrate all your efforts.”
And how many New York Times Best-Sellers contain working source code (p. 480) for a Perl program?

A 1168 page mass market paperback edition is now available, but given the unwieldiness of such an edition, how much you're likely to thumb through it to refresh your memory on little details as you read it, the likelihood you'll end up reading it more than once, and the relatively small difference in price, the trade paperback cited at the top may be the better buy. Readers interested in the cryptographic technology and culture which figure in the book will find additional information in the author's Cryptonomicon cypher-FAQ.

 Permalink

Ravitch, Diane. The Language Police. New York: Alfred A. Knopf, 2003. ISBN 0-375-41482-7.
One thing which strikes me, having been outside the United States for fifteen years, is just how dumb people in the U.S. are, particularly those 35 years and younger. By “dumb” I don't mean unintelligent: although there is a genetic component to intelligence, evolution doesn't work quickly enough to make much difference in a generation or two, and there's no evidence for selective breeding for stupidity in any case. No, they are dumb in the sense of being almost entirely ignorant of the literary and cultural heritage upon which their society is founded, and know next to nothing about the history of their own country and the world. Further, and even more disturbing, they don't seem to know how to think. Rational thinking is a skill one learns by practice, and these people never seem to have worked through the intellectual exercises to acquire it, and hence have never discovered the quiet joy of solving problems and figuring things out. (Of course, I am talking in broad generalisations here. In a country as large and diverse as the U.S. there are many, many exceptions, to be sure. But the overall impression of the younger population, exceptions apart, comes across to me as dumb.)

You may choose to attribute this estimation to the jaundiced disdain for young'uns so common among balding geezers like me. But the funny thing is, I observe this only in people who grew up the U.S. I don't perceive anything similar in those raised in continental Europe or Asia. (I'm not so sure about the U.K., and my experience with people from South America and Africa is insufficient to form any conclusions.) Further, this seems to be a relatively new phenomenon; I don't recall perceiving anything like the present level of dumbness among contemporaries when I was in the 20–35 age bracket. If you doubt my estimation of the knowledge and reasoning skills of younger people in the U.S., just cast a glance at the highest moderated comments on one of the online discussion boards such as Slashdot, and bear in mind when doing so that these are the technological élite, not the fat middle of the bell curve. Here is an independent view of younger people in the U.S. which comes to much the same conclusion as I.

What could possibly account for this? Well, it may not be the entire answer, but an important clue is provided by this stunning book by an historian and professor of education at New York University, which documents the exclusion of essentially the entire body of Western culture from the primary and secondary school curriculum starting in around 1970, and the rewriting of history to exclude anything perceived as controversial by any pressure group motivated to involve itself in the textbook and curriculum adoption process, which is described in detail. Apart from a few egregious cases which have come to the attention of the media, this process has happened almost entirely out of the public eye, and an entire generation has now been educated, if you can call it that, with content-free material chosen to meet bizarre criteria of “diversity” and avoid offending anybody. How bad is it? So bad that the president of a textbook company, when asked in 1998 by members of the committee charged with developing a national reading test proposed by President Clinton, why the reading passages chosen contained nothing drawn from classic literature or myth, replied, as if it were the most obvious thing in the world, “everything written before 1970 was either gender biased or racially biased.” So long, Shakespeare; heave-ho Homer! It's no wonder the author of I'm the Teacher, You're the Student (January 2005) discovered so many of his students at a top-tier university had scarcely read a single book before arriving in his classroom: their public school experience had taught them that reading is tedious and books contain only boring, homogenised pablum utterly disconnected from the real world they experience through popular culture and their everyday life.

The author brings no perceptible political bias or agenda to the topic. Indeed, she documents how the ideologues of the right and left form a highly effective pincer movement which squeezes out the content and intellectual stimulation from the material taught in schools, and thus educates those who pass through them that learning is boring, reading is dull, and history is all settled, devoid of controversy, and that every event in the past should be interpreted according to the fashionable beliefs of the present day. The exquisite irony is that this is said to be done in the interest of “diversity” when, in fact, the inevitable consequence is the bowdlerisation of the common intellectual heritage into mediocre, boring, and indistinguishable pap. It is also interesting to observe that the fundamental principles upon which the champions of this “diversity” base their arguments—that one's ethnic group identity determines how an individual thinks and learns; that one cannot and should not try to transcend that group identity; that a member of a group can learn only from material featuring members of their own group, ideally written by a group member—are, in fact, identical to those believed by the most vicious of racists. Both reject individualism and the belief that any person, if blessed with the requisite talent and fired by ambition and the willingness to work assiduously toward the goal, can achieve anything at all in a free society.

Instead, we see things like this document, promulgated by the public school system of Seattle, Washington (whose motto is “Academic Achievement for Every Student in Every School”), which provides “Definitions of Racism” in six different categories. (Interesting—the Seattle Public Schools seem to have taken this document down—wonder why? However, you can still view a copy I cached just in case that might happen.) Under “Cultural Racism” we learn that “having a future time orientation, emphasizing individualism as opposed to a more collective ideology, [and] defining one form of English as standard” constitutes “cultural racism”. Some formula for “Academic Achievement for Every Student”, don't you think? (Reading The Language Police is quite enlightening in parsing details such as those in the drawing which appears to the right of the first paragraph of this document. It shows a group of people running a foot race [exercise: good]. Of the four people whose heads are shown, one is a Caucasian female [check], another is an African American male [check], a third is an Hispanic man [check—although the bias and sensitivity guidelines of two major textbook companies (p. 191) would fault this picture because, stereotypically, the man has a moustache], and an older [check] Caucasian male [older people must always be shown as active; never sitting on the porch in a rocking chair]. Two additional figures are shown with their heads lopped off: one an African American woman and the other what appears to be a light-skinned male. Where's the Asian?) Now, this may seem ridiculous, but every major U.S. textbook publisher these days compiles rigorous statistics on the racial and gender mix of both text and illustrations in their books, and adjusts them to precisely conform to percentages from the U.S. census. Intellectual content appears to receive no such scrutiny.

A thirty page appendix provides a list of words, phrases, and concepts banned from U.S. textbooks, including the delightful list (p. 196) of Foods which May Not Be Mentioned in California, including pickles and tea. A second appendix of the same length provides a wonderful list of recommendations of classic literature for study from grades three through ten. Home schoolers will find this a bounty of worthwhile literature to enrich their kids' education and inculcate the love of reading, and it's not a bad place to start for adults who have been deprived of this common literary heritage in their own schooling. A paperback edition is now available.

 Permalink

June 2006

Woit, Peter. Not Even Wrong. London: Jonathan Cape, 2006. ISBN 0-224-07605-1.
Richard Feynman, a man about as difficult to bamboozle on scientific topics as any who ever lived, remarked in an interview (p. 180) in 1987, a year before his death:
…I think all this superstring stuff is crazy and it is in the wrong direction. … I don't like that they're not calculating anything. I don't like that they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say “Well, it still might be true.”
Feynman was careful to hedge his remark as being that of an elder statesman of science, a group who collectively have a history of foolishly considering the speculations of younger researchers to be nonsense, and he would almost certainly have opposed any effort to cut off funding for superstring research, as it might be right, after all, and should be pursued in parallel with other promising avenues until they make predictions which can be tested by experiment, falsifying and leading to the exclusion of those candidate theories whose predictions are incorrect.

One wonders, however, what Feynman's reaction would have been had he lived to contemplate the contemporary scene in high energy theoretical physics almost twenty years later. String theory and its progeny still have yet to make a single, falsifiable prediction which can be tested by a physically plausible experiment. This isn't surprising, because after decades of work and tens of thousands of scientific publications, nobody really knows, precisely, what superstring (or M, or whatever) theory really is; there is no equation, or set of equations from which one can draw physical predictions. Leonard Susskind, a co-founder of string theory, observes ironically in his book The Cosmic Landscape (March 2006), “On this score, one might facetiously say that String Theory is the ultimate epitome of elegance. With all the years that String Theory has been studied, no one has ever found a single defining equation! The number at present count is zero. We know neither what the fundamental equations of the theory are or even if it has any.” (p. 204). String theory might best be described as the belief that a physically correct theory exists and may eventually be discovered by the research programme conducted under that name.

From the time Feynman spoke through the 1990s, the goal toward which string theorists were working was well-defined: to find a fundamental theory which reproduces at the low energy limit the successful results of the standard model of particle physics, and explains, from first principles, the values of the many (there are various ways to count them, giving slightly different totals—the author gives the number as 18 in this work) free parameters of that theory, whose values are not predicted by any theory and must be filled in by experiment. Disturbingly, theoretical work in the early years of this century has convinced an increasing number of string theorists (but not all) that the theory (whatever it may turn out to be) will not predict a unique low energy limit (or “vacuum state”), but rather an immense “landscape” of possible universes, with estimates like 10^100 and 10^500 and even more bandied around (by comparison, there are only about 10^80 elementary particles in the entire observable universe—a minuscule number compared to such as these). Most of these possible universes would be hideously inhospitable to intelligent life as we know and can imagine it (but our imagination may be limited), and hence it is said that the reason we find ourselves in one of the rare universes which contain galaxies, chemistry, biology, and the National Science Foundation is due to the anthropic principle: a statement, bordering on tautology, that we can only observe conditions in the universe which permit our own existence, and that perhaps either in a “multiverse” of causally disjoint or parallel realities, all the other possibilities exist as well, most devoid of observers, at least those like ourselves (triune glorgs, feeding on bare colour in universes dominated by quark-gluon plasma would doubtless deem our universe unthinkably cold, rarefied, and dead).

But adopting the “landscape” view means abandoning the quest for a theory of everything and settling for what amounts to a “theory of anything”. For even if string theorists do manage to find one of those 10^100 or whatever solutions in the landscape which perfectly reproduces all the experimental results of the standard model (and note that this is something nobody has ever done and appears far out of reach, with legitimate reasons to doubt it is possible at all), then there will almost certainly be a bewildering number of virtually identical solutions with slightly different results, so that any plausible experiment which measures a quantity to more precision or discovers a previously unknown phenomenon can be accommodated within the theory simply by tuning one of its multitudinous dials and choosing a different solution which agrees with the experimental results. This is not what many of the generation who built the great intellectual edifice of the standard model of particle physics would have considered doing science.

Now if string theory were simply a chimæra being pursued by a small band of double-domed eccentrics, one wouldn't pay it much attention. Science advances by exploring lots of ideas which may seem crazy at the outset and discarding the vast majority which remain crazy after they are worked out in more detail. Whatever remains, however apparently crazy, stays in the box as long as its predictions are not falsified by experiment. It would be folly of the greatest magnitude, comparable to attempting to centrally plan the economy of a complex modern society, to try to guess in advance, by some kind of metaphysical reasoning, which ideas were worthy of exploration. The history of the S-matrix or “bootstrap” theory of the strong interactions recounted in chapter 11 is an excellent example of how science is supposed to work. A beautiful theory, accepted by a large majority of researchers in the field, which was well in accord with experiment and philosophically attractive, was almost universally abandoned in a few years after the success of the quark model in predicting new particles and the stunning deep inelastic scattering results at SLAC in the 1970s.

String theory, however, despite not having made a single testable prediction after more than thirty years of investigation, now seems to risk becoming a self-perpetuating intellectual monoculture in theoretical particle physics. Among the 22 tenured professors of theoretical physics in the leading six faculties in the United States who received their PhDs after 1981, fully twenty specialise in string theory (although a couple now work on the related brane-world models). These professors employ graduate students and postdocs who work in their area of expertise, and when a faculty position opens up, may be expected to support candidates working in fields which complement their own research. This environment creates a great incentive for talented and ambitious students aiming for one of the rare permanent academic appointments in theoretical physics to themselves choose string theory, as that's where the jobs are. After a generation, this process runs the risk of operating on its own momentum, with nobody in a position to step back and admit that the entire string theory enterprise, judged by the standards of genuine science, has failed, and does not merit the huge human investment by the extraordinarily talented and dedicated people who are pursuing it, nor the public funding it presently receives. If Edward Witten believes there's something still worth pursuing, fine: his self-evident genius and massive contributions to mathematical physics more than justify supporting his work. But this enterprise which is cranking out hundreds of PhDs and postdocs who are spending their most intellectually productive years learning a fantastically complicated intellectual structure with no grounding whatsoever in experiment, most of whom will have no hope of finding permanent employment in the field they have invested so much to aspire toward, is much more difficult to justify or condone.

The problem, to state it in a manner more inflammatory than the measured tone of the author, and in a word of my choosing which I do not believe appears at all in his book, is that contemporary academic research in high energy particle theory is corrupt. As is usually the case with such corruption, the root cause is socialism, although the look-only-left blinders almost universally worn in academia today hide this from most observers there. Dwight D. Eisenhower, however, twigged to it quite early. In his farewell address of January 17th, 1961, which academic collectivists endlessly cite for its (prescient) warning about the “military-industrial complex”, he went on to say, although this is rarely quoted,

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

And there, of course, is precisely the source of the corruption. This enterprise of theoretical elaboration is funded by taxpayers, who have no say in how their money, taken under threat of coercion, is spent. Which researchers receive funds for what work is largely decided by the researchers themselves, acting as peer review panels. While peer review may work to vet scientific publications, as soon as money becomes involved, the disposition of which can make or break careers, all the venality and naked self- and group-interest which has undone every well-intentioned experiment in collectivism since Robert Owen comes into play, with the completely predictable and tediously repeated results. What began as an altruistic quest driven by intellectual curiosity to discover answers to the deepest questions posed by nature ends up, after a generation of grey collectivism, as a jobs program. In a sense, string theory can be thought of like that other taxpayer-funded and highly hyped program, the space shuttle, which is hideously expensive, dangerous to the careers of those involved with it (albeit in a more direct manner), supported by a standing army composed of some exceptional people and a mass of the mediocre, difficult to close down because it has carefully cultivated a constituency whose own self-interest is invested in continuation of the program, and almost completely unproductive of genuine science.

One of the author's concerns is that the increasingly apparent impending collapse of the string theory edifice may result in the de-funding of other promising areas of fundamental physics research. I suspect he may under-estimate how difficult it is to get rid of a government program, however absurd, unjustified, and wasteful it has become: consider the space shuttle, or mohair subsidies. But perhaps de-funding is precisely what is needed to eliminate the corruption. Why should U.S. taxpayers be spending on the order of thirty million dollars a year on theoretical physics not only devoid of any near- or even distant-term applications, but also mostly disconnected from experiment? Perhaps if theoretical physics returned to being funded by universities from their endowments and operating funds, and by money raised from patrons and voluntarily contributed by the public interested in the field, it would be, albeit a much smaller enterprise, a more creative and productive one. Certainly it would be more honest. Sure, there may be some theoretical breakthrough we might not find for fifty years instead of twenty with massive subsidies. But so what? The truth is out there, somewhere in spacetime, and why does it matter (since it's unlikely in the extreme to have any immediate practical consequences) how soon we find it, anyway? And who knows, it's just possible a research programme composed of the very, very best, whose work is of such obvious merit and creativity that it attracts freely-contributed funds, exploring areas chosen solely on their merit by those doing the work, and driven by curiosity instead of committee group-think, might just get there first. That's the way I'd bet.

For a book addressed to a popular audience, and one which contains not a single equation, many readers will find it quite difficult. If you don't follow these matters in some detail, you may find some of the more technical chapters rather bewildering. (The author, to be fair, acknowledges this at the outset.) For example, if you don't know what the hierarchy problem is, or why it is important, you probably won't be able to figure it out from the discussion here. On the other hand, policy-oriented readers will have little difficulty grasping the problems with the string theory programme and its probable causes even if they skip the gnarly physics and mathematics. An entertaining discussion of some of the problems of string theory, in particular the question of “background independence”, in which the string theorists universally assume the existence of a background spacetime which general relativity seems to indicate doesn't exist, may be found in Carlo Rovelli's “A Dialog on Quantum Gravity”. For more technical details, see Lee Smolin's Three Roads to Quantum Gravity. There are some remarkable factoids in this book, one of the most stunning being that the proposed TeV class muon colliders of the future will produce neutrino (yes, neutrino) radiation which is dangerous to humans off-site. I didn't believe it either, but look here—imagine the sign: “DANGER: Neutrino Beam”!

A U.S. edition is scheduled for publication at the end of September 2006. The author has operated the Not Even Wrong Web log since 2004; it is an excellent source for news and gossip on these issues. The unnamed “excitable … Harvard faculty member” mentioned on p. 227 and elsewhere is Luboš Motl (who is, however, named in the acknowledgements), and whose own Web log is always worth checking out.

 Permalink

Bartlett, Bruce. Impostor. New York: Doubleday, 2006. ISBN 0-385-51827-7.
This book is a relentless, uncompromising, and principled attack on the administration of George W. Bush by an author whose conservative credentials are impeccable and whose knowledge of economics and public finance is authoritative; he was executive director of the Joint Economic Committee of Congress during the Reagan administration and later served in the Reagan White House and in the Treasury Department under the first president Bush. For the last ten years he was a Senior Fellow at the National Center for Policy Analysis, which fired him in 2005 for writing this book.

Bartlett's primary interest is economics, and he focuses almost exclusively on the Bush administration's spending and tax policies here, with foreign policy, the wars in Afghanistan and Iraq, social policy, civil liberties, and other contentious issues discussed only to the extent they affect the budget. The first chapter, titled “I Know Conservatives, and George W. Bush Is No Conservative”, states the central thesis, which is documented by detailed analysis of the collapse of the policy-making process in Washington, the expensive and largely ineffective tax cuts, the ruinous Medicare prescription drug program (and the shameful way in which its known costs were covered up while the bill was rammed through Congress), the abandonment of free trade whenever there were votes to be bought, the explosion in regulation, and the pork-packed spending frenzy in the Republican-controlled House and Senate which Bush has done nothing to restrain (he is the first president since John Quincy Adams to serve a full four-year term and never veto a single piece of legislation). All of this is documented in almost 80 pages of notes and source references.

Bartlett is a “process” person as well as a policy wonk, and he diagnoses the roots of many of the problems as due to the Bush White House's resembling a third and fourth Nixon administration. There is the same desire for secrecy, the intense value placed on personal loyalty, the suppression of active debate in favour of a unified line, isolation from outside information and opinion, an attempt to run everything out of the White House, bypassing the policy shops and resources in the executive departments, and the paranoia induced by uniformly hostile press coverage and detestation by intellectual elites. Also Nixonesque is the free-spending attempt to buy the votes, at whatever the cost or long-term consequences, of members of groups who are unlikely in the extreme to reward Republicans for their largesse because they believe they'll always get a better deal from the Democrats.

The author concludes that the inevitable economic legacy of the Bush presidency will be large tax increases in the future, perhaps not on Bush's watch, but correctly identified as the consequences of his irresponsibility when they do come to pass. He argues that the adoption of a European-style value-added tax (VAT) is the “least bad” way to pay the bill when it comes due. The long-term damage done to conservatism and the Republican party are assessed, along with prospects for the post-Bush era.

While Bartlett was one of the first prominent conservatives to speak out against Bush, he is hardly alone today, with disgruntlement on the right seemingly restrained mostly due to lack of alternatives. And that raises a question on which this book is silent: if Bush has governed (at least in domestic economic policy) irresponsibly, incompetently, and at variance with conservative principles, what other potential candidate could have been elected instead who would have been the true heir of the Reagan legacy? Al Gore? John Kerry? John McCain? Steve Forbes? What plausible candidate in either party seems inclined and capable of turning things around instead of making them even worse? The irony, and a fundamental flaw of Empire, seems to be that empires don't produce the kind of leaders who built them, or who are required to avert their decline. It's fundamentally a matter of crunchiness and sogginess, and it's why empires don't last forever.

 Permalink

Ortega y Gasset, José. The Revolt of the Masses. New York: W. W. Norton, [1930, 1932, 1964] 1993. ISBN 0-393-31095-7.
This book, published more than seventy-five years ago, when the twentieth century was only three decades old, is a simply breathtaking diagnosis of the crises that manifested themselves in that century and the prognosis for human civilisation. The book was published in Spanish in 1930; this English translation, authorised and approved by the author, by a translator who requested to remain anonymous, first appeared in 1932 and has been in print ever since.

I have encountered few works so short (just 190 pages) which are so densely packed with enlightening observations and thought-provoking ideas. When I read a book, if I encounter a paragraph that I find striking, either in the writing or the idea it embodies, I usually add it to my “quotes” archive for future reference. If I did so with this book, I would find myself typing in a large portion of the entire text. This is not an easy read, not due to the quality of the writing and translation (which are excellent), nor the complexity of the concepts and arguments therein, but simply due to the sheer number of insights packed in here, each of which makes you stop and ponder its derivation and implications.

The essential theme of the argument anticipated the crunchy/soggy analysis of society by more than 65 years. In brief, over-achieving self-motivated elites create liberal democracy and industrial economies. Liberal democracy and industry lead to the emergence of the “mass man”, self-defined as not of the elite and hostile to existing elite groups and institutions. The mass man, by strength of numbers and through the democratic institutions which enabled his emergence, seizes the levers of power and begins to use the State to gratify his immediate desires. But, unlike the elites who created the State, the mass man does not think or plan in the long term, and is disinclined to make the investments and sacrifices which were required to create the civilisation in the first place, and remain necessary if it is to survive. In this consists the crisis of civilisation, and grasping this single concept explains much of the history of the seven decades which followed the appearance of the book and events today. Suddenly some otherwise puzzling things start to come into focus, such as why it is that, in a world enormously more wealthy than that of the nineteenth century, with abundant and well-educated human resources and technological capabilities which dwarf those of that epoch, there seems to be so little ambition to undertake large-scale projects, and why those which are embarked upon are so often bungled.

In a single footnote on p. 119, Ortega y Gasset explains what the brilliant Hans-Hermann Hoppe spent an entire book doing: why hereditary monarchies, whatever their problems, are usually better stewards of the national patrimony than democratically elected leaders. On pp. 172–186 he explains the curious drive toward European integration which has motivated conquerors from Napoleon through Hitler, and collectivist bureaucratic schemes such as the late, unlamented Soviet Union and the odious present-day European Union. On pp. 188–190 he explains why a cult of youth emerges in mass societies, and why they produce as citizens people who behave like self-indulgent perpetual adolescents. In another little single-sentence footnote on p. 175 he envisions the disintegration of the British Empire, then at its zenith, and the cultural fragmentation of the post-colonial states. I'm sure that few of the author's intellectual contemporaries could have imagined their descendants living among the achievements of Western civilisation yet largely ignorant of its history or cultural heritage; the author nails it in chapters 9–11, explaining why it was inevitable and tracing the consequences for the civilisation, then in chapter 12 he forecasts the fragmentation of science into hyper-specialised fields and the implications of that. On pp. 184–186 he explains the strange attraction of Soviet communism for European intellectuals who otherwise thought themselves individualists—recall, this is but six years after the death of Lenin. And still there is more…and more…and more. This is a book you can probably re-read every year for five years in a row and get something more out of it every time.

A full-text online edition is available, which is odd since the copyright of the English translation was last renewed in 1960 and should still be in effect, yet the site which hosts this edition claims that all their content is in the public domain.

 Permalink

Weinberger, Sharon. Imaginary Weapons. New York: Nation Books, 2006. ISBN 1-56025-849-7.

A nuclear isomer is an atomic nucleus which, due to having a greater spin, different shape, or differing alignment of the spin orientation and axis of symmetry, has more internal energy than the ground state nucleus with the same number of protons and neutrons. Nuclear isomers are usually produced in nuclear fusion reactions when the addition of protons and/or neutrons to a nucleus in a high-energy collision leaves it in an excited state. Hundreds of nuclear isomers are known, but the overwhelming majority decay with gamma ray emission in about 10⁻¹⁴ seconds. In a few species, however, this almost instantaneous decay is suppressed for various reasons, and metastable isomers exist with half-lives ranging from 10⁻⁹ seconds (one nanosecond), to the isomer Tantalum-180m, which has a half-life of at least 10¹⁵ years and may be entirely stable; it is the only nuclear isomer found in nature and accounts for about one atom in 8300 of tantalum metal.

Some metastable isomers with intermediate half-lives have a remarkably large energy compared to the ground state and emit correspondingly energetic gamma ray photons when they decay. The Hafnium-178m2 (the “m2” denotes the second lowest energy isomeric state) nucleus has a half-life of 31 years and decays (through the m1 state) with the emission of 2.45 MeV in gamma rays. Now the fact that there's a lot of energy packed into a radioactive nucleus is nothing new—people were calculating the energy of disintegrating radium and uranium nuclei at the end of the 19th century, but all that energy can't be used for much unless you can figure out some way to release it on demand—as long as it just dribbles out at random, you can use it for some physics experiments and medical applications, but not to make loud bangs or turn turbines. It was only the discovery of the fission chain reaction, where the fission of certain nuclei liberates neutrons which trigger the disintegration of others in an exponential process, which made nuclear energy, for better or for worse, accessible.

So, as long as there is no way to trigger the release of the energy stored in a nuclear isomer, it is nothing more than an odd kind of radioactive element, the subject of a reasonably well-understood and somewhat boring topic in nuclear physics. If, however, there were some way to externally trigger the decay of the isomer to the ground state, then the way would be open to releasing the energy in the isomer at will. It is possible to trigger the decay of the Tantalum-180 isomer by 2.8 MeV photons, but the energy required to trigger the decay is vastly greater than the 0.075 MeV it releases, so the process is simply an extremely complicated and expensive way to waste energy.

Researchers in the small community interested in nuclear isomers were stunned when, in the January 25, 1999 issue of Physical Review Letters, a paper by Carl Collins and his colleagues at the University of Texas at Dallas reported they had triggered the release of 2.45 MeV in gamma rays from a sample of Hafnium-178m2, sitting on a styrofoam cup, by irradiating it with a second-hand dental X-ray machine. Their report implied, even with the crude apparatus, an energy gain of sixty times break-even, which was more than a million times the rate predicted by nuclear theory, if triggering were possible at all. The result, if real, could have substantial technological consequences: the isomer could be used as a nuclear battery, which could store energy and release it on demand with a density which dwarfed that of any chemical battery and was only a couple of orders of magnitude less than a fission bomb. And, speaking of bombs, if you could manage to trigger a mass of hafnium all at once or arrange for it to self-trigger in a chain reaction, you could make a variety of nifty weapons out of it, including a nuclear hand grenade with a yield of two kilotons. You could also build a fission-free trigger for a thermonuclear bomb which would evade all of the existing nonproliferation safeguards which are aimed at controlling access to fissile material. These are the kind of things that get the attention of folks in that big five-sided building in Arlington, Virginia.
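To put those claims in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, not from the book, using the 2.45 MeV per nucleus figure above and standard round numbers for TNT and fission):

```python
# Rough energy-density comparison for the hafnium isomer. The 2.45 MeV
# per nucleus comes from the text; the TNT and fission figures are
# conventional approximations, not taken from the book.
AVOGADRO = 6.022e23  # atoms per mole
MEV_J = 1.602e-13    # joules per MeV

atoms_per_gram = AVOGADRO / 178           # mass number 178
e_isomer = 2.45 * MEV_J * atoms_per_gram  # J released per gram of pure isomer
e_tnt = 4.184e3                           # J per gram of TNT
e_fission = 8.2e10                        # ~J per gram of U-235 fully fissioned

print(f"Hf-178m2:    {e_isomer:.2e} J/g")
print(f" vs. TNT:     {e_isomer / e_tnt:.1e} times as much")
print(f" vs. fission: {e_fission / e_isomer:.0f} times less")
```

This works out to roughly 1.3 gigajoules per gram of pure isomer: some hundreds of thousands of times the energy density of TNT, yet about sixty-fold below complete fission of U-235, consistent with “only a couple of orders of magnitude less than a fission bomb”.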

And so it came to pass, in a Pentagon bent on “transformational technologies” and concerned with emerging threats from potential adversaries, that in May of 2003 a Hafnium Isomer Production Panel (HIPP) was assembled to draw up plans for bulk production of the substance, with visions of nuclear hand grenades, clean bunker-busting fusion bombs, and even hafnium-powered bombers floating before the eyes of the out-of-the-box thinkers at DARPA, who envisioned a two-year budget of USD 30 million for the project—military science marches into the future. What's wrong with this picture? Well, actually rather a lot of things.

  • No other researcher had been able to reproduce the results from the original experiment. This included a team of senior experimentalists who used the Advanced Photon Source at Argonne National Laboratory and state-of-the-art instrumentation and found no evidence whatsoever for triggering of the hafnium isomer with X-rays—in two separate experiments.
  • As noted above, well-understood nuclear theory predicted the yield from triggering, if it occurred, to be six orders of magnitude less than reported in Collins's paper.
  • An evaluation of the original experiment by the independent JASON group of senior experts in 1999 determined the result to be “a priori implausible” and “inconclusive, at best”.
  • A separate evaluation by the Institute for Defense Analyses concluded the original paper reporting the triggering results “was flawed and should not have passed peer review”.
  • Collins had never run, and refused to run, a null experiment with ordinary hafnium to confirm that the very small effect he reported went away when the isomer was removed.
  • James Carroll, one of the co-authors of the original paper, had obtained nothing but null results in his own subsequent experiments on hafnium triggering.
  • Calculations showed that even if triggering were to be possible at the reported rate, the process would not come close to breaking even: more than six times as much X-ray energy would go in as gamma rays came out.
  • Even if triggering worked, and some way were found to turn it into an energy source or explosive device, the hafnium isomer does not occur in nature and would have to be made by a hideously inefficient process in a nuclear reactor or particle accelerator, at a cost estimated at around a billion dollars per gram. The explosive in the nuclear hand grenade would cost tens of billions of dollars, compared to which highly enriched uranium and plutonium are cheap as dirt.
  • If the material could be produced and triggering made to work, the resulting device would pose an extreme radiation hazard. Radioactivity is inversely proportional to half-life, and the hafnium isomer, with a 31-year half-life, is vastly more radioactive than U-235 (700 million years) or Pu-239 (24,000 years). Further, hafnium isomer decays emit gamma rays, which are the most penetrating form of ionising nuclear radiation and the most difficult against which to shield. The shielding required to protect humans in the vicinity of a tangible quantity of hafnium isomer would more than negate its small mass and compact size.
  • A hafnium explosive device would disperse large quantities of the unreacted isomer (since a relatively small percentage of the total explosive can react before the device is disassembled in the explosion). As it turns out, the half-life of the isomer is just about the same as that of Cesium-137, which is often named as the prime candidate for a “dirty” radiological bomb. One physicist on the HIPP (p. 176) described a hafnium weapon as “the mother of all dirty bombs”.
  • And consider that hand grenade, which would weigh about five pounds. How far can you throw a five pound rock? What do you think about being that far away from a detonation with the energy of two thousand tons of TNT, all released in prompt gamma rays?
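The radiation-hazard point is easy to check with a few lines of arithmetic (again my own sketch, using the half-lives quoted above): the specific activity of a pure nuclide, in decays per second per gram, is just ln 2 divided by the half-life, times the number of atoms per gram.

```python
import math

AVOGADRO = 6.022e23
YEAR_S = 3.156e7  # seconds per year

def specific_activity(half_life_years, mass_number):
    """Decays per second (Bq) per gram: (ln 2 / T_half) * (N_A / M)."""
    decay_constant = math.log(2) / (half_life_years * YEAR_S)
    return decay_constant * AVOGADRO / mass_number

# Half-lives as given in the review above.
for name, t_half, mass in [("Hf-178m2", 31.0, 178),
                           ("Cs-137", 30.1, 137),
                           ("Pu-239", 24_110.0, 239),
                           ("U-235", 7.04e8, 235)]:
    print(f"{name:9s} {specific_activity(t_half, mass):9.2e} Bq/g")
```

Running this gives about 2.4×10¹² Bq per gram for the hafnium isomer: roughly a thousand times the activity of Pu-239, tens of millions of times that of U-235, and in the same league as Cs-137—consistent with the “mother of all dirty bombs” remark.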

But bad science, absurd economics, a nonexistent phenomenon, damning evaluations by panels of authorities, lack of applications, and ridiculous radiation risk in the extremely improbable event of success pose no insurmountable barriers to a government project once it gets up to speed, especially one in which the relationships between those providing the funding and its recipients are complicated and cozy to an unseemly degree. It took an exposé in the Washington Post Magazine by the author and subsequent examination in Congress to finally drive a stake through this madness—maybe. As of the end of 2005, although DARPA was out of the hafnium business (at least publicly), there were rumours of continued funding thanks to a Congressional earmark in the Department of Energy budget.

This book is a well-researched and fascinating look inside the defence underworld where fringe science feeds on federal funds, and starkly demonstrates how weird and wasteful things can get when Pentagon bureaucrats disregard their own science advisors and substitute instinct and wishful thinking for the tedious, but ultimately reliable, scientific method. Many aspects of the story are also quite funny, although U.S. taxpayers who footed the bill for this madness may be less amused. The author has set up a Web site for the book, and Carl Collins, who conducted the original experiment with the dental X-ray and styrofoam cup which incited the mania, has responded with his own, almost identical in appearance, riposte. If you're interested in more technical detail on the controversy than appears in Weinberger's book, the Physics Today article from May 2004 is an excellent place to start. The book contains a number of typographical and factual errors, none of which are significant to the story, but when the first line of the Author's Note uses “sited” when “cited” is intended, and in the next paragraph “wondered” instead of “wandered”, you have to—wonder.

It is sobering to realise that this folly took place entirely in the public view: in the open scientific literature, university labs, unclassified defence funding subject to Congressional oversight, and ultimately in the press, and yet over a period of years millions in taxpayer funds were squandered on nonsense. Just imagine what is going on in highly-classified “black” programs.

 Permalink

July 2006

Herrmann, Alexander. Herrmann's Book of Magic. Chicago: Frederick J. Drake & Co., 1903. LCCN 05035787.
When you were a kid, did your grandfather ever pull a coin from his pocket, clap his hands together and make it disappear, then “find” it behind your ear, sending you off to the Popsicle truck for a summer evening treat? If so, and you're now grandparent age yourself, this may be the book from which he learned that trick. Alexander Herrmann was a prominent stage magician in the latter half of the nineteenth century. In this 1903 book, he reveals many of the secrets of the conjuror, from the fundamental sleight of hand skills of palming objects and vanishing and producing them, to the operation of famous illusions such as the disembodied head which speaks. This on-line edition, available both in HTML and Plain ASCII formats, is a complete reproduction of the book, including (in the HTML edition) all the illustrations.

If you must have a printed copy, you may find one at abebooks.com, but it will probably be expensive. It's much better to read the on-line edition produced from a copy found by Bill Walker at a yard sale and kindly contributed to produce this edition.

 Permalink

Berlinski, Claire. Menace in Europe. New York: Crown Forum, 2006. ISBN 1-4000-9768-1.
This is a scary book. The author, who writes with a broad and deep comprehension of European history and its cultural roots, and a vocabulary which reminds one of William F. Buckley, argues that the deep divide which has emerged between the United States and Europe since the end of the cold war, and particularly in the last few years, is not a matter of misunderstanding, lack of sensitivity on the part of the U.S., or the personnel, policies, and style of the Bush administration, but deeply rooted in structural problems in Europe which are getting worse, not better. (That's not to say that there aren't dire problems in the U.S. as well, but that isn't the topic here.)

Surveying the contemporary scene in the Netherlands, Britain, France, Spain, Italy, and Germany, and tracing the roots of nationalism, peasant revolts (of which “anti-globalisation” is the current manifestation), and anti-Semitism back through the centuries, she shows that what is happening in Europe today is simply Europe—the continent of too many kings and too many wars—being Europe, adapted to present-day circumstances. The impression you're left with is that Europe isn't just the “sick man of the world”, but rather a continent afflicted with half a dozen or more separate diseases, all terminal: a large, un-assimilated immigrant population concentrated in ghettos; an unsustainable welfare state; a sclerotic economy weighed down by social charges, high taxes, and ubiquitous and counterproductive regulation; a collapsing birth rate and aging population; a “culture crash” (my term), where the religions and ideologies which have structured the lives of Europeans for millennia have evaporated, leaving nothing in their place; a near-total disconnect between elites and the general population on the disastrous project of European integration, most recently manifested in the controversy over the so-called European constitution; and signs that the rabid nationalism which plunged Europe into two disastrous wars in the last century and dozens, if not hundreds of wars in the centuries before, is seeping back up through the cracks in the foundation of the dystopian, ill-conceived European Union.

In some regards, the author does seem to overstate the case, or generalise from evidence so narrow it lacks persuasiveness. The most egregious example is chapter 8, which infers an emerging nihilist neo-Nazi nationalism in Germany almost entirely based on the popularity of the band Rammstein. Well, yes, but whatever the lyrics, the message of the music, and the subliminal message of the music videos, there is a lot more going on in Germany, a nation of more than 80 million people, than the antics of a single heavy metal band, however atavistic.

U.S. readers inclined to gloat over the woes of the old continent should keep in mind the author's observation, a conclusion I had come to long before I ever opened this book, that the U.S. is heading directly for the same confluence of catastrophes as Europe, and, absent a fundamental change of course, will simply arrive at the scene of the accident somewhat later; and that's only taking into account the problems they have in common; the European economy, unlike the American, is able to function without borrowing on the order of two billion dollars a day from China and Japan.

If you live in Europe, as I have for the last fifteen years (thankfully outside, although now encircled by, the would-be empire that sprouted from Brussels), you'll probably find little here that's new, but you may get a better sense of how the problems interact with one another to make a real crisis somewhere in the future a genuine possibility. The target audience in the U.S., which is so often lectured by their elite that Europe is so much more sophisticated, nuanced, socially and environmentally aware, and rational, may find this book an eye opener; 344,955 American soldiers perished in European wars in the last century, and while it may be satisfying to say, “To Hell with Europe!”, the lesson of history is that saying so is most unwise.

An Instapundit podcast interview with the author is freely available on-line.

 Permalink

Williamson, Donald I. The Origins of Larvae. Dordrecht, The Netherlands: Kluwer Academic, 2003. ISBN 1-4020-1514-3.
I am increasingly beginning to suspect that we are living through an era which, in retrospect, will be seen, like the early years of the twentieth century, as the final days preceding revolutions in a variety of scientific fields. Precision experiments and the opening of new channels of information about the universe as diverse as the sequencing of genomes, the imminent detection of gravitational waves, and detailed measurement of the cosmic background radiation are amassing more and more discrepant data which causes scientific journeymen to further complicate their already messy “standard models”, and the more imaginative among them to think that maybe there are simple, fundamental things which we're totally missing. Certainly, when the scientific consensus is that everything we see and know about comprises less than 5% of the universe, and a majority of the last generation of theorists in high energy physics have been working on a theory which only makes sense in a universe with ten, or maybe eleven, or maybe twenty-six dimensions, there would seem to be a lot of room for an Einstein-like conceptual leap which would make everybody slap their foreheads and exclaim, “How could we have missed that!”

But still we have Darwin, don't we? If the stargazers and particle smashers are puzzled by what they see, certainly the more down-to-earth folk who look at creatures that inhabit our planet still stand on a firm foundation, don't they? Well…maybe not. Perhaps, as this book argues, not only is the conventional view of the “tree of life” deeply flawed, but the very concept of a tree, in which progenitor species always fork into descendants and the ramified branches never interact, is incorrect. (Just to clarify in advance: the author does not question the fundamental mechanism of Darwinian evolution by natural selection of inherited random variations, nor argue for some other explanation for the origin of the diversity in species on Earth. His argument is that this mechanism may not be the sole explanation for the characteristics of the many species with larval forms or discordant embryonic morphology, and that the assumption made by Darwin and his successors that evolution is a pure process of diversification [or forking of species from a common ancestor, as if companies only developed by spin-offs, and never did mergers and acquisitions] may be a simplification that, while it makes the taxonomist's job easier, is not warranted by the evidence.)

Many forms of life on Earth are not born from the egg as small versions of their adult form. Instead, they are born as larvae, which are often radically different in form from the adult. The best-known example is moths and butterflies, which hatch as caterpillars, and subsequently reassemble themselves into the winged insects which mate and produce eggs that hatch into the next generation of caterpillars. Larvae are not restricted to the Arthropoda and other icky phyla: frogs and toads are born as tadpoles and live in one body form, then transform into quite different adults. Even species, humans included, which are born as little adults, go through intermediate stages as developing embryos which have the characteristics of other, quite different species.

Now, when you look closely at this (and many will be deterred, because a great many larvae and the species they mature into are rather dreadful), you'll find a long list of curious things which have puzzled naturalists all the way back to Darwin and before. There are numerous examples of species which closely resemble one another and are classified by taxonomists in the same genus which have larvae which are entirely different from one another—so much so that if the larvae were classified by themselves, they would probably be put into different classes or phyla. There are almost identical larvae which develop into species only distantly related. Closely related species include those with one or more larval forms, and others which develop directly: hatching as small individuals already with the adult form. And there are animals which, in their adult form, closely resemble the larvae of other species.

What a mess—but then biology is usually messy! The author, an expert on marine invertebrates (from which the vast majority of examples in this book are drawn), argues that there is a simple explanation for all of these discrepancies and anomalies, one which, if you aren't a biologist yourself, may have already occurred to you—that larvae (and embryonic forms) are the result of a hybridisation or merger of two unrelated species, with the result being a composite which hatches in one form and then subsequently transforms into the other. The principle of natural selection would continue to operate on these inter-specific mergers, of course: complicating or extending the development process of an animal before it could reproduce would probably be selected out, but, on the other hand, adding a free-floating or swimming larval form to an animal whose adult crawls on the ocean bottom or remains fixed to a given location like a clam or barnacle could confer a huge selective advantage on the hybrid, and equip it to ride out mass extinction events because the larval form permitted the species to spread to marginal habitats where it could survive the extinction event.

The acquisition of a larva by successful hybridisation could spread among the original species with no larval form not purely by differential selection but like a sexually transmitted disease—in other words, like wildfire. Note that many marine invertebrates reproduce simply by releasing their eggs and sperm into the sea and letting nature sort it out; consequently, the entire ocean is a kind of promiscuous pan-specific singles bar where every pelagic and benthic creature is trying to mate, utterly indiscriminately, with every other at the whim of the wave and current. Most times, as in singles bars, it doesn't work out, but suppose sometimes it does?

You have to assume a lot of improbable things for this to make sense, the most difficult of which is that you can combine the sperm and egg of vastly different creatures and (on extremely rare occasions) end up with a hybrid which is born in the form of one and then, at some point, spontaneously transforms into the other. But ruling this out (or deciding it's plausible) requires understanding the “meta-program” of embryonic development—until we do, there's always the possibility we'll slap our foreheads when we realise how straightforward the mechanism is which makes this work.

One thing is clear: this is real science; the author makes unambiguous predictions about biology which can be tested in a variety of ways: laboratory experiments in hybridisation (on pp. 213–214 he advises those interested in how to persuade various species to release their eggs and sperm), analysis of genomes (which ought to show evidence of hybridisation in the past), and detailed comparison of adult species which are possible progenitors of larval forms with larvae of those with which they may have hybridised.

If you're insufficiently immersed in the utter weirdness of life forms on this little sphere we inhabit, there is plenty here to astound you. Did you know, for example, about Owenia fusiformis (p. 72), whose “cataclysmic metamorphosis” puts the chest-burster of Alien to shame? The larva develops an emerging juvenile worm which, in less than thirty seconds, turns itself inside-out and swallows the larva, devouring it in fifteen minutes. The larva does not “develop into” the juvenile, as is often said; it is like the first stage of a rocket which is discarded after it has done its job. How could this have evolved smoothly by small, continuous changes? For sheer brrrr factor, it's hard to beat the nemertean worms, which develop from tiny larvae into adults some of which exceed thirty metres in length (p. 87).

The author is an expert, and writes for his peers. There are many paragraphs like the following (p. 189), which will send you to the glossary at the end of the text (don't overlook it—otherwise you'll spend lots of time looking up things on the Web).

Adult mantis shrimp (Stomatopoda) live in burrows. The five anterior thoracic appendages are subchelate maxillipeds, and the abdomen bears pleopods and uropods. Some hatch as antizoeas: planktonic larvae that swim with five pairs of biramous thoracic appendages. These larvae gradually change into pseudozoeas, with subchelate maxillipeds and with four or five pairs of natatory pleopods. Other stomatopods hatch as pseudozoeas. There are no uropods in the larval stages. The lack of uropods and the form of the other appendages contrasts with the condition in decapod larvae. It seems improbable that stomatopod larvae could have evolved from ancestral forms corresponding to zoeas and megalopas, and I suggest that the Decapoda and the Stomatopoda acquired their larvae from different foreign sources.
In addition to the zoö-jargon, another deterrent to reading this book is the cost: a list price of USD 109, quoted at Amazon.com at this writing at USD 85, which is a lot of money for a 260-page monograph, however superbly produced and notwithstanding its small potential audience; so fascinating and potentially significant is the content that one would happily part with USD 15 to read a PDF, but at prices like this one's curiosity becomes constrained by the countervailing virtue of parsimony. Still, if Williamson is right, some of the fundamental assumptions underlying our understanding of life on Earth for the last century and a half may be dead wrong, and if his conjecture stands the test of experiment, we may have at hand an understanding of mysteries such as the Cambrian explosion of animal body forms and the apparent “punctuated equilibria” in the fossil record. There is a Nobel Prize here for somebody who confirms that this supposition is correct. Lynn Margulis, whose own theory of the origin of eukaryotic cells through the incorporation of previously free-living organisms as endosymbionts is now becoming the consensus view, co-authors a foreword which endorses Williamson's somewhat similar view of larvae.


Reasoner, James. Draw: The Greatest Gunfights of the American West. New York: Berkley, 2003. ISBN 0-425-19193-1.
The author is best known as a novelist, author of a bookshelf full of yarns, mostly set in the Wild West, but also in the War Between the States and World War II. In this, his first work of nonfiction after twenty-five years as a writer, he sketches in 31 short chapters (of less than ten pages average length, with a number including pictures) the careers and climactic (and often career-ending) conflicts of the best known gunslingers of the Old West, as well as many lesser-known figures, some of whom were just as deadly and, in their own time, notorious. Here are tales of Wyatt Earp, Doc Holliday, the Dalton Gang, Bat Masterson, Bill Doolin, Pat Garrett, John Wesley Hardin, Billy the Kid, and Wild Bill Hickok; but also Jim Levy, the Jewish immigrant from Ireland who was considered by both Earp and Masterson to be one of the deadliest gunfighters in the West; Henry Starr, who robbed banks from the 1890s until his death in a shoot-out in 1921, pausing in mid-career to write, direct, and star in a silent movie about his exploits, A Debtor to the Law; and Ben Thompson, whom Bat Masterson judged to be the fastest gun in the West, and who was, at various times, an Indian fighter, Confederate cavalryman, mercenary for Emperor Maximilian of Mexico, gambler, gunfighter,…and chief of police of Austin, Texas. Many of the characters who figure here worked both sides of the law, in some cases concurrently.

The author does not succumb to the temptation to glamorise these mostly despicable figures, nor the tawdry circumstances in which so many met their ends. (Many, but not all: Bat Masterson survived a career as deputy sheriff in Dodge City, sheriff of Ford County, Kansas, Marshal of Trinidad, Colorado, and as itinerant gambler in the wildest towns of the West, to live the last twenty years of his life in New York City, working as sports editor and columnist for a Manhattan newspaper.) Reasoner does, however, attempt to spice up the narrative with frontier lingo (whether genuine or bogus, I know not): lawmen and “owlhoots” (outlaws) are forever slappin' leather, loosing or dodging hails of lead, getting thrown in the hoosegow, or seeking the comfort of the soiled doves who plied their trade above the saloons. This can become tedious if you read the book straight through; it's better enjoyed a chapter at a time spread out over an extended period. The chapters are completely independent of one another (although there are a few cross-references), and may be read in any order. In fact, they read like a collection of magazine columns, but there is no indication in the book they were ever previously published. There is a ten page bibliography citing sources for each chapter but no index—this is a substantial shortcoming since many of the chapter titles do not name the principals in the events they describe, and since the paths of the most famous gunfighters crossed frequently, their stories are spread over a number of chapters.


Lloyd, Seth. Programming the Universe. New York: Alfred A. Knopf, 2006. ISBN 1-4000-4092-2.
The author has devoted his professional career to exploring the deep connections between information processing and the quantum mechanical foundations of the universe. Although his doctorate is in physics, he is a professor of mechanical engineering at MIT, which I suppose makes him an honest-to-God quantum mechanic. A pioneer in the field of quantum computation, he suggested the first physically realisable quantum computational device, and is author of the landmark papers which evaluated the computational power of the “ultimate laptop”: a computer which, if its one kilogram of mass and one litre of volume crunched any faster, would collapse into a black hole; estimated the computational capacity of the entire visible universe; and explored how gravitation and spacetime could be emergent properties of a universal quantum computation.
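
The “ultimate laptop” figure comes from the Margolus–Levitin theorem, which bounds how fast any physical system can cycle through distinguishable states. Here is a minimal sketch of the arithmetic, using standard CODATA constants and assuming, as Lloyd does, that all of the machine's rest energy is available to drive computation:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J·s
C = 2.997_924_58e8        # speed of light, m/s

def max_ops_per_second(mass_kg: float) -> float:
    """Margolus-Levitin bound: a system of energy E can perform at most
    2E/(pi*hbar) logical operations per second.  Here E = m*c^2, i.e.
    the entire rest energy is assumed to drive computation."""
    energy = mass_kg * C ** 2
    return 2 * energy / (math.pi * HBAR)

print(f"{max_ops_per_second(1.0):.2e}")  # ≈ 5.43e+50 ops/s for the 1 kg laptop
```

About 5×10^50 operations per second for one kilogram—any attempt to do better within one litre runs into the black-hole limit the review mentions.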

In this book, he presents these concepts to a popular audience, beginning by explaining the fundamentals of quantum mechanics and the principles of quantum computation, before moving on to the argument that the universe as a whole is a universal quantum computer whose future cannot be predicted by any simulation less complicated than the universe as a whole, nor any faster than the future actually evolves (a concept reminiscent of Stephen Wolfram's argument in A New Kind of Science [August 2002], but phrased in quantum mechanical rather than classical terms). He argues that all of the complexity we observe in the universe is the result of the universe performing a computation whose input is the random fluctuations created by quantum mechanics. But, unlike the proverbial monkeys banging on typewriters, the quantum mechanical primate fingers are, in effect, typing on the keys of a quantum computer which, like the cellular automata of Wolfram's book, has the capacity to generate extremely complex structures from very simple inputs. Why was the universe so simple shortly after the big bang? Because it hadn't had the time to compute very much structure. Why is the universe so complicated today? Because it's had sufficient time to perform 10^122 logical operations up to the present.

I found this book, on the whole, a disappointment. Having read the technical papers cited above before opening it, I didn't expect to learn any additional details from a popularisation, but I did hope the author would provide a sense for how the field evolved, where he saw this research programme going in the future, and how it might (or might not) fit with other approaches to the unification of quantum mechanics and gravitation. There are some interesting anecdotes about the discovery of the links between quantum mechanics, thermodynamics, statistical mechanics, and information theory, and the personalities involved in that work, but one leaves the book without any sense for where future research might be going, nor how these theories might be tested by experiment in the near or even distant future. The level of the intended audience is difficult to discern. Unlike some popularisers of science, Lloyd does not shrink from using equations where they clarify physical relationships and even introduces and uses Dirac's “bra-ket” notation (for example, ⟨φ|ψ⟩), yet almost everywhere he writes a number in scientific notation, he also gives it in the utterly meaningless form of (p. 165) “100 billion billion billion billion billion billion billion billion billion billion” (OK, I've done that myself, on one occasion, but I was having fun at the expense of a competitor). And finally, I find it dismaying that a popular science book by a prominent researcher published by a house as respectable as Knopf at a cover price of USD 26 lacks an index—this is a fundamental added value that the reader deserves when parting with this much money (especially for a book of only 220 pages). If you know nothing about these topics, this volume will probably leave you only more confused, and possibly over-optimistic about the state of quantum computation.
If you've followed the field reasonably closely, the author's professional publications (most available on-line), which are lucidly written and accessible to the non-specialist, may be more rewarding.

I remain dubious about grandiose claims for quantum computation, and nothing in this book dispelled my scepticism. From Democritus all the way to the present day, every single scientific theory which assumed the existence of a continuum has been proved wrong when experiments looked more closely at what was really going on. Yet quantum mechanics, albeit a statistical theory at the level of measurement, is completely deterministic and linear in the evolution of the wave function, with amplitudes given by continuous complex values which embody, theoretically, an infinite amount of information. Where is all this information stored? The Bekenstein bound gives an upper limit on the amount of information which can be represented in a given volume of spacetime, and that implies that even if the quantum state were stored nonlocally in the entire causally connected universe, the amount of information would still be finite (albeit enormous). Extreme claims for quantum computation assume you can linearly superpose any number of wave functions and thus encode as much information as you like in a single computation. The entire history of science, and of quantum mechanics itself, makes me doubt that this is so—I'll bet that we eventually find some inherent granularity in the precision of the wave function (perhaps round-off errors in the simulation we're living within, but let's not revisit that). This is not to say, nor do I mean to imply, that quantum computation will not work; indeed, it has already been demonstrated in proof-of-concept laboratory experiments, and it may well hold the potential of extending the growth of computational power after the pure scaling of classical computers runs into physical limits.
But just as shrinking semiconductor devices is fundamentally constrained by the size of atoms, quantum computation may be limited by the ultimate precision of the discrete computational substrate of the universe which behaves, on the large scale, like a continuous wave function.
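
The finiteness argument can be made concrete with the Bekenstein bound, which limits the information a bounded region can hold to I ≤ 2πRE/(ħc ln 2) bits. The numbers plugged in below—one kilogram of rest energy confined to a 10 cm sphere—are my own illustrative choices, not figures from the book:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J·s
C = 2.997_924_58e8        # speed of light, m/s

def bekenstein_bits(radius_m: float, energy_j: float) -> float:
    """Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2) bits for a
    system of energy E enclosed in a sphere of radius R."""
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Illustrative case: 1 kg of rest energy (E = m*c^2) in a 10 cm sphere.
bits = bekenstein_bits(0.1, 1.0 * C ** 2)
print(f"{bits:.2e}")  # ≈ 2.58e+42 bits: enormous, but finite
```

Vast as that number is, it is not infinite—which is the crux of the objection to encoding unbounded information in a continuous wave function.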


Ponnuru, Ramesh. The Party of Death. Washington: Regnery Publishing, 2006. ISBN 1-59698-004-4.
One party government is not a pretty thing. Just as competition in the marketplace reins in the excesses of would-be commercial predators (while monopoly encourages them to do their worst), long-term political dominance by a single party inevitably leads to corruption, disconnection of the ruling elites from their constituents, and unsustainable policy decisions which are destructive in the long term; this is precisely what has eventually precipitated the collapse of most empires. In recent years the federal government of the United States has been dominated by the Republican party, with all three branches of government and both houses of the congress in Republican hands. Chapter 18 of this fact-packed book cites a statistic which provides a stunning insight into an often-overlooked aspect of the decline of the Democratic party. In 1978, Democrats held 292 seats in the House of Representatives: an overwhelming super-majority of more than two thirds. Of these Democrats, 125, more than 40%, were identified as “pro-life”—opposed to abortion on demand and federal funding of abortion. But by 2004, only 35 Democrats in the House were identified as pro-life: fewer than 18%, and the total number of Democrats had shrunk to only 203, a minority of less than 47%. It is striking to observe that over a period of 26 years the number of pro-life Democrats has dropped by 90, almost identical to the party's total loss of 89 seats.

Now, the Democratic decline is more complicated than any single issue, but as the author documents, the Democratic activist base and large financial contributors are far more radical than the American public at large on issues of human life: unrestricted and subsidised abortion, euthanasia and assisted suicide, stem cell research which destroys human embryos, and human cloning for therapeutic purposes. (The often deceptive questions used to manipulate the results of public opinion polls and the way they are spun in the overwhelmingly pro-abortion legacy media are discussed at length.) The activists and moneybags make the Democratic party a hostile environment for pro-life politicians and have, over the decades, selected them out, applying an often explicit litmus test to potential candidates, who are not allowed to deviate from absolutist positions. Their adherence to views not shared by most voters then makes them vulnerable in the general election.

Apart from the political consequences, the author examines the curious flirtation of the American left with death in all its forms—a strange alliance for a political philosophy which traditionally stressed protecting the weak and vulnerable: in the words of Hubert Humphrey (who was pro-life), “those who are in the dawn of life, the children; those who are in the twilight of life, the elderly; and those who are in the shadows of life, the sick, the needy, and the handicapped” (p. 131).

The author argues against the panoply of pro-death policies exclusively from a human rights standpoint. Religion is not mentioned except to refute the claim that pro-life policies are an attempt to impose a sectarian agenda on a secular society. The human rights argument could not be simpler to grasp: if you believe that human beings have inherent, unalienable rights, simply by being human, then what human right could conceivably be more fundamental than the right not to be killed? If one accepts this (and the paucity of explicitly pro-murder voters would seem to indicate the view is broadly shared), then the only way one can embrace policies which permit the destruction of a living human organism is to define criteria which distinguish a “person” who cannot be killed, from those who are not persons and therefore can. Thus one hears the human embryo or fetus (which has the potential of developing into an adult human) described as a “potential human”, and medical patients in a persistent vegetative state as having no personhood. Professor Peter Singer, bioethicist at the Center for Human Values at Princeton University, argues (p. 176), “[T]he concept of a person is distinct from that of a member of the species Homo sapiens, and that it is personhood, not species membership, that is most significant in determining when it is wrong to end a life.”

But the problem with drawing lines that divide unarguably living human beings into classes of persons and nonpersons is that the distinctions are rarely clear-cut. If a fetus in the first three months of pregnancy is a nonperson, then what changes on the first day of the fourth month to confer personhood on the continuously developing baby? Why not five months, or six? And if a woman in the U.S. has a constitutionally protected right to have her child killed right up until the very last part of its body emerges from the birth canal (as is, in fact, the regime in effect today in the United States, notwithstanding media dissimulation of this reality), then what's so different about killing a newborn baby if, for example, it was found to have a birth defect which was not detected in utero? Professor Singer has no problem with this at all; he enumerates a variety of prerequisites for personhood: “rationality, autonomy, and self-consciousness”, and then concludes “Infants lack these characteristics. Killing them, therefore, cannot be equated with killing normal human beings, or any other self-conscious beings.”

It's tempting to dismiss Singer as another of the many intellectual Looney Tunes which decorate the American academy, but Ponnuru defends him for having the intellectual integrity to follow the premises he shares with many absolutists on these issues all the way to their logical conclusions, which lead Singer to conclude (p. 186), “[d]uring the next 35 years, the traditional view of the sanctity of human life will collapse…. By 2040, it may be that only a rump of hard-core, know-nothing religious fundamentalists will defend the view that every human life, from conception to death, is sacrosanct.” Doesn't that sound like a wonderful world, especially for those of us who expect to live out our declining years as that brave new era dawns, at least for those suitably qualified “persons” permitted to live long enough to get there?

Many contend that such worries are simply “the old slippery slope argument”, thinking that settles the matter. But the problem is that the old slippery slope argument is often right, and in this case there is substantial evidence that it very much applies. The enlightened Dutch seem to have slid further and faster than others in the West, permitting both assisted suicide for the ill and euthanasia for seriously handicapped infants at the parents' request—in theory. In fact, it is estimated that five percent of all deaths in the Netherlands are the result of euthanasia by doctors without request (which is nominally illegal), and that five percent of infanticides occur without the request or consent of the parents, and it is seldom noted in the media that the guidelines which permit these “infanticides” actually apply to children up to the age of twelve. Perhaps that's why the Dutch are so polite—young hellions run the risk not only of a paddling but also of “post-natal abortion”. The literally murderous combination of an aging population supported by a shrinking number of working-age people, state-sanctioned euthanasia, and socialised medicine is fearful to contemplate.

These are difficult issues, and the political arena has become so polarised into camps of extremists on both sides that rational discussion and compromise seem almost impossible. This book, while taking a pro-life perspective, eschews rhetoric in favour of rational argumentation grounded in the principles of human rights which date to the Enlightenment. One advantage of applying human rights to all humans is that it's simple and easy to understand. History is rich in examples which show that once a society starts sorting people into persons and nonpersons, things generally start to go South pretty rapidly. Like it or not, these are issues which modern society is going to have to face: advances in medical technologies create situations that call for judgements people never had to make before. For those who haven't adopted one extreme position or another, and are inclined to let the messy democratic process of decision making sort this out, ideally leaving as much discretion as possible to the individuals involved, as opposed to absolutist “rights” discovered in constitutional law and imposed by judicial diktat, this unsettling book is a valuable contribution to the debate. Democratic party stalwarts are unlikely in the extreme to read it, but they ignore this message at their peril.

The book is not very well edited. There are a number of typographical errors, and on two occasions (pp. 94 and 145), the author's interpolations in the middle of extended quotations are set as if they were part of the quotation. It is well documented; there are thirty-four pages of source citations.


August 2006

Sullivan, Robert. Rats. New York: Bloomsbury, [2004] 2005. ISBN 1-58234-477-9.
Here we have one of the rarest phenomena in publishing: a thoroughly delightful best-seller about a totally disgusting topic: rats. (Before legions of rat fanciers write to berate me for bad-mouthing their pets, let me state at the outset that this book is about wild rats, not pet and laboratory rats which have been bred for docility for a century and a half. The new afterword to this paperback edition relates the story of a Brooklyn couple who caught a juvenile Bedford-Stuyvesant street rat to fill the empty cage of their recently deceased pet and, as it matured, came to regard it with such fear that they were afraid even to release it in a park lest it turn and attack them when the cage was opened—the author suggested they might consider the strategy of “open the cage and run like hell” [pp. 225–226]. One of the pioneers in the use of rats in medical research in the early years of the 20th century tried to use wild rats and concluded “they proved too savage to maintain in the laboratory” [p. 231].)

In these pages are more than enough gritty rat facts to get yourself ejected from any polite company should you introduce them into a conversation. Many misconceptions about rats are debunked, including the oft-cited estimate that the rat and human populations are about the same, which would lead to an estimate of about eight million rats in New York City—in fact, the most authoritative estimate (p. 20) puts the number at about 250,000, which is still a lot of rats, especially once you begin to appreciate what a single rat can do. (But rat exaggeration gets folks' attention: here is a politician claiming there are fifty-six million rats in New York!) “Rat stories are war stories” (p. 34), and this book teems with them, including The Rat that Came Up the Toilet, which is not an urban legend but a well-documented urban nightmare. (I'd be willing to bet that the incidence of people keeping the toilet lid closed with a brick on the top is significantly greater among readers of this book.)

It's common for naturalists who study an animal to develop sympathy for it and defend it against popular aversion: snakes and spiders, for example, have many apologists. But not rats: the author sums up by stating that he finds them “disgusting”, and he isn't alone. The great naturalist and wildlife artist John James Audubon, one of the rare painters ever to depict rats, amused himself during the last years of his life in New York City by prowling the waterfront hunting rats, having received permission from the mayor “to shoot Rats in the Battery” (p. 4).

If you want to really get to know an animal species, you have to immerse yourself in its natural habitat, and for the Brooklyn-based author, this involved no more than a subway ride to Edens Alley in downtown Manhattan, just a few blocks from the site of the World Trade Center, which was destroyed during the year he spent observing rats there. Along with rat stories and observations, he sketches the history of New York City from a ratty perspective, with tales of the arrival of the brown rat (possibly on ships carrying Hessian mercenaries to fight for the British during the War of American Independence), the rise and fall of rat fighting as popular entertainment in the city, the great garbage strike of 1968 which transformed the city into something close to heaven if you happened to be a rat, and the 1964 Harlem rent strike in which rats were presented to politicians by the strikers to acquaint them with the living conditions in their tenements.

People involved with rats tend to be outliers on the scale of human oddness, and the reader meets a variety of memorable characters, present-day and historical: rat fight impresarios, celebrity exterminators, Queen Victoria's rat-catcher, and many more. Among numerous fascinating items in this rat-fact-packed narrative is just how recent the arrival of the mis-named brown rat, Rattus norvegicus, is. (The species was named in England in 1769, in the belief that it had stowed away on ships carrying lumber from Norway. In fact, it appears to have arrived in Britain before it reached Norway.) There were no brown rats in Europe at all until the 18th century (the rats which caused the Black Death were Rattus rattus, the black rat, which followed Crusaders returning from the Holy Land). First arriving in America around the time of the Revolution, the brown rat took until 1926 to spread to every state in the United States, displacing the black rat except for some remaining in the South and West. The Canadian province of Alberta remains essentially rat-free to this day, thanks to a vigorous and vigilant rat control programme.

The number of rats in an area depends almost entirely upon the food supply available to them. A single breeding pair of rats, with an unlimited food supply and no predation or other causes of mortality, can produce on the order of fifteen thousand descendants in a single year. That makes it pretty clear that a rat population will grow until all available food is being consumed by rats (and that natural selection will favour the most aggressive individuals in a food-constrained environment). Poison or trapping can knock down the rat population in the case of a severe infestation, but without limiting the availability of food, will produce only a temporary reduction in their numbers (while driving evolution to select for rats which are immune to the poison and/or more wary of the bait stations and traps).
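
The fifteen-thousand figure is easy to sanity-check with a toy compounding model. The parameters below—a litter of eight pups per mature female each month, half of them female, females breeding from age three months, no mortality—are illustrative assumptions of mine rather than numbers from the book, but they land in the same ballpark:

```python
LITTER = 8     # pups per litter (assumed)
MATURITY = 3   # months before a female starts breeding (assumed)
MONTHS = 12    # simulate one year

newborn_females = [0] * (MONTHS + 1)  # females born in each month
mature_females = 1                    # the founding female of the pair
total_descendants = 0

for month in range(1, MONTHS + 1):
    # females born MATURITY months ago join the breeding population
    if month - MATURITY >= 1:
        mature_females += newborn_females[month - MATURITY]
    births = mature_females * LITTER   # one litter per mature female per month
    newborn_females[month] = births // 2
    total_descendants += births

print(total_descendants)  # → 16384, the same order as the book's figure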

Given this fact, which is completely noncontroversial among pest control professionals, it is startling that in New York City, which frets over and regulates public health threats like second-hand tobacco smoke while its denizens suffer more than 150 rat bites a year, many to children, smoke-free restaurants dump their offal into rat-infested alleys in thin plastic garbage bags, which are instantly penetrated by rats. How much could it cost to mandate, or even provide, rat-proof steel containers for organic waste, compared to the budget for rodent control and the damages and health hazards of a large rat population? Rats will always be around—in 1936, the president of the professional society for exterminators persuaded the organisation to change the name of the occupation from “exterminator” to “pest control operator”, not because the word “exterminator” was distasteful, but because he felt it over-promised what could actually be achieved for the client (p. 98). But why not take some simple, obvious steps to constrain the rat population?

The book contains more than twenty pages of notes in narrative form, which contain a great deal of additional information you don't want to miss, including the origin of giant inflatable rats for labour rallies, and even a poem by exterminator guru Bobby Corrigan. There is no index.


Staley, Kent W. The Evidence for the Top Quark. Cambridge: Cambridge University Press, 2004. ISBN 0-521-82710-8.
A great deal of nonsense and intellectual nihilism has been committed in the name of “science studies”. Here, however, is an exemplary volume which shows not only how the process of scientific investigation should be studied, but also why. The work is based on the author's dissertation in philosophy, which explored the process leading to the September 1994 publication of the “Evidence for top quark production in pp̄ collisions at √s = 1.8 TeV” paper in Physical Review D. This paper is a quintessential example of Big Science: more than four hundred authors, sixty pages of intricate argumentation from data produced by a detector weighing more than two thousand tons, and automated examination of millions and millions of collisions between protons and antiprotons accelerated to almost the speed of light by the Tevatron, all to search, over a period of months, for an elementary particle which cannot be observed in isolation, and finally reporting “evidence” for its existence (but not “discovery” or “observation”) based on a total of just twelve events “tagged” by three different algorithms, when a total of about 5.7 events would have been expected due to other causes (“background”) purely by chance alone.
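
The statistical heart of that claim is a simple counting experiment: how often would background alone, fluctuating as a Poisson process with mean 5.7, yield 12 or more tagged events? A back-of-the-envelope check (which ignores the paper's far more careful treatment of systematic uncertainties and correlations among the tagging algorithms):

```python
import math

def poisson_tail(observed: int, mean: float) -> float:
    """P(X >= observed) for X ~ Poisson(mean)."""
    below = sum(math.exp(-mean) * mean ** k / math.factorial(k)
                for k in range(observed))
    return 1.0 - below

p = poisson_tail(12, 5.7)
print(f"{p:.3f}")  # ≈ 0.014: roughly a 1.4% chance from background alone
```

Suggestive, but not overwhelming—which is exactly why the collaboration claimed only “evidence” rather than “discovery”.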

Through extensive scrutiny of contemporary documents and interviews with participants in the collaboration which performed the experiment, the author provides a superb insight into how science on this scale is done, and the process by which the various kinds of expertise distributed throughout a large collaboration come together to arrive at the consensus they have found something worthy of publication. He explores the controversies about the paper both within the collaboration and subsequent to its publication, and evaluates claims that choices made by the experimenters may have produced a bias in the results, and/or that choosing experimental “cuts” after having seen data from the detector might constitute “tuning on the signal”: physicist-speak for choosing the criteria for experimental success after having seen the results from the experiment, a violation of the “predesignation” principle usually assumed in statistical tests.

In the final two, more philosophical, chapters, the author introduces the concept of “Error-Statistical Evidence”, and evaluates the analysis in the “Evidence” paper in those terms, concluding that despite all the doubt and controversy, the decision making process was, in the end, ultimately objective. (And, of course, subsequent experimentation has shown the information reported in the Evidence paper to have been essentially correct.)

Popular accounts of high energy physics sometimes gloss over the fantastically complicated and messy observations which go into a reported result to such an extent you might think experimenters are just sitting around looking at a screen, waiting for a little ball to pop out with a “t” or whatever stencilled on the side. This book reveals the subtlety of the actual data from these experiments, and the intricate chain of reasoning from the multitudinous electronic signals issuing from a particle detector to the claim of having discovered a new particle. This is not, however, remotely a work of popularisation. While the author attempts to make the physics accessible to philosophers of science and the philosophy comprehensible to physicists, each audience will find the portions outside its own speciality tough going. A reader without a basic understanding of the standard model of particle physics and the principles of statistical hypothesis testing will probably end up bewildered and may not make it to the end, but those who do will be rewarded with a detailed understanding of high energy particle physics experiments and the operation of large collaborations of researchers which is difficult to obtain anywhere else.

 Permalink

Wilczek, Frank. Fantastic Realities. Singapore: World Scientific, 2006. ISBN 981-256-655-4.
The author won the 2004 Nobel Prize in Physics for his discovery of “asymptotic freedom” in the strong interaction of quarks and gluons, which laid the foundation of the modern theory of Quantum Chromodynamics (QCD) and the Standard Model of particle physics. This book is an anthology of his writing for general and non-specialist scientific audiences over the last fifteen years, including eighteen of his “Reference Frame” columns from Physics Today and his Nobel prize autobiography and lecture.

I had eagerly anticipated reading this book. Frank Wilczek and his wife Betsy Devine are co-authors of the 1988 volume Longing for the Harmonies, which I consider to be one of the best works of science popularisation ever written, and whose “theme and variation” structure I adopted for my contemporary paper “The New Technological Corporation”. Wilczek is not only a brilliant theoretician, he has a tremendous talent for explaining the arcana of quantum mechanics and particle physics in lucid prose accessible to the intelligent layman, and his command of the English language transcends pedestrian science writing and sometimes verges on the poetic, occasionally crossing the line: this book contains six original poems!

The collection includes five book reviews, in a section titled “Inspired, Irritated, Inspired”, the author's reaction to the craft of reviewing books, which he describes as “like going on a blind date to play Russian roulette” (p. 305). After finishing this 500 page book, I must sadly report that my own experience can be summed up as “Inspired, Irritated, Exasperated”. There is inspiration aplenty and genius on display here, but you're left with the impression that this is a quickie book assembled by throwing together all the popular writing of a Nobel laureate and rushed out the door to exploit his newfound celebrity. This is not something you would expect of World Scientific, but the content of the book argues otherwise.

Frank Wilczek writes frequently for a variety of audiences on topics central to his work: the running of the couplings in the Standard Model, low energy supersymmetry and the unification of forces, a possible SO(10) grand unification of fundamental particles, and lattice QCD simulation of the mass spectrum of mesons and hadrons. These are all fascinating topics, and Wilczek does them justice here. The problem is that with all of these various articles collected in one book, he does them justice again, again, and again. Four illustrations (the lattice QCD mass spectrum, the experimentally measured running of the strong interaction coupling, the SO(10) particle unification chart, and the unification of forces with and without supersymmetry) each appear and are discussed three separate times (the last, four times) in the text; this gets tedious.

There is sufficient wonderful stuff in this book to justify reading it, but don't feel duty-bound to slog through the nth repetition of the same material; a diligent editor could easily cut at least a third of the book, and probably close to half without losing any content. The final 70 pages are excerpts from Betsy Devine's Web log recounting the adventures which began with that early morning call from Sweden. The narrative is marred by the occasional snarky political comment which, while appropriate in a faculty wife's blog, is out of place in an anthology of the work of a Nobel laureate who scrupulously avoids mixing science and politics, but still provides an excellent inside view of just what it's like to win and receive a Nobel prize.

 Permalink

Scalzi, John. The Ghost Brigades. New York: Tor, 2006. ISBN 0-7653-1502-5.
After his stunning fiction debut in Old Man's War (April 2005), readers hoping for the arrival on the scene of a new writer of Golden Age stature held their breath to see whether the author would be a one book wonder or be able to repeat. You can start breathing again—in this, his second novel, he hits another one out of the ballpark.

This story is set in the conflict-ridden Colonial Union universe of Old Man's War, some time after the events of that book. Although in the acknowledgements he refers to this as a sequel, you'd miss little or nothing by reading it first, as everything introduced in the first novel is explained as it appears here. Still, if you have the choice, it's best to read them in order. The Colonial Special Forces, which are a shadowy peripheral presence in Old Man's War, take centre stage here. Special Forces are biologically engineered and enhanced super-soldiers, bred from the DNA of volunteers who enlisted in the regular Colonial Defense Forces but died before they reached the age of 75 to begin their new life as warriors. Unlike regular CDF troops, who retain their memories and personalities after exchanging their aged frame for a youthful and super-human body, Special Forces start out as a tabula rasa with adult bodies and empty brains ready to be programmed by their “BrainPal” appliance, which also gives them telepathic powers.

The protagonist, Jared Dirac, is a very special member of the Special Forces, as he was bred from the DNA of a traitor to the Colonial Union, and imprinted with that person's consciousness in an attempt to figure out his motivations and plans. Things didn't go as expected, and Jared ends up with two people in his skull, leading to exploration of the meaning of human identity and how our memories (or those of others) make us who we are, along the lines of Robert Heinlein's I Will Fear No Evil. The latter was not one of Heinlein's better outings, but Scalzi takes the nugget of the idea and runs with it here, spinning a yarn that reads like Heinlein's better work. In the last fifty pages, the Colonial Union universe becomes a lot more ambiguous and interesting, and the ground is laid for a rich future history series set there. This book has less rock-em sock-em combat and more character development and ideas, which is just fine for this non-member of the video game generation.

Since almost anything more I said would constitute a spoiler, I'll leave it at that; I loved this book, and if you enjoy the best of Heinlein, you probably will as well. (One quibble, which I'll try to phrase to avoid being a spoiler: for the life of me, I can't figure out how Sagan expects to open the capture pod at the start of chapter 14 (p. 281), when on p. 240 she couldn't open it, and since then nothing has happened to change the situation.) For more background on the book and the author's plans for this universe, check out the Instapundit podcast interview with the author.

 Permalink

September 2006

Howard, Michael, David LeBlanc, and John Viega. 19 Deadly Sins of Software Security. Emeryville, CA: Osborne, 2005. ISBN 0-07-226085-8.
During his brief tenure as director of the National Cyber Security Division of the U.S. Department of Homeland Security, Amit Yoran (who wrote the foreword to this book) got a lot of press attention when he claimed, “Ninety-five percent of software bugs are caused by the same 19 programming flaws.” The list of these 19 dastardly defects was assembled by John Viega who, with his two co-authors, both of whom worked on computer security at Microsoft, attempts to exploit its notoriety in this poorly written, jargon-filled, and utterly worthless volume. Of course, I suppose that's what one should expect when a former official of the agency of geniuses who humiliate millions of U.S. citizens every day to protect them from the peril of grandmothers with exploding sneakers teams up with a list of authors that includes a former “security architect for Microsoft's Office division”—why does the phrase “macro virus” immediately come to mind?

Even after reading this entire ramble on the painfully obvious, I cannot remotely guess who the intended audience was supposed to be. Software developers who know enough to decode what the acronym-packed (many never or poorly defined) text is trying to say are already aware of the elementary vulnerabilities being discussed and ways to mitigate them. Those without knowledge of competent programming practice are unlikely to figure out what the authors are saying, since their explanations in most cases assume the reader is already aware of the problem. The book is also short (281 pages), generous with white space, and packed with filler: the essential message of what to look out for in code can be summarised in a half-page table: in fact, it has been, on page 262! Not only does every chapter end with a summary of “do” and “don't” recommendations, all of these lists are duplicated in a ten page appendix at the end, presumably added because the original manuscript was too short. Other obvious padding is giving examples of trivial code in a long list of languages (including proprietary trash such as C#, Visual Basic, and the .NET API); around half of the code samples are Microsoft-specific, as are the “Other Resources” at the end of each chapter. My favourite example is on pp. 176–178, which gives sample code showing how to read a password from a file (instead of idiotically embedding it in an application) in four different programming languages: three of them Microsoft-specific.

Like many bad computer books, this one seems to assume that programmers can learn only from long enumerations of specific items, as opposed to a theoretical understanding of the common cause which underlies them all. In fact, a total of eight chapters on supposedly different “deadly sins” can be summed up in the following admonition, “never blindly trust any data that comes from outside your complete control”. I had learned this both from my elders and brutal experience in operating system debugging well before my twentieth birthday. Apart from the lack of content and ill-defined audience, the authors write in a dialect of jargon and abbreviations which is probably how morons who work for Microsoft speak to one another: “app”, “libcall”, “proc”, “big-honking”, “admin”, “id” litter the text, and the authors seem to believe the word for a security violation is spelt “breech”. It's rare that I read a technical book in any field from which I learn not a single thing, but that's the case here. Well, I suppose I did learn that a prominent publisher and forty dollar cover price are no guarantee the content of a book will be of any value. Save your money—if you're curious about which 19 “sins” were chosen, just visit the Amazon link above and display the back cover of the book, which contains the complete list.
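For readers who want the half-page table's worth of wisdom in executable form, here is a minimal sketch of that one underlying principle: treat all externally supplied data as hostile until validated against an explicit whitelist. (The function and names are my own illustration, not code from the book.)

```python
import re

MAX_NAME_LEN = 64
# Whitelist of allowed characters -- enumerate what is permitted,
# rather than trying to blacklist everything that is dangerous.
NAME_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

def safe_filename(untrusted: str) -> str:
    """Validate a client-supplied file name before using it.

    Rejects over-long names, path separators, and the special
    components '.' and '..', blocking directory-traversal attacks
    such as '../../etc/passwd'.
    """
    if len(untrusted) > MAX_NAME_LEN:
        raise ValueError("name too long")
    if untrusted in (".", ".."):
        raise ValueError("directory component not allowed")
    if not NAME_RE.match(untrusted):
        raise ValueError("name contains disallowed characters")
    return untrusted

print(safe_filename("report.txt"))        # accepted
try:
    safe_filename("../../etc/passwd")     # rejected: contains '/'
except ValueError as e:
    print("rejected:", e)
```

Eight of the book's “sins”, from buffer overruns to SQL injection, reduce to failing to do something of this shape at every boundary where outside data enters the program.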

 Permalink

Mayer, Milton. They Thought They Were Free. 2nd ed. Chicago: University of Chicago Press, [1955] 1966. ISBN 0-226-51192-8.
The author, a journalist descended from German Jewish immigrants to the United States, first visited Nazi Germany in 1935, spending a month in Berlin attempting, unsuccessfully, to obtain an interview with Hitler, notwithstanding the assistance of his friend, the U.S. ambassador, then travelled through the country reporting for a U.S. magazine. It was then that he first discovered, meeting with ordinary Germans, that Nazism was not, as many perceived it then and now, “the tyranny of a diabolical few over helpless millions” (p. xviii), but rather a mass movement grounded in the “little people” with a broad base of non-fanatic supporters.

Ten years after the end of the war, Mayer arranged a one year appointment as a visiting professor at the University of Frankfurt and moved, with his family, to a nearby town of about 20,000 he calls “Kronenberg”. There, he spent much of his time cultivating the friendship of ten men he calls “my ten Nazi friends”, all of whom joined the party for various reasons ranging from ideology, assistance in finding or keeping employment, to admiration of what they saw as Hitler's success (before the war) in restoring the German economy and position in the world. A large part of the book is reconstructed conversations with these people, exploring the motivations of those who supported Hitler (many of whom continued, a decade after Germany's disastrous defeat in the war he started, to believe the years of his rule prior to the war were Germany's golden age). Together they provide a compelling picture of life in a totalitarian society as perceived by people who liked it.

This is simultaneously a profoundly enlightening and disturbing book. The author's Nazi friends come across as almost completely unexceptional, and one comes to understand how the choices they made, rooted in the situation in which they found themselves, made perfect sense to them. And then, one cannot help but ask, “What would I have done in the same circumstances?” Mayer has no truck with what has come to be called multiculturalism—he is a firm believer in national character (although, of course, only on the average, with large individual variation), and he explains how history, over almost two millennia, has forged the German character and why it is unlikely to be changed by military defeat and a few years of occupation.

Apart from the historical insights, this book is highly topical when a global superpower is occupying a very different country, with a tradition and history far more remote from its own than was Germany's, and trying to instill institutions with no historical roots there. People forget, but ten years after the end of World War II many, Mayer included, considered the occupation of Germany to have been a failure. He writes (p. 303):

The failure of the Occupation could not, perhaps, have been averted in the very nature of the case. But it might have been mitigated. Its mitigation would have required the conquerors to do something they had never had to do in their history. They would have had to stop doing what they were doing and ask themselves some questions, hard questions, like, What is the German character? How did it get that way? What is wrong with its being that way? What way would be better, and what, if anything, could anybody do about it?
Wise questions, indeed, for any conqueror of any country.

The writing is so superb that you may find yourself re-reading paragraphs just to savour how they're constructed. It is also thought-provoking to ponder how many things, from the perspective of half a century later, the author got wrong. In his view, the occupation of West Germany would fail to permanently implant democracy, German re-militarisation and eventual aggression were almost certain unless blocked by force, and the project of European unification was a pipe dream of idealists, doomed to failure. And yet, today, things seem to have turned out pretty well for Germany, the Germans, and their neighbours. The lesson of this may be that national character can be changed, but changing it is the work of generations, not a few years of military occupation. That is also something modern-day conquerors, especially Western societies with a short attention span, might want to bear in mind.

 Permalink

Smolin, Lee. The Trouble with Physics. New York: Houghton Mifflin, 2006. ISBN 0-618-55105-0.
The first forty years of the twentieth century saw a revolution in fundamental physics: special and general relativity changed our perception of space, time, matter, energy, and gravitation; quantum theory explained all of chemistry while wiping away the clockwork determinism of classical mechanics and replacing it with a deeply mysterious theory which yields fantastically precise predictions yet nobody really understands at its deepest levels; and the structure of the atom was elucidated, along with important clues to the mysteries of the nucleus. In the large, the universe was found to be enormously larger than expected and expanding—a dynamic arena which some suspected might have an origin and a future vastly different than its present state.

The next forty years worked out the structure and interactions of the particles and forces which constitute matter and govern its interactions, resulting in a standard model of particle physics with precisely defined theories which predicted all of the myriad phenomena observed in particle accelerators and in the highest energy events in the heavens. The universe was found to have originated in a big bang no more distant than three times the age of the Earth, and the birth cry of the universe had been detected by radio telescopes.

And then? Unexpected by almost all practitioners of high energy particle physics, which had become an enterprise larger by far than all of science at the start of the century, progress stopped. Since the wrapping up of the standard model around 1975, experiments have simply confirmed its predictions (with the exception of the discovery of neutrino oscillations and consequent mass, but that can be accommodated within the standard model without changing its structure), and no theoretical prediction of phenomena beyond the standard model has been confirmed experimentally.

What went wrong? Well, we certainly haven't reached the End of Science or even the End of Physics, because the theories which govern phenomena in the very small and very large—quantum mechanics and general relativity—are fundamentally incompatible with one another and produce nonsensical or infinite results when you attempt to perform calculations in the domain—known to exist from astronomical observations—where both must apply. Even a calculation as seemingly straightforward as estimating the energy of empty space yields a result which is 120 orders of magnitude greater than experiment shows it to be: perhaps the most embarrassing prediction in the history of science.
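The “120 orders of magnitude” figure is easy to reproduce on the back of an envelope: the naive quantum field theory estimate puts the vacuum energy density near the Planck density, while observation gives roughly the measured dark energy density. Here is that arithmetic, using rounded constants (a sketch of the standard estimate, not a derivation):

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
H0   = 2.2e-18      # Hubble constant, s^-1 (~68 km/s/Mpc)

# Naive QFT expectation: vacuum energy at the Planck (mass) density.
rho_planck = c**5 / (hbar * G**2)           # kg/m^3

# Observed: dark energy is roughly 70% of the critical density.
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # kg/m^3
rho_vac  = 0.7 * rho_crit

orders = math.log10(rho_planck / rho_vac)
print(round(orders))   # → 123 with these rounded constants
```

The precise exponent depends on where one cuts off the quantum modes; any choice near the Planck scale lands within a few powers of ten of the famous 120.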

In the first chapter of this tour de force, physicist Lee Smolin poses “The Five Great Problems in Theoretical Physics”, all of which are just as mysterious today as they were thirty-five years ago. Subsequent chapters explore the origin and nature of these problems, and how it came to be, despite unprecedented levels of funding for theoretical and experimental physics, that we seem to be getting nowhere in resolving any of these fundamental enigmas.

This prolonged dry spell in high energy physics has seen the emergence of string theory (or superstring theory, or M-theory, or whatever they're calling it this year) as the dominant research program in fundamental physics. At the outset, there were a number of excellent reasons to believe that string theory pointed the way to a grand unification of all of the forces and particles of physics, and might answer many, if not all, of the Great Problems. This motivated many very bright people, including the author (who, although most identified with loop quantum gravity research, has published in string theory as well) to pursue this direction. What is difficult for an outsider to comprehend, however, is how a theoretical program which, after thirty-five years of intensive effort, has yet to make a single prediction testable by a plausible experiment; has failed to predict any of the major scientific surprises that have occurred over those years such as the accelerating expansion of the universe and the apparent variation in the fine structure constant; that does not even now exist in a well-defined mathematical form; and has not been rigorously proved to be a finite theory; has established itself as a virtual intellectual monopoly in the academy, forcing aspiring young theorists to work in string theory if they are to have any hope of finding a job, receiving grants, or obtaining tenure.

It is this phenomenon, not string theory itself, which, in the author's opinion, is the real “Trouble with Physics”. He considers string theory as quite possibly providing clues (though not the complete solution) to the great problems, and finds much to admire in many practitioners of this research. But monoculture is as damaging in academia as in agriculture, and when it becomes deeply entrenched in research institutions, squeezes out other approaches of equal or greater merit. He draws the distinction between “craftspeople”, who are good at performing calculations, filling in blanks, and extending an existing framework, and “seers”, who make the great intellectual leaps which create entirely new frameworks. After thirty-five years with no testable result, there are plenty of reasons to suspect a new framework is needed, yet our institutions select out those most likely to discover them, or force them to spend their most intellectually creative years doing tedious string theory calculations at the behest of their elders.

In the final chapters, Smolin looks at how academic science actually works today: how hiring and tenure decisions are made, how grant applications are evaluated, and the difficult career choices young physicists must make to work within this system. When reading this, the word “Gosplan” (Госпла́н) kept flashing through my mind, for the process he describes resembles nothing so much as central planning in a command economy: a small group of senior people, distant from the facts on the ground and the cutting edge of intellectual progress, trying to direct a grand effort in the interest of “efficiency”. But the lesson of more than a century of failed socialist experiments is that, in the timeless words of Rocket J. Squirrel, “that trick never works”—the decisions inevitably come down on the side of risk aversion, and are often influenced by cronyism and toadying to figures in authority. The concept of managing risk and reward by building a diversified portfolio of low and high risk placements which is second nature to managers of venture capital funds and industrial research and development laboratories appears to be totally absent in academic science, which is supposed to be working on the most difficult and fundamental questions. Central planning works abysmally for cement and steel manufacturing; how likely is it to spark the next scientific revolution?

There is much more to ponder: why string theory, as presently defined, cannot possibly be a complete theory which subsumes general relativity; hints from experiments which point to new physics beyond string theory; stories of other mathematically beautiful theories (such as SU(5) grand unification) which experiment showed to be dead wrong; and a candid view of the troubling groupthink, appeal to authority, and intellectual arrogance of some members of the string theory community. As with all of Smolin's writing, this is a joy to read, and you get the sense that he's telling you the straight story, as honestly as he can, not trying to sell you something. If you're interested in these issues, you'll probably also want to read Leonard Susskind's pro-string The Cosmic Landscape (March 2006) and Peter Woit's sceptical Not Even Wrong (June 2006).

 Permalink

Wells, H. G. Little Wars. Springfield, VA: Skirmisher, [1913] 2004. ISBN 0-9722511-5-4.
I have been looking for a copy of this book for more than twenty-five years. In this 1913 classic, H. G. Wells essentially single-handedly invented the modern pastime of miniature wargaming, providing a (tin soldier) battle-tested set of rules which makes for exciting, well-balanced, and unpredictable games which can be played by two or more people in an afternoon and part of an evening. Interestingly, he avoids much of the baggage that burdens contemporary games such as icosahedral dice and indirect fire calculations, and strictly minimises the rôle of chance, using nothing fancier than a coin toss, and that only in rare circumstances.

The original edition couldn't have appeared at a less auspicious time: published just a year before the outbreak of the horrific Great War (a term Wells uses, prophetically, to speak of actual military conflict in this book). The work is, of course, long out of copyright and text editions are available on the Internet, including this one at Project Gutenberg, but they are unsatisfying because the text makes frequent reference to the nineteen photographs by Wells's second wife, Amy Catherine Wells, which are not included in the on-line editions but reproduced in this volume. Even if you aren't interested in the details, just seeing grown men in suits scrunching down on the ground playing with toy soldiers is worth the price of admission. The original edition included almost 150 delightful humorous line drawings by J. R. Sinclair; sadly, only about half are reproduced here, but that's better than none at all. This edition includes a new foreword by Gary Gygax, inventor of Dungeons and Dragons. Radical feminists of the dour and scornful persuasion should be sure to take their medication before reading the subtitle or the last paragraph on page 6 (lines 162–166 of the Gutenberg edition).

 Permalink

October 2006

Dworkin, Ronald W. Artificial Happiness. New York: Carroll & Graf, 2006. ISBN 0-7867-1714-9.
Western societies, with the United States in the lead, appear to be embarked on a grand scale social engineering experiment with little consideration of the potentially disastrous consequences both for individuals and the society at large. Over the last two decades “minor depression”, often no more than what, in less clinical nomenclature, one would term unhappiness, has become seen as a medical condition treatable with pharmaceuticals, and prescription of these medications, mostly by general practitioners, not psychiatrists or psychologists, has skyrocketed, with drugs such as Prozac, Paxil, and Zoloft regularly appearing on lists of the most frequently prescribed. Tens of millions of people in the United States take these pills, which are being prescribed to children and adolescents as well as adults.

Now, there's no question that these medications have been a Godsend for individuals suffering from severe clinical depression, which is now understood in many cases to be an organic disease caused by imbalances in the metabolism of neurotransmitters in the brain. But this vast public health experiment in medicating unhappiness is another thing altogether. Unhappiness, like pain, is a signal that something's wrong, and a motivator to change things for the better. But if unhappiness is seen as a disease which is treated by swallowing pills, this signal is removed, and people are numbed or stupefied out of taking action to eliminate the cause of their unhappiness: changing jobs or careers, reducing stress, escaping from abusive personal relationships, or embarking on some activity which they find personally rewarding. Self esteem used to be thought of as something you earned from accomplishing difficult things; once it becomes a state of mind you get from a bottle of pills, then what will become of all the accomplishments the happily medicated no longer feel motivated to achieve?

These are serious questions, and deserve serious investigation and a book-length treatment of the contemporary scene and trends. This is not, however, that book. The author is an M.D. anæsthesiologist with a Ph.D. in political philosophy from Johns Hopkins University, and a senior fellow at the Hudson Institute—impressive credentials. Notwithstanding them, the present work reads like something written by somebody who learned Marxism from a comic book. Individuals, entire professions, and groups as heterogeneous as clergy of organised religions are portrayed like cardboard cutouts—with stick figures drawn on them—in crayon. Each group the author identifies is seen as acting monolithically toward a specific goal, which is always nefarious in some way, advancing an agenda based solely on its own interest. The possibility that a family doctor might prescribe antidepressants for an unhappy patient in the belief that he or she is solving a problem for the patient is scarcely considered. No, the doctor is part of a grand conspiracy of “primary care physicians” advancing an agenda to usurp the “turf” (a term he uses incessantly) of first psychiatrists, and finally organised religion.

After reading this entire book, I still can't decide whether the author is really as stupid as he seems, or simply writes so poorly that he comes across that way. Each chapter starts out lurching toward a goal, loses its way and rambles off in various directions until the requisite number of pages have been filled, and then states a conclusion which is not justified by the content of the chapter. There are few clichés in the English language which are not used here—again and again. Here is an example of one of hundreds of paragraphs to which the only rational reaction is “Huh?”.

So long as spirituality was an idea, such as believing in God, it fell under religious control. However, if doctors redefined spirituality to mean a sensual phenomenon—a feeling—then doctors would control it, since feelings had long since passed into the medical profession's hands, the best example being unhappiness. Turning spirituality into a feeling would also help doctors square the phenomenon with their own ideology. If spirituality were redefined to mean a feeling rather than an idea, then doctors could group spirituality with all the other feelings, including unhappiness, thereby preserving their ideology's integrity. Spirituality, like unhappiness, would become a problem of neurotransmitters and a subclause of their ideology. (Page 226.)
A reader opening this book is confronted with 293 pages of this. This paragraph appears in chapter nine, “The Last Battle”, which describes the Manichean struggle between doctors and organised religion in the 1990s for the custody of the souls of Americans, ending in a total rout of religion. Oh, you missed that? Me too.

Mass medication with psychotropic drugs is a topic which cries out for a statistical examination of its public health dimensions, but Dworkin relates only anecdotes of individuals he has known personally, all of whose minds he seems to be able to read, diagnosing their true motivations which even they don't perceive, and discerning their true destiny in life, which he believes they are failing to follow due to medication for unhappiness.

And if things weren't muddled enough, he drags in “alternative medicine” (the modern, polite term for what used to be called “quackery”) and “obsessive exercise” as other sources of Artificial Happiness (which he capitalises everywhere), which is rather odd since he doesn't believe either works except through the placebo effect. Isn't it just a little bit possible that some of those people working out at the gym are doing so because it makes them feel better and likely to live longer? Dworkin tries to envision the future for the Happy American, decoupled from the traditional trajectory through life by the ability to experience chemically induced happiness at any stage. Here, he seems to simultaneously admire and ridicule the culture of the 1950s, of which his knowledge seems to be drawn from re-runs of “Leave it to Beaver”. In the conclusion, he modestly proposes a solution to the problem which requires completely restructuring medical education for general practitioners and redefining the mission of all organised religions. At least he doesn't seem to have a problem with self-esteem!

 Permalink

Peters, Eric. Automotive Atrocities. St. Paul, MN: Motorbooks International, 2004. ISBN 0-7603-1787-9.
Oh my, oh my, there really were some awful automobiles on the road in the 1970s and 1980s, weren't there? Those born too late to experience them may not be fully able to grasp the bumper to bumper shoddiness of such rolling excrescences as the diesel Chevette, the exploding Pinto, Le Car, the Maserati Biturbo, the Cadillac V-8-6-4 and even worse diesel; bogus hamster-powered muscle cars (“now with a black stripe and fake hood scoop, for only $5000 more!”); the Yugo, the DeLorean, and the Bricklin—remember that one?

They're all here, along with many more vehicles which, like so many things of that era, can only elicit in those who didn't live through it, the puzzled response, “What were they thinking?” Hey, I lived through it, and that's what I used to think when blowing past multi-ton wheezing early 80s Thunderbirds (by then, barely disguised Ford Fairmonts) in my 1972 VW bus!

Anybody inclined toward automotive Schadenfreude will find this book enormously entertaining, as long as they weren't among those who spent their hard-earned, rapidly-inflating greenbacks for one of these regrettable rolling rustbuckets. Unlike many automotive books, this one is well-produced and printed, has few if any typographical errors, and includes many excerpts from the contemporary sales material which recall just how slimy and manipulative were the campaigns used to foist this junk off onto customers who, one suspects, the people selling it referred to in the boardroom as “the rubes”.

It is amazing to recall that almost a generation exists whose entire adult experience has been with products which, with relatively rare exceptions, work as advertised, don't break as soon as you take them home, and rapidly improve from year to year. Those of us who remember the 1970s took a while to twig to the fact that things had really changed once the Asian manufacturers raised the quality bar a couple of orders of magnitude above where the U.S. companies thought they had optimised their return.

In the interest of full disclosure, I will confess that I once drove a 1966 MGB, but I didn't buy it new! To grasp what awaited the seventies denizen after they parked the disco-mobile and boogied into the house, see Interior Desecrations (December 2004).

 Permalink

Vilenkin, Alexander. Many Worlds in One. New York: Hill and Wang, 2006. ISBN 0-8090-9523-8.
From the dawn of the human species until a time within the memory of many people younger than I, the origin of the universe was the subject of myth and a topic, if discussed at all within the academy, among doctors of divinity, not professors of physics. The advent of precision cosmology has changed that: the ultimate questions of origin are not only legitimate areas of research, but something probed by satellites in space, balloons circling the South Pole, and mega-projects of Big Science. The results of these experiments have, in the last few decades, converged upon a consensus from which few professional cosmologists would dissent:
  1. At the largest scale, the geometry of the universe is indistinguishable from Euclidean (flat), and the distribution of matter and energy within it is homogeneous and isotropic.
  2. The universe evolved from an extremely hot, dense phase starting about 13.7 billion years ago from our point of observation, which resulted in the abundances of light elements observed today.
  3. The evidence of this event is imprinted on the cosmic background radiation which can presently be observed in the microwave frequency band. All large-scale structures in the universe grew from gravitational amplification of scale-independent quantum fluctuations in density.
  4. The flatness, homogeneity, and isotropy of the universe is best explained by a period of inflation shortly after the origin of the universe, which expanded a tiny region of space, smaller than a subatomic particle, to a volume much greater than the presently observable universe.
  5. Consequently, the universe we can observe today is bounded by a horizon, about forty billion light years away in every direction (greater than the 13.7 billion light years you might expect since the universe has been expanding since its origin), but the universe is much, much larger than what we can see; every year another light year comes into view in every direction.
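The horizon distance in point 5 can be made quantitative. In a standard expanding cosmology with scale factor a(t), the present distance to the horizon is the comoving distance light has travelled since the origin, stretched by the subsequent expansion (a textbook formula, added here for illustration rather than taken from the book):

```latex
d_{\mathrm{hor}}(t_0) \;=\; a(t_0)\int_0^{t_0} \frac{c\,\mathrm{d}t}{a(t)}
```

Because a(t) was much smaller in the past, the integral substantially exceeds c·t₀, which is how the horizon can lie some forty billion light years away even though only 13.7 billion years have elapsed.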
Now, this may seem mind-boggling enough, but from these premises, which it must be understood are accepted by most experts who study the origin of the universe, one can deduce some disturbing consequences which seem to be logically unavoidable.

Let me walk you through it here. We assume the universe is infinite and unbounded, which is the best estimate from precision cosmology. Then, within that universe, there will be an infinite number of observable regions, which we'll call O-regions, each defined by the volume from which an observer at the centre can have received light since the origin of the universe. Now, each O-region has a finite volume, and quantum mechanics tells us that within a finite volume there are a finite number of possible quantum states. This number, although huge (on the order of 10^10^123 for a region the size of the one we presently inhabit), is not infinite, so consequently, with an infinite number of O-regions, even if quantum mechanics specifies the initial conditions of every O-region completely at random and they evolve randomly with every quantum event thereafter, there are only a finite number of histories they can experience (around 10^10^150). Which means that, at this moment, in this universe (albeit not within our current observational horizon), invoking nothing as fuzzy, weird, or speculative as the many-worlds interpretation of quantum mechanics, there are an infinite number of you reading these words scribbled by an infinite number of me. In the vast majority of our shared universes things continue much the same, but from time to time they d1v3r93 r4ndtx#e~—….

Reset . . .
Snap back to universe of origin . . .
Reloading initial vacuum parameters . . .
Restoring simulation . . .
Resuming from checkpoint.
What was that? Nothing, I guess. Still, odd, that blip you feel occasionally. Anyway, here is a completely fascinating book by a physicist and cosmologist who is pioneering the ragged edge of what the hard evidence from the cosmos seems to be telling us about the apparently boundless universe we inhabit. What is remarkable about this model is how generic it is. If you accept the best currently available evidence for the geometry and composition of the universe in the large, and agree with the majority of scientists who study such matters how it came to be that way, then an infinite cosmos filled with observable regions of finite size and consequently limited diversity more or less follows inevitably, however weird it may seem to think of an infinity of yourself experiencing every possible history somewhere. Further, in an infinite universe, there are an infinite number of O-regions which contain every possible history consistent with the laws of quantum mechanics and the symmetries of our spacetime including those in which, as the author noted, perhaps using the phrase for the first time in the august pages of the Physical Review, “Elvis is still alive”.
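The argument is just the pigeonhole principle writ large: finitely many possible histories, infinitely many O-regions, so every history recurs. Here is a toy sketch of the counting step (my own illustration, not the author's; the state count is shrunk from the text's ~10^10^150 to something a computer can handle):

```python
import random

# Toy version of the O-region argument.  Each region's history is one of
# finitely many possibilities (N_HISTORIES stands in for the ~10^10^150 of
# the text); sample more regions than histories and the pigeonhole
# principle guarantees duplicates, including duplicates of "our" history.

N_HISTORIES = 1_000        # stand-in for the finite number of histories
N_REGIONS = 100_000        # stand-in for the infinitely many O-regions

rng = random.Random(42)    # fixed seed so the sketch is reproducible
histories = [rng.randrange(N_HISTORIES) for _ in range(N_REGIONS)]

# More regions than possible histories: repeats are unavoidable.
assert len(set(histories)) < N_REGIONS

# How many regions share the history of region 0 ("ours")?
copies_of_ours = histories.count(histories[0])
print("regions sharing our history:", copies_of_ours)
```

With infinitely many O-regions the count is not merely positive but infinite; the toy numbers only illustrate the pigeonhole step.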

So generic is the prediction, there's no need to assume the correctness of speculative ideas in physics. The author provides a lukewarm endorsement of string theory and the “anthropic landscape” model, but is careful to distinguish its “multiverse” of distinct vacua with different moduli from our infinite universe with (as far as we know) a single, possibly evolving, vacuum state. But string theory could be completely wrong and the deductions from observational cosmology would still stand. For that matter, they are independent of the “eternal inflation” model the book describes in detail, since they rely only upon observables within the horizon of our single “pocket universe”.

Although the evolution of the universe from shortly after the end of inflation (the moment we call the “big bang”) seems to be well understood, there are still deep mysteries associated with the moment of origin, and the ultimate fate of the universe remains an enigma. These questions are discussed in detail, and the author makes clear how speculative and tentative any discussion of such matters must be given our present state of knowledge. But we are uniquely fortunate to be living in the first time in all of history when these profound questions upon which humans have mused since antiquity have become topics of observational and experimental science, and a number of experiments now underway and expected in the next few years which bear upon them are described.

Curiously, the author consistently uses the word “google” for the number 10^100. The correct name for this quantity, coined in 1938 by nine-year-old Milton Sirotta, is “googol”. Edward Kasner, young Milton's uncle, then defined “googolplex” as 10^10^100. “Google” is an Internet search engine created by megalomaniac collectivists bent on monetising, without compensation, content created by others. The text is complemented by a number of delightful cartoons reminiscent of those penned by George Gamow, a physicist the author (and this reader) much admires.

 Permalink

Rowsome, Frank, Jr. The Verse by the Side of the Road. New York: Plume, [1965] 1979. ISBN 0-452-26762-5.
In the years before the Interstate Highway System, long trips on the mostly two-lane roads in the United States could bore the kids in the back seat near unto death, and drive their parents to the brink of homicide by the incessant drone of “Are we there yet?” which began less than half an hour out of the driveway. A blessed respite from counting cows, license plate poker, and counting down the dwindling number of bottles of beer on the wall would be the appearance on the horizon of a series of six red and white signs, which all those in the car would strain their eyes to be the first to read.

WITHIN THIS VALE

OF TOIL

AND SIN

YOUR HEAD GROWS BALD

BUT NOT YOUR CHIN—USE

Burma-Shave

In the fall of 1925, the owners of the virtually unknown Burma-Vita company of Minneapolis came up with a new idea to promote the brushless shaving cream they had invented. Since the product would have particular appeal to travellers who didn't want to pack a wet and messy shaving brush and mug in their valise, what better way to pitch it than with signs along the highways frequented by those potential customers? Thus was born, at first only on a few highways in Minnesota, what was to become an American institution for decades and a fondly remembered piece of Americana, the Burma-Shave signs. As the signs proliferated across the landscape, so did sales; so rapid was the growth of the company in the 1930s that a director of sales said (p. 38), “We never knew that there was a depression.” At the peak the company had more than six million regular customers, who were regularly reminded to purchase the product by almost 7000 sets of signs—around 40,000 individual signs, all across the country.

While the first signs were straightforward selling copy, Burma-Shave signs quickly evolved into the characteristic jingles, usually rhyming and full of corny humour and outrageous puns. Rather than hiring an advertising agency, the company ran national contests which paid $100 for the best jingle and regularly received more than 50,000 entries from amateur versifiers.

Almost from the start, the company devoted a substantial number of the messages to highway safety; this was not the result of outside pressure from anti-billboard forces as I remember hearing in the 1950s, but based on a belief that it was the right thing to do—and besides, the sixth sign always mentioned the product! The set of signs above is the jingle that most sticks in my memory: it was a favourite of the Burma-Shave founders as well, having been re-run several times since its first appearance in 1933 and chosen by them to be immortalised in the Smithsonian Institution. Another that comes immediately to my mind is the following, from 1960, on the highway safety theme:

THIRTY DAYS

HATH SEPTEMBER

APRIL

JUNE AND THE

SPEED OFFENDER

Burma-Shave

Times change, and with the advent of roaring four-lane freeways, billboard bans or set-back requirements which made sequences of signs unaffordable, the increasing urbanisation of American society, and of course the dominance of television over all other advertising media, by the early 1960s it was clear to the management of Burma-Vita that the road sign campaign was no longer effective. They had already decided to phase it out before they sold the company to Philip Morris in 1963, after which the signs were quickly taken down, depriving the two-lane rural byways of America of some uniquely American wit and wisdom, but who ever drove them any more, since the Interstate went through?

The first half of this delightful book tells the story of the origin, heyday, and demise of the Burma-Shave signs, and the balance lists all of the six hundred jingles preserved in the records of the Burma-Vita Company, by year of introduction. This isn't something you'll probably want to read straight through, but it's great to pick up from time to time when you want a chuckle.

And then the last sign had been read: all the family exclaimed in unison, “Burma-Shave!”. It had been maybe sixty miles since the last set of signs, and so they'd recall that one and remember other great jingles from earlier trips. Then things would quiet down for a while. “Are we there yet?”

 Permalink

Karsh, Efraim. Islamic Imperialism. New Haven, CT: Yale University Press, 2006. ISBN 0-300-10603-3.
A great deal of conflict and tragedy might have been avoided in recent years had only this 2006 book been published a few years earlier and read by those contemplating ambitious adventures to remake the political landscape of the Near East and Central Asia. The author, a professor of history at King's College, University of London, traces the repeated attempts, beginning with Muhammad and his immediate successors, to establish a unified civilisation under the principles of Islam, in which the Koranic proscription of conflict among Muslims would guarantee permanent peace.

In the century following the Prophet's death in the year 632, Arab armies exploded out of the birthplace of Islam and conquered a vast territory from present-day Iran to Spain, including the entire north African coast. This was the first of a succession of great Islamic empires, which would last until the dismantling of the Ottoman Empire in the aftermath of World War I. But, as this book thoroughly documents, over this entire period, the emphasis was on the word “empire” and not “Islamic”. While the leaders identified themselves as Muslims and exhorted their armies to holy war, the actual empires were very much motivated by a quest for temporal wealth and power, and behaved much like the despotisms they supplanted. Since the Arabs had no experience in administering an empire nor a cadre of people trained in those arts, they ended up assimilating the bureaucratic structure and personnel of the Persian empire after conquering it, and much the same happened in the West with the provinces wrested from the Byzantine empire.

While soldiers might have seen themselves as spreading the word of Islam by the sword, in fact the conquests were mostly about the traditional rationale for empire: booty and tribute. (The Prophet's injunction against raiding other Muslims does appear to have been one motivation for outward-directed conquest, especially in the early years.) Not only was there relatively little aggressive proselytising of Islam, on a number of occasions conversion to Islam by members of dhimmi populations was discouraged or prohibited outright because the imperial treasury depended heavily on the special taxes non-Muslims were required to pay. Nor did these empires resemble the tranquil Dar al-Islam envisaged by the Prophet—in fact, only 24 years would elapse after his death before the Caliph Uthman was assassinated by his rivals, and that would be the first of many murders, revolutions, plots, and conflicts between Muslim factions within the empires to come.

Nor were the Crusades, seen through contemporary eyes, the cataclysmic clash of civilisations they are frequently described as today. The kingdoms established by the crusaders rapidly became seen as regional powers like any other, and often found themselves in alliance with Muslims against Muslims. Pan-Arabists in modern times who identify their movement with opposition to the hated crusader often fail to note that there was never any unified Arab campaign against the crusaders; when they were finally ejected, it was by the Turks, and their great hero Saladin was, himself, a Kurd.

The latter half of the book recounts the modern history of the Near East, from Churchill's invention of Iraq, through Nasser, Khomeini, and the emergence of Islamism and terror networks directed against Israel and the West. What is simultaneously striking and depressing about this long and detailed history of strife, subversion, oppression, and conflict is that you can open it up to almost any page and apart from a few details, it sounds like present-day news reports from the region. Thirteen centuries of history with little or no evidence for indigenous development of individual liberty, self-rule, the rule of law, and religious tolerance does not bode well for idealistic neo-Jacobin schemes to “implant democracy” at the point of a bayonet. (Modern Turkey can be seen as a counter-example, but it is worth observing that Mustafa Kemal explicitly equated modernisation with the importation and adoption of Western values, and simultaneously renounced imperial ambitions. In this, he was alone in the region.)

Perhaps the lesson one should draw from this long and tragic narrative is that this unfortunate region of the world, which was a fiercely-contested arena of human conflict thousands of years before Muhammad, has resisted every attempt by every actor, the Prophet included, to pacify it over those long millennia. Rather than commit lives and fortune to yet another foredoomed attempt to “fix the problem”, one might more wisely and modestly seek ways to keep it contained and not aggravate the situation.

 Permalink

Finkbeiner, Ann. The Jasons. New York: Viking, 2006. ISBN 0-670-03489-4.
Shortly after the launch of Sputnik thrust science and technology onto the front lines of the Cold War, a group of Manhattan Project veterans led by John Archibald Wheeler decided that the government needed the very best advice from the very best people to navigate these treacherous times, and that the requisite talent was not to be found within the weapons labs and other government research institutions, but in academia and industry, whence it should be recruited to act as an independent advisory panel. This fit well with the mandate of the recently founded ARPA (now DARPA), which was chartered to pursue “high-risk, high-payoff” projects, and needed sage counsel to minimise the former and maximise the latter.

The result was Jason (the name is a reference to Jason of the Argonauts, and is always used in the singular when referring to the group, although the members are collectively called “Jasons”). It is unlikely such a scientific dream team has ever before been assembled to work together on difficult problems. Since its inception in 1960, a total of thirteen known members of Jason have won Nobel prizes before or after joining the group. Members include Eugene Wigner, Charles Townes (inventor of the laser), Hans Bethe (who figured out the nuclear reaction that powers the stars), polymath and quark discoverer Murray Gell-Mann, Freeman Dyson, Val Fitch, Leon Lederman, and more, and more, and more.

Unlike advisory panels who attend meetings at the Pentagon for a day or two and draft summary reports, Jason members gather for six weeks in the summer and work together intensively, “actually solving differential equations”, to produce original results, sometimes inventions, for their sponsors. The Jasons always remained independent—while the sponsors would present their problems to them, it was the Jasons who chose what to work on.

Over the history of Jason, missile defence and verification of nuclear test bans have been a main theme, but along the way they have invented adaptive optics, which has revolutionised ground-based astronomy, explored technologies for detecting antipersonnel mines, and created, in the Vietnam era, the modern sensor-based “electronic battlefield”.

What motivates top-ranked, well-compensated academic scientists to spend their summers in windowless rooms pondering messy questions with troubling moral implications? This is a theme the author returns to again and again in the extensive interviews with Jasons recounted in this book. The answer seems to be something so outré on the modern university campus as to be difficult to vocalise: patriotism, combined with a desire to see that if such things be done, they should be done as wisely as possible.

 Permalink

November 2006

Steyn, Mark. America Alone. Washington: Regnery Publishing, 2006. ISBN 0-89526-078-6.
Leave it to Mark Steyn to write a funny book about the collapse of Western civilisation. Demographics are destiny, and unlike political and economic trends, are easier to extrapolate because the parents of the next generation have already been born: if there are more of them than their own parents, a population is almost certain to increase, and if there are fewer, the population is destined to fall. Once fertility drops to 1.3 children per woman or fewer, a society enters a demographic “death spiral” from which there is no historical precedent for recovery. Italy, Spain, and Russia are already below this level, and the European Union as a whole is at 1.47, far below the replacement rate of 2.1. And what's the makeup of this shrinking population of Europe? Well, we might begin by asking what is the most popular name for boys born in Belgium…and Amsterdam…and Malmö, Sweden: Mohammed. Where is this going? Well, in the words of Mullah Krekar of Norway (p. 39), “We're the ones who will change you. Every Western woman in the EU is producing an average of 1.4 children. Every Muslim woman in the same countries is producing 3.5 children. By 2050, 30 percent of the population in Europe will be Muslim. Our way of thinking…will prove more powerful than yours.”
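The compounding behind the demographic arithmetic is easy to sketch. Each generation is roughly (TFR / 2.1) times the size of its predecessor; the rates below are the ones cited in the text, and the model deliberately ignores migration and mortality shifts (a back-of-the-envelope illustration of my own, not a forecast):

```python
# Each ~25-year generation scales by TFR / 2.1, where 2.1 is the
# replacement fertility rate cited in the text.  Migration and mortality
# changes are ignored; this shows only the compounding.

REPLACEMENT = 2.1

def project(population, tfr, generations):
    """Cohort size after the given number of generations at constant TFR."""
    for _ in range(generations):
        population *= tfr / REPLACEMENT
    return population

# 100 million people, four generations (roughly a century):
for tfr in (1.3, 1.47, 2.1):
    print(f"TFR {tfr}: {project(100e6, tfr, 4) / 1e6:.1f} million")
```

At the EU-wide rate of 1.47, the population falls by about three quarters in a century; at 1.3, by about 85 percent, which is why 1.3 is described as a death spiral.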

The author believes, and states forthrightly, that it is the purest fantasy to imagine that this demographic evolution, seen by many of the élite as the only hope of salvation for the European welfare state, can occur without a profound change in the very nature of the societies in which it occurs. The end-point may not be “Eutopia”, but rather “Eurabia”, and the timidity of European nations who already have an urban Muslim population approaching 30% shows how a society which has lost confidence in its own civilisation and traditions and imbibed the feel-good but ultimately debilitating doctrine of multiculturalism ends up assimilating to the culture of the immigrants, not the other way around. Steyn sees only three possible outcomes for the West (p. 204):

  1. Submit to Islam
  2. Destroy Islam
  3. Reform Islam
If option one is inconceivable and option two unthinkable (and probably impossible, certainly without changing Western civilisation beyond recognition and for the worse), you're left with number three, but, as Steyn notes, “Ultimately, only Muslims can reform Islam”. Unfortunately, the recent emergence of a global fundamentalist Islamic identity with explicitly political goals may be the Islamic Reformation, and if that be the case, the trend is going in the wrong direction. So maybe option one isn't off the table, after all.

The author traces the roots of the European predicament to the social democratic welfare state, which like all collectivist schemes, eventually creates a society of perpetual adolescents who never mature into and assume the responsibilities of adults. When the state becomes responsible for all the things the family once had to provide for, and is supported by historically unprecedented levels of taxation which impoverish young families and make children unaffordable, why not live for the present and let the next generation, wherever it may come from, worry about itself? In a static situation, this is a prescription for the kind of societal decline which can be seen in the histories of both Greece and Rome, but when there is a self-confident, rapidly-proliferating immigrant population with no inclination to assimilate, it amounts to handing the keys over to the new tenants in a matter of decades.

Among Western countries, the United States is the great outlier, with fertility just at the replacement rate and immigrants primarily of Hispanic origin who have, historically, assimilated to U.S. society in a generation or two. (There are reasons for concern about the present rate of immigration to the U.S. and the impact of multiculturalism on assimilation there, but that is not the topic of this book.) Steyn envisages a future, perhaps by 2050, where the U.S. looks out upon the world and sees not an “end of history” with liberal democracy and free markets triumphant around the globe but rather (p. 205), “a totalitarian China, a crumbling Russia, an insane Middle East, a disease-ridden Africa, [and] a civil war-torn Eurabia”—America alone.

Heavy stuff, but Steyn's way with words will keep you chuckling as you contemplate the apocalypse. The book is long on worries and short on plausible solutions, other than a list of palliatives which it is unlikely Western societies, even the U.S., have the will to adopt, although the author predicts (p. 192) “By 2015, almost every viable political party in the West will be natalist…”. But demographics don't turn on a dime, and by then, whatever measures are politically feasible may be too little to make much difference.

 Permalink

Macdonald, Lyn. 1915: The Death of Innocence. London: Penguin Books, [1993] 1997. ISBN 0-14-025900-7.
I'm increasingly coming to believe that World War I was the defining event of the twentieth century: not only a cataclysm which destroyed the confident assumptions of the past, but which set history inexorably on a path which would lead to even greater tragedies and horrors as that century ran its course. This book provides an excellent snapshot of what the British people, both at the front and back home, were thinking during the first full year of the war, as casualties mounted and hope faded for the quick victory almost all expected at the outset.

The book does not purport to be a comprehensive history of the war, nor even of the single year it chronicles. It covers only the British Army: the Royal Navy is mentioned only in conjunction with troop transport and landings, and the Royal Flying Corps scarcely at all. The forces of other countries, allied or enemy, are mentioned only in conjunction with their interaction with the British, and no attempt is made to describe the war from their perspective. Finally, the focus is almost entirely on the men in the trenches and their commanders in the field: little attention is paid to the doings of politicians and the top military brass, nor to grand strategy, although there was little of that in evidence in the events of 1915 in any case.

Within its limited scope, however, the book succeeds superbly. About a third of the text is extended quotations from people who fought at the front, many from contemporary letters home. Not only do you get an excellent insight into how horrific conditions were in the field, but also how stoically those men accepted them, hardly ever questioning the rationale for the war or the judgement of those who commanded them. And this in the face of a human cost which is nearly impossible to grasp by the standards of present-day warfare. Between the western front and the disastrous campaign in Gallipoli, the British suffered more than half a million casualties (killed, wounded, and missing) (p. 597). In “quiet periods” when neither side was mounting attacks, simply manning their own trenches, British casualties averaged five thousand a week (p. 579), mostly from shelling and sniper fire.

And all of the British troops who endured these appalling conditions were volunteers—conscription did not begin in Britain until 1916. With the Regular Army having been largely wiped out in the battles of 1914, the trenches were increasingly filled with Territorial troops who volunteered for service in France, units from around the Empire: India, Canada, Australia, and New Zealand, and, as the year progressed, Kitchener's “New Army” of volunteer recruits, rushed through training and thrown headlong into the killing machine. The mindset that motivated these volunteers and the conclusions drawn from their sacrifice set the stage for the even greater subsequent horrors of the twentieth century.

Why? Because they accepted as given that their lives were, in essence, the property of the state which governed the territory in which they happened to live, and that the rulers of that state, solely on the authority of having been elected by a small majority of the voters in an era when suffrage was far from universal, had every right to order them to kill or be killed by subjects of other states with which they had no personal quarrel. (The latter point was starkly illustrated when, at Christmas 1914, British and German troops declared an impromptu cease-fire, fraternised, and played football matches in no man's land before, the holiday behind them, returning to the trenches to resume killing one another for King and Kaiser.) This was a widely shared notion, but the first year of the Great War demonstrated that the populations of the countries on both sides really believed it, and would charge to almost certain death even after being told by Lord Kitchener himself on the parade ground, “that our attack was in the nature of a sacrifice to help the main offensive which was to be launched ‘elsewhere’” (p. 493). That individuals would accept their rôle as property of the state was a lesson which the all-encompassing states of the twentieth century, both tyrannical and more or less democratic, would take to heart, and would manifest itself not only in conscription and total war, but also in expropriation, confiscatory taxation, and arbitrary regulation of every aspect of subjects' lives. Once you accept that the state is within its rights to order you to charge massed machine guns with a rifle and bayonet, you're unlikely to quibble over lesser matters.

Further, the mobilisation of the economy under government direction for total war was taken as evidence that central planning of an industrial economy was not only feasible but more efficient than the market. Unfortunately, few observed that there is a big difference between consuming capital to build the means of destruction over a limited period of time and creating new wealth and products in a productive economy. And finally, governments learnt that control of mass media could mould the beliefs of their subjects as the rulers wished: the comical Fritz with which British troops fraternised at Christmas 1914 had become the detested Boche whose trenches they shelled continuously on Christmas Day a year later (p. 588).

It is these disastrous “lessons” drawn from the tragedy of World War I which, I suspect, charted the tragic course of the balance of the twentieth century and the early years of the twenty-first. Even a year before the outbreak of World War I, almost nobody imagined such a thing was possible, or that it would have the consequences it did. One wonders what will be the equivalent defining event of the twenty-first century, when it will happen, and in what direction it will set the course of history.

A U.S. edition is also available.

 Permalink

Ronan, Mark. Symmetry and the Monster. Oxford: Oxford University Press, 2006. ISBN 0-19-280722-6.
On the morning of May 30th, 1832, self-taught mathematical genius and revolutionary firebrand Évariste Galois was mortally wounded in a duel in Paris, the reasons for which are forgotten; he died the following day, aged twenty. The night before the duel, he wrote a letter urging that his uncompleted mathematical work be sent to the preeminent contemporary mathematicians Jacobi and Gauss; neither, however, ever saw it. The work in question laid the foundations for group theory, still an active area of mathematical research a century and three quarters later, and a cornerstone of the most fundamental theories of physics: Noether's theorem demonstrates that conservation laws and physical symmetries are two aspects of the same thing.
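To make that correspondence concrete, the simplest mechanical form of Noether's theorem can be stated in one line (a standard textbook statement, my addition rather than the book's):

```latex
\text{If } L(q,\dot q) \text{ is invariant under } q \mapsto q + \epsilon K(q),
\text{ then } Q = \frac{\partial L}{\partial \dot q}\,K(q)
\text{ is conserved: } \frac{dQ}{dt} = 0 \text{ along solutions.}
```

Taking K = 1 (invariance under spatial translation) makes Q the ordinary momentum; invariance under time translation yields conservation of energy by the same reasoning.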

Finite groups, which govern symmetries among a finite number of discrete items (as opposed to, say, the rotations of a sphere, which are continuously valued), can be arbitrarily complicated but, as shown by Galois, can be decomposed into one or more simple groups whose only normal subgroups are the trivial subgroup of order one and the improper subgroup consisting of the entire group itself: these are the fundamental kinds of symmetries or, as this book refers to them, the “atoms of symmetry”, and they fall into only five categories, four of which are themselves infinite families. The fifth category consists of the sporadic groups, which do not fit into any of the others. The first was discovered by Émile Mathieu in 1861, and between then and 1873 he found four more. As group theory continued to develop, mathematicians kept finding more and more of these sporadic groups, and nobody knew whether there were only a finite number or infinitely many of them…until recently.
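Galois's decomposition can be seen in miniature with permutations of just four items. The following Python sketch (my own illustration, not from the book) builds the chain S4 ⊃ A4 ⊃ V4 by brute-force closure; the successive quotients of the orders, 24/12 = 2, 12/4 = 3, 4/2 = 2, 2/1 = 2, are the simple “atoms” into which the symmetry decomposes:

```python
def compose(p, q):
    # (p∘q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def closure(gens):
    # Brute-force generation of the group generated by `gens`
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            for new in (compose(g, h), compose(h, g)):
                if new not in group:
                    group.add(new)
                    frontier.append(new)
    return group

# S4: all symmetries of four items, from a transposition and a 4-cycle
s4 = closure({(1, 0, 2, 3), (1, 2, 3, 0)})
# A4: the even permutations, generated by two 3-cycles
a4 = closure({(1, 2, 0, 3), (0, 2, 3, 1)})
# V4: the Klein four-group of double transpositions
v4 = closure({(1, 0, 3, 2), (2, 3, 0, 1)})

print(len(s4), len(a4), len(v4))   # 24 12 4
```

Every quotient in this little chain is a cyclic group of prime order, the simplest possible atom; a simple group like the monster cannot be broken down this way at all.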

Most research papers in mathematics are short and concise. Some group theory papers are the exception, with two hundred pages packed with dense notation not uncommon. The classification theorem of finite simple groups is the ultimate outlier; it has been likened to the Manhattan Project of pure mathematics. Consisting of hundreds of papers published over decades by a large collection of authors, it is estimated, if every component involved in the proof were collected together, to be on the order of fifteen thousand pages, many of which are so technical that those not involved in the work itself have extreme difficulty understanding them. (In fact, a “Revision project” is currently underway with the goal of restating the proof in a form which future generations of mathematicians will be able to comprehend.) The last part of the classification theorem, itself more than a thousand pages in length, was not put into place until November 2004, so only then could one say with complete confidence that there are only 26 sporadic groups, all of which are known.

While these groups are “simple” in the sense of not being able to be decomposed, the symmetries most of them represent are of mind-boggling complexity. The order of a finite group is the number of elements it contains; for example, the group of permutations on five items has an order of 5! = 120. The simplest sporadic group has an order of 7920 and the biggest, well, it's a monster. In fact, that's what it's called, the “monster group”, and its order is (deep breath):

808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 =
2⁴⁶×3²⁰×5⁹×7⁶×11²×13³×17×19×23×29×31×41×47×59×71
If it helps, you can think of the monster as the group of rotations in a space of 196,884 dimensions—much easier to visualise, isn't it? In any case, that's how Robert Griess first constructed the monster in 1982, in a 102-page paper done without a computer.
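As a sanity check (my own, not from the book), a few lines of Python confirm that the prime factorisation really does multiply out to that 54-digit number, and that exactly 15 distinct primes are involved:

```python
# Order of the monster group, as a decimal integer
order = 808017424794512875886459904961710757005754368000000000

# Its prime factorisation: {prime: exponent}
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
           47: 1, 59: 1, 71: 1}

product = 1
for p, e in factors.items():
    product *= p ** e

assert product == order
print(len(factors))   # 15 distinct primes
```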

In one of those “take your breath away” connections between distant and apparently unrelated fields of mathematics, the prime divisors of the order of the monster are precisely the 15 supersingular primes, which are intimately related to the j-function of number theory. Other striking coincidences, or maybe deep connections, link the monster group to the Lorentzian geometry of general relativity, the multidimensional space of string theory, and the enigmatic properties of the number 163 in number theory. In 1983, Freeman Dyson mused, “I have a sneaking hope, a hope unsupported by any facts or any evidence, that sometime in the twenty-first century physicists will stumble upon the Monster group, built in some unsuspected way into the structure of the universe.” Hey, stranger things have happened.

This book, by a professional mathematician who is also a talented populariser of the subject, tells the story of this quest. During his career, he personally knew almost all of the people involved in the classification project, and leavens the technical details with biographical accounts and anecdotes of the protagonists. To avoid potentially confusing mathematical jargon, he uses his own nomenclature: “atom of symmetry” instead of finite simple group, “deconstruction” instead of decomposition, and so on. This sometimes creates its own confusion, since the extended quotes from mathematicians use the standard terminology; the reader should refer to the glossary at the end of the book to resolve any such puzzlement.

 Permalink

Meers, Nick. Stretch: The World of Panoramic Photography. Mies, Switzerland: RotoVision, 2003. ISBN 2-88046-692-X.
In the early years of the twentieth century, panoramic photography was all the rage. Itinerant photographers with unwieldy gear such as the Cirkut camera would visit towns to photograph and sell 360° panoramas of the landscape and wide format pictures of large groups of people, such as students at the school or workers at a factory or mine. George Lawrence's panoramas (some taken from a camera carried aloft by a kite) of the devastation resulting from the 1906 San Francisco earthquake and fire have become archetypal images of that disaster.

Although pursued as an art form by a small band of photographers, and still used occasionally for large group portraits, the panoramic fad largely died out with the popularity of fixed-format roll film cameras and the emergence of the ubiquitous 24×36 mm format. The advent of digital cameras and desktop image processing software able to “stitch” multiple images more or less seamlessly (if you know what you're doing when you take them) into an arbitrarily wide panorama has sparked a renaissance in the format, including special-purpose film and digital cameras for panoramic photography. Computers with high performance graphics hardware now permit viewing full-sphere virtual reality imagery in which the viewer can “look around” at will, something undreamed of in the first golden age of panoramas.

This book provides an introduction to the history, technology, and art of panoramic photography, alternating descriptions of equipment and technique with galleries featuring the work of contemporary masters of the format, including many examples of non-traditional subjects for panoramic presentation which will give you ideas for your own experiments. The book, which is beautifully printed in China, is itself in “panoramic format” with pages 30 cm wide by 8 cm tall for an aspect ratio of 3¾:1, allowing many panoramic pictures to be printed on a single page. (There are a surprising number of vertical panoramas in the examples which are short-changed by the page format, as they are always printed vertically rather than asking you to turn the book around to view them.) Although the quality of reproduction is superb, the typography is frankly irritating, at least to my ageing eyes. The body copy is set in a light sans-serif font with capitals about six points tall, and photo captions in even smaller type: four point capitals. If that weren't bad enough, all of the sections on technique are printed in white type on a black background which, especially given the high reflectivity of the glossy paper, is even more difficult to read. This appears to be entirely for artistic effect—there is plenty of white (or black) space which would have permitted using a more readable font. The cover price of US$30 seems high for a work of fewer than 150 pages, however wide and handsome.

 Permalink

Roth, Philip. The Plot Against America. New York: Vintage, 2004. ISBN 1-4000-7949-7.
Pulitzer Prize-winning mainstream novelist Philip Roth turns to alternative history in this novel, which also falls into the genre Rudy Rucker pioneered and named “transreal”—autobiographical fiction, in which the author (or a character clearly based upon him) plays a major part in the story. Here, the story is told in the first person by the author, as a reminiscence of his boyhood in the early 1940s in Newark, New Jersey. In this timeline, however, after a deadlocked convention, the Republican party chooses Charles Lindbergh as its 1940 presidential candidate who, running on an isolationist platform of “Vote for Lindbergh or vote for war”, defeats FDR's bid for a third term in a landslide.

After Lindbergh takes office, his tilt toward the Axis becomes increasingly evident. He appoints the virulently anti-Semitic Henry Ford as Secretary of the Interior, flies to Iceland to sign a pact with Hitler, and concludes a treaty with Japan which accepts all its Asian conquests so far. Further, he cuts off all assistance to Britain and the USSR. On the domestic front, his Office of American Absorption begins encouraging “urban” children (almost all of whom happen to be Jewish) to spend their summers on farms in the “heartland” imbibing “American values”, and later escalates to “encouraging” the migration of entire families (who happen to be Jewish) to rural areas.

All of this, and its many consequences, ranging from trivial to tragic, are seen through the eyes of young Philip Roth, as a boy living through those years and trying to make sense of them would have perceived events. A number of anecdotes have nothing to do with the alternative history story line and may be purely autobiographical. This is a “mood novel” and not remotely a thriller; the pace of the story-telling is languid, evoking the time sense and feeling of living in the present of a young boy. As alternative history, I found a number of aspects implausible and unpersuasive. Most exemplars of the genre choose one specific event at which the story departs from recorded history, then spin out the ramifications of that event as the story develops. For example, in 1945 by Newt Gingrich and William Forstchen, after the attack on Pearl Harbor, Germany does not declare war on the United States, which only goes to war against Japan. In Roth's book, the point of divergence is simply the nomination of Lindbergh for president. Now, in the real election of 1940, FDR defeated Wendell Willkie by 449 electoral votes to 82, with the Republican carrying only 10 of the 48 states. But here, with Lindbergh as the nominee, we're supposed to believe that FDR would lose in forty-six states, carrying only his home state of New York and squeaking to a narrow win in Maryland. This seems highly implausible to me—Lindbergh's agitation on behalf of America First made him a highly polarising figure, and his apparent sympathy for Nazi Germany (in 1938 he accepted a gold medal decorated with four swastikas from Hermann Göring in Berlin) made him anathema in much of the media. All of these negatives would have been pounded home by the Democrats, who had firm control of the House and Senate as well as the White House, and all the advantages of incumbency.
Turning a 38 state landslide into a 46 state wipeout simply by changing the Republican nominee stretches suspension of disbelief to the limit, at least for this reader, especially as Americans are historically disinclined to elect “outsiders” to the presidency.

If you accept this premise, then most of what follows is reasonably plausible and the descent of the country into a folksy all-American kind of fascism is artfully told. But then something very odd happens. As events are unfolding at their rather leisurely pace, on page 317 it's like the author realised he was about to run out of typewriter ribbon or something, and the whole thing gets wrapped up in ten pages, most of which is an unconfirmed account by one of the characters of behind-the-scenes events which may or may not explain everything, and then there's a final chapter to sort out the personal details. This left me feeling like Charlie Brown when Lucy snatches away the football; either the novel should be longer, or else the pace of the whole thing should be faster rather than this whiplash-inducing discontinuity right before the end—but who am I to give advice to somebody with a Pulitzer?

A postscript provides real-world biographies of the many historical figures who appear in the novel, and the complete text of Lindbergh's September 1941 Des Moines speech to the America First Committee which documents his contemporary sentiments for readers who are unaware of this unsavoury part of his career.

 Permalink

December 2006

Bova, Ben. Mercury. New York: Tor, 2005. ISBN 0-7653-4314-2.
I hadn't read anything by Ben Bova in years—certainly not since 1990. I always used to think of him as a journeyman science fiction writer, cranking out enjoyable stories mostly toward the hard end of the science fiction spectrum, but not a grand master of the calibre of, say, Heinlein, Clarke, and Niven. His stint as editor of Analog was admirable, proving himself a worthy successor to John W. Campbell, who developed the authors of the golden age of science fiction. Bova is also a prolific essayist on science, writing, and other topics, and his January 1965 Analog article “It's Done with Mirrors” with William F. Dawson may have been one of the earliest proposals of a multiply-connected small universe cosmological model.

I don't read a lot of fiction these days, and tend to lose track of authors, so when I came across this book in an airport departure lounge and noticed it was published in 2005, my first reaction was, “Gosh, is he still writing?” (Bova was born in 1932, and his first novel was published in 1959.) The U.K. paperback edition was featured in a “buy one, get one free” bin, so how could I resist?

I ought to strengthen my resistance. This novel is so execrably bad that several times in the process of reading it I was tempted to rip it to bits and burn them to ensure nobody else would have to suffer the experience. There is nothing whatsoever redeeming about this book. The plot is a conventional love triangle/revenge tale. The only thing that makes it science fiction at all is that it's set in the future and involves bases on Mercury, space elevators, and asteroid mining, but these are just backdrops for a story which could take place anywhere. Notwithstanding the title, which places it within the author's “Grand Tour” series, only about half of the story takes place on Mercury, whose particulars play only a small part.

Did I mention the writing? No, I guess I was trying to forget it. Each character, even throw-away figures who appear only in a single chapter, is introduced by a little sketch which reads like something produced by filling out a form. For example,

Jacqueline Wexler was such an administrator. Gracious and charming in public, accommodating and willing to compromise at meetings, she nevertheless had the steel-hard will and sharp intellect to drive the ICU's ramshackle collection of egos toward goals that she herself selected. Widely known as ‘Attila the Honey,’ Wexler was all sweetness and smiles on the outside, and ruthless determination within.
After spending a third of page 70 on this paragraph, which makes my teeth ache just to re-read, the formidable Ms. Wexler walks off stage before the end of p. 71, never to re-appear. But fear not (or fear), there are many, many more such paragraphs in subsequent pages.

An Earth-based space elevator, a science fiction staple, is central to the plot, and here Bova bungles the elementary science of such a structure in a laugh-out-loud chapter in which the three principal characters ride the elevator to a platform located at the low Earth orbit altitude of 500 kilometres. Upon arrival there, they find themselves weightless, while in reality they would still experience about 86 percent of their weight at the Earth's surface! Objects in orbit are weightless because they are in free fall, their horizontal velocity carrying them around the Earth as fast as they fall toward it, but a platform fixed to an elevator at 500 kilometres is travelling only at the speed of the Earth's rotation, about 1/15 of orbital velocity at that altitude. The only place on a space elevator where weightlessness would be experienced is the point which moves at precisely orbital velocity, and that is at geosynchronous altitude. This is not a small detail; it is central to the physics, engineering, and economics of space elevators, and it figured prominently in Arthur C. Clarke's 1979 novel The Fountains of Paradise, which is alluded to here on p. 140.
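The numbers are easy to check. Here is a back-of-the-envelope Python sketch (my own calculation, using standard values for Earth's gravitational parameter and rotation rate):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean radius of the Earth, m
OMEGA = 7.2921159e-5     # Earth's sidereal rotation rate, rad/s

r = R_EARTH + 500e3                 # radius at 500 km altitude
v_orbit = math.sqrt(GM / r)         # circular orbital velocity there
v_tower = OMEGA * r                 # speed of a platform riding the elevator
g_local = GM / r**2                 # gravitational acceleration at altitude
g_net = g_local - OMEGA**2 * r      # apparent gravity, less centrifugal relief
g_surface = GM / R_EARTH**2

print(f"orbital velocity at 500 km: {v_orbit:.0f} m/s")        # ~7600 m/s
print(f"elevator platform velocity: {v_tower:.0f} m/s")        # ~500 m/s
print(f"apparent weight: {g_net / g_surface:.0%} of surface")  # ~86%

# Weightlessness occurs where the platform velocity equals orbital
# velocity, i.e. at geosynchronous radius:
r_geo = (GM / OMEGA**2) ** (1 / 3)
print(f"geosynchronous radius: {r_geo / 1e3:.0f} km")          # ~42,000 km
```

A platform at 500 km thus moves at roughly one-fifteenth of orbital velocity, and its occupants would feel about six-sevenths of their normal weight, not weightlessness.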

Nor does Bova restrain himself from what is becoming a science fiction cliché of the first magnitude: “nano-magic”. This is my term for using the “nano” prefix the way bad fantasy authors use “magic”. For example, Lord Hacksalot draws his sword and cuts down a mighty oak tree with a single blow, smashing the wall of the evil prince's castle. The editor says, “Look, you can't cut down an oak tree with a single swing of a sword.” Author: “But it's a magic sword.” On p. 258 the principal character is traversing a tether between two parts of a ship in the asteroid belt which, for some reason, the author believes is filled with deadly radiation. “With nothing protecting him except the flimsy…suit, Bracknell felt like a turkey wrapped in a plastic bag inside a microwave oven. He knew that high-energy radiation was sleeting down on him from the pale, distant Sun and still-more-distant stars. He hoped that suit's radiation protection was as good as the manufacturer claimed.” Imaginary editor (who clearly never read this manuscript): “But the only thing which can shield you from heavy primary cosmic rays is mass, and lots of it. No ‘flimsy suit’ however it's made, can protect you against iron nuclei incoming near the speed of light.” Author: “But it's a nano suit!”

Not only is the science wrong, the fiction is equally lame. Characters simply don't behave as people do in the real world, nor are events and their consequences plausible. We are expected to believe that the causes of and blame for a technological catastrophe which killed millions would be left to be decided by a criminal trial of a single individual in Ecuador without any independent investigation. Or that a conspiracy to cause said disaster involving a Japanese mega-corporation, two mass religious movements, rogue nanotechnologists, and numerous others could be organised, executed, and subsequently kept secret for a decade. The dénouement hinges on a coincidence so fantastically improbable that the plausibility of the plot would be improved were the direct intervention of God Almighty posited instead.

Whatever became of Ben Bova, whose science was scientific and whose fiction was fun to read? It would be uncharitable to attribute this waste of ink and paper to age, as many science fictioneers with far more years on the clock have penned genuine classics. But look at this! Researching the author's biography, I discovered that in 1996, at the age of 64, he received a doctorate in education from California Coast University, a “distance learning” institution. Now, remember back when you were in engineering school struggling with thermogoddamics and fluid mechanics how you regarded the student body of the Ed school? Well, I always assumed it was a selection effect—those who can do, and those who can't…anyway, it never occurred to me that somewhere in that dark, lowering building they had a nano brain mushifier which turned the earnest students who wished to dedicate their careers to educating the next generation into the cognitively challenged classes they graduated. I used to look forward to reading anything by Ben Bova; I shall, however, forgo further works by the present Doctor of Education.

 Permalink

Gershenfeld, Neil. Fab. New York: Basic Books, 2005. ISBN 0-465-02745-8.
Once, every decade or so, you encounter a book which empowers you in ways you never imagined before you opened it, and ultimately changes your life. This is one of those books. I am who I am (not to sound too much like Popeye) largely because in the fall of 1967 I happened to read Daniel McCracken's FORTRAN book and realised that there was nothing complicated at all about programming computers—it was a vocational skill that anybody could learn, much like operating a machine tool. (Of course, as you get deeper into the craft, you discover there is a great body of theory to master, but there's much you can accomplish if you're willing to work hard and learn on the job before you tackle the more abstract aspects of the art.) But this was not only something that I could do but, more importantly, I could learn by doing—and that's how I decided to spend the rest of my professional life and I've never regretted having done so. I've never met a genuinely creative person who wished to spend a nanosecond in a classroom downloading received wisdom at dial-up modem bandwidth. In fact, I suspect the absence of such people in the general population is due to the pernicious effects of the Bismarck worker-bee indoctrination to which the youth of most “developed” societies are subjected today.

We all know that, some day, society will pass through the nanotechnological singularity, after which we'll be eternally free, eternally young, immortal, and incalculably rich: hey—works for me!   But few people realise that if the age of globalised mass production is analogous to that of mainframe computers and if the desktop nano-fabricator is equivalent to today's personal supercomputer, we're already in the equivalent of the minicomputer age of personal fabrication. Remember minicomputers? Not too large, not too small, and hence difficult to classify: too expensive for most people to buy, but within the budget of groups far smaller than the governments and large businesses who could afford mainframes.

The minicomputer age of personal fabrication is as messy as the architecture of minicomputers of four decades before: there are lots of different approaches, standards, interfaces, all mutually incompatible: isn't innovation wonderful? Well, in this sense no!   But it's here, now. For a sum in the tens of thousands of U.S. dollars, it is now possible to equip a “Fab Lab” which can make “almost anything”. Such a lab can fit into a modestly sized room, and, provided with electrical power and an Internet connection, can empower whoever crosses its threshold to create whatever their imagination can conceive. In just a few minutes, their dream can become tangible hardware in the real world.

The personal computer revolution empowered almost anybody (at least in the developed world) to create whatever information processing technology their minds could imagine, on their own, or in collaboration with others. The Internet expanded the scope of this collaboration and connectivity around the globe: people who have never met one another are now working together to create software which will be used by people who have never met the authors to everybody's mutual benefit. Well, software is cool, but imagine if this extended to stuff. That's what Fab is about. SourceForge currently hosts more than 135,500 software development projects—imagine what will happen when StuffForge.net (the name is still available, as I type this sentence!) hosts millions of OpenStuff things you can download to your local Fab Lab, make, and incorporate into inventions of your own imagination. This is the grand roll-back of the industrial revolution, the negation of globalisation: individuals, all around the world, creating for themselves products tailored to their own personal needs and those of their communities, drawing upon the freely shared wisdom and experience of their peers around the globe. What a beautiful world it will be!

Cynics will say, “Sure, it can work at MIT—you have one of the most talented student bodies on the planet, supported by a faculty which excels in almost every discipline, and an industrial plant with bleeding edge fabrication technologies of all kinds.” Well, yes, it works there. But the most inspirational thing about this book is that it seems to work everywhere: not just at MIT but also in South Boston, rural India, Norway far north of the Arctic Circle, Ghana, and Costa Rica—build it and they will make. At times the author seems unduly amazed that folks without formal education and the advantages of a student at MIT can imagine, design, fabricate, and apply a solution to a problem in their own lives. But we're human beings—tool-making primates who've prospered by figuring things out and finding ways to make our lives easier by building tools. Is it so surprising that putting the most modern tools into the hands of people who daily confront the most fundamental problems of existence (access to clean water, food, energy, and information) will yield innovations which surprise even professors at MIT?

This book is so great, and so inspiring, that I will give the author a pass on his clueless attack on AutoCAD's (never attributed) DXF file format on pp. 46–47, noting simply that the answer to why it's called “DXF” is that Lotus had already used “DIF” for their spreadsheet interchange files and we didn't want to create confusion with their file format, and that the reason there's more than one code for an X co-ordinate is that many geometrical objects require more than one X co-ordinate to define them (well, duh).

The author also totally gets what I've been talking about since Unicard and even before that as “Gizmos”: that every single device in the world, and every button on every device, will eventually have its own (IPv6) Internet address and be able to interact with every other such object in every way that makes sense. I envisioned MIDI networks as the cheapest way to implement this bottom-feeder light-switch to light-bulb network; the author, a decade later, opts for a PCM “Internet 0”—works for me. The medium doesn't matter; what ultimately matters is that the message gets from end to end so cheaply that you can ignore the cost of the interconnection.

The author closes the book with the invitation:

Finally, demand for fab labs as a research project, as a collection of capabilities, as a network of facilities, and even as a technological empowerment movement is growing beyond what can be handled by the initial collection of people and institutional partners that were involved in launching them. I/we welcome your thoughts on, and participation in, shaping their future operational, organizational, and technological form.
Well, I am but a humble programmer, but here's how I'd go about it. First of all, I'd create a “Fabrication Trailer” which could visit every community in the United States, Canada, and Mexico; I'd send it out on the road in every MIT vacation season to preach the evangel of “make” to every community it visited. In, say, one community in eighty, one would find a person who had dreamed of this happening in his or her lifetime and was empowered by seeing it happen; provide such people a template which, for little more than the writing of a cheque, replicates the lab, and watch it spread. And as it spreads, and creates wealth, it will spawn other Fab Labs.

Then, after it's perfected in a couple of hundred North American copies, design a Fab Lab that fits into an ocean cargo container and can be shipped anywhere. If there isn't electricity and Internet connectivity, also deliver the diesel generator or solar panels and satellite dish. Drop these into places where they're most needed, along with a wonk who can bootstrap the locals into doing things with these tools which astound even those who created them. Humans are clever, tool-making primates; give us the tools to realise what we imagine and then stand back and watch what happens!

The legacy media bombard us with conflict, murder, and mayhem. But the future is about creation and construction. What does An Army of Davids do when they turn their creativity and ingenuity toward creating solutions to problems perceived and addressed by individuals? Why, they'll call it a renaissance! And that's exactly what it will be.

For more information, visit the Web site of The Center for Bits and Atoms at MIT, which the author directs. Fab Central provides links to Fab Labs around the world, the machines they use, and the open source software tools you can download and start using today.

 Permalink

Milosz, Czeslaw. The Captive Mind. New York: Vintage, [1951, 1953, 1981] 1990. ISBN 0-679-72856-2.
This book is an illuminating exploration of life in a totalitarian society, written by a poet and acute observer of humanity who lived under two of the tyrannies of the twentieth century and briefly served one of them. The author was born in Lithuania in 1911 and studied at the university in Vilnius, a city he describes (p. 135) as “ruled in turn by the Russians, Germans, Lithuanians, Poles, again the Lithuanians, again the Germans, and again the Russians”—and now again the Lithuanians. An ethnic Pole, he settled in Warsaw after graduation, and witnessed the partition of Poland between Nazi Germany and the Soviet Union at the outbreak of World War II, conquest and occupation by Germany, “liberation” by the Red Army, and the imposition of Stalinist rule under the tutelage of Moscow. After working with the underground press during the war, the author initially supported the “people's government”, even serving as a cultural attaché at the Polish embassies in Washington and Paris. As Stalinist terror descended upon Poland and the rigid dialectical “Method” was imposed upon intellectual life, he saw tyranny ascendant once again and chose exile in the West, initially in Paris and finally the U.S., where he became a professor at the University of California at Berkeley in 1961—imagine, an anti-communist at Berkeley!

In this book, he explores the various ways in which the human soul comes to terms with a regime which denies its very existence. Four long chapters explore the careers of four Polish writers he denotes as “Alpha” through “Delta” and the choices they made when faced with a system which offered them substantial material rewards in return for conformity with a rigid system which put them at the service of the State, working toward ends prescribed by the “Center” (Moscow). He likens acceptance of this bargain to swallowing a mythical happiness pill, which, by eliminating the irritations of creativity, scepticism, and morality, guarantees those who take it a tranquil place in a well-ordered society. In a powerful chapter titled “Ketman”—a Persian word denoting fervent protestations of faith by nonbelievers, not only in the interest of self-preservation, but of feeling superior to those they so easily deceive—Milosz describes how an entire population can become actors who feign belief in an ideology and pretend to believe the earnest affirmations of orthodoxy on the part of others while sharing scorn for the few true believers.

The author received the 1980 Nobel Prize in Literature.

 Permalink

Hawkins, Jeff with Sandra Blakeslee. On Intelligence. New York: Times Books, 2004. ISBN 0-8050-7456-2.
Ever since the early days of research into the sub-topic of computer science which styles itself “artificial intelligence”, such work has been criticised by philosophers, biologists, and neuroscientists who argue that while symbolic manipulation, database retrieval, and logical computation may be able to mimic, to some limited extent, the behaviour of an intelligent being, in no case does the computer understand the problem it is solving in the sense a human does. John R. Searle's “Chinese Room” thought experiment is one of the best known and extensively debated of these criticisms, but there are many others just as cogent and difficult to refute.

These days, criticising artificial intelligence verges on hunting cows with a bazooka—unlike the early days in the 1950s when everybody expected the world chess championship to be held by a computer within five or ten years and mathematicians were fretting over what they'd do with their lives once computers learnt to discover and prove theorems thousands of times faster than they, decades of hype, fads, disappointment, and broken promises have instilled some sense of reality into the expectations most technical people have for “AI”, if not into those working in the field and those they bamboozle with the sixth (or is it the sixteenth) generation of AI bafflegab.

AI researchers sometimes defend their field by saying “If it works, it isn't AI”, by which they mean that as soon as a difficult problem once considered within the domain of artificial intelligence—optical character recognition, playing chess at the grandmaster level, recognising faces in a crowd—is solved, it's no longer considered AI but simply another computer application, leaving AI with the remaining unsolved problems. There is certainly some truth in this, but a closer look gives the lie to the claim that these problems, solved with enormous effort on the part of numerous researchers, and with the application, in most cases, of computing power undreamed of in the early days of AI, actually represent “intelligence”, or at least what one regards as intelligent behaviour on the part of a living brain.

First of all, in no case did a computer “learn” how to solve these problems in the way a human or other organism does; in every case experts analysed the specific problem domain in great detail, developed special-purpose solutions tailored to the problem, and then implemented them on computing hardware which in no way resembles the human brain. Further, each of these “successes” of AI is useless outside its narrow scope of application: a chess-playing computer cannot read handwriting, a speech recognition program cannot identify faces, and a natural language query program cannot solve mathematical “word problems” which pose no difficulty to fourth graders. And while many of these programs are said to be “trained” by presenting them with collections of stimuli and desired responses, no amount of such training will permit, say, an optical character recognition program to learn to write limericks. Such programs can certainly be useful, but nothing other than the fact that they solve problems which were once considered difficult in an age when computers were much slower and had limited memory resources justifies calling them “intelligent”, and outside the marketing department, few people would remotely consider them so.

The subject of this ambitious book is not “artificial intelligence” but intelligence: the real thing, as manifested in the higher cognitive processes of the mammalian brain, embodied, by all the evidence, in the neocortex. One of the most fascinating things about the neocortex is how much a creature can do without one, for only mammals possess one. Reptiles, birds, amphibians, fish, and even insects (which barely have a brain at all) exhibit complex behaviour, perception of and interaction with their environment, and adaptation to an extent which puts to shame the much-vaunted products of “artificial intelligence”, and yet they all do so without a neocortex at all. In this book, the author hypothesises that the neocortex evolved in mammals as an add-on to the old brain (essentially, what computer architects would call a “bag hanging on the side of the old machine”) which implements a multi-level hierarchical associative memory for patterns and a complementary decoder from patterns to detailed low-level behaviour which, wired through the old brain to the sensory inputs and motor controls, dynamically learns spatial and temporal patterns and uses them to make predictions which are fed back to the lower levels of the hierarchy, which in turn signal whether further inputs confirm or deny them. The ability of the high-level cortex to correctly predict inputs is what we call “understanding” and it is something which no computer program is presently capable of doing in the general case.
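The memory-prediction loop described above can be caricatured in a few lines of code. The following toy predictor is my own illustration, not anything from the book (and a single-level transition table, not Hawkins's multi-level hierarchy): it learns which symbol tends to follow which, predicts the next input, and notes whether the actual input confirms or denies the prediction.

```python
from collections import defaultdict

class ToySequenceMemory:
    """Drastically simplified caricature of a memory-prediction loop:
    learn transition frequencies in a stream of symbols, predict the
    next input, and report whether reality confirms the prediction."""

    def __init__(self):
        # counts[a][b] = how often symbol b has followed symbol a
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        confirmed = None
        if self.prev is not None:
            prediction = self.predict(self.prev)
            confirmed = (prediction == symbol)
            self.counts[self.prev][symbol] += 1
        self.prev = symbol
        return confirmed

    def predict(self, context):
        # Predict the most frequently seen follower, if any
        followers = self.counts[context]
        if not followers:
            return None
        return max(followers, key=followers.get)

mem = ToySequenceMemory()
for s in "abcabcabc":
    mem.observe(s)
print(mem.predict("a"))  # → b
```

After training on the repeating pattern, the memory “understands” it in the minimal sense of correctly predicting each next symbol; anything a real cortex does is, of course, vastly richer than this sketch.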

Much of the recent and present-day work in neuroscience has been devoted to imaging where the brain processes various kinds of information. While fascinating and useful, these investigations may overlook one of the most striking things about the neocortex: that almost every part of it, whether devoted to vision, hearing, touch, speech, or motion, appears to have more or less the same structure. This observation, by Vernon B. Mountcastle in 1978, suggests there may be a common cortical algorithm by which all of these seemingly disparate forms of processing are done. Consider: by the time sensory inputs reach the brain, they are all in the form of spikes transmitted by neurons, and all outputs are sent in the same form, regardless of their ultimate effect. Further, evidence of plasticity in the cortex is abundant: in cases of damage, the brain seems to be able to re-wire itself to transfer a function to a different region of the cortex. In a long (70 page) chapter, the author presents a sketchy model of what such a common cortical algorithm might be, and how it may be implemented within the known physiological structure of the cortex.

The author is a founder of Palm Computing and Handspring (which was subsequently acquired by Palm). He later founded the Redwood Neuroscience Institute, which has now become part of the Helen Wills Neuroscience Institute at the University of California, Berkeley, and in March of 2005 founded Numenta, Inc. with the goal of developing computer memory systems based on the model of the neocortex presented in this book.

Some academic scientists may sniff at the pretensions of a (very successful) entrepreneur diving into their speciality and trying to figure out how the brain works at a high level. But, hey, nobody else seems to be doing it—the computer scientists are hacking away at their monster programs and parallel machines, the brain community seems stuck on functional imaging (like trying to reverse-engineer a microprocessor in the nineteenth century by looking at its gross chemical and electrical properties), and the neuron experts are off dissecting squid: none of these seem likely to lead to an understanding (there's that word again!) of what's actually going on inside their own tenured, taxpayer-funded skulls. There is undoubtedly much that is wrong in the author's speculations, but then he admits that from the outset and, admirably, presents an appendix containing eleven testable predictions, each of which can falsify all or part of his theory. I've long suspected that intelligence has more to do with memory than computation, so I'll confess to being predisposed toward the arguments presented here, but I'd be surprised if any reader didn't find themselves thinking about their own thought processes in a different way after reading this book. You won't find the answers to the mysteries of the brain here, but at least you'll discover many of the questions worth pondering, and perhaps an idea or two worth exploring with the vast computing power at the disposal of individuals today and the boundless resources of data in all forms available on the Internet.

 Permalink

  2007  

January 2007

Wade, Nicholas. Before The Dawn. New York: Penguin Press, 2006. ISBN 1-59420-079-3.
Modern human beings, physically very similar to people alive today, with spoken language and social institutions including religion, trade, and warfare, had evolved by 50,000 years ago, yet written historical records go back only about 5,000 years. Ninety percent of human history, then, is “prehistory” which paleoanthropologists have attempted to decipher from meagre artefacts and rare discoveries of human remains. The degree of inference and the latitude for interpretation of this material has rendered conclusions drawn from it highly speculative and tentative. But in the last decade this has begun to change.

While humans only began to write the history of their species in the last 10% of their presence on the planet, the DNA that makes them human has been patiently recording their history in a robust molecular medium which humans have only recently learnt to read, with the ability to determine the sequence of the genome. This has provided a new, largely objective, window on human history and origins, and has both confirmed results teased out of the archæological record over the centuries, and yielded a series of stunning surprises which are probably only the first of many to come.

Each individual's genome is a mix of genes inherited from their father and mother, plus a few random changes (mutations) due to errors in the process of transcription. The separate genome of the mitochondria (energy producing organelles) in their cells is inherited exclusively from the mother, and in males, the Y chromosome (except for the very tips) is inherited directly from the father, unmodified except for mutations. In an isolated population whose members breed mostly with one another, members of the group will come to share a genetic signature which reflects natural selection for reproductive success in the environment they inhabit (climate, sources of food, endemic diseases, competition with other populations, etc.) and the effects of random “genetic drift” which acts to reduce genetic diversity, particularly in small, isolated populations. Random mutations appear in certain parts of the genome at a reasonably constant rate, which allows them to be used as a “molecular clock” to estimate the time elapsed since two related populations diverged from their last common ancestor. (This is biology, so naturally the details are fantastically complicated, messy, subtle, and difficult to apply in practice, but the general outline is as described above.)
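The arithmetic behind the molecular clock is simple, even if applying it in practice is anything but. A back-of-the-envelope sketch (the rate and counts below are made-up illustrative numbers, not figures from the book):

```python
def divergence_time(differing_sites, total_sites, rate_per_site_per_year):
    """Estimate years since two populations shared a common ancestor.

    Mutations accumulate independently along both lineages after the
    split, so the observed per-site divergence is roughly
    2 * rate * time, and time = divergence / (2 * rate)."""
    per_site_divergence = differing_sites / total_sites
    return per_site_divergence / (2 * rate_per_site_per_year)

# Hypothetical example: 50 differences over a 16,500-site mitochondrial
# sequence, at an assumed neutral rate of 1e-8 per site per year.
years = divergence_time(50, 16_500, 1e-8)
print(round(years))  # → 151515
```

As the text notes, the real details (variable rates, back-mutations, selection at linked sites, calibration against dated fossils) make the honest calculation far messier than this two-line estimate.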

Even without access to the genomes of long-dead ancestors (which are difficult in the extreme to obtain and fraught with potential sources of error), the genomes of current populations provide a record of their ancestry, geographical origin, migrations, conquests and subjugations, isolation or intermarriage, diseases and disasters, population booms and busts, sources of food, and, by inference, language, social structure, and technologies. This book provides a look at the current state of research in the rapidly expanding field of genetic anthropology, and it makes for an absolutely compelling narrative of the human adventure. Obviously, in a work where the overwhelming majority of source citations are to work published in the last decade, this is a description of work in progress and most of the deductions made should be considered tentative pending further results.

Genomic investigation has shed light on puzzles as varied as the size of the initial population of modern humans who left Africa (almost certainly less than 1000, and possibly a single hunter-gatherer band of about 150), the date when wolves were domesticated into dogs and where it happened, the origin of wheat and rice farming, the domestication of cattle, the origin of surnames in England, and the genetic heritage of the randiest conqueror in human history, Genghis Khan, who, based on Y chromosome analysis, appears to have about 16 million living male descendants today.

Some of the results from molecular anthropology run the risk of being so at variance with the politically correct ideology of academic soft science that the author, a New York Times reporter, tiptoes around them with the mastery of prose which on other topics he deploys toward their elucidation. Chief among these is the discussion of the microcephalin and ASPM genes on pp. 97–99. (Note that genes are often named based on syndromes which result from deleterious mutations within them, and hence bear names opposite to their function in the normal organism. For example, the gene which triggers the cascade of eye formation in Drosophila is named eyeless.) Both of these genes appear to regulate brain size and, in particular, the development of the cerebral cortex, which is the site of higher intelligence in mammals. Specific alleles of these genes are of recent origin, and are unequally distributed geographically among the human population. Haplogroup D of microcephalin appeared in the human population around 37,000 years ago (all of these estimates have a large margin of error), which is just about the time when quintessentially modern human behaviour such as cave painting appeared in Europe. Today, about 70% of the population of Europe and East Asia carry this allele, but its incidence in populations in sub-Saharan Africa ranges from 0 to 25%. The ASPM gene exists in two forms: a “new” allele which arose only about 5800 years ago (coincidentally[?] just about the time when cities, agriculture, and written language appeared), and an “old” form which predates this period. Today, the new allele occurs in about 50% of the population of the Middle East and Europe, but hardly at all in sub-Saharan Africa. Draw your own conclusions from this about the potential impact on human history when germline gene therapy becomes possible, and why opposition to it may not be the obvious ethical choice.

 Permalink

Derbyshire, John. Unknown Quantity. Washington: Joseph Henry Press, 2006. ISBN 0-309-09657-X.
After exploring a renowned mathematical conundrum (the Riemann Hypothesis) in all its profundity in Prime Obsession (June 2003), in this book the author recounts the history of algebra—an intellectual quest sprawling over most of recorded human history and occupying some of the greatest minds our species has produced. Babylonian cuneiform tablets dating from the time of Hammurabi, about 3800 years ago, demonstrate solving quadratic equations, extracting square roots, and finding Pythagorean triples. (The methods in the Babylonian texts are recognisably algebraic but are expressed as “word problems” instead of algebraic notation.) Diophantus, about 2000 years later, was the first to write equations in a symbolic form, but this was promptly forgotten. In fact, twenty-six centuries after the Babylonians were solving quadratic equations expressed in word problems, al-Khwārizmī (the word “algebra” is derived from the title of his book,
الكتاب المختصر في حساب الجبر والمقابلة
al-Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa-l-muqābala,
and “algorithm” from his name) was solving quadratic equations in word problems. It wasn't until around 1600 that anything resembling the literal symbolism of modern algebra came into use, and it took an intellect of the calibre of René Descartes to perfect it. Finally, equipped with an expressive notation, rules for symbolic manipulation, and the slowly dawning realisation that this, not numbers or geometric figures, is ultimately what mathematics is about, mathematicians embarked on a spiral of abstraction, discovery, and generalisation which has never ceased to accelerate in the centuries since. As more and more mathematics was discovered (or, if you're an anti-Platonist, invented), deep and unexpected connections were found among topics once considered unrelated, and this is a large part of the story told here, as algebra has “infiltrated” geometry, topology, number theory, and a host of other mathematical fields while, in the form of algebraic geometry and group theory, providing the foundation upon which the most fundamental theories of modern physics are built.

With all of these connections, there's a strong temptation for an author to wander off into fields not generally considered part of algebra (for example, analysis or set theory); Derbyshire is admirable in his ability to stay on topic, while not shortchanging the reader where important cross-overs occur. In a book of this kind, especially one covering such a long span of history and a topic so broad, it is difficult to strike the right balance between explaining the mathematics and sketching the lives of the people who did it, and between a historical narrative and one which follows the evolution of specific ideas over time. In the opinion of this reader, Derbyshire's judgement on these matters is impeccable. As implausible as it may seem to some that a book about algebra could aspire to such a distinction, I found this one of the more compelling page-turners I've read in recent months.

Six “math primers” interspersed in the text provide the fundamentals the reader needs to understand the chapters which follow. While they are excellent refreshers, readers who have never encountered these concepts before may find the primers difficult to comprehend (but then, they probably won't be reading a history of algebra in the first place). Thirty pages of end notes not only cite sources but expand, sometimes at substantial length, upon the main text; readers should not deprive themselves of this valuable lagniappe.

 Permalink

Florey, Kitty Burns. Sister Bernadette's Barking Dog. Hoboken, NJ: Melville House, 2006. ISBN 1-933633-10-7.
In 1877, Alonzo Reed and Brainerd Kellogg published Higher Lessons in English, which introduced their system for the grammatical diagramming of English sentences. For example, the sentence “When my father and my mother forsake me, then the Lord will take me up” (an example from Lesson 63 of their book) would be diagrammed as:
Diagrammed sentence
Diagram by Bruce D. Despain.
in the Reed and Kellogg system.

The idea was to make the grammatical structure of the sentence immediately evident, sharpening students' skills in parsing sentences and rendering grammatical errors apparent. This seems to have been one of those cases when an idea springs upon a world which has, without knowing it, been waiting for just such a thing. Sentence diagramming spread through U.S. schools like wildfire—within a few years Higher Lessons and the five other books on which Reed and Kellogg collaborated were selling at the astonishing rate of half a million copies a year, and diagramming was firmly established in the English classes of children across the country and remained so until the 1960s, when it evaporated almost as rapidly as it had appeared.

The author and I are both members of the last generation who were taught sentence diagramming at school. She remembers it as having been “fun” (p. 15), something which was not otherwise much in evidence in Sister Bernadette's sixth grade classroom. I learnt diagramming in the seventh grade, and it's the only part of English class that I recall having enjoyed. (Gertrude Stein once said [p. 73], “I really do not know anything that has ever been more exciting than diagramming sentences.” I don't think I'd go quite that far myself.) In retrospect, it seems an odd part of the curriculum: we spent about a month furiously parsing and diagramming, then dropped the whole thing and never took it up again that year or afterwards; I can't recall ever diagramming a sentence since.

This book, written by an author and professional copy editor, charmingly recounts the origin, practice, curiosities, and decline of sentence diagramming, and introduces the reader to stalwarts who are keeping it alive today. There is a wealth of examples from literature, including the 93-word concluding sentence of Proust's Time Regained, which appears as a two-page spread (pp. 94–95). (The author describes seeing a poster from the 1970s which diagrams a 958-word Proust sentence without an explicit subject.)

Does diagramming make one a better writer? The general consensus, which the author shares, is that it doesn't, which may explain why it is rarely taught today. While a diagram shows the grammatical structure of a sentence, you already have to understand the rules of grammar in order to diagram it, and you can make perfectly fine looking diagrams of barbarisms such as “Me and him gone out.” Also, as a programmer, it disturbs me that one cannot always unambiguously recover the word order of the original sentence from a diagram; this is not a problem with the tree diagrams used by linguists today. But something doesn't have to be useful to be fun (even if not, as it was to Gertrude Stein, exciting), and the structure of a complex sentence graphically elucidated on a page is marvellous to behold and rewarding to create. I'm sure some may disdain those of us who find entertainment in such arcane intellectual endeavours; after all, the first name of the co-inventor of diagramming, Brainerd Kellogg, includes both the words “brain” and “nerd”!

The author's remark on p. 120, “…I must confess that I like editing my own work more than I do writing it. I find first drafts painful; what I love is to revise and polish. Sometimes I think I write simply to have the fun of editing what I've written.” is one I share, as Gertrude Stein put it (p. 76), “completely entirely completely”—and it's a sentiment I don't ever recall seeing in print before. I think the fact that students aren't taught that a first draft is simply the raw material of a cogent, comprehensible document is why we encounter so many hideously poorly written documents on the Web.

The complete text of the 1896 Revised Edition of Reed and Kellogg's Higher Lessons in English is available from Project Gutenberg; the diagrams are rendered as ASCII art and a little difficult to read until you get used to them. Eugene R. Moutoux, who constructed the diagrams for the complicated sentences in Florey's book has a wealth of information about sentence diagramming on his Web site, including diagrams of famous first-page sentences from literature such as this beauty from Nathaniel Hawthorne's The Scarlet Letter.

 Permalink

Ponting, Clive. Gunpowder. London: Pimlico, 2005. ISBN 1-84413-543-8.
When I was a kid, we learnt in history class that gunpowder had been discovered in the thirteenth century by the English Franciscan monk Roger Bacon, who is considered one of the founders of Western science. The Chinese were also said to have known of gunpowder, but used it only for fireworks, as opposed to the applications in the fields of murder and mayhem the more clever Europeans quickly devised. In The Happy Turning (July 2003), H. G. Wells remarked that “truth has a way of heaving up through the cracks of history”, and so it has been with the origin of gunpowder, as recounted here.

It is one of those splendid ironies that gunpowder, which, along with its more recent successors, has contributed to the slaughter of more human beings than any other invention with the exception of government, was discovered in the 9th century A.D. by Taoist alchemists in China who were searching for an elixir of immortality (and, in fact, gunpowder continued to be used as a medicine in China for centuries thereafter). But almost as soon as the explosive potential of gunpowder was discovered, the Chinese began to apply it to weapons and, over the next couple of centuries, had invented essentially every kind of firearm and explosive weapon which exists today.

Gunpowder is not a high explosive; it does not detonate in a supersonic shock wave as do substances such as nitroglycerine and TNT, but rather deflagrates, or burns rapidly, as the heat of combustion causes the release of the oxygen in the nitrate compound in the mix. If confined, of course, the rapid release of gases and heat can cause a container to explode, but the rapid combustion of gunpowder also makes it suitable as a propellant in guns and rockets. The early Chinese formulations used a relatively small amount of saltpetre (potassium nitrate), and were used in incendiary weapons such as fire arrows, fire lances (a kind of flamethrower), and incendiary bombs launched by catapults and trebuchets. Eventually the Chinese developed high-nitrate mixes which could be used in explosive bombs, rockets, guns, and cannon (which were perfected in China long before the West, where the technology of casting iron did not appear until two thousand years after it was known in China).

From China, gunpowder technology spread to the Islamic world, where bombardment by a giant cannon contributed to the fall of Constantinople to the Ottoman Empire. Knowledge of gunpowder almost certainly reached Europe via contact with the Islamic invaders of Spain. The first known European document giving its formula, whose disarmingly candid Latin title Liber Ignium ad Comburendos Hostes translates to “Book of Fires for the Burning of Enemies”, dates from about 1300 and contains a number of untranslated Arabic words.

Gunpowder weapons soon became a fixture of European warfare, but crude gun fabrication and weak powder formulations initially limited their use mostly to huge siege cannons which launched large stone projectiles against fortifications at low velocity. But as weapon designs and the strength of powder improved, the balance in siege warfare shifted from the defender to the attacker, and the consolidation of power in Europe began to accelerate.

The author argues persuasively that gunpowder played an essential part in the emergence of the modern European state, because the infrastructure needed to produce saltpetre, manufacture gunpowder weapons in quantity, equip, train, and pay ever-larger standing armies required a centralised administration with intrusive taxation and regulation which did not exist before. Once these institutions were in place, they conferred such a strategic advantage that the ruler was able to consolidate and expand the area of control at the expense of previously autonomous regions, until coming up against another such “gunpowder state”.

Certainly it was gunpowder weapons which enabled Europeans to conquer colonies around the globe and eventually impose their will on China, where centuries of political stability had caused weapons technology to stagnate by comparison with that of conflict-ridden Europe.

It was not until the nineteenth century that other explosives and propellants discovered by European chemists brought the millennium-long era of gunpowder to a close. Gunpowder shaped human history as have few other inventions. This excellent book recounts that story from gunpowder's accidental invention as an elixir to its replacement by even more destructive substances, and provides a perspective on a thousand years of world history in terms of the weapons with which so much of it was created.

 Permalink

Walden, George. Time to Emigrate? London: Gibson Square, 2006. ISBN 1-903933-93-5.
Readers of Theodore Dalrymple's Life at the Bottom and Our Culture, What's Left of It may have thought his dire view of the state of civilisation in Britain to have been unduly influenced by his perspective as a prison and public hospital physician in one of the toughest areas of Birmingham, England. Here we have, if not the “view from the top”, a brutally candid evaluation written by a former Minister of Higher Education in the Thatcher government and Conservative member of the House of Commons from 1983 until his retirement in 1997, and it is, if anything, more disturbing.

The author says of himself (p. 219), “My life began unpromisingly, but everything's always got better. … In other words, in personal terms I've absolutely no complaints.” But he is deeply worried about whether his grown children and their children can have the same expectations in the Britain of today and tomorrow. The book is written in the form of a long (224 page) and somewhat rambling letter to a fictional son and his wife who are pondering emigrating from Britain after their young son was beaten into unconsciousness by immigrants within sight of their house in London. He describes his estimation of the culture, politics, and economy of Britain as much like the work of a house surveyor: trying to anticipate the problems which may befall those who choose to live there. Wherever he looks: immigration, multiculturalism, education, transportation, the increasingly debt-supported consumer economy, public health services, mass media, and the state of political discourse, he finds much to fret about. But this does not come across as the sputtering of an ageing Tory, but rather a thoroughly documented account of how most of the things which the British have traditionally valued (and have attracted immigrants to their shores) have eroded during his lifetime, to such an extent that he can no longer believe that his children and grandchildren will have the same opportunities he had as a lower middle class boy born twelve days after Britain declared war on Germany in 1939.

The curious thing about emigration from the British Isles today is that it's the middle class that is bailing out. Over most of history, it was the lower classes seeking opportunity (or in the case of my Irish ancestors, simply survival) on foreign shores, and the surplus sons of the privileged classes hoping to found their own dynasties in the colonies. But now, it's the middle that's being squeezed out, and it's because the collectivist state is squeezing them for all they're worth. The inexorably growing native underclass and immigrants benefit from government services and either don't have the option to leave or else consider their lot in life in Britain far better than whence they came. The upper classes can opt out of the sordid shoddiness and endless grey queues of socialism; on p. 153 the author works out the cost: for a notional family of two parents and two children, “going private” for health care, education for the kids, transportation, and moving to a “safe neighbourhood” would roughly require doubling income from what such a typical family brings home.

Is it any wonder we have so many billionaire collectivists (Buffett, Gates, Soros, etc.)? They don't have to experience the sordid consequences of their policies, but by advocating them, they can recruit the underclass (who benefit from them and are eventually made dependent and unable to escape from helotry) to vote them into power and keep them there. And they can exult in virtue as their noble policies crush those who might aspire to their own exalted station. The middle class, who pay for all of this, forced into a minority, retain only the franchise which is exercised through shoe leather on pavement, and begin to get out while the property market remains booming and the doors are still open.

The author is anything but a doctrinaire Tory; he has, in fact, quit the party, and savages its present “100% Feck-Free” (my term) leader, David Cameron, as, among other things, a “transexualised [Princess] Diana” (p. 218). As an emigrant myself, albeit from a different country, I think his conclusion and final recommendation couldn't be wiser (and I'm sorry if this is a spoiler, but if you're considering such a course you should read this book cover to cover anyway): go live somewhere else (I'd say, anywhere else) and see how you like it. You may discover that you're obsessed with what you miss and join the “International Club” (which usually means the place they speak the language of the Old Country), or you may find that after struggling with language, customs, and how things are done, you fit in rather well and, after a while, find most of your nightmares are about things in the place you left instead of the one you worried about moving to. There's no way to know—it could go either way. I think the author, like many people, may have put somewhat more weight on the question of emigration than it deserves. I've always looked at countries like any other product. I've never accepted that because I happened to be born within the borders of some state to whose creation and legitimacy I never personally consented, that I owe it any obligation whatsoever apart from those in compensation for services provided directly to me with my assent. Quitting Tyrania to live in Freedonia is something anybody should be able to do, assuming the residents of Freedonia welcome you, and it shouldn't occasion any more soul-searching on the part of the emigrant than somebody choosing to trade in their VW bus for a Nissan econobox because the 1972 bus was a shoddy crapwagon. Yes, you should worry and even lose sleep over all the changes you'll have to make, but there's no reason to gum up an already difficult decision process by cranking all kinds of guilt into it. 
Nobody (well, nobody remotely sane) gets all consumed by questions of allegiance, loyalty, or heritage when deciding whether their next computer will run Windows, MacOS, Linux, or FreeBSD. It seems to me that once you step back from the flags and anthems and monuments and kings and presidents and prime ministers and all of the other atavistic baggage of the coercive state, it's wisest to look at your polity like an operating system; it's something that you have to deal with (increasingly, as the incessant collectivist ratchet tightens the garrote around individuality and productivity), but you still have a choice among them, and given how short is our tenure on this planet, we shouldn't waste a moment of it living somewhere that callously exploits our labours in the interest of others. And, the more productive people exercise that choice, the greater the incentive is for the self-styled rulers of the various states to create an environment which will attract people like ourselves.

Many of the same issues are discussed, from a broader European perspective, in Claire Berlinski's Menace in Europe and Mark Steyn's America Alone. To fend off queries, I emigrated from what many consider the immigration magnet of the world in 1991 and have never looked back and rarely even visited the old country except for business and family obligations. But then I suspect, as the author notes on p. 197, I am one of those D4-7 allele people (look it up!) who thrive on risk and novelty; I'm not remotely claiming that this is better—Heaven knows we DRD4 7-repeat folk have caused more than our cohort's proportion of chaos and mayhem, but we just can't give it up—this is who we are.

 Permalink

Card, Orson Scott. Empire. New York: Tor, 2006. ISBN 0-7653-1611-0.
I first heard of this novel in an Instapundit podcast interview with the author, with whom I was familiar, having read and admired Ender's Game when it first appeared in 1977 as a novelette in Analog (it was later expanded and published as a novel in 1985) and several of his books since then. I'd always pigeonholed him as a science fictioneer, so I was somewhat surprised to learn that his latest effort was a techno-thriller in the Tom Clancy vein, with the flabbergasting premise of a near future American civil war pitting the conservative “red states” against the liberal “blue states”. The interview, which largely stayed away from the book, was interesting and since I'd never felt let down by any of Card's previous work (although none of it that I'd read seemed to come up to the level of Ender's Game, but then I've read only a fraction of his prolific output), I decided to give it a try.
Spoiler warning: Plot and/or ending details follow.  
The story is set in the very near future: a Republican president detested by the left and reviled in the media is in the White House, the Republican nomination for his successor is a toss-up, and a ruthless woman is the Democratic front-runner. In fact, unless this is an alternative universe with a different calendar, we can identify the year as 2008, since that's the only presidential election year on which June 13th falls on a Friday until 2036, by which date it's unlikely Bill O'Reilly will still be on the air.
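For the calendar-minded, a couple of lines of Python suffice to check the date arithmetic (the range of years is mine; the claim about Friday the 13th of June is from the text above):

```python
from datetime import date

# U.S. presidential election years 2004-2040 in which June 13th is a Friday
# (weekday() uses the Monday=0 ... Sunday=6 convention, so Friday is 4)
fridays = [y for y in range(2004, 2041, 4) if date(y, 6, 13).weekday() == 4]
print(fridays)  # [2008, 2036]
```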

The book starts out with a bang and proceeds as a tautly-plotted, edge-of-the-seat thriller which I found more compelling than any of Clancy's recent doorstop specials. Then, halfway through chapter 11, things go all weird. It's like the author was holding his breath and chanting the mantra “no science fiction—no science fiction” and then just couldn't take it any more, explosively exhaled, drew a deep breath, and furiously started pounding the keys. (This is not, in fact, what happened, but we don't find that out until the end material, which I'll describe below.) Anyway, everything is developing as a near-future thriller combined with a “who do you trust” story of intrigue, and then suddenly, on p. 157, our heroes run into two-legged robotic Star Wars-like imperial walkers on the streets of Manhattan and, before long, storm troopers in space helmets and body armour, death rays that shoot down fighter jets, and later, “hovercycles”—yikes.

We eventually end up at a Bond villain redoubt in Washington State built by a mad collectivist billionaire clearly patterned on George Soros, for a final battle in which a small band of former Special Ops heroes take on all of the villains and their futuristic weaponry by grit and guile. If you like this kind of stuff, you'll probably like this. The author lost me with the imperial walkers, and it has nothing to do with my last name, or my anarchist proclivities.

May we do a little physics here? Let's take a closer look at the lair of the evil genius, hidden under a reservoir formed by a boondoggle hydroelectric dam “near Highway 12 between Mount St. Helens and Mount Rainier” (p. 350). We're told (p. 282) that the entry to the facility is hidden beneath the surface of the lake formed in a canyon behind a dam, and access to it is provided by pumping water from the lake to another, smaller lake in an adjacent canyon. The smaller lake is said to be two miles long, and exposing the entrance to the rebels' headquarters causes the water to rise fifteen feet in that lake. The width of the smaller lake is never given, but most of the natural lakes in that region seem to be long and skinny, so let's guess it's only a tenth as wide as it is long, or about 300 yards wide. The smaller lake is said to be above the lake which conceals the entrance, so to expose the door would require pumping a chunk of water we can roughly estimate (assuming the canyon is rectangular) at 2 miles by 300 yards by fifteen feet. Transforming all of these imperial (there's that word again!) measures into something comprehensible, we can compute the volume of water as about 4 million cubic metres or, as the boots on the ground would probably put it, about a billion gallons. This is a lot of water.

A cubic metre of water weighs 1000 kg, or a metric ton, so in order to expose the door, the villains would have to pump 4 billion kilograms of water uphill at least 15 feet (because the smaller lake is sufficiently above the facility to allow it to be flooded [p. 308] it would almost certainly be much more, but let's be conservative)—call it 5 metres. Now the energy required to raise this quantity of water 5 metres against the Earth's gravitation is just the product of the mass (4 billion kilograms), the distance (5 metres), and gravitational acceleration of 9.8 m/s², which works out to about 200 billion joules, or 54 megawatt-hours. If the height difference were double our estimate, double these numbers. Now to pump all of that water uphill in, say, half an hour (which seems longer than the interval in which it happens on pp. 288–308) will require about 100 megawatts of power, and that's assuming the pumps are 100% efficient and there are no frictional losses in the pipes. Where does the power come from? It can't come from the hydroelectric dam, since in order to generate the power to pump the water, you'd need to run a comparable amount of water through the dam's turbines (less because the drop would be greater, but then you have to figure in the efficiency of the turbines and generators, which is about 80%), and we've already been told that dumping the water over the dam would flood communities in the valley floor. If they could store the energy from letting the water back into the lower lake, then they could re-use it (less losses) to pump it back uphill again, but there's no way to store anything like that kind of energy—in fact, pumping water uphill and releasing it through turbines is the only practical way to store large quantities of electricity, and once the water is in the lower lake, there's no place to put the power. 
We've already established that there are no heavy duty power lines running to the area, as that would be immediately suspicious (of course, it's also suspicious that there aren't high tension lines running from what's supposed to be a hydroelectric dam, but that's another matter). And if the evil genius had invented a way to efficiently store and release power on that scale, he wouldn't need to start a civil war—he could just about buy the government with the proceeds from such an invention.
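For those who'd like to check my arithmetic, here is a short Python script which redoes the calculation. The lake dimensions (two miles by roughly 300 yards, raised fifteen feet) and the five-metre lift are, of course, my estimates from above, not figures given in the novel:

```python
# Back-of-the-envelope check of the pumping problem.  Lake dimensions and
# lift height are the review's guesses, not the book's.
MILE, YARD, FOOT = 1609.344, 0.9144, 0.3048  # metres

volume = (2 * MILE) * (300 * YARD) * (15 * FOOT)  # m^3, ~4 million
gallons = volume * 264.172                        # ~1 billion US gallons

mass = volume * 1000.0          # kg: 1 m^3 of water = 1000 kg
g, lift = 9.8, 5.0              # m/s^2, metres (conservative lift)
energy = mass * g * lift        # joules, ~2e11 J

power = energy / (30 * 60)      # pumped in half an hour, ~110 MW average

print(f"volume = {volume:.3g} m^3 ({gallons:.3g} US gallons)")
print(f"energy = {energy:.3g} J = {energy / 3.6e9:.1f} MWh")
print(f"power  = {power / 1e6:.1f} MW")
```

And that, recall, assumes lossless pumps and pipes; real-world inefficiencies only make the villains' problem worse.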

Spoilers end here.  
Call me picky—“You're picky!”—feel better now?—but I just cannot let this go unremarked. On p. 248, one character likens another to Hari Seldon in Isaac Asimov's Foundation novels. But it's spelt “Hari Selden”, and it's not a typo because the name is given the same wrong way three times on the same page! Now I'd excuse such a goof by a thriller scribbler recalling science fiction he'd read as a kid, but this guy is a distinguished science fiction writer who has won the Hugo Award four times, and this book is published by Tor Books, the pre-eminent specialist science fiction press; don't they have an editor on staff who's familiar with one of the universally acknowledged classics of the genre and winner of the unique Hugo for Best All-Time Series?

One becomes accustomed to low expectations for science fiction novel cover art, but expects a slightly higher standard for techno-thrillers. The image on the dust jacket has absolutely nothing whatsoever to do with any scene in the book. It looks like a re-mix of several thriller covers chosen at random.

It is only on p. 341, in the afterword, that we learn this novel was commissioned as part of a project to create an “entertainment franchise”, and on p. 349, in the acknowledgements, that this is, in fact, the scenario of a video game already under development when the author joined the team. Frankly, it shows. As befits the founding document of an “entertainment franchise”, the story ends by setting the stage for the sequel, although at least to this reader, the plot for the first third of that work seems transparently obvious, but then Card is a master of the gobsmacking switcheroo, as the present work demonstrates. In any case, what we have here appears to be Volume One of a series of alternative future political/military novels like Allen Drury's Advise and Consent series. While that novel won a Pulitzer Prize, the sequels rapidly degenerated into shrill right-wing screeds. In Empire, Card is reasonably even-handed, although his heterodox personal views are apparent. I hope the inevitable sequels come up to that standard, but I doubt I'll be reading them.

 Permalink

February 2007

Lukacs, John. Five Days in London. New Haven, CT: Yale University Press, 1999. ISBN 0-300-08466-8.
Winston Churchill titled the fourth volume of his memoirs of The Second World War, describing the events of 1942, The Hinge of Fate. Certainly, in the military sense, it was in that year that the tide turned in favour of the allies—the entry of the United States into the war and the Japanese defeat in the Battle of Midway, Germany's failure at Stalingrad and the beginning of the disastrous consequences for the German army, and the British defeat of Rommel's army at El Alamein together marked what Churchill described as, “…not the end, nor is it even the beginning of the end, but it is, perhaps, the end of the beginning.”

But in this book, distinguished historian John Lukacs argues that the true “hinge of fate” not only of World War II, but for Western civilisation against Nazi tyranny, occurred in the five days of 24–28 May of 1940, not on the battlefields in France, but in London, around conference tables, in lunch and dinner meetings, and walks in the garden. This was a period of unmitigated, accelerating disaster for the French army and the British Expeditionary Force in France: the channel ports of Boulogne and Calais fell to the Germans, the King of Belgium capitulated to the Nazis, and more than three hundred thousand British and French troops were surrounded at Dunkirk, the last channel port still in Allied hands. Despite plans for an evacuation, as late as May 28, Churchill estimated that at most about 50,000 could be evacuated, with all the rest taken prisoner and all the military equipment lost. In his statement in the House of Commons that day, he said, “Meanwhile, the House should prepare itself for hard and heavy tidings.” It was only in the subsequent days that the near-miraculous evacuation was accomplished, with a total of 338,226 soldiers rescued by June 3rd.

And yet it was in these darkest of days that Churchill vowed that Britain would fight on, alone if necessary (which seemed increasingly probable), to the very end, whatever the cost or consequences. On May 31st, he told French premier Paul Reynaud, “It would be better far that the civilisation of Western Europe with all of its achievements should come to a tragic but splendid end than that the two great democracies should linger on, stripped of all that made life worth living.” (p. 217).

From Churchill's memoirs and those of other senior British officials, contemporary newspapers, and most historical accounts of the period, one gains the impression of a Britain unified in grim resolve behind Churchill to fight on until ultimate victory or annihilation. But what actually happened in those crucial War Cabinet meetings as the disaster in France was unfolding? Oddly, the memoirs and collected papers of the participants are nearly silent on the period, with the author describing the latter as having been “weeded” after the fact. It was not until the minutes of the crucial cabinet meetings were declassified in 1970 (thanks to a decision by the British government to reduce the “closed period” of such records from fifty to thirty years), that it became possible to reconstruct what transpired there. This book recounts a dramatic and fateful struggle of which the public and earlier historians of the period were completely unaware—a moment when Hitler may have come closer to winning the war than at any other.

The War Cabinet was, in fact, deeply divided. Churchill, who had only been Prime Minister for two weeks, was in a precarious position, with his predecessor Neville Chamberlain and the Foreign Secretary Lord Halifax, whom King George VI had preferred to Churchill as Prime Minister, as members, along with Labour leaders Clement Attlee and Arthur Greenwood. Halifax did not believe that Britain could resist alone, and feared that fighting on would surely result in the loss of the Empire and perhaps of independence and liberty in Britain as well. He argued vehemently for an approach, either by Britain and France together or by Britain alone, to Mussolini, with the goal of keeping Italy out of the war and making some kind of deal with Hitler which would preserve independence and the Empire, and he met on several occasions with the Italian ambassador in London to explore such possibilities.

Churchill opposed any effort to seek mediation, either by Mussolini or Roosevelt, both because he thought the chances of obtaining acceptable terms from Hitler were “a thousand to one against” (May 28, p. 183) and because any approach would put Britain on a “slippery slope” (Churchill's words in the same meeting) from which it would be impossible to restore the resolution to fight rather than make catastrophic concessions. But this was a pragmatic decision, not a Churchillian declaration of “never, never, never, never”. In the May 26 War Cabinet meeting (p. 113), Churchill made the rather astonishing statement that he “would be thankful to get out of our present difficulties on such terms, provided we retained the essentials and the elements of our vital strength, even at the cost of some territory”. One can understand why the personal papers of the principals were so carefully weeded.

Speaking of another conflict where the destiny of Europe hung in the balance, the Duke of Wellington said of Waterloo that it was “the nearest run thing you ever saw in your life”. This account makes it clear that this moment in history was much the same. It is, of course, impossible to forecast what the consequences would have been had Halifax prevailed and Britain approached Mussolini to broker a deal with Hitler. The author argues forcefully that nothing less than the fate of Western civilisation was at stake. With so many “what ifs”, one can never know. (For example, it appears that Mussolini had already decided by this date to enter the war and he might have simply rejected a British approach.) But in any case this fascinating, thoroughly documented, and lucidly written account of a little-known but crucial moment in history makes for compelling reading.

 Permalink

Roberts, Siobhan. King of Infinite Space. New York: Walker and Company, 2006. ISBN 0-8027-1499-4.
Mathematics is often said to be a game for the young. The Fields Medal, the most prestigious prize in mathematics, is restricted to candidates under the age of forty. While many older mathematicians continue to make important contributions in writing books, teaching, administration, and organising and systematising topics, most work on the cutting edge is done by those in their twenties and thirties. The life and career of Donald Coxeter (1907–2003), the subject of this superb biography, is a stunning and inspiring counter-example. Coxeter's publications (all of which are listed in an appendix to this book) span a period of eighty years, with the last, a novel proof of Beecroft's theorem, completed just a few days before his death.

Coxeter was one of the last generation to be trained in classical geometry, and he continued to do original work and make striking discoveries in that field for decades after most other mathematicians had abandoned it as mined out or insufficiently rigorous, and it had disappeared from the curriculum not only at the university level but, to a great extent, in secondary schools as well. Coxeter worked in an intuitive, visual style, frequently building models and kaleidoscopes and enriching his publications with numerous diagrams. Over the many decades his career spanned, mathematical research (at least in the West) seemed to be climbing an endless stairway toward ever greater abstraction and formalism, epitomised in the work of the Bourbaki group. (When the unthinkable happened and a diagram was included in a Bourbaki book, fittingly it was a Coxeter diagram.) Coxeter inspired an increasingly fervent group of followers who preferred to discover new structures and symmetry using the mind's powers of visualisation. Some, including Douglas Hofstadter (who contributed the foreword to this work) and John Horton Conway (who figures prominently in the text), were inspired by Coxeter to carry on his legacy. Coxeter's interactions with M. C. Escher and Buckminster Fuller are explored in two chapters, and illustrate how the purest of mathematics can both inspire and be enriched by art and architecture (or whatever it was that Fuller did, which Coxeter himself wasn't too sure about—on one occasion he walked out of a new-agey Fuller lecture, noting in his diary “Out, disgusted, after ¾ hour” [p. 178]).

When the “new math” craze took hold in the 1960s, Coxeter immediately saw it for the disaster it was to become and involved himself in efforts to preserve the intuitive and visual in mathematics education. Unfortunately, the power of a fad promoted by purists is difficult to counter, and a generation and more paid the price of which Coxeter warned. There is an excellent discussion at the end of chapter 9 of the interplay between the intuitive and formalist approaches to mathematics. Many modern mathematicians seem to have forgotten that one proves theorems in order to demonstrate that the insights obtained by intuition are correct. Intuition without rigour can lead to error, but rigour without intuition can blind one to beautiful discoveries in the mathematical objects which stand behind the austere symbols on paper.

The main text of this 400-page book is only 257 pages. Eight appendices expand upon technical topics ranging from phyllotaxis to the quilting of toilet paper and include a complete bibliography of Coxeter's publications. (If you're intrigued by “Morley's Miracle”, a novel discovery in the plane geometry of triangles made as late as 1899, check out this page and Java applet which lets you play with it interactively. Curiously, a diagram of Morley's theorem appears on the cover of Coxeter and Greitzer's Geometry Revisited, but is misdrawn—the trisectors are inexact and the inner triangle is therefore not equilateral.) Almost 90 pages of endnotes provide both source citations (including Web links to MathWorld for technical terms and the University of St. Andrews biographical archive for mathematicians named in the text) and detailed amplification of numerous details. There are a few typos and factual errors (for example, on p. 101 the planets Uranus and Pluto are said to have been discovered in the nineteenth century when, in fact, neither was: Herschel discovered Uranus in 1781 and Tombaugh Pluto in 1930), but none are central to the topic nor detract from this rewarding biography of an admirable and important mathematician.
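If you'd rather verify Morley's theorem numerically than with the applet, here's a little Python sketch of my own (my construction, not anything from the book) which intersects the adjacent angle trisectors of an arbitrary triangle and checks that the inner triangle is, indeed, equilateral:

```python
import math

def morley(A, B, C):
    """Return the Morley triangle of ABC: the three intersections of
    adjacent angle trisectors.  Morley's theorem says it is equilateral."""
    def sub(P, Q): return (P[0] - Q[0], P[1] - Q[1])
    def cross(u, v): return u[0] * v[1] - u[1] * v[0]
    def angle(P, Q, R):          # interior angle of the triangle at P
        u, v = sub(Q, P), sub(R, P)
        return math.atan2(abs(cross(u, v)), u[0] * v[0] + u[1] * v[1])
    def rot(v, t):               # rotate vector v by angle t (CCW positive)
        c, s = math.cos(t), math.sin(t)
        return (c * v[0] - s * v[1], s * v[0] + c * v[1])
    def trisector(P, toward, other):
        # ray direction from P: side P->toward, swung angle(P)/3 toward `other`
        u, v = sub(toward, P), sub(other, P)
        sign = 1.0 if cross(u, v) > 0 else -1.0
        return rot(u, sign * angle(P, toward, other) / 3.0)
    def meet(P, d, Q, e):        # intersection of rays P + t*d and Q + u*e
        r = sub(Q, P)
        det = -d[0] * e[1] + e[0] * d[1]
        t = (-r[0] * e[1] + e[0] * r[1]) / det
        return (P[0] + t * d[0], P[1] + t * d[1])
    # Morley vertex nearest each side: adjacent trisectors from its endpoints
    Ma = meet(B, trisector(B, C, A), C, trisector(C, B, A))
    Mb = meet(C, trisector(C, A, B), A, trisector(A, C, B))
    Mc = meet(A, trisector(A, B, C), B, trisector(B, A, C))
    return Ma, Mb, Mc

Ma, Mb, Mc = morley((0.0, 0.0), (4.0, 0.0), (1.0, 3.0))   # a scalene triangle
d = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
sides = [d(Ma, Mb), d(Mb, Mc), d(Mc, Ma)]
print(sides)  # three (nearly) equal lengths
```

Feed it any non-degenerate triangle; the three printed side lengths agree to floating-point precision, exactly as Morley promised in 1899.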

 Permalink

Kauffman, Stuart A. Investigations. New York: Oxford University Press, 2000. ISBN 0-19-512105-8.
Few people have thought as long and as hard about the origin of life and the emergence of complexity in a biosphere as Stuart Kauffman. Medical doctor, geneticist, professor of biochemistry and biophysics, MacArthur Fellow, and member of the faculty of the Santa Fe Institute for a decade, he has sought to discover the principles which might underlie a “general biology”—the laws which would govern any biosphere, whether terrestrial, extraterrestrial, or simulated within a computer, regardless of its physical substrate.

This book, which he describes on occasion as “protoscience”, provides an overview of the principles he suspects, but cannot prove, may underlie all forms of life, and beyond that systems in general which are far from equilibrium such as a modern technological economy and the universe itself. Most of science before the middle of the twentieth century studied complex systems at or near equilibrium; only at such states could the simplifying assumptions of statistical mechanics be applied to render the problem tractable. With computers, however, we can now begin to explore open systems (albeit far smaller than those in nature) which are far from equilibrium, have dynamic flows of energy and material, and do not necessarily evolve toward a state of maximum entropy.

Kauffman believes there may be what amounts to a fourth law of thermodynamics which applies to such systems and, although we don't know enough to state it precisely, he suspects it may be that these open, extremely nonergodic, systems evolve as rapidly as possible to expand and fill their state space and that unlike, say, a gas in a closed volume or the stars in a galaxy, where the complete state space can be specified in advance (that is, the dimensionality of the space, not the precise position and momentum values of every object within it), the state space of a non-equilibrium system cannot be prestated because its very evolution expands the state space. The presence of autonomous agents introduces another level of complexity and creativity, as evolution drives the agents to greater and greater diversity and complexity to better adapt to the ever-shifting fitness landscape.

These are complicated and deep issues, and this is a very difficult book, although appearing, at first glance, to be written for a popular audience. I seriously doubt whether somebody who is not previously acquainted with these topics and has not thought about them at some length will make it to the end or, even if they do, take much away from the book. Those who are comfortable with the laws of thermodynamics, the genetic code, protein chemistry, catalysis, autocatalytic networks, Carnot cycles, fitness landscapes, hill-climbing strategies, the no-go theorem, error catastrophes, self-organisation, percolation phase transitions in graphs, and other technical issues raised in the arguments must still confront the author's prose style. It seems like Kauffman aspires to be a prose stylist conveying a sense of wonder to his readers along the lines of Carl Sagan and Stephen Jay Gould. Unfortunately, he doesn't pull it off as well, and the reader must wade through numerous paragraphs like the following from pp. 97–98:

Does it always take work to construct constraints? No, as we will soon see. Does it often take work to construct constraints? Yes. In those cases, the work done to construct constraints is, in fact, another coupling of spontaneous and nonspontaneous processes. But this is just what we are suggesting must occur in autonomous agents. In the universe as a whole, exploding from the big bang into this vast diversity, are many of the constraints on the release of energy that have formed due to a linking of spontaneous and nonspontaneous processes? Yes. What might this be about? I'll say it again. The universe is full of sources of energy. Nonequilibrium processes and structures of increasing diversity and complexity arise that constitute sources of energy that measure, detect, and capture those sources of energy, build new structures that constitute constraints on the release of energy, and hence drive nonspontaneous processes to create more such diversifying and novel processes, structures, and energy sources.
I have not cherry-picked this passage; there are hundreds of others like it. Given the complexity of the technical material and the difficulty of the concepts being explained, it seems to me that the straightforward, unaffected Point A to Point B style of explanation which Isaac Asimov employed would work much better. Pardon my audacity, but allow me to rewrite the above paragraph.
Autonomous agents require energy, and the universe is full of sources of energy. But in order to do work, they require energy to be released under constraints. Some constraints are natural, but others are constructed by autonomous agents which must do work to build novel constraints. A new constraint, once built, provides access to new sources of energy, which can be exploited by new agents, contributing to an ever growing diversity and complexity of agents, constraints, and sources of energy.
Which is better? I rewrite; you decide. The tone of the prose is all over the place. In one paragraph he's talking about Tomasina the trilobite (p. 129) and Gertrude the ugly squirrel (p. 131), then the next thing you know it's “Here, the hexamer is simplified to 3'CCCGGG5', and the two complementary trimers are 5'GGG3' + 5'CCC3'. Left to its own devices, this reaction is exergonic and, in the presence of excess trimers compared to the equilibrium ratio of hexamer to trimers, will flow exergonically toward equilibrium by synthesizing the hexamer.” (p. 64). This flipping back and forth between colloquial and scholarly voices leads to a kind of comprehensional kinetosis. There are a few typographical errors, none serious, but I have to share this delightful one-sentence paragraph from p. 254 (ellipsis in the original):
By iteration, we can construct a graph connecting the founder spin network with its 1-Pachner move “descendants,” 2-Pachner move descendints…N-Pachner move descendents.
Good grief—is Oxford University Press outsourcing their copy editing to Slashdot?

For the reasons given above, I found this a difficult read. But it is an important book, bristling with ideas which will get you looking at the big questions in a different way, and speculating, along with the author, that there may be some profound scientific insights which science has overlooked to date sitting right before our eyes—in the biosphere, the economy, and this fantastically complicated universe which seems to have emerged somehow from a near-thermalised big bang. While Kauffman is the first to admit that these are hypotheses and speculations, not science, they are eminently testable by straightforward scientific investigation, and there is every reason to believe that if there are, indeed, general laws that govern these phenomena, we will begin to glimpse them in the next few decades. If you're interested in these matters, this is a book you shouldn't miss, but be aware what you're getting into when you undertake to read it.

 Permalink

March 2007

Heinlein, Robert A. and Spider Robinson. Variable Star. New York: Tor, 2006. ISBN 0-7653-1312-X.
After the death of Virginia Heinlein in 2003, curators of the Heinlein papers she had deeded to the Heinlein Prize Trust discovered notes for a “juvenile” novel which Heinlein had plotted in 1955 but never got around to writing. Through a somewhat serendipitous process, Spider Robinson, whom The New York Times Book Review called “the new Robert Heinlein” in 1982 (when the original Robert Heinlein was still very much on the scene—I met him in 1984, and his last novel was published in 1987, the year before his death), was tapped to “finish” the novel from the notes. To his horror (as described in the afterword in this volume), Robinson discovered the extant notes stopped in mid-sentence, in the middle of the story, with no clue as to the ending Heinlein intended. Taking some comments Heinlein made in a radio interview as the point of departure, Robinson rose to the challenge, cranking in a plot twist worthy of the Grandmaster.

To take on a task like this is to expose oneself to carping and criticism from purists, but to this Heinlein fan who reads for the pleasure of it, Spider Robinson has acquitted himself superbly here. He deftly blends events in recent decades into the Future History timeline, and even hints at a plausible way current events could lead to the rise of the Prophet. It is a little disconcerting to encounter Simpsons allusions in a timeline in which Leslie LeCroix of Harriman Enterprises was the first to land on the Moon, but recurring Heinlein themes are blended into the story line in such a way that you're tempted to think that this is the way Heinlein would have written such a book, were he still writing today. The language and situations are substantially more racy than the classic Heinlein juveniles, but not out of line with Heinlein's novels of the 1970s and 80s.

Sigh…aren't there any adults on the editorial staff at Tor? First they let three misspellings of Asimov's character Hari Seldon slip through in Orson Scott Card's Empire, and now the very first time the Prophet appears on p. 186, his first name is missing the final “h”, and on p. 310 the title of Heinlein's first juvenile, Rocket Ship Galileo, is given as “Rocketship Galileo”. Readers intrigued by the saxophone references in the novel may wish to check out The Devil's Horn, which discusses, among many other things, the possible connection between “circular breathing” and the mortality rate of saxophonists (and I always just thought it was that “cool kills”).

As you're reading this novel, you may find yourself somewhere around two hundred pages in, looking at the rapidly dwindling hundred-odd pages to go, and wondering is anything ever going to happen? Keep turning those pages—you will not be disappointed. Nor, I think, would Heinlein, wherever he is, regarding this realisation of his vision half a century after he consigned it to a file drawer.

 Permalink

Horowitz, David. Radical Son. New York: Touchstone Books, 1997. ISBN 0-684-84005-7.
One of the mysteries I have never been able to figure out—I remember discussing it with people before I left the U.S., so that makes it at least fifteen years of bewilderment on my part—is why so many obviously highly intelligent people, some of whom have demonstrated initiative and achieved substantial success in productive endeavours, are so frequently attracted to collectivist ideologies which deny individual excellence, suppress individualism, and seek to replace achievement with imposed equality in mediocrity. Even more baffling is why so many people remain attracted to these ideas which are as thoroughly discredited by the events of the twentieth century as any in the entire history of human intellectual endeavour, in a seeming willingness to ignore evidence, even when it takes the form of a death toll in the tens of millions of human beings.

This book does not supply a complete answer, but it provides several important pieces of the puzzle. It is the most enlightening work on this question I've read since Hayek's The Fatal Conceit (March 2005), and complements it superbly. While Hayek's work is one of philosophy and economics, Radical Son is a searching autobiography by a person who was one of the intellectual founders and leaders of the New Left in the 1960s and 70s. The author was part of the group which organised the first demonstration against the Vietnam war in Berkeley in 1962, published the standard New Left history of the Cold War, The Free World Colossus, in 1965, and in 1968, the very apogee of the Sixties, joined Ramparts magazine, where he rapidly rose to a position of effective control, setting its tone through the entire period of radicalisation and revolutionary chaos which ensued. He raised the money for the Black Panther Party's “Learning Center” in Oakland, California, and became an adviser and regular companion of Huey Newton. Throughout all of this his belief in the socialist vision of the future, the necessity of revolution even in a democratic society, and support for the “revolutionary vanguard”, however dubious some of their actions seemed, never wavered.

He came to these convictions almost in the cradle. Like many of the founders of the New Left (Tom Hayden was one of the rare exceptions), Horowitz was a “red diaper baby”. In his case, both his mother and father were members of the Communist Party of the United States and met through political activity. Although the New Left rejected the Communist Party as a neo-Stalinist anachronism, so many of its founders had parents who were involved with it directly or knowingly in front organisations that they formed part of a network of acquaintances even before they met as radicals in their own right. It is somewhat ironic that these people, who believed themselves to be and were portrayed in the press as rebels and revolutionaries, were, perhaps more than their contemporaries, truly their parents' children, carrying on their radical utopian dream without ever questioning anything beyond the means to the end.

It was only in 1974, when Betty Van Patter, a former Ramparts colleague he had recommended for a job helping the Black Panthers sort out their accounts, was abducted and later found brutally murdered, obviously by the Panthers (who expressed no concern when she disappeared, and had complained of her inquisitiveness), that Horowitz was confronted with the true nature of those he had been supporting. Further, when he approached others who were, from the circumstances of their involvement, well aware of the criminality and gang nature of the Panthers well before he was, they continued to either deny the obvious reality or, even worse, deliberately cover it up because they still believed in the Panther mission of revolution. (To this day, nobody has been charged with Van Patter's murder.)

The contemporary conquest of Vietnam and Cambodia and the brutal and bloody aftermath, the likelihood of which had also been denied by the New Left (as late as 1974, Tom Hayden and Jane Fonda released a film titled Introduction to the Enemy which forecast a bright future of equality and justice when Saigon fell), reinforced the author's second thoughts, leading eventually to a complete break with the Left in the mid-1980s and his 1989 book with Peter Collier, Destructive Generation, the first sceptical look at the beliefs and consequences of Sixties radicalism by two of its key participants.

Radical Son mixes personal recollection, politics, philosophy, memoirs of encounters with characters ranging from Bertrand Russell to Abbie Hoffman, and a great deal of painful introspection to tell the story of how reality finally shattered second-generation utopian illusions. Even more valuable, the reader comes to understand the power those delusions have over those who share them, and why seemingly no amount of evidence suffices to induce doubt among those in their thrall, and why the reaction to any former believer who declares their “apostasy” is so immediate and vicious.

Horowitz is a serious person, and this is a serious, and often dismaying and tragic, narrative. But one cannot help but be amused by the accounts of New Leftists trying to put their ideology into practice in running communal households, publishing enterprises, and political movements. Inevitably, before long everything blows up in the tediously familiar ways of such things, as imperfect human beings fail to meet the standards of a theory which requires them to deny their essential humanity. And yet they never learn; it's always put down to “errors”, blamed on deviant individuals, oppression, subversion, external circumstances, or some other cobbled-up excuse. And still they want to try again, betting the entire society and human future on it.


Robinson, Andrew. The Last Man Who Knew Everything. New York: Pi Press, 2006. ISBN 0-13-134304-1.
The seemingly inexorable process of specialisation in the sciences and other intellectual endeavours—the breaking down of knowledge into categories so narrow and yet so deep that their mastery at the professional level seems to demand forsaking anything beyond a layman's competence in other, even related fields—is discouraging to those who believe that some of the greatest insights come from the cross-pollination of concepts from subjects previously considered unrelated. The twentieth century was inhospitable to polymaths—even within a single field such as physics, ever narrower specialities proliferated, with researchers interacting little with those working in other areas. The divide between theorists and experimentalists has become almost impassable; it is difficult to think of a single individual who achieved greatness in both since Fermi, and he was born in 1901.

As more and more becomes known, it is inevitable that it is increasingly difficult to cram it all into one human skull, and the investment in time to master a variety of topics becomes disproportionate to the length of a human life, especially since breakthrough science is generally the province of the young. And yet, one wonders whether the conventional wisdom that hyper-specialisation is the only way to go and that anybody who aspires to broad and deep understanding of numerous subjects must necessarily be a dilettante worthy of dismissal, might underestimate the human potential and discourage those looking for insights available only by synthesising the knowledge of apparently unrelated disciplines. After all, mathematicians have repeatedly discovered deep connections between topics thought completely unrelated to one another; why shouldn't this be the case in the sciences, arts, and humanities as well?

The life of Thomas Young (1773–1829) is an inspiration to anybody who seeks to understand as much as possible about the world in which they live. The eldest of ten children of a middle class Quaker family in southwest England (his father was a cloth merchant and later a banker), from childhood he immersed himself in every book he could lay his hands upon, and in his seventeenth year alone, he read Newton's Principia and Opticks, Blackstone's Commentaries, Linnaeus, Euclid's Elements, Homer, Virgil, Sophocles, Cicero, Horace, and many other classics in the original Greek or Latin. At age 19 he presented a paper on the mechanism by which the human eye focuses on objects at different distances, and on its merit was elected a Fellow of the Royal Society a week after his 21st birthday.

Young decided upon a career in medicine and studied in Edinburgh, Göttingen, and Cambridge, continuing his voracious reading and wide-ranging experimentation in whatever caught his interest, then embarked upon a medical practice in London and the resort town of Worthing, while pursuing his scientific investigations and publications, and popularising science in public lectures at the newly founded Royal Institution.

The breadth of Young's interests and contributions has caused some biographers, both contemporary and especially more recent, to dismiss him as a dilettante and dabbler, but his achievements give the lie to this. Had the Nobel Prize existed in his era, he would almost certainly have won two (Physics for the wave theory of light, explanation of the phenomena of diffraction and interference [including the double slit experiment], and birefringence and polarisation; plus Physiology or Medicine for the explanation of the focusing of the eye [based, in part, upon some cringe-inducing experiments he performed upon himself], the trireceptor theory of colour vision, and the discovery of astigmatism), and possibly three (Physics again, for the theory of elasticity of materials: “Young's modulus” is a standard part of the engineering curriculum to this day).
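For readers whose engineering coursework has faded, Young's modulus E is simply the ratio of stress (force per unit area) to strain (fractional elongation) in the elastic regime. The following sketch illustrates the idea; the steel rod dimensions and the textbook figure of E ≈ 200 GPa are illustrative assumptions of mine, not drawn from the book:

```python
# Young's modulus: E = stress / strain, so a loaded rod stretches by
# elongation = (F / A) / E * L.

E_steel = 200e9            # Pa, a typical textbook value for steel (assumed)
force = 10_000.0           # N, tensile load on the rod
area = 1e-4                # m^2, cross-section (a 1 cm^2 rod)
length = 2.0               # m, unloaded length

stress = force / area               # Pa
strain = stress / E_steel           # dimensionless
elongation = strain * length        # m
print(f"elongation = {elongation * 1000:.2f} mm")  # prints "elongation = 1.00 mm"
```

A five-tonne-force load on a centimetre-square steel rod thus stretches it by only a millimetre per two metres of length, which is why the linear (Hookean) approximation behind Young's modulus serves engineers so well.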

But he didn't leave it at that. He was fascinated by languages since childhood, and in addition to the customary Latin and Greek, by age thirteen had taught himself Hebrew and read thirty chapters of the Hebrew Bible all by himself. In adulthood he undertook an analysis of four hundred different languages (pp. 184–186) ranging from Chinese to Cherokee, with the goal of classifying them into distinct families. He coined the name “Indo-European” for the group to which most Western languages belong. He became fascinated with the enigma of Egyptian hieroglyphics, and his work on the Rosetta Stone provided the first breakthrough and the crucial insight that hieroglyphic writing was a phonetic alphabet, not a pictographic language like Chinese. Champollion built upon Young's work in his eventual deciphering of hieroglyphics. Young continued to work on the fiendishly difficult demotic script, and was the first person since the fall of the Roman Empire to be able to read some texts written in it.

He was appointed secretary of the Board of Longitude and superintendent of the Nautical Almanac, and was instrumental in the establishment of a Southern Hemisphere observatory at the Cape of Good Hope. He consulted with the admiralty on naval architecture, with the House of Commons on the design for a replacement to the original London Bridge, and served as chief actuary for a London life insurance company and did original research on mortality in different parts of Britain.

Stereotypical characters from fiction might lead you to expect such an intellect to be a recluse, misanthrope, obsessive, or seeker of self-aggrandisement. But no…, “He was a lively, occasionally caustic letter writer, a fair conversationalist, a knowledgeable musician, a respectable dancer, a tolerable versifier, an accomplished horseman and gymnast, and throughout his life, a participant in the leading society of London and, later, Paris, the intellectual capitals of his day” (p. 12). Most of the numerous authoritative articles he contributed to the Encyclopedia Britannica, including “Bridge”, “Carpentry”, “Egypt”, “Languages”, “Tides”, and “Weights and measures”, as well as 23 biographies, were published anonymously. And he was happily married from age 31 until the end of his life.

Young was an extraordinary person, but he never seems to have thought of himself as exceptional in any way other than his desire to understand how things worked and his willingness to invest as much time and effort as it took to arrive at the goals he set for himself. Reading this book reminded me of a remark by Admiral Hyman G. Rickover, “The only way to make a difference in the world is to put ten times as much effort into everything as anyone else thinks is reasonable. It doesn't leave any time for golf or cocktails, but it gets things done.” Young's life is a testament to just how many things one person can get done in a lifetime, enjoying every minute of it and never losing balance, by judicious application of this principle.


Wells, David. The Penguin Dictionary of Curious and Interesting Geometry. London: Penguin Books, 1991. ISBN 0-14-011813-6.
What a treat—two hundred and seventy-five diagram-rich pages covering hundreds of geometrical curiosities ranging from the problem of Apollonius to zonohedra. Items range from classical Euclidean geometry to modern topics such as higher dimensional space, non-Euclidean geometry, and topological transformations; and from classical times until the present—it's amazing how many fundamental properties of objects as simple as triangles were discovered only in the twentieth century!

There are so many wonders here I shall not attempt to list them but simply commend this book to your own exploration and enjoyment. But one example…it's obvious that a non-convex room with black walls cannot be illuminated by a single light placed within it. But what if all the walls are mirrors? Is it possible to design a mirrored room such that a light within it will still leave some part dark (p. 263)? The illustration of the Voderberg tile on p. 268 is unfortunate; the width of the lines makes it appear not to be a proper tile, but rather two tiles joined at a point. This page shows a detailed construction which makes it clear that the tile is indeed well formed and rigid.

I will confess, as a number nerd more than a geometry geek, that this book comes in second in my estimation behind the author's Penguin Book of Curious and Interesting Numbers, one single entry of which motivated me to consume three years of computer time in 1987–1990. But there are any number of wonders here, and the individual items are so short you can open the book at random and find something worth reading you can finish in a minute or so. Almost all items are given without proof, but there are citations to publications for many and you'll be able to find most of the rest on MathWorld.


Phillips, Kevin. American Theocracy. New York: Viking, 2006. ISBN 0-670-03486-X.
In 1969, the author published The Emerging Republican Majority, which Newsweek called “The political bible of the Nixon Era.” The book laid out the “Sun Belt” (a phrase he coined) strategy he developed as a senior strategist for Richard Nixon's successful 1968 presidential campaign, and argued that demographic and economic trends would reinforce the political power of what he termed the “heartland” states, setting the stage for long-term Republican dominance of national politics, just as FDR's New Deal coalition had maintained Democratic power (especially in the Congress) for more than a generation.

In this book he argues that while his 1969 analysis was basically sound and would have played out much as he forecast, had the Republican steamroller not been derailed by Watergate and the consequent losses in the 1974 and 1976 elections, since the Reagan era, and especially during the presidency of George W. Bush, things have gone terribly wrong, and that the Republican party, if it remains in power, is likely to lead the United States in disastrous directions, resulting in the end of its de facto global hegemony.

Now, this is a view with which I am generally sympathetic, but if the author's reason for writing the present volume is to persuade people in that direction, I must judge the result ineffectual if not counterproductive. The book is ill-reasoned, weakly argued, poorly written, strongly biased, scantily documented, grounded in dubious historical analogies, and rhetorically structured in the form of “proof by assertion and endless repetition”.

To start with, the title is misleading if read without the subtitle, “The Peril and Politics of Radical Religion, Oil, and Borrowed Money in the 21st Century”, which appears in 8 point sans-serif type on the cover, below an illustration of a mega-church reinforcing the words “American Theocracy” in 60 and 48 point roman bold. In fact, of 394 pages of main text, only 164—about 40%—are dedicated to the influence of religion on politics. (Yes, there are mentions of religion in the rest, but there is plenty of discussion of the other themes in the “Too Many Preachers” part as well; this book gives the distinct impression of having been shaken, not stirred.) And nothing in that part, or elsewhere in the book, provides any evidence whatsoever, or even seriously advances a claim, that there is a genuine movement toward, threat of, or endorsement by the Republican party of theocracy, which Webster's Unabridged Dictionary defines as:

  1. A form of government in which God or a deity is recognized as the supreme civil ruler, the God's or deity's laws being interpreted by the ecclesiastical authorities.
  2. A system of government by priests claiming a divine commission.
  3. A commonwealth or state under such a form or system of government.

And since Phillips's argument is based upon the Republican party's support among religious groups as diverse as Southern Baptists, northern Midwest Lutherans, Pentecostals, Mormons, Hasidic Jews, and Eastern Rite and traditionalist Catholics, it is difficult to imagine precisely how the feared theocracy would function, given how little these separate religious groups agree upon. It would have to be an “ecumenical theocracy”, a creature for which I can recall no historical precedent.

The greater part of the book discusses the threats to the U.S. posed by a global peak in petroleum production and temptation of resource wars (of which he claims the U.S. intervention in Iraq is an example), and the explosion of debt, public and private, in the U.S., the consequent housing bubble, and the structural trade deficits which are flooding the world with greenbacks. But these are topics which have been discussed more lucidly and in greater detail by authors who know far more about them than Phillips, who cites secondary and tertiary sources and draws no novel observations.

A theme throughout the work is comparison of the present situation of the U.S. with previous world powers which fell into decline: ancient Rome, Spain in the seventeenth century, the Netherlands in the second half of the eighteenth century, and Britain in the first half of the twentieth. The parallels here, especially as regards fears of “theocracy” are strained to say the least. Constantine did not turn Rome toward Christianity until the fourth century A.D., by which time, even Gibbon concedes, the empire had been in decline for centuries. (Phillips seems to have realised this part of the way through the manuscript and ceases to draw analogies with Rome fairly early on.) Few, if any, historians would consider Spain, Holland, or Britain in the periods in question theocratic societies; each had a clear separation between civil authority and the church, and in the latter two cases there is plain evidence of a decline in the influence of organised religion on the population as the nation's power approached a peak and began to ebb. Can anybody seriously contend that the Anglican church was responsible for the demise of the British Empire? Hello—what about the two world wars, which were motivated by power politics, not religion?

Distilled to the essence (and I estimate a good editor could cut a third to half of this text just by flensing the mind-numbing repetition), Phillips has come to believe in the world view and policy prescriptions advocated by the left wing of the Democratic party. The Republican party does not agree with these things. Adherents of traditional religion share this disagreement, and consequently they predominantly vote for Republican candidates. Therefore, evangelical and orthodox religious groups form a substantial part of the Republican electorate. But how does that imply any trend toward “theocracy”? People choose to join a particular church because they are comfortable with the beliefs it espouses, and they likewise vote for candidates who advocate policies they endorse. Just because there is a correlation between preferences does not imply, especially in the absence of any evidence, some kind of fundamentalist conspiracy to take over the government and impose a religious dictatorship. Consider another divisive issue which has nothing to do with religion: the right to keep and bear arms. People who value the individual right to own and carry weapons for self-defence are highly likely to be Republican voters as well, because that party is more closely aligned with their views than the alternative. Correlation is not evidence of causality, not to speak of collusion.

Much of the writing is reminiscent of the lower tier of the UFO literature. There are dozens of statements like this one from p. 93 (my italics), “There are no records, but Cheney's reported early 2001 plotting may well have touched upon the related peril to the dollar.” May I deconstruct? So what's really being said here is, “Some conspiracy theorist, with no evidence to support his assertion, claims that Cheney was plotting to seize Iraqi oil fields, and it is possible that this speculated scheme might have been motivated by fears for the dollar.”

There are more than thirty pages of end notes set in small type, but there is less documentation here than meets the eye. Many citations are to news stories in collectivist legacy media and postings on leftist advocacy Web sites. Picking page 428 at random, we find 29 citations, only five of which are to a total of three books, one by the present author.

So blinded is the author by his own ideological bias that he seems completely oblivious to the fact that a right-wing stalwart could produce an almost completely parallel screed about the Democratic party being in thrall to a coalition of atheists, humanists, and secularists eager to use the power of the state to impose their own radical agenda. In fact, one already has. It is dubious that shrill polemics of this variety launched back and forth between the trenches of an increasingly polarised society promote the dialogue and substantive debate which is essential to confront the genuine and daunting challenges all its citizens ultimately share.


April 2007

Harris, Robert. Imperium. New York: Simon & Schuster, 2006. ISBN 0-7432-6603-X.
Marcus Tullius Tiro was a household slave who served as the personal secretary to the Roman orator, lawyer, and politician Cicero. Tiro is credited with the invention of shorthand, and is responsible for the extensive verbatim records of Cicero's court appearances and political speeches. He was freed by Cicero in 53 B.C. and later purchased a farm where he lived to around the age of 100 years. According to contemporary accounts, Tiro published a biography of Cicero of at least four volumes; this work has been lost.

In this case, history's loss is a novelist's opportunity, which alternative-history wizard Robert Harris (Fatherland [June 2002], Archangel [February 2003], Enigma, Pompeii) seizes, bringing the history of Cicero's rise from ambitious lawyer to Consul of Rome to life, while remaining true to the documented events of Cicero's career. The narrator is Tiro, who discovers both the often-sordid details of how the Roman republic actually functioned and the complexity of Cicero's character as the story progresses.

The sense one gets of Rome is perhaps a little too modern, and terminology creeps in from time to time (for example, “electoral college” [p. 91]) which seems out of place. On pp. 226–227 there is an extended passage which made me fear we were about to veer off into commentary on current events:

‘I do not believe we should negotiate with such people, as it will only encourage them in their criminal acts.’ … Where would they strike next? What Rome was facing was a threat very different from that posed by a conventional enemy. These pirates were a new type of ruthless foe, with no government to represent them and no treaties to bind them. Their bases were not confined to a single state. They had no unified system of command. They were a worldwide pestilence, a parasite which needed to be stamped out, otherwise Rome—despite her overwhelming military superiority—would never again know security or peace. … Any ruler who refuses to cooperate will be regarded as Rome's enemy. Those who are not with us are against us.
Harris resists the temptation of turning Rome into a soapbox for present-day political advocacy on any side, and quickly gets back to the political intrigue in the capital. (Not that the latter days of the Roman republic are devoid of relevance to the present situation; study of them may provide more insight into the news than all the pundits and political blogs on the Web. But the parallels are not exact, and the circumstances are different in many fundamental ways. Harris wisely sticks to the story and leaves the reader to discern the historical lessons.)

The novel comes to a rather abrupt close with Cicero's election to the consulate in 63 B.C. I suspect that what we have here is the first volume of a trilogy. If that be the case, I look forward to future installments.


Wells, H. G. Floor Games. Springfield, VA: Skirmisher, [1911] 2006. ISBN 0-9722511-7-0.
Two years before he penned the classic work on wargaming, Little Wars (September 2006), H. G. Wells drew on his experience and that of his colleagues “F.R.W.” and “G.P.W.” (his sons Frank Richard and George Philip, then aged eight and ten respectively) to describe the proper equipment, starting with a sufficiently large and out-of-the-traffic floor, which imaginative children should have at their disposal to construct the worlds of adventure conjured by their fertile minds. He finds much to deplore in the offerings of contemporary toy shops, and shows how wooden bricks, sturdy paper, plasticine clay, twigs and sprigs from the garden, books from the library, and odds and ends rescued from the trash bin can be assembled into fantasy worlds, “the floor, the boards, the bricks, the soldiers, and the railway system—that pentagram for exorcising the evil spirit of dulness from the lives of little boys and girls” (p. 65).

The entire book is just 71 pages with large type and wide margins filled with delightful line drawings; eight photographs by the author illustrate what can be made of such simple components. The text is, of course, in the public domain, and is available in a free Project Gutenberg edition, but without the illustrations and photos. This edition includes a foreword by legendary wargame designer James F. Dunnigan.

While toys have changed enormously since this book was written, young humans haven't. A parent who provides their kids with these simple stimuli to imagination and ingenuity is probably doing them an invaluable service compared to the present-day default of planting them in front of a television program or video game. Besides, if the collectivist morons in Seattle who banned Lego blocks launch the next educationalism fad, it'll be up to parents to preserve imagination and individuality in their children's play.


Russell, D. A. The Design and Construction of Flying Model Aircraft. Leicester, England: Harborough Publishing, [1937, 1940] 1941. British Library Shelfmark 08771.b.3.
In 1941, Britain stood alone in the West against Nazi Germany, absorbing bombing raids on its cities, while battling back and forth in North Africa. So confident was Hitler that the British threat had been neutralised, that in June he launched the assault against the Soviet Union. And in that dark year, some people in Britain put the war out of their minds by thinking instead about model airplanes, guided by this book, written by the editor of The Aero-Modeller magazine and published in that war year.

Modellers of this era scratch built their planes—the word “kit” is absent from this book and seemingly from the vocabulary of the hobby at the time. The author addresses an audience who not only build their models from scratch, but also design them from first principles of aerodynamics—in fact, the first few chapters are one of the most lucid expositions of basic practical aerodynamics I have ever read. The text bristles with empirical equations, charts, and diagrams, as well as plenty of practical advice to the designer and builder.

While many modellers of the era built featherweight aircraft powered by rubber bands, others flew petrol-powered beasts which would intimidate many modellers today. Throughout the book the author uses as an example one of his own designs, with a wingspan of 10 feet, all-up weight in excess of 14 pounds, and powered by an 18 cc. petrol engine.

There was no radio control, of course. All of these planes simply flew free until a clockwork mechanism cut the ignition, then glided to a landing on whatever happened to be beneath them at the time. If the time switch should fail, the plane would fly on until the fuel was exhausted. Given the size, weight, and flammability of the fuel, one worried about the possibility of burning down somebody's house or barn in such a mishap, and in fact p. 214 is a full-page advert for liability insurance backed by Lloyds!

This book was found in an antique shop in the British Isles. It is, of course, hopelessly out of print, but used copies are generally available at reasonable prices. Note that the second edition (first published in 1940, reprinted in 1941) contains substantially more material than the 1937 first edition.


Guedj, Denis. Le mètre du monde. Paris: Seuil, 2000. ISBN 2-02-049989-4.
When thinking about management lessons one can learn from the French Revolution, I sometimes wonder if Louis XVI, sometime in the interval between when the Revolution lost its mind and he lost his head, ever thought, “Memo to file: when running a country seething with discontent, it's a really poor idea to invite people to compile lists of things they detest about the current regime.” Yet, that's exactly what he did in 1788, soliciting cahiers de doléances (literally, “notebooks of complaints”) to be presented to the Estates-General when it met in May of 1789. There were many, many things about which to complain in the latter years of the Ancien Régime, but one which appeared on almost every one of the lists was the lack of uniformity in weights and measures. Not only was there a bewildering multitude of different measures in use (around 2000 in France alone), but the value of measures with the same name differed from one region to another, a legacy of feudal days when one of the rights of the lord was to define the weights and measures in his fiefdom. How far is “three leagues down the road?” Well, that depends on what you mean by “league”, which was almost 40% longer in Provence than in Paris. The most common unit of weight, the “livre”, had more than two hundred different definitions across the country. And if that weren't bad enough, unscrupulous merchants and tax collectors would exploit the differences and lack of standards to cheat those bewildered by the complexity.

Revolutions, and the French Revolution in particular, have a way of going far beyond the intentions of those who launch them. The multitudes who pleaded for uniformity in weights and measures almost unanimously intended, and would have been entirely satisfied with, a standardisation of the values of the commonly used measures of length, weight, volume, and area. But perpetuating these relics of tyranny was an affront to the revolutionary spirit of remaking the world, and faced with a series of successive decisions, the revolutionary assembly chose the most ambitious and least grounded in the past on each occasion: to entirely replace all measures in use with entirely new ones, to use identical measures for every purpose (traditional measures used different units depending upon what was being measured), to abandon historical subdivisions of units in favour of a purely decimal system, and to ground all of the units in quantities based in nature and capable of being determined by anybody at any time, given only the definition.

Thus was the metric system born, and seldom have so many eminent figures been involved in what many might consider an arcane sideshow to revolution: Condorcet, Coulomb, Lavoisier, Laplace, Talleyrand, Bailly, Delambre, Cassini, Legendre, Lagrange, and more. The fundamental unit, the metre, was defined in terms of the Earth's meridian, and since earlier measures failed to meet the standard of revolutionary perfection, a project was launched to measure the meridian through the Paris Observatory from Dunkirk to Barcelona. Imagine trying to make a precision measurement over such a distance as revolution, terror, hyper-inflation, counter-revolution, and war between France and Spain raged all around the savants and their surveying instruments. So long and fraught with misadventures was the process of creating the metric system that while the original decree ordering its development was signed by Louis XVI, it was officially adopted only a few months before Napoleon took power in 1799. Yet despite all of these difficulties and misadventures, the final measure of the meridian accepted in 1799 differed from the best modern measurements by only about ten metres over a baseline of more than 1000 kilometres.
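The arithmetic behind the definition is easy to check. The metre was fixed at one ten-millionth of the quarter meridian from the pole to the equator; the modern meridian figure in this sketch is a commonly quoted approximation I am supplying from memory, not a number from the book:

```python
# The 1799 metre: one ten-millionth of the quarter meridian (pole to
# equator through Paris), so the quarter meridian is 10,000,000 m by
# definition.
defined_quarter_meridian_m = 10_000_000

# A commonly quoted modern figure for the quarter meridian is about
# 10,001,965 m; on that basis the 1799 metre came out slightly short.
modern_quarter_meridian_m = 10_001_965
shortfall_mm = (modern_quarter_meridian_m - defined_quarter_meridian_m) / 10_000_000 * 1000
print(f"1799 metre short by about {shortfall_mm:.2f} mm")

# The survey error cited above: ~10 m over a baseline of more than
# 1000 km is a relative error of roughly one part in 100,000.
relative_error = 10 / 1_000_000
print(f"relative survey error of about {relative_error:.0e}")
```

A discrepancy of a fraction of a millimetre per metre, achieved with eighteenth-century instruments amid war and revolution, puts the savants' accomplishment in perspective.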

This book tells the story of the metric system and the measurement of the meridian upon which it was based, against the background of revolutionary France. The author pulls no punches in discussing technical detail—again and again, just when you expect he's going to gloss over something, you turn the page or read a footnote and there it is. Writing for a largely French audience, the author may assume the reader better acquainted with the chronology, people, and events of the Revolution than readers hailing from other lands are likely to be; the chronology at the end of the book is an excellent resource when you forget what happened when. There is no index. This seems to be one of those odd cultural things; I've found that French books whose counterparts published in English would almost certainly be indexed frequently lack this valuable attribute—I have no idea why this is the case.

One of the many fascinating factoids I gleaned from this book is that the country with the longest continuous use of the metric system is not France! Napoleon replaced the metric system with the mesures usuelles in 1812, redefining the traditional measures in terms of metric base units. The metric system was not reestablished in France until 1840, by which time Belgium, Holland, and Luxembourg had already adopted it.

 Permalink

Judd, Denis. Someone Has Blundered. London: Phoenix, [1973] 2007. ISBN 0-7538-2181-8.
One of the most amazing things about the British Empire was not how much of the world it ruled, but how small was the army which maintained dominion over so large a portion of the globe. While the Royal Navy enjoyed unchallenged supremacy on the high seas in the 19th century, it was of little use in keeping order in the colonies, and the ground forces available were, not just by modern standards, but by those of contemporary European powers, meagre. In the 1830s, the British regular army numbered only about 100,000, and rose to just 200,000 by the end of the century. When the Indian Mutiny (or “Sepoy Rebellion”) erupted in 1857, there were just 45,522 European troops in the entire subcontinent.

Perhaps the stolid British at home were confident that the military valour and discipline of their meagre legions would prevail, or that superior technology would carry the day:

Whatever happens,
we have got
the Maxim gun,
and they have not.
            — Joseph Hilaire Pierre René Belloc, “The Modern Traveller”, 1898
but when it came to a fight, as happened surprisingly often in what one thinks of as the Pax Britannica era (the Appendix [pp. 174–176] lists 72 conflicts and military expeditions in the Victorian era), a small, tradition-bound force, accustomed to peace and the parade ground, too often fell victim to (p. xix) “a devil's brew of incompetence, unpreparedness, mistaken and inappropriate tactics, a reckless underestimating of the enemy, a brash overconfidence, a personal or psychological collapse, a difficult terrain, useless maps, raw and panicky recruits, skilful or treacherous opponents, diplomatic hindrance, and bone-headed leadership.”

All of these are much in evidence in the campaigns recounted here: the 1838–1842 invasion of Afghanistan, the 1854–1856 Crimean War, the 1857–1859 Indian Mutiny, the Zulu War of 1879, and the first (1880–1881) and second (1899–1902) Boer Wars. Although this book was originally published more than thirty years ago and its subtitle, “Calamities of the British Army in the Victorian Age”, suggests it is a chronicle of a quaint and long-departed age, there is much to learn in these accounts of how highly-mobile, superbly trained, excellently equipped, and technologically superior military forces were humiliated and sometimes annihilated by indigenous armies with the power of numbers, knowledge of the terrain, and the motivation to defend their own land.

 Permalink

Smith, Edward E. Children of the Lens. Baltimore: Old Earth Books, [1947–1948, 1954] 1998. ISBN 1-882968-14-X.
This is the sixth and final installment of the Lensman series, following Triplanetary (June 2004), First Lensman (February 2005), Galactic Patrol (March 2005), Gray Lensman (August 2005), and Second Stage Lensmen (April 2006). Children of the Lens appeared in serial form in Astounding Science Fiction from November 1947 through February 1948. This book is a facsimile of the illustrated 1954 Fantasy Press edition, which was revised from the magazine edition. (Masters of the Vortex [originally titled The Vortex Blaster] is set in the Lensman universe, but is not part of the Galactic Patrol saga; it's a fine yarn, and I look forward to re-reading it, but the main story ends here.)

Twenty years have passed since the events chronicled in Second Stage Lensmen, and the five children—son Christopher, and the two pairs of fraternal twin daughters Kathryn, Karen, Camilla, and Constance—of Gray Lensman Kimball Kinnison and his wife Clarissa, the sole female Lens… er…person in the universe are growing to maturity. The ultimate products of a selective breeding program masterminded over millennia by the super-sages of planet Arisia, they have, since childhood, had the power to link their minds directly even to the forbidding intelligences of the Second Stage Lensmen.

Despite the cataclysmic events which concluded Second Stage Lensmen, mayhem in the galaxies continues, and as this story progresses it becomes clear to the Children of the Lens that they, and the entire Galactic Patrol, have been forged for the final battle between good and evil which plays out in these pages. But all is not coruscating, actinic detonations and battles of super minds; Doc Smith leavens the story with humour, and even has some fun at his own expense when he has the versatile Kimball Kinnison write a space opera potboiler, “Its terrible xmex-like snout locked on. Its zymolosely polydactile tongue crunched out, crashed down, rasped across. Slurp! Slurp! … Fools! Did they think that the airlessness of absolute space, the heatlessness of absolute zero, the yieldlessness of absolute neutronium could stop QADGOP THE MERCOTAN?” (p. 37).

This concludes my fourth lifetime traverse of this epic, and it never, ever disappoints. Since I first read it more than thirty years ago, I have considered Children of the Lens one of the very best works of science fiction ever, and this latest reading reinforces that conviction. It is, of course, the pinnacle of a story spanning billions of years, hundreds of billions of planets, innumerable species, a multitude of parallel universes, absolute good and unadulterated evil, and more than 1500 pages, so if you jump into the story near the end, you're likely to end up perplexed, not enthralled. It's best either to start at the beginning with Triplanetary or, if you'd rather skip the two slower-paced “prequels”, with Volume 3, Galactic Patrol, which was the first written and can stand alone.

 Permalink

May 2007

Lewis, C. S. The Abolition of Man. New York: HarperCollins, [1944] 1947. ISBN 0-06-065294-2.
This short book (or long essay—the main text is but 83 pages) is subtitled “Reflections on education with special reference to the teaching of English in the upper forms of schools” but, in fact, is much more: one of the pithiest and most eloquent defences of traditional values I recall having read. Writing in the final years of World War II, when moral relativism was just beginning to infiltrate the secondary school curriculum, he uses as the point of departure an English textbook he refers to as “The Green Book” (actually The Control of Language: A critical approach to reading and writing, by Alex King and Martin Ketley), which he dissects as attempting to “debunk” the development of a visceral sense of right and wrong in students in the guise of avoiding emotionalism and sentimentality.

From his description of “The Green Book”, it seems pretty mild compared to the postmodern, multicultural, and politically correct propaganda aimed at present-day students, but then perhaps it takes an observer with the acuity of a C. S. Lewis to detect the poison in such a dilute form. He also identifies the associated perversion of language which accompanies the subversion of values. On p. 28 is this brilliant observation, which I only began to notice myself more than sixty years after Lewis identified it. “To abstain from calling it good and to use, instead, such predicates as ‘necessary’, ‘progressive’, or ‘efficient’ would be a subterfuge. They could be forced by argument to answer the questions ‘necessary for what?’, ‘progressing toward what?’, ‘effecting what?’; in the last resort they would have to admit that some state of affairs was in their opinion good for its own sake.” But of course the “progressives” and champions of “efficiency” don't want you to spend too much time thinking about the end point of where they want to take you.

Although Lewis's Christianity informs much of his work, religion plays little part in this argument. He uses the Chinese word Tao (道) or “The Way” to describe what he believes is a set of values shared, to some extent, by all successful civilisations, which must be transmitted to each successive generation if civilisation is to be preserved. To illustrate the universality of these principles, he includes a 19-page appendix listing the pillars of Natural Law, with illustrations taken from texts and verbal traditions of the Ancient Egyptian, Jewish, Old Norse, Babylonian, Hindu, Confucian, Greek, Roman, Christian, Anglo-Saxon, American Indian, and Australian Aborigine cultures. It seems like those bent on jettisoning these shared values are often motivated by disdain for the frequently-claimed divine origin of such codes of values. But their very universality suggests that, regardless of what myths cultures invent to package them, they represent an encoding of how human beings work and the distillation of millennia of often tragic trial-and-error experimentation in search of rules which allow members of our fractious species to live together and accomplish shared goals.

An on-line edition is available, although I doubt it is authorised, as the copyright for this work was last renewed in 1974.

 Permalink

Haisch, Bernard. The God Theory. San Francisco: Weiser, 2006. ISBN 1-57863-374-5.
This is one curious book. Based on acquaintance with the author and knowledge of his work, including the landmark paper “Inertia as a zero-point-field Lorentz force” (B. Haisch, A. Rueda & H.E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678–694 [1994]), I expected this to be a book about the zero-point field and its potential to provide a limitless source of energy and Doc Smith style inertialess propulsion. The title seemed odd, but there's plenty of evidence that when it comes to popular physics books, “God sells”.

But in this case the title could not be more accurate—this book really is a God Theory—that our universe was created, in the sense of its laws of physics being defined and instantiated, then allowed to run their course, by a being with infinite potential who did so in order to experience, in the sum of the consciousness of its inhabitants, the consequences of the creation. (Defining the laws isn't the same as experiencing their playing out, just as writing down the rules of chess isn't equivalent to playing all possible games.) The reason the constants of nature appear to be fine-tuned for the existence of consciousness is that there's no point in creating a universe in which there will be no observers through which to experience it, and the reason the universe is comprehensible to us is that our consciousness is, in part, one with the being who defined them. While any suggestion of this kind is enough to get what Haisch calls adherents of “fundamentalist scientism” sputtering if not foaming at the mouth, he quite reasonably observes that these self-same dogmatic reductionists seem perfectly willing to admit an infinite number of forever unobservable parallel universes created purely at random, and to inhabit a universe which splits into undetectable multiple histories with every quantum event, rather than contemplate that the universe might have a purpose or that consciousness may play a rôle in physical phenomena.

The argument presented here is reminiscent in content, albeit entirely different in style, of Scott Adams's God's Debris (February 2002), a book which is often taken insufficiently seriously because its author is the creator of Dilbert. Of course, there is another possibility about which I have written again, again, again, and again, which is that our universe was not created ex nihilo by an omnipotent being outside of space and time, but is rather a simulation created by somebody with a computer whose power we can already envision, run not to experience the reality within, but just to see what happens. Or, in other words, “it isn't a universe, it's a science fair project!” In The God Theory, your consciousness is immortal because at death your experience rejoins the One which created you. In the simulation view, you live on forever on a backup tape. What's the difference?

Seriously, this is a challenging and thought-provoking argument by a distinguished scientist who has thought deeply on these matters and is willing to take the professional risk of talking about them to the general public. There is much to think about here, and integrating it with other outlooks on these deep questions will take far more time than it takes to read this book.

 Permalink

[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 1. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1776, 1781] 1998. ISBN 962-634-071-1.
This is the first audiobook to appear in this list, for the excellent reason that it's the first one to which I've ever listened. I've been planning to “get around” to reading Gibbon's Decline and Fall for about twenty-five years, and finally concluded that the likelihood I was going to dive into that million-word-plus opus any time soon was negligible, so why not raise the intellectual content of my regular walks around the village with one of the masterpieces of English prose instead of ratty old podcasts?

The “Volume 1” in the title of this work refers to the two volumes of this audio edition, which is an abridgement of the first three volumes of Gibbon's history, covering the reign of Augustus through the accession of the first barbarian king, Odoacer. Volume 2 abridges the latter three volumes, primarily covering the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453. Both audio programs are almost eight hours in length, and magnificently read by Philip Madoc, whose voice is strongly reminiscent of Richard Burton's. The abridgements are handled well, with a second narrator, Neville Jason, summarising the material which is being skipped over. Brief orchestral music passages separate major divisions in the text. The whole work is artfully done and a joy to listen to, worthy of the majesty of Gibbon's prose, which is everything I've always heard it to be, from solemn praise for courage and wisdom, to thundering condemnation of treason and tyranny, to occasionally laugh-out-loud funny descriptions of foibles and folly.

I don't usually read abridged texts—I figure that if the author thought it was worth writing, it's worth my time to read. But given the length of this work (and the fact that most print editions are abridged), it's understandable that the publisher opted for an abridged edition; after all, sixteen hours is a substantial investment of listening time. An Audio CD edition is available. And yes, I'm currently listening to Volume 2.

 Permalink

Scott, William B., Michael J. Coumatos, and William J. Birnes. Space Wars. New York: Forge, 2007. ISBN 0-7653-1379-0.
I believe it was Jerry Pournelle who observed that a Special Forces operative in Afghanistan on horseback is, with his GPS target designator and satellite communications link to an F-16 above, the closest thing in our plane of existence to an angel of death. But, take away the space assets, and he's just a guy on a horse.

The increasing dependence of the U.S. military on space-based reconnaissance, signal intelligence, navigation and precision guidance, missile warning, and communications platforms has caused concern among strategic thinkers about the risk of an “asymmetrical attack” against them by an adversary. The technology needed to disable them is far less sophisticated and easier to acquire than the space assets themselves, and their loss would disproportionately affect the U.S., which has fully integrated them into its operations. This novel, by a former chief wargamer of the U.S. Space Command (Coumatos), the editor-in-chief of Aviation Week and Space Technology (Scott), and co-author Birnes, uses a near-term fictional scenario set in 2010 to explore the vulnerabilities of military space and make the case for both active defence of these platforms and the ability to hold at risk the space-based assets of adversaries, even if doing so gets the airheads all atwitter about “weapons in space” (as if a GPS constellation which lets you drop a bomb down somebody's chimney isn't a weapon). The idea, then, was to wrap the cautionary tale and policy advocacy in a Tom Clancy-style thriller which would reach a wider audience than a dull Pentagon briefing presentation.

The reality, however, as embodied in the present book, is simply a mess. I can't help but notice that the publisher, Forge, is an imprint of Tom Doherty Associates, best known for their Tor science fiction books. As I have observed earlier in comments about the recent novels by Orson Scott Card and Heinlein and Robinson, Doherty doesn't seem to pay much attention to copy editing and fact checking, and this book illustrates the problem is not just confined to the Tor brand. In fact, after this slapdash effort, I'm coming to look at Doherty as something like Que computer books in the 1980s—the name on the spine is enough to persuade me to leave it on the shelf.

Some of the following might be considered very mild spoilers, but I'm not going to put them in a spoiler warning since they don't really give away major plot elements or the ending, such as it is. The real spoiler is knowing how sloppy the whole thing is, and once you appreciate that, you won't want to waste your time on it anyway. First of all, the novel is explicitly set in the month of April 2010, and yet the “feel” and the technological details are much further out. Basically, the technologies in place three years from now are the same as we have today, especially for military technologies which have long procurement times and glacial Pentagon deployment schedules. Yet we're supposed to believe that in less than thirty-six months from today, the Air Force will be operating a two-storey computer with 75,000 square feet of floor space, containing “an array of deeply stacked parallel nanoprocessing circuits”, with spoken natural language programming and query capability (pp. 80–81). On pp. 212–220 we're told of a super weapon inspired by Star Trek II: The Wrath of Khan which, having started its development as a jammer for police radar, is able to seize control of enemy unmanned aerial vehicles. And so protean is this weapon, its very name changes at random from SPECTRE to SCEPTRE from paragraph to paragraph.

The mythical Blackstar spaceplane figures in the story, described as incoherently as in co-author Scott's original cover story in Aviation Week. On p. 226 we're told the orbiter burns “boron-based gel fuel and atmospheric oxygen”, then on the very next page we hear of the “aerospike rocket engines”. Well, where do we start? A rocket does not burn atmospheric oxygen, but carries its own oxidiser. An aerospike is a kind of rocket engine nozzle, entirely different from the supersonic combustion ramjet one would expect on a spaceplane which used atmospheric oxygen. Further, the advantage of an aerospike is that it is efficient both at low and high altitudes, but there's no reason to use one on an orbiter which is launched at high altitude from a mother ship. And then on p. 334, the “aerospike” restarts in orbit, which you'll probably agree is pretty difficult to do when you're burning “atmospheric oxygen”, which is famously scarce at orbital altitudes.

Techno-gibberish is everywhere, reminiscent in verisimilitude of the dialogue in the television torture fantasy “24”. For example, “Yo' Jaba! Got a match on our parallel port. I am waaay cool!” (p. 247). On p. 174 a Rooskie no-goodnik finds orbital elements for U.S. satellites from “the American ‘space catalog’ she had hacked into through a Texas university's server”. Why not just go to CelesTrak, where this information has been available worldwide since 1985? The laws of orbital mechanics here differ from those of Newton; on p. 381, a satellite in a circular orbit “14,674 miles above sea level” is said to be orbiting at “17,500 MPH”. In fact, at this altitude orbital velocity is about 3.65 km/sec, or some 8,150 statute miles per hour. And astronauts in low earth orbit who lose their electrical power quickly freeze solid, “victims of space's hostile, unforgiving cold”. Actually, in intense sunlight for half of every orbit and with the warm Earth filling half the sky, getting rid of heat is the problem in low orbit. On pp. 285–290, an air-launched cruise missile is used to launch a blimp. Why not just let it go and let the helium do the job all by itself? On the political front, we're supposed to think that a spittle-flecked mullah raving that he was the incarnation of the Twelfth Imam, in the presence of the Supreme Leader and President of Iran, would not only escape being thrown in the dungeon, but walk out of the meeting with a go-ahead to launch a nuclear-tipped missile at a target in Europe. And there is much, much more like this.
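For the record, the velocity of a circular orbit follows directly from v = √(GM/r). A quick back-of-the-envelope check in Python (using standard values for Earth's gravitational parameter and mean radius; the function name is mine, not anything from the book):

```python
from math import sqrt

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # Earth's mean radius, m
MILE = 1609.344             # statute mile, m

def circular_velocity_mph(altitude_miles):
    """Circular-orbit velocity, in statute miles per hour, for a given
    altitude (statute miles) above sea level."""
    r = R_EARTH + altitude_miles * MILE   # orbital radius from Earth's centre
    v = sqrt(GM_EARTH / r)                # m/s, from v = sqrt(GM/r)
    return v * 3600.0 / MILE

# The novel's satellite, "14,674 miles above sea level": nowhere near
# the claimed 17,500 MPH.
print(f"{circular_velocity_mph(14674):,.0f} mph")

# Sanity check: at roughly ISS altitude (about 250 miles) this gives the
# familiar low-orbit figure of about 17,000 mph.
print(f"{circular_velocity_mph(250):,.0f} mph")
```

The higher the orbit, the slower the satellite moves, which is why a figure appropriate for low Earth orbit cannot possibly be right at that altitude.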

I suppose it should have been a tip-off that the foreword was written by George Noory, who hosts the Coast to Coast AM radio program originally founded by Art Bell. Co-author Birnes was also co-author of the hilariously preposterous The Day After Roswell, which claims that key technologies in the second half of the twentieth century, including stealth aircraft and integrated circuits, were based on reverse-engineered alien technologies from a flying saucer which crashed in New Mexico in 1947. As stories go, Roswell, Texas seems more plausible, and a lot more fun, than this book.

 Permalink

Hicks, Stephen R. C. Explaining Postmodernism. Phoenix: Scholargy, 2004. ISBN 1-59247-642-2.
Starting more than ten years ago, with the mass pile-on to the Internet and the advent of sites with open content and comment posting, I have been puzzled by the extent of the anger, hatred, and nihilism which is regularly vented in such fora. Of all the people of my generation with whom I have associated over the decades (excepting, of course, a few genuine nut cases), I barely recall anybody who seemed to express such an intensely negative outlook on life and the world, or who was so instantly ready to impute “evil” (a word used incessantly for the slightest difference of opinion) to those with opposing views, or to inject ad hominem arguments or obscenity into discussions of fact and opinion. Further, this was not at all confined to traditionally polarising topics; in fact, having paid little attention to most of the hot-button issues in the 1990s, I first noticed it in nerdy discussions of topics such as the merits of different microprocessors, operating systems, and programming languages—matters which would seem unlikely to inspire partisans on various sides to such passion and vituperation, and in my experience had only rarely done so in the past. After a while, I began to notice one fairly consistent pattern: the most inflamed in these discussions, those whose venting seemed entirely disproportionate to the stakes in the argument, were almost entirely those who came of age in the mid-1970s or later; before the year 2000 I had begun to call them “hate kiddies”, but I still didn't understand why they were that way. One can speak of “the passion of youth”, of course, which is a real phenomenon, but this seemed something entirely different and off the scale of what I recall my contemporaries expressing in similar debates when we were of comparable age.

This has been one of those mysteries that's puzzled me for some years, as the phenomenon itself seemed to be getting worse, not better, and with little evidence that age and experience cause the original hate kiddies to grow out of their youthful excess. Then along comes this book which, if it doesn't completely explain it, at least seems to point toward one of the proximate causes: the indoctrination in cultural relativist and “postmodern” ideology which began during the formative years of the hate kiddies and has now almost entirely pervaded academia apart from the physical sciences and engineering (particularly in the United States, whence most of the hate kiddies hail). In just two hundred pages of main text, the author traces the origins and development of what is now called postmodernism to the “counter-enlightenment” launched by Rousseau and Kant, developed by the German philosophers of the 18th and 19th centuries, then transplanted to the U.S. in the 20th. But the philosophical underpinnings of postmodernism, which are essentially an extreme relativism which goes as far as denying the existence of objective truth or the meaning of texts, don't explain the near-monolithic adherence of its champions to the extreme collectivist political Left. You'd expect that philosophical relativism would lead its believers to conclude that all political tendencies were equally right or wrong, and that the correct political policy was as impossible to determine as ultimate scientific truth.

Looking at the philosophy espoused by postmodernists alongside the policy views they advocate and teach their students leads to the following contradictions, which are summarised on p. 184:

  • On the one hand, all truth is relative; on the other hand, postmodernism tells it like it really is.
  • On the one hand, all cultures are equally deserving of respect; on the other, Western culture is uniquely destructive and bad.
  • Values are subjective—but sexism and racism are really evil. (There's that word!—JW)
  • Technology is bad and destructive—and it is unfair that some people have more technology than others.
  • Tolerance is good and dominance is bad—but when postmodernists come to power, political correctness follows.

The author concludes that it is impossible to explain these and other apparent paradoxes and the uniformly Left politics of postmodernists without understanding the history and the failures of collectivist political movements dating from Rousseau's time. On p. 173 is an absolutely wonderful chart which traces the mutation and consistent failure of socialism in its various guises from Marx to the present. With each failure, the response has been not to question the premises of collectivism itself, but rather to redefine its justification, means, and end. As failure has followed failure, postmodernism represents an abject retreat from reason and objectivity itself, either to use the philosophy in a Machiavellian way to promote collectivist ideology, or to urge acceptance of the contradictions themselves in the hope of creating what Nietzsche called ressentiment, which leads directly to the “everybody is evil”, “nothing works”, and “truth is unknowable” irrationalism and nihilism which renders those who believe it pliable in the hands of agenda-driven manipulators.

Based on some of the source citations and the fact that this work was supported in part by The Objectivist Center, the author appears to be a disciple of Ayn Rand, which is confirmed by his Web site. Although the author's commitment to rationalism and individualism, and disdain for their adversaries, permeates the argument, the more peculiar and eccentric aspects of the Objectivist creed are absent. For its size, insight, and crystal clear reasoning and exposition, I know of no better introduction to how postmodernism came to be, and how it is being used to advance a collectivist ideology which has been thoroughly discredited by sordid experience. And I think I'm beginning to comprehend how the hate kiddies got that way.

 Permalink

Scurr, Ruth. Fatal Purity. London: Vintage Books, 2006. ISBN 0-09-945898-5.
In May 1791, Maximilien Robespierre, who not long before had been an obscure provincial lawyer from Arras in northern France and who had been elected to the Estates General convened by Louis XVI in 1789, spoke before what had by then reconstituted itself as the National Assembly, engaged in debating the penal code for the new Constitution of France. Before the Assembly were a number of proposals by a certain Dr. Guillotin, among which the second was, “In all cases of capital punishment (whatever the crime), it shall be of the same kind—i.e. beheading—and it shall be executed by means of a machine.” Robespierre argued passionately against all forms of capital punishment: “A conqueror that butchers his captives is called barbaric. Someone who butchers a perverse child that he could disarm and punish seems monstrous.” (pp. 133–136)

Just three years later, Robespierre had become synonymous not only with the French Revolution but with the Terror it had spawned. Either at his direction, with his sanction, or under the summary arrest and execution without trial or appeal which he advocated, the guillotine claimed more than 2200 lives in Paris alone, 1376 between June 10th and July 27th of 1794, when Robespierre's power, and the Terror with it, abruptly ended with his own date with the guillotine.

How did a mild-mannered provincial lawyer who defended the indigent and disadvantaged, amused himself by writing poetry, studied philosophy, and was universally deemed, even by his sworn enemies, to merit his sobriquet, “The Incorruptible”, become an archetypal monster of the modern age, a symbol of the darkness beneath the Enlightenment?

This lucidly written, well-argued, and meticulously documented book traces Robespierre's life from birth through downfall and execution at just age 36, and places his life in the context of the upheavals which shook France and to which, in his last few years, he contributed mightily. The author shows the direct link between Rousseau's philosophy, Robespierre's inflexible, whatever-the-cost commitment to implementing it, and its horrific consequences for France. Too many people forget that it was Rousseau who wrote in The Social Contract, “Now, as citizen, no man is judge any longer of the danger to which the law requires him to expose himself, and when the prince says to him: ‘It is expedient for the state that you should die’, then he should die…”. Seen in this light, the madness of Robespierre's reign is not the work of a madman, but of a rigorously rational application of a profoundly anti-human system of beliefs which some people persist in taking seriously even today.

A U.S. edition is available.

 Permalink

Buckley, Christopher. Boomsday. New York: Twelve, 2007. ISBN 0-446-57981-5.
Cassandra Devine is twenty-nine, an Army veteran who served in Bosnia, a PR genius specialising in damage control for corporate malefactors, a high-profile blogger in her spare time, and hopping mad. What's got her Irish up (and she's Irish on both sides of the family) is the imminent retirement of the baby boom generation—boomsday—when seventy-seven million members of the most self-indulgent and -absorbed generation in history will depart the labour pool and begin to laze away their remaining decades in their gated, golf-course retirement communities, sending the extravagant bills to their children and grandchildren, each two of whom can expect to support one retired boomer, adding up to an increase in total taxes on the young between 30% and 50%.

One night, while furiously blogging, it came to her. A modest proposal which would, at once, render Social Security and Medicare solvent without any tax increases, provide free medical care and prescription drugs to the retired, permit the elderly to pass on their estates to their heirs tax-free, and reduce the burden of care for the elderly on the economy. There is a catch, of course, but the scheme polls like pure electoral gold among the 18–30 “whatever generation”.

Before long, Cassandra finds herself in the middle of a presidential campaign where the incumbent's slogan is “He's doing his best. Really.” and the challenger's is “No Worse Than The Others”, with her ruthless entrepreneur father, a Vatican diplomat, a southern media preacher, Russian hookers, a nursing home serial killer, the North Koreans, and what's left of the legacy media sucked into the vortex. Buckley is a master of the modern political farce, and this is a thoroughly delightful read which makes you wonder just how the under-thirties will react when the bills run up by the boomers start to come due.

 Permalink

June 2007

Segell, Michael. The Devil's Horn. New York: Picador, 2005. ISBN 0-312-42557-0.
When Napoléon III seized power in 1851 (he proclaimed himself Emperor of the French the following year), his very first decree had nothing to do with any of the social, economic, or political crises the country faced; instead, it reinstated the saxophone in French military bands, reversing a ban on the instrument imposed by the Second Republic (p. 220). There is something about the saxophone—its lascivious curves and seductive sound, perhaps, or its association with avant garde and not entirely respectable music—which has made it the object of attacks by prudes, puritans, and musical elitists almost from the time of its invention in the early 1840s by Belgian Adolphe Sax. Nazi Germany banned the sax as “decadent”; Stalin considered it a “dangerous capitalist instrument” and had saxophonists shot or sent to Siberia; the League of Catholic Decency in the United States objected not to the steamy images on the screen in the 1951 film A Streetcar Named Desire, but rather to the sultry saxophone music which accompanied it, and signed off on the scene only when it was re-scored for French horn and strings; and in Kansas City, Missouri, it was once against the law to play a saxophone outside a nightclub from ten-thirty at night until six in the morning (which seems awfully early to me to be playing a saxophone unless you've been at it all night).

Despite its detractors, political proscribers, somewhat disreputable image, and failure to find a place in symphony orchestras, this relative newcomer has infiltrated almost all genres of music, sparked the home music and school band crazes in the United States, and became central to the twentieth century evolution of jazz, big band, rhythm and blues, and soul music. A large and rapidly expanding literature of serious and experimental music for the instrument exists, and many conservatories which once derided the “vulgar horn” now teach it.

This fascinating book tells the story of Sax, the saxophone, saxophonists, and the music and culture they have engendered. Even folks like me, who cannot coax music from anything more complicated than an iPod (I studied saxophone for two years in grade school before concluding, with the enthusiastic concurrence of my aurally assaulted parents, that my talents lay elsewhere), will find many a curious and delightful detail to savour, such as the monstrous contrabass saxophone (which sounds something like a foghorn), and the fact that Adolphe Sax, something of a mad scientist, also invented (but, thankfully, never built) an organ powered by a locomotive engine which could “play the music of Meyerbeer for all of Paris” and the “Saxocannon”, a mortar which would fire a half-kiloton bullet 11 yards wide, which “could level an entire city” (pp. 27–28)—and people complain about the saxophone! This book will make you want to re-listen to a lot of music, which you're likely to understand much better knowing the story of how it, and those who made it, came to be.

 Permalink

[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 2. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1788, 1789] 1998. ISBN 962-634-122-X.
The “Volume 2” in the title of this work refers to the two volumes of this audiobook edition. This is an abridgement of the final three volumes of Gibbon's history, primarily devoted to the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453, although the fractious kingdoms of the west, the Crusades, the conquests of Genghis Khan and Tamerlane, and the origins of the great schism between the Roman and Eastern Orthodox churches all figure in this history. I understand why many people read only the first three volumes of Gibbon's masterpiece—the doings of the Byzantine court are, well, byzantine, and the steady litany of centuries of backstabbing, betrayal, intrigue, sedition, self-indulgence, and dissipation can become both tedious and depressing. Although there are some sharply-worded passages which may have raised eyebrows in the eighteenth century, I did not find Gibbon anywhere near as negative on the influence of Christianity on the Roman Empire as I expected from descriptions of his work by others. The facile claim that “Gibbon blamed the fall of Rome on the Christians” is simply not borne out by his own words.

Please see my comments on Volume 1 for details of the (superb) production values of this seven hour recording. An Audio CD edition is available.

 Permalink

Tipler, Frank J. The Physics of Christianity. New York: Doubleday, 2007. ISBN 0-385-51424-7.
Oh. My. Goodness. Are you yearning for answers to the Big Questions which philosophers and theologians have puzzled over for centuries? Here you are, using direct quotes from this book in the form of a catechism of this beyond-the-fringe science cataclysm.

What is the purpose of life in the universe?
It is not enough to annihilate some baryons. If the laws of physics are to be consistent over all time, a substantial percentage of all the baryons in the universe must be annihilated, and over a rather short time span. Only if this is done will the acceleration of the universe be halted. This means, in particular, that intelligent life from the terrestrial biosphere must move out into interstellar and intergalactic space, annihilating baryons as they go. (p. 67)
What is the nature of God?
God is the Cosmological Singularity. A singularity is an entity that is outside of time and space—transcendent to space and time—and it is the only thing that exists that is not subject to the laws of physics. (p. 269)
How can the three persons of the Trinity be one God?
The Cosmological Singularity consists of three Hypostases: the Final Singularity, the All-Presents Singularity, and the Initial Singularity. These can be distinguished by using Cauchy sequences of different sorts of person, so in the Cauchy completion, they become three distinct Persons. But still, the three Hypostases of the Singularity are just one Singularity. The Trinity, in other words, consists of three Persons but only one God. (pp. 269–270)
How did Jesus walk on water?
For example, walking on water could be accomplished by directing a neutrino beam created just below Jesus' feet downward. If we ourselves knew how to do this, we would have the perfect rocket! (p. 200)
What is Original Sin?
If Original Sin actually exists, then it must in some way be coded in our genetic material, that is, in our DNA. … By the time of the Cambrian Explosion, if not earlier, carnivores had appeared on Earth. Evil had appeared in the world. Genes now coded for behavior that guided the use of biological weapons of the carnivores. The desire to do evil was now hereditary. (pp. 188, 190)
How can long-dead saints intercede in the lives of people who pray to them?
According to the Universal Resurrection theory, everyone, in particular the long-dead saints, will be brought back into existence as computer emulations in the far future, near the Final Singularity, also called God the Father. … Future-to-past causation is usual with the Cosmological Singularity. A prayer made today can be transferred by the Singularity to a resurrected saint—the Virgin Mary, say—after the Universal Resurrection. The saint can then reflect on the prayer and, by means of the Son Singularity acting through the multiverse, reply. The reply, via future-to-past causation, is heard before it is made. It is heard billions of years before it is made. (p. 235)
When will the End of Days come?
In summary, by the year 2050 at the latest, we will see:
  1. Intelligent machines more intelligent than humans.
  2. Human downloads, effectively invulnerable and far more capable than normal humans.
  3. Most of humanity Christian.
  4. Effectively unlimited energy.
  5. A rocket capable of interstellar travel.
  6. Bombs that are to atomic bombs as atomic bombs are to spitballs, and these weapons will be possessed by practically anybody who wants one.
(p. 253)

Hey, I said answers, not correct answers! This is only a tiny sampler of the side-splitting “explanations” of Christian mysteries and miracles in this book. Others include the virgin birth, the problem of evil, free will, the resurrection of Jesus, the shroud of Turin and the holy grail, the star of Bethlehem, transubstantiation, quantum gravity, the second coming, and more, more, more. Quoting them all would mean quoting almost the whole book—if you wish to be awed by or guffaw at them all, you're going to have to read the whole thing. And that's not all, since it seems like every other page or so there's a citation of Tipler's 1994 opus, The Physics of Immortality (read my review), so some sections are likely to be baffling unless you suspend disbelief and slog your way through that tome as well.

Basically, Tipler sees your retro-causality and raises you retro-teleology. In order for the laws of physics, in particular the unitarity of quantum mechanics, to be valid, the universe must evolve to a final singularity with no event horizons—the Omega Point. But for this to happen, as it must, since the laws of physics are never violated, intelligent life must halt the accelerating expansion of the universe and turn it around into contraction. Because this must happen, the all-knowing Final Singularity, which Tipler identifies with God the Father, acts as a boundary condition which causes fantastically improbable events such as the simultaneous tunnelling disintegration of every atom of the body of Jesus into neutrinos to become certainties, because otherwise the Omega Point will not be formed. Got that?

I could go on and on, but by now I think you'll have gotten the point, even if it isn't an Omega Point. The funny thing is, I'm actually sympathetic to much of what Tipler says here: his discussion of free will in the multiverse and the power of prayer or affirmation is not that unlike what I suggest in my eternally under construction “General Theory of Paranormal Phenomena”, and I share Tipler's optimism about the human destiny and the prospects, in a universe of which 95% of the mass is made of stuff we know absolutely nothing about, of finding sources of energy as boundless and unimagined as nuclear fission and fusion were a century ago. But folks, this is just silly. One of the most irritating things is Tipler's interpreting scripture to imply a deep knowledge of recently-discovered laws of physics and then turning around, a few pages later, when the argument requires it, to claim that another passage was influenced by contemporary beliefs of the author which have since been disproved. Well, which is it?

If you want to get a taste of this material, see “The Omega Point and Christianity”, which contains much of the physics content of the book in preliminary form. The entire first chapter of the published book can be downloaded in icky Microsoft Word format from the author's Web site, where additional technical and popular articles are available.

For those unacquainted with the author, Frank J. Tipler is a full professor of mathematical physics at Tulane University in New Orleans, pioneer in global methods in general relativity, discoverer of the massive rotating cylinder time machine, one of the first to argue that the resolution of the Fermi Paradox is, as his paper was titled, “Extraterrestrial Intelligent Beings Do Not Exist”, and, with John Barrow, author of The Anthropic Cosmological Principle, the definitive work on that topic. Say what you like, but Tipler is a serious and dedicated scientist with world-class credentials who believes that the experimentally-tested laws of physics as we understand them are not only consistent with, but require, many of the credal tenets which traditional Christians have taken on faith. The research program he proposes (p. 271), “… would make Christianity a branch of physics.” Still, as I wrote almost twelve years ago, were I he, I'd be worried about getting on the wrong side of the Old One.

Finally, and this really bothers me, I can't close these remarks without mentioning that notwithstanding there being an entire chapter titled “Anti-Semitism Is Anti-Christian” (pp. 243–256), which purports to explain it on the last page, this book is dedicated, “To God's Chosen People, the Jews, who for the first time in 2,000 years are advancing Christianity.” I've read the book; I've read the explanation; and this remark still seems both puzzling and disturbing to me.

 Permalink

Brozik, Matthew David and Jacob Sager Weinstein. The Government Manual for New Superheroes. Kansas City: Andrews McMeel, 2005. ISBN 0-7407-5462-9.
(Guest review by The Punctuator)
The Government of the Unified Nations has done a tremendous service to all superheroes, whether alien, mutant, or merely righteous human do-gooders, by publishing this essential manual filled with tips for getting your crimefighting career off to the right start and avoiding the many pitfalls of the profession. Short, pithy chapters provide wise counsel on matters such as choosing a name, designing a costume, finding an exotic hideaway, managing a secret identity, and more. The chapter on choosing a sidekick would have allowed me to avoid the whole unpleasant and regrettable business with Octothorpe and proceed directly to my entirely satisfactory present protégé, Apostrophe Squid. The advantages and drawbacks of joining a team of superheroes are discussed candidly, along with the warning signs that you may be about to inadvertently join a cabal of supervillains (for example, their headquarters is named “The whatever of Doom” as opposed to “The whatever of Justice”). An afterword by The Eviliminator: Eliminator of Evil Things but Defender of Good Ones reveals the one sure-fire way to acquire superpowers, at least as long as you aren't a troublemaking, question-asking pinko hippie egghead. The book is small, printed with rounded corners, and ideal for slipping into a cape pocket. I would certainly never leave it behind when setting out in pursuit of the nefarious Captain Comma Splice. Additional information is available on the Government's Bureau of Superheroics Web site.

 Permalink

Kondo, Yoji, Frederick Bruhweiler, John Moore, and Charles Sheffield, eds. Interstellar Travel and Multi-Generation Space Ships. Burlington, Ontario, Canada: Apogee Books, 2003. ISBN 1-896522-99-8.
This book is a collection of papers presented at a symposium organised in 2002 by the American Association for the Advancement of Science. More than half of the content discusses the motivations, technology, and prospects for interstellar flight (both robotic probes and “generation ship” exploration and colonisation missions), while the balance deals with anthropological, genetic, and linguistic issues in crew composition for a notional mission with a crew of 200 with a flight time of two centuries. An essay by Freeman Dyson on “Looking for Life in Unlikely Places” explores the signatures of ubiquitous vacuum-adapted life and how surprisingly easy it might be to detect, even as far as one light-year from Earth.

This volume contains the last published works of Charles Sheffield and Robert L. Forward, both of whom died in 2002. The papers are all accessible to the scientifically literate layman and, with one exception, of high quality. Regrettably, nobody seems to have informed the linguist contributor that any interstellar mission would certainly receive a steady stream of broadband transmissions from the home planet (who would fund a multi-terabuck mission without the ability to monitor it and receive the results?), but that chapter is only four pages and may be deemed comic relief.

 Permalink

Bawer, Bruce. While Europe Slept. New York: Doubleday, 2006. ISBN 0-385-51472-7.
In 1997, the author visited the Netherlands for the first time and “thought I'd found the closest thing to heaven on earth”. Not long thereafter, he left his native New York for Europe, where he has lived ever since, most recently in Oslo, Norway. As an American in Europe, he has identified and pointed out many of the things which Europeans, whether out of politeness, deference to their ruling elites, or a “what-me-worry?” willingness to defer the apocalypse to their dwindling cohort of descendants, rarely speak of, at least in the public arena.

As the author sees it, Europe is going down, the victim of multiculturalism, disdain and guilt for their own Western civilisation, and “tolerance for [the] intolerance” of a fundamentalist Muslim immigrant population which, by its greater fertility, “fetching marriages”, and family reunification, may result in Muslim majorities in one or more European countries by mid-century.

This is a book which may open the eyes of U.S. readers who haven't spent much time in Europe to just how societally-suicidal many of the mainstream doctrines of Europe's ruling elites are, and how wide the gap is between this establishment (which is a genuine cultural phenomenon in Europe, encompassing academia, media, and the ruling class, far more so than in the U.S.) and the population, who are increasingly disenfranchised by the profoundly anti-democratic commissars of the odious European Union.

This is, however, an unsatisfying book. The author, who has won several awards and been published in prestigious venues, seems more at home with essays than with the long form. The book reads like a feature article from The New Yorker which grew to book length without revision or editorial input. The 237-page text is split into just three chapters, putatively arranged chronologically but in fact rambling all over the place, each mixing the author's anecdotal observations with stories from secondary sources, none of which are cited, whether in footnotes, endnotes, or a bibliography.

If you're interested in these issues (and in the survival of Western civilisation and Enlightenment values), you'll get a better picture of the situation in Europe from Claire Berlinski's Menace in Europe (July 2006). As a narrative of the experience of a contemporary American in Europe, or as an assessment of the cultural gap between Western (and particularly Northern) Europe and the U.S., this book may be useful for those who haven't experienced these cultures for themselves, but readers should not over-generalise the author's largely anecdotal reporting in a limited number of countries to Europe as a whole.

 Permalink

Dyson, Freeman J. The Scientist as Rebel. New York: New York Review Books, 2006. ISBN 1-59017-216-7.
Freeman Dyson is one of the most consistently original thinkers of our time. This book, a collection of his writings between 1964 and 2006, amply demonstrates the breadth and depth of his imagination. Twelve long book reviews from The New York Review of Books allow Dyson, after doing his duty to the book and author, to depart on his own exploration of the subject matter. One of these reviews, of Brian Greene's The Fabric of the Cosmos, is where Dyson first asked whether it was possible, using any apparatus permitted by the laws of physics and the properties of our universe, to ever detect a single graviton and, if not, whether quantum gravity has any physical meaning. It was this remark which led to the Rothman and Boughn paper, “Can Gravitons be Detected?” in which is proposed what may be the most outrageous scientific apparatus ever suggested.

Three chapters of Dyson's 1984 book Weapons and Hope (now out of print) appear here, along with other essays, forewords to books, and speeches on topics as varied as history, poetry, great scientists, war and peace, colonising the galaxy comet by comet, nanotechnology, biological engineering, the post-human future, religion, the paranormal, and more. Dyson's views on religion will send the Dawkins crowd around the bend, and his open-minded attitude toward the paranormal (in particular, chapter 27) will similarly derange dogmatic sceptics (he even recommends Rupert Sheldrake's Dogs That Know When Their Owners Are Coming Home). Chapters written some time ago are accompanied by postscripts updating them to 2006.

This is a collection of gems with nary a clinker in the lot. Anybody who rejoices in visionary thinking and superb writing will find much of both. The chapters are almost completely independent of one another and can be read in any order, so you can open the book at random and be sure to delight in what you find.

 Permalink

July 2007

Crichton, Michael. Next. New York: HarperCollins, 2006. ISBN 0-06-087298-5.
Several of the essays in Freeman Dyson's The Scientist as Rebel (June 2007) predict that “the next Big Thing” and a central theme of the present century will be the discovery of the fine-grained details of biology and the emergence of technologies which can achieve essentially anything which is possible with the materials and processes of life. This, Dyson believes, will have an impact on the lives of humans and the destiny of humanity and the biosphere which dwarfs those of any of the technological revolutions of the twentieth century.

In this gripping novel, page-turner past master (and medical doctor) Michael Crichton provides a glimpse of a near-term future in which these technologies are coming up to speed. It's going to be a wild and woolly world once genes start jumping around among metazoan species with all the promiscuity of prokaryotic party time, and Crichton weaves this into a story which is simultaneously entertaining, funny, and cautionary. His trademark short chapters (averaging just a little over four pages) are like potato chips to the reader—just one more, you think, when you know you ought to have gotten to sleep an hour ago.

For much of the book, the story seems like a collection of independent short stories interleaved with one another. As the pages dwindle, you begin to wonder, “How the heck is he going to pull all this together?” But that's what master story tellers do, and he succeeds delightfully. One episode in this book describes what is perhaps the worst backseat passenger on a road trip in all of English fiction; you'll know what I'm talking about when you get to it. The author has a great deal of well-deserved fun at the expense of the legacy media: it's payback time for all of those agenda-driven attack reviews of State of Fear (January 2005).

I came across two amusing typos: at the bottom of p. 184, I'm pretty sure “A transgender higher primate” is supposed to be “A transgenic higher primate”, and on p. 428 in the bibliography, I'm certain that the title of Sheldon Krimsky's book is Science in the Private Interest, not “Science in the Primate Interest”—what a difference a letter can make!

In an Author's Note at the end, Crichton presents one of the most succinct and clearly argued cases I've encountered why the patenting of genes is not just destructive of scientific inquiry and medical progress, but also something which even vehement supporters of intellectual property in inventions and artistic creations can oppose without being inconsistent.

 Permalink

Epstein, Robert. The Case Against Adolescence. Sanger, CA: Quill Driver Books, 2007. ISBN 1-884956-70-X.
What's the matter with kids today? In this exhaustively documented breakthrough book, the author argues that adolescence, as it is presently understood in developed Western countries, is a social construct which was created between 1880 and 1920 by well-intentioned social reformers responding to excesses of the industrial revolution and mass immigration to the United States. Their remedies—compulsory education, child labour laws, the juvenile justice system, and the proliferation of age-specific restrictions on “adult” activities such as driving, drinking alcohol, and smoking—had the unintended consequence of almost completely segregating teenagers from adults, trapping them in a vacuous peer culture and prolonging childhood up to a decade beyond the age at which young people begin to assume the responsibilities of adulthood in traditional societies.

Examining anthropological research on other cultures and historical evidence from past centuries, the author concludes that the “storm and stress” which characterises modern adolescence is the consequence of the infantilisation of teens, and their confinement in a peer culture with little contact with adults. In societies and historical periods where the young became integrated into adult society shortly after puberty and began to shoulder adult responsibilities, there is no evidence whatsoever for anything like the dysfunctional adolescence so often observed in the modern West—in fact, a majority of preindustrial cultures have no word in their language for the concept of adolescence.

Epstein, a psychologist who did his Ph.D. under B. F. Skinner at Harvard, and former editor-in-chief of Psychology Today magazine, presents results of a comprehensive test of adultness he developed along with Diane Dumas which demonstrate that in most cases the competencies of people in the 13 to 17 year range do not differ from those of adults between twenty and seventy-one by a statistically significant margin. (I should note that the groups surveyed, as described on pp. 154–155, differed wildly in ethnic and geographic composition from the U.S. population as a whole; I'd love to see the cross-tabulations.) An abridged version of the test is included in the book; you can take the complete test online. (My score was 98%, with most of the demerits due to placing less trust in figures of authority than the author deems wise.)

So, if there is little difference in the basic competences of teens and adults, why are so many adolescents such vapid, messed-up, apathetic losers? Well, consider this: primates learn by observing (monkey see) and by emulating (monkey do). For millions of years our ancestors have lived in bands in which the young had most of their contact with adults, and began to do the work of adults as soon as they were physically and mentally capable of doing so. This was the near-universal model of human societies until the late 19th century and remains so in many non-Western cultures. But in the West, this pattern has been replaced by warehousing teenagers in government schools which effectively confine them with others of their age. Their only adult contacts apart from (increasingly absent) parents are teachers, who are inevitably seen as jailors. How are young people to be expected to turn their inherent competencies into adult behaviour if they spend almost all of their time only with their peers?

Now, the author doesn't claim that everybody between the ages of 13 and 17 has the ability to function as an adult. Just as with physical development, different individuals mature at different rates, and one may have superb competence in one area and remain childish in another. But, on the other hand, simply turning 18 or 21 or whatever doesn't magically endow someone with those competencies either—many adults (defined by age) perform poorly as well.

In two breathtaking final chapters, the author argues for the replacement of virtually all age-based discrimination in the law with competence testing in the specific area involved. For example, a 13 year old could entirely skip high school by passing the equivalency examination available to those 18 or older. There's already a precedent for this—we don't automatically allow somebody to drive, fly an airplane, or operate an amateur radio station when they reach a certain age: they have to pass a comprehensive examination on theory, practice, and law. Why couldn't this basic concept be extended to most of the rights and responsibilities we currently grant based purely upon age? Think of the incentive such a system would create for teens to acquire adult knowledge and behaviour as early as possible, knowing that it would be rewarded with adult rights and respect, instead of being treated like children for what could be some of the most productive years of their lives.

Boxes throughout the text highlight the real-world achievements of young people in history and the present day. (Did you know that Sergey Karjakin became a chess grandmaster at the age of 12 years and 7 months? He is among seven who achieved grandmaster ranking at an age younger than Bobby Fischer's 15 years and 6 months.) There are more than 75 pages of end notes and bibliography. (I wonder if the author is aware that note 68 to chapter 5 [p. 424] cites a publication of the Lyndon LaRouche organisation.)

It isn't often you pick up a book with laudatory blurbs by a collection of people including Albert Ellis, Deepak Chopra, Joyce Brothers, Alvin Toffler, George Will, John Taylor Gatto, Suzanne Somers, and Buzz Aldrin. I concur with them that the author has put his finger precisely on the cause of a major problem in modern society, and laid out a challenging yet plausible course for remedying it. I discovered this book via an excellent podcast interview with the author on “The Glenn and Helen Show”.

About halfway through this book, I had one of the most chilling visions of the future I've experienced in many years. One of the things I've puzzled over for ages is what, precisely, is the end state of the vision of those who call themselves “progressives”—progress toward what, anyway? What would society look like if they had their way across the board? And then suddenly it hit me like a brick. If you want to see what the “progressive” utopia looks like, just take a glance at the lives of teenagers today, who are deprived of a broad spectrum of rights and denied responsibilities “for their own good”. Do-gooders always justify their do-badding “for the children”, and their paternalistic policies, by eviscerating individualism and autonomous judgement, continually create ever more “children”. The nineteenth century reformers, responding to genuine excesses of the industrial revolution, extended childhood from puberty to years later, inventing what we call adolescence. The agenda of today's “progressives” is inexorably extending adolescence to create a society of eternal adolescents, unworthy of the responsibilities of adults, and hence forever the childlike wards of an all-intrusive state and the elites which govern it. If you want a vision of the “progressive” future, imagine being back in high school—forever.

 Permalink

Frankfurt, Harry G. On Bullshit. Princeton: Princeton University Press, 2005. ISBN 0-691-12294-6.
This tiny book (just 67 pages of 9½×15 cm each—I'd estimate about 7,300 words) illustrates that there is no topic, however mundane or vulgar, which a first-rate philosopher cannot make so complicated and abstruse that it appears profound. The author, a professor emeritus of philosophy at Princeton University, first published this essay in 1986 in the Raritan Review. In it, he tackles the momentous conundrum of what distinguishes bullshit from lies. Citing authorities including Wittgenstein and Saint Augustine, he concludes that while the liar is ultimately grounded in the truth (being aware that what he is saying is counterfactual and crafting a lie to make the person to whom he tells it believe that), the bullshitter is entirely decoupled (or, perhaps in his own estimation, liberated) from truth and falsehood, and is simply saying whatever it takes to have the desired effect upon the audience.

Throughout, it's obvious that we're in the presence of a phil-oss-o-pher doing phil-oss-o-phy right out in the open. For example, on p. 33 we have:

It is in this sense that Pascal's (Fania Pascal, an acquaintance of Wittgenstein in the 1930s, not Blaise—JW) statement is unconnected to a concern with the truth; she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true.
(The Punctuator applauds the use of colons and semicolons in the passage quoted above!)

All of this is fine, but it seems to me that the author misses an important aspect of bullshit: the fact that in many cases—perhaps the overwhelming majority—the bullshittee is perfectly aware of being bullshitted by the bullshitter, and the bullshitter is conversely aware that the figurative bovid excrement emitted is being dismissed as such by those whose ears it befouls. Now, this isn't always the case: sometimes you find yourself in a tight situation faced with a difficult question and manage to bullshit your way through, but in the context of a “bull session”, only the most naïve would assume that what was said was sincere and indicative of the participants' true beliefs: the author cites bull sessions as a venue in which people can try on beliefs other than their own in a non-threatening environment.

 Permalink

Pyle, Ernie. Brave Men. Lincoln, NE: Bison Books, [1944] 2001. ISBN 0-8032-8768-2.
Ernie Pyle is perhaps the most celebrated war correspondent of all time, and this volume amply illustrates why. A collection of his columns for the Scripps-Howard newspapers edited into book form, it covers World War II from the invasion of Sicily in 1943 through the Normandy landings and the liberation of Paris in 1944. This is the first volume of three collections of his wartime reportage: the second and third, Here is Your War and Ernie Pyle in England, are out of print, but used copies are readily available at a reasonable price.

While most readers today know Pyle only from his battle dispatches, he was, in fact, a renowned columnist even before the United States entered the war—in the 1930s he roamed the nation, filing columns about Americana and Americans which became as beloved as the very similar television reportage decades later by Charles Kuralt who, in fact, won an Ernie Pyle Award for his reporting.

Pyle's first love and enduring sympathy was with the infantry, and few writers have expressed so eloquently the experience of being “in the line” beyond what most would consider the human limits of exhaustion, exertion, and fear. But in this book he also shows the breadth of the Allied effort, profiling Navy troop transport and landing craft, field hospitals, engineering troops, air corps dive and light bombers, artillery, ordnance depots, quartermaster corps, and anti-aircraft guns (describing the “scientific magic” of radar guidance without disclosing how it worked).

Apart from the prose, which is simultaneously unaffected and elegant, the thing that strikes a reader today is that in this entire book, written by a superstar columnist for the mainstream media of his day, there is not a single suggestion that the war effort, whatever the horrible costs he so candidly documents, is misguided, or that there is any alternative or plausible outcome other than victory. How much things have changed…. If you're looking for this kind of with the troops on the ground reporting today, you won't find it in the legacy dead tree or narrowband one-to-many media, but rather in reader-supported front-line journalists such as Michael Yon—if you like what he's writing, hit the tip jar and keep him at the front; think of it like buying the paper with Ernie Pyle's column.

Above, I've linked to a contemporary reprint edition of this work. Actually, I read a hardbound sixth printing of the 1944 first edition which I found in a used bookstore in Marquette, Michigan (USA) for less than half the price of the paperback reprint; visit your local bookshop—there are wonderful things there to be discovered.

 Permalink

August 2007

[Audiobook] Caesar, Gaius Julius and Aulus Hirtius. The Commentaries. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [ca. 52–51 B.C., ca. 45 B.C.] 2004. ISBN 1-929718-44-6.
This audiobook is an unabridged reading of English translations of Caesar's commentaries on the Gallic (Commentarii de Bello Gallico) and Civil (Commentarii de Bello Civili) wars between 58 and 48 B.C. (The eighth book of the Gallic wars commentary, covering the minor campaigns of 51 B.C., was written by his friend Aulus Hirtius after Caesar's assassination.) The recording is based upon the rather eccentric Rex Warner translation, which is now out of print. In the original Latin text, Caesar always referred to himself in the third person, as “Caesar”. Warner rephrased the text (with the exception of the book written by Hirtius) as a first person narrative. For example, the first sentence of paragraph I.25 of The Gallic Wars:
Caesar primum suo, deinde omnium ex conspectu remotis equis, ut aequato omnium periculo spem fugae tolleret, cohortatus suos proelium commisit.
in Latin, is conventionally translated into English as something like this (from the rather stilted 1869 translation by W. A. McDevitte and W. S. Bohn):
Caesar, having removed out of sight first his own horse, then those of all, that he might make the danger of all equal, and do away with the hope of flight, after encouraging his men, joined battle.
but the Warner translation used here renders this as:
I first of all had my own horse taken out of the way and then the horses of other officers. I wanted the danger to be the same for everyone, and for no one to have any hope of escape by flight. Then I spoke a few words of encouragement to the men before joining battle.   [1:24:17–30]
Now, whatever violence this colloquial translation does to the authenticity of Caesar's spare and eloquent Latin, from a dramatic standpoint it works wonderfully with the animated reading of award-winning narrator Charlton Griffin; the listener has the sense of being across the table in a tavern from GJC as he regales all present with his exploits.

This is “just the facts” war reporting. Caesar viewed this work not as history, but rather the raw material for historians in the future. There is little discussion of grand strategy nor, even in the commentaries on the civil war, the political conflict which provoked the military confrontation between Caesar and Pompey. While these despatches doubtless served as propaganda on Caesar's part, he writes candidly of his own errors and the cost of the defeats they occasioned. (Of course, since these are the only extant accounts of most of these events, there's no way to be sure there isn't some Caesarian spin in his presentation, but since these commentaries were published in Rome, which received independent reports from officers and literate legionaries in Caesar's armies, it's unlikely he would have risked embellishing too much.)

Two passages of unknown length in the final book of the Civil war commentaries have been lost—these are handled by the reader stopping in mid-sentence, with another narrator explaining the gap and the historical consensus of the events in the lost text.

This audiobook is distributed in three parts, totalling 16 hours and 40 minutes. That's a big investment of time in the details of battles which took place more than two thousand years ago, but I'll confess I found it fascinating, especially since some of the events described took place within sight of where I take the walks on which I listened to this recording over several weeks. An Audio CD edition is available.

 Permalink

Carr, Bernard, ed. Universe or Multiverse? Cambridge: Cambridge University Press, 2007. ISBN 0-521-84841-5.
Before embarking upon his ultimately successful quest to discover the laws of planetary motion, Johannes Kepler tried to explain the sizes of the orbits of the planets from first principles: developing a mathematical model of the orbits based upon nested Platonic solids. Since, at the time, the solar system was believed by most to be the entire universe (with the fixed stars on a sphere surrounding it), it seemed plausible that the dimensions of the solar system would be fixed by fundamental principles of science and mathematics. Even though he eventually rejected his model as inaccurate, he never completely abandoned it—it was for later generations of astronomers to conclude that there is nothing fundamental whatsoever about the structure of the solar system: it is simply a contingent product of the history of its condensation from the solar nebula, and could have been entirely different. With the discovery of planets around other stars in the late twentieth century, we now know that not only do planetary systems vary widely, many are substantially more weird than most astronomers or even science fiction writers would have guessed.

Since the completion of the Standard Model of particle physics in the 1970s, a major goal of theoretical physicists has been to derive, from first principles, the values of the more than twenty-five “free parameters” of the Standard Model (such as the masses of particles, relative strengths of forces, and mixing angles). At present, these values have to be measured experimentally and put into the theory “by hand”, and there is no accepted physical explanation for why they have the values they do. Further, many of these values appear to be “fine-tuned” to allow the existence of life in the universe (or at least, life which resembles ourselves)—a tiny change, for example, in the mass ratio of the up and down quarks and the electron would result in a universe with no heavy elements or chemistry; it's hard to imagine any form of life which could be built out of just protons or neutrons. The emergence of a Standard Model of cosmology has only deepened the mystery, adding additional apparently fine-tunings to the list. Most stunning is the cosmological constant, which appears to have a nonzero value which is 124 orders of magnitude smaller than predicted from a straightforward calculation from quantum physics.

One might take these fine-tunings as evidence of a benevolent Creator (which is, indeed, discussed in chapters 25 and 26 of this book), or of our living in a simulation crafted by a clever programmer intent on optimising its complexity and degree of interestingness (chapter 27). But most physicists shy away from such deus ex machina and “we is in machina” explanations and seek purely physical reasons for the values of the parameters we measure.

Now let's return for a moment to Kepler's attempt to derive the orbits of the planets from pure geometry. The orbit of the Earth appears, in fact, fine-tuned to permit the existence of life. Were it more elliptical, or substantially closer to or farther from the Sun, persistent liquid water on the surface would not exist, as seems necessary for terrestrial life. The apparent fine-tuning can be explained, however, by the high probability that the galaxy contains a multitude of planetary systems of every possible variety, and such a large ensemble is almost certain to contain a subset (perhaps small, but not void) in which an earthlike planet is in a stable orbit within the habitable zone of its star. Since we can only have evolved and exist in such an environment, we should not be surprised to find ourselves living on one of these rare planets, even though such environments represent an infinitesimal fraction of the volume of the galaxy and universe.

As efforts to explain the particle physics and cosmological parameters have proved frustrating, and theoretical investigations into cosmic inflation and string theory have suggested that the values of the parameters may have simply been chosen at random by some process, theorists have increasingly been tempted to retrace the footsteps of Kepler and step back from trying to explain the values we observe, and instead view them, like the masses and the orbits of the planets, as the result of an historical process which could have produced very different results. The apparent fine-tuning for life is like the properties of the Earth's orbit—we can only measure the parameters of a universe which permits us to exist! If they didn't, we wouldn't be here to do the measuring.

But note that like the parallel argument for the fine-tuning of the orbit of the Earth, this only makes sense if there are a multitude of actually existing universes with different random settings of the parameters, just as only a large ensemble of planetary systems can contain a few like the one in which we find ourselves. This means that what we think of as our universe (everything we can observe or potentially observe within the Hubble volume) is just one domain in a vastly larger “multiverse”, most or all of which may remain forever beyond the scope of scientific investigation.

Now such a breathtaking concept provides plenty for physicists, cosmologists, philosophers, and theologians to chew upon, and macerate it they do in this thick (517 page), heavy (1.2 kg), and expensive (USD 85) volume, which is drawn from papers presented at conferences held between 2001 and 2005. Contributors include two Nobel laureates (Steven Weinberg and Frank Wilczek), and just about everybody else prominent in the multiverse debate, including Martin Rees, Stephen Hawking, Max Tegmark, Andrei Linde, Alexander Vilenkin, Renata Kallosh, Leonard Susskind, James Hartle, Brandon Carter, Lee Smolin, George Ellis, Nick Bostrom, John Barrow, Paul Davies, and many more. The editor's goal was that the papers be written for the intelligent layman: like articles in the pre-dumbed-down Scientific American or “front of book” material in Nature or Science. In fact, the chapters vary widely in technical detail and difficulty; if you don't follow this stuff closely, your eyes may glaze over in some of the more equation-rich chapters.

This book is far from a cheering section for multiverse theories: both sides are presented and, in fact, the longest chapter is that of Lee Smolin, which deems the anthropic principle and anthropic arguments entirely nonscientific. Many of these papers are available in preliminary form for free on the arXiv preprint server; if you can obtain a list of the chapter titles and authors from the book, you can read most of the content for free. Renata Kallosh's chapter contains an excellent example of why one shouldn't blindly accept the recommendations of a spelling checker. On p. 205, she writes “…the gaugino condensate looks like a fractional instant on effect…”—that's supposed to be “instanton”!

 Permalink

Wilson, Daniel H. Where's My Jetpack? New York: Bloomsbury, 2007. ISBN 1-59691-136-0.
One of the best things about the past was that the future was so much cooler then! I mean, here we are, more than halfway through the first decade of the flippin' twenty-first century for heaven's sake, and there's nary a flying car, robot servant, underwater city, orbital hotel, or high-speed slidewalk anywhere in sight, and many of the joyless scolds who pass for visionaries in this timid and unimaginative age think we'd all be better off renouncing technology and going back to being hunter-gatherers—sheesh.

This book, by a technology columnist for Popular Mechanics, wryly surveys the promise and present-day reality of a variety of wonders from the golden age of boundless technological optimism. You may be surprised at the slow yet steady progress being made toward some of these visionary goals (but don't hold your breath waiting for the Star Trek transporter!). I was completely unaware, for example, of the “anti-sleeping pill” modafinil, which, based upon tests by the French Foreign Legion, the UK Ministry of Defence, and the U.S. Air Force, appears to allow maintaining complete alertness for up to 40 hours with no sleep and minimal side effects. And they said programmer productivity had reached its limits!

The book is illustrated with stylish graphics, but there are no photos of the real-world gizmos mentioned in the text, nor are there source citations or links to Web sites describing them—you're on your own following up the details. To answer the question in the title, “Where's My Jetpack?”, look here and here.

 Permalink

LeBlanc, Steven A. with Katherine E. Register. Constant Battles. New York: St. Martin's Griffin, 2003. ISBN 0-312-31090-0.
Steven LeBlanc is the Director of Collections at Harvard University's Peabody Museum of Archaeology and Ethnology. When he began his fieldwork career in the early 1970s, he shared the opinion of most of the archaeologists and anthropologists of his generation and present-day laymen that most traditional societies in the hunter-gatherer and tribal farming eras were largely peaceful and lived in balance with their environments. It was, according to this view, only with the emergence of large chiefdoms and state-level societies that environmental degradation began to appear and mass conflict emerge, culminating in the industrialised slaughter of the 20th century.

But, to the author, as a dispassionate scientist, looking at the evidence on the ground or dug up from beneath it in expeditions in the American Southwest, Turkey, and Peru, and in the published literature, there were many discrepancies from this consensus narrative. In particular, why would “peaceful” farming people build hilltop walled citadels far from their fields and sources of water if not for defensibility? And why would hard-working farmers obsess upon defence were there not an active threat from their neighbours?

Further investigations argue convincingly that the human experience, inherited directly from our simian ancestors, has been one of relentless population growth beyond the carrying capacity of our local environment, degradation of the ecosystem, and the inevitable conflict with neighbouring bands over scarce resources. Ironically, many of the reports of early ethnographers which appeared to confirm perennially-wrong philosopher Rousseau's vision of the “noble savage” were based upon observations of traditional societies which had recently been impacted by contact with European civilisation: population collapse due to exposure to European diseases to which they had no immunity, and increases in carrying capacity of the land thanks to introduction of European technologies such as horses, steel tools, and domestic animals, which had temporarily eased the Malthusian pressure upon these populations and suspended resource wars. But the archaeological evidence is that such wars are the norm, not an aberration.

In fact, notwithstanding the horrific death toll of twentieth century warfare, the rate of violent death among the human population has fallen to an all-time low in the nation-state era. Hunter-gatherer (or, as the authors prefer to call them, “forager”) and tribal farming societies typically lose about 25% of their male population and 5% of the females to warfare with neighbouring bands. Even the worst violence of the nation-state era, averaged over a generation, has a death toll only one eighth this level.

Are present-day humans (or, more specifically, industrialised Western humans) unprecedented despoilers of our environment and aggressors against inherently peaceful native people? Nonsense, argues this extensively documented book. Unsustainable population growth, resource exhaustion, environmental degradation, and lethal conflict with neighbours are as human as bipedalism and speech. Conflict is not inevitable, and civilisation, sustainable environmental policy, and yield-improving and resource-conserving technology are the best course to reducing the causes of conflict. Dreaming of a nonexistent past of peaceful people living in harmony with their environment isn't.

You can read any number of books about military history, from antiquity to the present, without ever encountering a discussion of “Why we fight”—that's the subtitle of this book, and I've never encountered a better source to begin to understand the answer to this question than you'll find here.

 Permalink

Radin, Dean. Entangled Minds. New York: Paraview Pocket Books, 2006. ISBN 1-4165-1677-8.
If you're looking to read just one book about parapsychology, written from the standpoint of a researcher who judges the accumulated evidence from laboratory investigations overwhelmingly persuasive, this is your book. (The closest runner-up, in my estimation, is the same author's The Conscious Universe from 1997.) The evidence for a broad variety of paranormal (or psi) phenomena is presented, much of it from laboratory studies from the 1990s and the present decade, including functional MRI imaging of the brain during psi experiments and the presentiment experiments of Radin and Dick Bierman. The history of parapsychology research is sketched in chapter 4, but the bulk of the text is devoted to recent, well-controlled laboratory work. Anecdotal psi phenomena are mentioned only in passing, and other paranormal mainstays such as UFOs, poltergeists, Bigfoot, and the like are not discussed at all.

For each topic, the author presents a meta-analysis of unimpeached published experimental results, controlling for quality of experimental design and estimating the maximum impact of the “file drawer effect”, calculating how many unpublished experiments with chance results would have to exist to reduce the probability of the reported results to the chance expectation. All of the effects reported are very small, but a meta-meta analysis across all the 1019 experiments studied yields odds against the results being due to chance of 1.3×10¹⁰⁴ to 1.
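For readers curious about the mechanics of such a calculation, here is a minimal sketch in Python of the standard tools involved: Stouffer's method for combining z-scores across studies, and Rosenthal's “fail-safe N” estimate of how many unpublished null results would be needed to dilute the combined effect. The z-scores below are invented for illustration; this is not Radin's actual data or analysis.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical per-study z-scores from a small meta-analysis
zs = [2.1, 1.8, 2.5, 1.2, 2.0]

# Stouffer's method: combined z = sum(z_i) / sqrt(k)
z_combined = sum(zs) / sqrt(len(zs))
p_value = 1 - NormalDist().cdf(z_combined)

# Rosenthal's fail-safe N: how many unpublished null studies (z = 0)
# would be needed to drag the combined result down to bare
# significance (z = 1.645, i.e. p = 0.05, one-tailed)?
fail_safe_n = (sum(zs) / 1.645) ** 2 - len(zs)
```

The larger the fail-safe N relative to the number of published studies, the less plausible it becomes that a file drawer full of null results explains away the reported effect.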

Radin draws attention to the similarities between psi phenomena, where events separated in space and time appear to have a connection which can't be explained by known means of communication, and the entanglement of particles resulting in correlations measured at spacelike separated intervals in quantum mechanics, and speculates that there may be a kind of macroscopic form of entanglement in which the mind is able to perceive information in a shared consciousness field (for lack of a better term) as well as through the senses. The evidence for such a field from the Global Consciousness Project (to which I have contributed software and host two nodes) is presented in chapter 11. Forty pages of endnotes provide extensive source citations and technical details. On several occasions I thought the author was heading in the direction of the suggestion I make in my Notes toward a General Theory of Paranormal Phenomena, but he always veered away from it. Perhaps the full implications of the multiverse are weirder than those of psi!

There are a few goofs. On p. 215, a quote from Richard Feynman is dated from 1990, while Feynman died in 1988. Actually, the quote is from Feynman's 1985 book QED, which was reprinted in 1990. The discussion of the Quantum Zeno Effect on p. 259 states that “the act of rapidly observing a quantum system forces that system to remain in its wavelike, indeterminate state, rather than to collapse into a particular, determined state.” This is precisely backwards—rapidly repeated observations cause the system's state to repeatedly collapse, preventing its evolution. Consequently, this effect is also called the “quantum watched pot” effect, after the aphorism “a watched pot never boils”. On the other side of the balance, the discussion of Bell's theorem on pp. 227–231 is one of the clearest expositions for the layman I have ever read.
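The standard textbook argument (my summary, not the book's) for why frequent measurement freezes a quantum state rather than preserving its indeterminacy goes as follows: for short times the survival probability of an initial state falls off quadratically, so performing N collapsing measurements within a fixed interval T gives

```latex
P_{\mathrm{survive}}(t) = \left|\langle\psi|e^{-iHt/\hbar}|\psi\rangle\right|^2
  \approx 1 - \frac{t^2}{\tau_Z^2},
\qquad
P_N = \left[1 - \frac{(T/N)^2}{\tau_Z^2}\right]^N
  \xrightarrow[N\to\infty]{} 1,
```

where the Zeno time τ_Z is set by the energy uncertainty of the initial state. Each observation collapses the state back to |ψ⟩, which is exactly the point of the correction above: it is the repeated collapse, not a suspension of collapse, that pins the system in place.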

I try to avoid the “Washington read”: picking up a book and immediately checking if my name appears in the index, but in the interest of candour since I am commending this book to your attention, I should note that it does here—I am mentioned on p. 195. If you'd like to experiment with this spooky stuff yourself, try Fourmilab's online RetroPsychoKinesis experiments, which celebrated their tenth anniversary on the Web in January of 2007 and to date have recorded 256,584 experiments performed by 24,862 volunteer subjects.

 Permalink

MacKinnon, Douglas. America's Last Days. New York: Leisure Books, 2007. ISBN 0-8439-5802-2.
There are some books which are perfect for curling up with in front of a fireplace. Then there are those which are best used, ripped apart, for kindling; this is one of the latter. The premise of the novel is that the “Sagebrush Rebellion” gets deadly serious when a secretive group funded by a billionaire nutcase CEO of a major defence contractor plots the secession of two Western U.S. states to re-found a republic on the principles of the Founders, by threatening the U.S. with catastrophe unless the government accedes to their demands. Kind of like the Free State Project, but with nukes.

To liken the characters, dialogue, and plotting of this story to a comic book would be to disparage the comics, some of which, though certainly not all, far surpass this embarrassingly amateurish effort. Although the author's biography states him to have been a former White House and Pentagon “official” (he declines to state in which capacity), he appears to have done his research on how senior government and corporate executives behave and speak from watching reruns of “24”.

Spoiler warning: Plot and/or ending details follow.  
Ask yourself, is it plausible that the CEO of a billion dollar defence contractor would suggest, in an audience consisting not only of other CEOs, but a senior Pentagon staffer and an analyst for the CIA, that a Presidential candidate should be assassinated? Or that the director of the FBI would tell a foreign national in the employ of the arch-villain that the FBI was about to torture one of her colleagues?
Spoilers end here.  
I'm not going to bother with the numerous typos and factual errors—any number of acronyms appear to have been rendered phonetically based upon a flawed memory. The whole book is one big howler, and picking at details is like brushing flies off a decomposing elephant carcass. The writing is formulaic: like beginners' stories in a fiction workshop, each character is introduced with a little paragraph which fingerpaints the cardboard cut-out we're about to meet. Talented writers, or even writers with less talent but more experience, weave what background we need to know seamlessly into the narrative. There is a great deal of gratuitous obscenity, much of which is uttered in contexts where I would expect decorum to prevail. After dragging along for 331 pages devoid of character development and with little action, the whole thing gets wrapped up in the final six preposterously implausible pages. Perhaps, given the content, it's for the best that there is plenty of white space; the average chapter in this mass market paperback is less than five pages in length.

As evidence of the literary erudition and refinement of the political and media elite in the United States, this book bears laudatory blurbs from Larry King, James Carville, Bob Dole, Dee Dee Myers, and Tom Brokaw.

 Permalink

September 2007

Mead, Rebecca. One Perfect Day. New York: Penguin Press, 2007. ISBN 1-59420-088-2.
This book does for the wedding industry what Jessica Mitford's The American Way of Death did for that equally emotion-exploiting industry which preys upon the other end of adult life. According to the American Wedding Study, published annually by the Condé Nast Bridal Group, the average cost of a wedding in the United States in 2006 was US$27,852. Now, as the author points out on p. 25, this number, without any doubt, is overstated—it is compiled by the publisher of three bridal magazines which has every incentive to show the market they reach to be as large as possible, and is based upon a survey of those already in contact in one way or another with the wedding industry; those who skip all of the theatrics and expense and simply go to City Hall or have a quiet ceremony with close family at home or at the local church are “off the radar” in a survey of this kind and would, if included, bring down the average cost. Still, it's the only figure available, and it is representative of what the wedding industry manages to extract from those who engage (if I may use the word) with it.

To folks who have a sense of the time value of money, this is a stunning figure. The average age at which Americans marry has been increasing for decades and now stands at around 26 years for women and 27 years for men. So let's take US$27,000 and, instead of blowing it out on a wedding, assume the couple uses it to open an investment account at age 27, and that they simply leave the money in the account to compound, depositing nothing more until they retire at age 65. If the account has a compounded rate of return of 10% per annum (which is comparable to the long-term return of the U.S. stock market as a whole), then at age 65, that US$27,000 will have grown to just a bit over a million dollars—a pretty nice retirement nest egg as the couple embarks upon their next big change of life, especially since government Ponzi scheme retirement programs are likely to have collapsed by then. (The OpenOffice spreadsheet I used to make this calculation is available for downloading. It also allows you to forecast the alternative of opting for an inexpensive education and depositing the US$19,000 average student loan burden into an account at age 21—that ends up yielding more than 1.2 million at age 65. The idea for this analysis came from Richard Russell's “Rich Man, Poor Man”, which is the single most lucid and important document on lifetime financial planning I have ever read.) The computation assumes the wedding costs are paid in cash by the couple and/or their families. If they're funded by debt, the financial consequences are even more dire, as the couple finds itself servicing a debt in the very years where saving for retirement has the largest ultimate payoff. Ever helpful, in this book we find the Bank of America marketing home equity loans to finance wedding blow-outs.
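The arithmetic behind that comparison is simple lump-sum compounding; here is a quick sketch of the numbers quoted above (my own illustration, assuming annual compounding at 10% and no further deposits):

```python
# Future value of a lump sum: FV = PV * (1 + r)**n
def future_value(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

# US$27,000 not blown on a wedding, compounding from age 27 to 65
wedding_nest_egg = future_value(27_000, 0.10, 65 - 27)

# US$19,000 average student loan burden, invested instead from age 21 to 65
loan_nest_egg = future_value(19_000, 0.10, 65 - 21)
```

The first comes out a bit over a million dollars, the second over 1.2 million—the extra six years of compounding more than make up for the smaller principal.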

So how do you manage to spend twenty-seven thousand bucks on a one day party? Well, as the author documents, writing with a wry sense of irony which never descends into snarkiness, the resourceful wedding business makes it downright easy, and is continually inventing new ways to extract even more money from their customers. We learn the ways of the wedding planner, the bridal shop operator, the wedding media, resorts, photographers and videographers, à la carte “multi-faith” ministers, drive-through Las Vegas wedding chapels, and the bridal apparel industry, including a fascinating look inside one of the Chinese factories where “the product” is made. (Most Chinese factory workers are paid on a piecework basis. So how do you pay the person who removes the pins after lace has been sewed in place? By the weight of pins removed—US$2 per kilogram.)

With a majority of U.S. couples who marry already living together, some having one or more children attending the wedding, the ceremony and celebration, which once marked a major rite of passage and change in status within the community now means…precisely what? Well, not to worry, because the wedding industry has any number of “traditions” for sale to fill the void. The author tracks down the origins of a number of them: the expensive diamond engagement ring (invented by the N. W. Ayer advertising agency in the 1930s for their client, De Beers), the Unity Candle ceremony (apparently owing its popularity to a television soap opera in the 1970s), and the “Apache Indian Prayer”, a favourite of the culturally eclectic, which was actually penned by a Hollywood screenwriter for the 1950 film Broken Arrow.

The bottom line (and this book is very much about that) is that in the eyes of the wedding industry, and in the words of Condé Nast executive Peter K. Hunsinger, the bride is not so much a princess preparing for a magic day and embarking upon the lifetime adventure of matrimony, but (p. 31) “kind of the ultimate consumer, the drunken sailor. Everyone is trying to get to her.” There is an index, but no source citations; you'll have to find the background information on your own.

 Permalink

Barrow, John D. The Infinite Book. New York: Vintage Books, 2005. ISBN 1-4000-3224-5.
Don't panic—despite the title, this book is only 330 pages! Having written an entire book about nothing (The Book of Nothing, May 2001), I suppose it's only natural the author would take on the other end of the scale. Unlike Rudy Rucker's Infinity and the Mind, long the standard popular work on the topic, Barrow spends only about half of the book on the mathematics of infinity. Philosophical, metaphysical, and theological views of the infinite in a variety of cultures are discussed, as well as the history of the infinite in mathematics, including a biographical portrait of the ultimately tragic life of Georg Cantor. The physics of an infinite universe (and whether we can ever determine if our own universe is infinite), the paradoxes of an infinite number of identical copies of ourselves necessarily existing in an infinite universe, the possibility of machines which perform an infinite number of tasks in finite time, whether we're living in a simulation (and how we might discover we are), and the practical and moral consequences of immortality and time travel are also explored.

Mathematicians and scientists have traditionally been very wary of the infinite (indeed, the appearance of infinities is considered an indication of the limitations of theories in modern physics), and Barrow presents any number of paradoxes which illustrate that, as he titles chapter four, “infinity is not a big number”: it is fundamentally different and requires a distinct kind of intuition if nonsensical results are to be avoided. One of the most delightful examples is Zhihong Xia's five-body configuration of point masses which, under Newtonian gravitation, expands to infinite size in finite time. (Don't worry: the finite speed of light, formation of an horizon if two bodies approach too closely, and the emission of gravitational radiation keep this from working in the relativistic universe we inhabit. As the author says [p. 236], “Black holes might seem bad but, like growing old, they are really not so bad when you consider the alternatives.”)

This is an enjoyable and enlightening read, but I found it didn't come up to the standard set by The Book of Nothing and The Constants of Nature (June 2003). Like the latter book, this one is set in a hideously inappropriate font for a work on mathematics: the digit “1” is almost indistinguishable from the letter “I”. If you look very closely at the top serif on the “1” you'll note that it rises toward the right while the “I” has a horizontal top serif. But why go to the trouble of distinguishing the two characters and then making the two glyphs so nearly identical you can't tell them apart without a magnifying glass? In addition, the horizontal bar of the plus sign doesn't line up with the minus sign, which makes equations look awful.

This isn't the author's only work on infinity; he's also written a stage play, Infinities, which was performed in Milan in 2002 and 2003.

 Permalink

[Audiobook] Dickens, Charles. A Tale of Two Cities. (Audiobook, Unabridged). Hong Kong: Naxos Audiobooks, [1859] 2005. ISBN 962-634-359-1.
Like many people whose high school years predated the abolition of western civilisation from the curriculum, I was compelled to read an abridgement of this work for English class, and only revisited it in this audiobook edition let's say…some years afterward. My rather dim memory of the first read was that it was one of the better novels I was forced to read, but my memory of it was tarnished by my life-long aversion to compulsion of every kind. What I only realise now, after fourteen hours and forty-five minutes of listening to this superb unabridged audio edition, is how much injury is done to the masterful prose of Dickens by abridgement. Dickens frequently uses repetition as a literary device, acting like a basso continuo to set a tone of the inexorable playing out of fate. That very repetition is the first thing to go in abridgement, along with lengthy mood-setting descriptive passages, and they are sorely missed. Having now listened to every word Dickens wrote, I don't begrudge a moment I spent doing so—it's worth it.

The novel is narrated or, one might say, performed by British actor Anton Lesser, who adopts different dialects and voice pitches for each character's dialogue. It's a little odd at first to hear French paysans speaking in the accents of rustic Britons, but you quickly get accustomed to it and recognise who's speaking from the voice.

The audible.com download edition is sold in two separate “volumes”: volume 1 (7 hours 17 minutes) and volume 2 (7 hours 28 minutes), each about a 100 megabyte download at MP3 quality. An Audio CD edition (12 discs!) is available.

 Permalink

Lindley, David. Degrees Kelvin. Washington: Joseph Henry Press, 2004. ISBN 0-309-09618-9.
When 17-year-old William Thomson arrived at Cambridge University to study mathematics, Britain had become a backwater of research in science and mathematics—despite the technologically-driven industrial revolution being in full force, little had been done to build upon the towering legacy of Newton, and cutting edge work had shifted to the Continent, principally France and Germany. Before beginning his studies at Cambridge, Thomson had already published three research papers in the Cambridge Mathematical Journal, one of which introduced Fourier's mathematical theory of heat to English-speaking readers, defending it against criticism from those opposed to the highly analytical French style of science which Thomson found congenial to his way of thinking.

Thus began a career which, by the end of the 19th century, made Thomson widely regarded as the preeminent scientist in the world: a genuine scientific celebrity. Over his long career Thomson fused the mathematical rigour of the Continental style of research with the empirical British attitude and made fundamental progress in the kinetic theory of heat, translated Michael Faraday's intuitive view of electricity and magnetism into a mathematical framework which set the stage for Maxwell's formal unification of the two in electromagnetic field theory, and calculated the age of the Earth based upon heat flow from the interior. The latter calculation, in which he estimated only 20 to 40 million years, proved to be wrong, but was so because he had no way to know about radioactive decay as the source of Earth's internal heat: he was explicit in stating that his result assumed no then-unknown source of heat or, as we'd now say, “no new physics”. Such was his prestige that few biologists and geologists whose own investigations argued for a far more ancient Earth stepped up and said, “Fine—so start looking for the new physics!” With Peter Tait, he wrote the Treatise on Natural Philosophy, the first unified exposition of what we would now call classical physics.

Thomson believed that science had to be founded in observations of phenomena, then systematised into formal mathematics and tested by predictions and experiments. To him, understanding the mechanism, ideally based upon a mechanical model, was the ultimate goal. Although acknowledging that Maxwell's equations correctly predicted electromagnetic phenomena, he considered them incomplete because they didn't explain how or why electricity and magnetism behaved that way. Heaven knows what he would have thought of quantum mechanics (which was elaborated after his death in 1907).

He'd probably have been a big fan of string theory, though. Never afraid to add complexity to his mechanical models, he spent two decades searching for a set of 21 parameters which would describe the mechanical properties of the luminiferous ether—what string “landscape” believers might call the moduli and fluxes of the vacuum, and argued for a “vortex atom” model in which extended vortex loops replaced pointlike billiard ball atoms to explain spectrographic results. These speculations proved, as they say, not even wrong.

Thomson was not an ivory tower theorist. He viewed the occupation of the natural philosopher (he disliked the word “physicist”) as that of a problem solver, with the domain of problems encompassing the practical as well as fundamental theory. He was a central figure in the development of the first transatlantic telegraphic cable and invented the mirror galvanometer which made telegraphy over such long distances possible. He was instrumental in defining the units of electricity we still use today. He invented a mechanical analogue computer for computation of tide tables, and a compass compensated for the magnetic distortion of iron and steel warships which became the standard for the Royal Navy. These inventions made him wealthy, and he indulged his love of the sea by buying a 126 ton schooner and inviting his friends and colleagues on voyages.

In 1892, he was elevated to a peerage by Queen Victoria, made Baron Kelvin of Largs, the first scientist ever so honoured. (Numerous scientists, including Newton, and Thomson himself in 1866, had been knighted, but the award of a peerage is an honour of an entirely different order.) When he died in 1907 at age 83, he was buried in Westminster Abbey next to the grave of Isaac Newton. For one who accomplished so much, and was so celebrated in his lifetime, Lord Kelvin is largely forgotten today, remembered mostly for the absolute temperature scale named in his honour and, perhaps, for the Kelvinator company of Detroit, Michigan which used his still-celebrated name to promote their ice-boxes and refrigerators. While Thomson had his hand in much of the creation of the edifice of classical physics in the 19th century, there isn't a single enduring piece of work you can point to which is entirely his. This isn't indicative of any shortcoming on his part, but rather of the maturation of science from rare leaps of insight by isolated geniuses to a collective endeavour by an international community reading each other's papers and building a theory by the collaborative effort of many minds. Science was growing up, and Kelvin's reputation has suffered, perhaps, because his contributions were so broad, as opposed to being identified with a single discovery entirely his own.

This is a delightful biography of a figure whose contributions to our knowledge of the world we live in are little remembered. Lord Kelvin never wavered from his belief that science consisted in collecting the data, developing a model and theory to explain what was observed, and following the implications of that theory to their logical conclusions. In doing so, he was often presciently right and occasionally spectacularly wrong, but he was always true to science as he saw it, which is how most scientists see their profession today.

Amusingly, the chapter titles are:

  1. Cambridge
  2. Conundrums
  3. Cable
  4. Controversies
  5. Compass
  6. Kelvin

 Permalink

Phares, Walid. Future Jihad. New York: Palgrave Macmillan, [2005] 2006. ISBN 1-4039-7511-6.
It seems to me that at the root of the divisive and rancorous dispute over the war on terrorism (or whatever you choose to call it), is an individual's belief in one of the following two mutually exclusive propositions.

  1. There is a broad-based, highly aggressive, well-funded, and effective jihadist movement which poses a dire threat not just to secular and pluralist societies in the Muslim world, but to civil societies in Europe, the Americas, and Asia.
  2. There isn't.

In this book, Walid Phares makes the case for the first of these two statements. Born in Lebanon, Phares immigrated to the United States in 1990, taught Middle East studies at several universities, and is currently a professor at Florida Atlantic University. He is the author of a number of books on Middle East history, and appears as a commentator on media outlets ranging from Fox News to Al Jazeera.

Ever since the early 1990s, the author has been warning of what he argued was a constantly growing jihadist threat, which was being overlooked and minimised by the academic experts to whom policy makers turn for advice, largely due to Saudi-funded and -indoctrinated Middle East Studies programmes at major universities. Meanwhile, Saudi funding also financed the radicalisation of Muslim communities around the world, particularly the large immigrant populations in many Western European countries. In parallel to this top-down approach by the Wahhabi Saudis, the Muslim Brotherhood and its affiliated groups, including Hamas and the Front Islamique du Salut in Algeria, pursued a bottom-up strategy of radicalising the population and building a political movement seeking to take power and impose an Islamic state. Since the Iranian revolution of 1979, a third stream of jihadism has arisen, principally within Shiite communities, promoted and funded by Iran, including groups such as Hezbollah.

The present-day situation is placed in historical context dating back to the original conquests of Mohammed and the spread of Islam from the Arabian peninsula across three continents, and subsequent disasters at the hands of the Mongols and Crusaders, the reconquista of the Iberian peninsula, and the ultimate collapse of the Ottoman Empire and Caliphate following World War I. This allows the reader to grasp the world-view of the modern jihadist which, while seemingly bizarre from a Western standpoint, is entirely self-consistent from the premises whence the believers proceed.

Phares stresses that modern jihadism (which he dates from the abolition of the Ottoman Caliphate in 1923, an event which permitted free-lance, non-state actors to launch jihad unconstrained by the central authority of a caliph), is a political ideology with imperial ambitions: the establishment of a new caliphate and its expansion around the globe. He argues that this is only incidentally a religious conflict: although the jihadists are Islamic, their goals and methods are much the same as believers in atheistic ideologies such as communism. And just as one could be an ardent Marxist without supporting Soviet imperialism, one can be a devout Muslim and oppose the jihadists and intolerant fundamentalists. Conversely, this may explain the curious convergence of the extreme collectivist left and puritanical jihadists: red diaper baby and notorious terrorist Carlos “the Jackal” now styles himself an Islamic revolutionary, and the corpulent caudillo of Caracas has been buddying up with the squinty dwarf of Tehran.

The author believes that since the terrorist strikes against the United States in September 2001, the West has begun to wake up to the threat and begin to act against it, but that far more, both in realising the scope of the problem and acting to avert it, remains to be done. He argues, and documents from post-2001 events, that the perpetrators of future jihadist strikes against the West are likely to be home-grown second generation jihadists radicalised and recruited among Muslim communities within their own countries, aided by Saudi financed networks. He worries that the emergence of a nuclear armed jihadist state (most likely due to an Islamist takeover of Pakistan or Iran developing its own bomb) would create a base of operations for jihad against the West which could deter reprisal against it.

Chapter thirteen presents a chilling scenario of what might have happened had the West not had the wake-up call of the 2001 attacks and begun to mobilise against the threat. The scary thing is that events could still go this way should the threat be real and the West, through fatigue, ignorance, or fear, cease to counter it. While defensive measures at home and direct action against terrorist groups are required, the author believes that only the promotion of democratic and pluralistic civil societies in the Muslim world can ultimately put an end to the jihadist threat. Toward this end, a good first step would be, he argues, for the societies at risk to recognise that they are not at war with “terrorism” or with Islam, but rather with an expansionist ideology with a political agenda which attacks targets of opportunity and adapts quickly to countermeasures.

In all, I found the arguments somewhat over the top, but then, unlike the author, I haven't spent most of my career studying the jihadists, nor read their publications and Web sites in the original Arabic as he has. His warnings of cultural penetration of the West, misdirection by artful propaganda, and infiltration of policy making, security, and military institutions by jihadist covert agents read something like J. Edgar Hoover's Masters of Deceit, but then history, in particular the Venona decrypts, has borne out many of Hoover's claims which were scoffed at when the book was published in 1958. But still, one wonders how a “movement” composed of disparate threads many of whom hate one another (for example, while the Saudis fund propaganda promoting the jihadists, most of the latter seek to eventually depose the Saudi royal family and replace it with a Taliban-like regime; Sunni and Shiite extremists view each other as heretics) can effectively co-ordinate complex operations against their enemies.

A thirty page afterword in this paperback edition provides updates on events through mid-2006. There are some curious things: while transliteration of Arabic and Farsi into English involves a degree of discretion, the author seems very fond of the letter “u”. He writes the name of the leader of the Iranian revolution as “Khumeini”, for example, which I've never seen elsewhere. The book is not well-edited: it occasionally uses “Khomeini”, spells Sayid Qutb's last name as “Kutb” on p. 64, and on p. 287 refers to “Hezbollah” and “Hizbollah” in the same sentence.

The author maintains a Web site devoted to the book, as well as a personal Web site which links to all of his work.

 Permalink

October 2007

Scalzi, John. The Last Colony. New York: Tor, 2007. ISBN 0-7653-1697-8.
This novel concludes the Colonial Union trilogy begun with the breakthrough Old Man's War (April 2005), for which the author won the John W. Campbell Award for Best New Writer, and its sequel, The Ghost Brigades (August 2006), which fleshed out the shadowy Special Forces and set the stage for a looming three-way conflict among the Colonial Union, the Conclave of more than four hundred alien species, and the Earth. As this novel begins, John Perry and Jane Sagan, whom we met in the first two volumes, have completed their military obligations and, now back in normal human bodies, have married and settled into new careers on a peaceful human colony world. They are approached by a Colonial Defense Forces general with an intriguing proposition: to become administrators of a new colony, the first to be formed by settlers from other colony worlds instead of emigrants from Earth.

As we learnt in The Ghost Brigades, when it comes to deceit, disinformation, manipulation, and corruption, the Colonial Union is a worthy successor to its historical antecedents, the Soviet Union and the European Union, and the newly minted administrators quickly discover that all is not what it appears to be and before long find themselves in a fine pickle indeed. The story moves swiftly and plausibly toward a satisfying conclusion I would never have guessed even twenty pages from the end.

In the acknowledgements at the end, the author indicates that this book concludes the adventures of John Perry and Jane Sagan and, for the moment, the Colonial Union universe. He says he may revisit that universe someday, but at present has no plans to do so. So while we wait to see where he goes next, here's a neatly wrapped up and immensely entertaining trilogy to savour. By the way, both Old Man's War and The Ghost Brigades are now available in inexpensive mass-market paperback editions. Unlike The Ghost Brigades, which can stand on its own without the first novel, this book is best read after the first two volumes: you'll really enjoy it and understand the characters much more if you've read them first.

 Permalink

Harsanyi, David. Nanny State. New York: Broadway Books, 2007. ISBN 0-7679-2432-0.
In my earlier review of The Case Against Adolescence (July 2007), I concluded by observing that perhaps the end state of the “progressive” vision of the future is “being back in high school—forever”. Reading this short book (just 234 pages of main text, with 55 pages of end notes, bibliography, and index) may lead you to conclude that view was unduly optimistic. As the author documents, seemingly well-justified mandatory seat belt and motorcycle helmet laws in the 1980s punched through the barrier which used to deflect earnest (or ambitious) politicians urging “We have to do something”. That barrier, the once near-universal consensus that “It isn't the government's business”, had been eroded to a paper-thin membrane by earlier encroachments upon individual liberty and autonomy. Once breached, a torrent of infantilising laws, regulations, and litigation was unleashed, much of it promoted by single-issue advocacy groups and trial lawyers with a direct financial interest in the outcome, and often backed by nonexistent or junk science. The consequence, as the slippery slope became a vertical descent in the nineties and oughties, is the emergence of a society which seems to be evolving into a giant kindergarten, where children never have the opportunity to learn to be responsible adults, and nominal adults are treated as production and consumption modules, wards of a state which regulates every aspect of their behaviour, and surveils their every action.

It seems to me that the author has precisely diagnosed the fundamental problem: that once you accept the premise that the government can intrude into the sphere of private actions for an individual's own good (or, Heaven help us, “for the children”), then there is no limit whatsoever on how far it can go. Why, you might have security cameras going up on every street corner, cities banning smoking in the outdoors, and police ticketing people for listening to their iPods while crossing the street—oh, wait. Having left the U.S. in 1991, I was unaware of the extent of the present madness and the lack of push-back by reasonable people and the citizens who are seeing their scope of individual autonomy shrink with every session of the legislature. Another enlightening observation is that this is not, as some might think, entirely a phenomenon promoted by paternalist collectivists and manifest primarily in moonbat caves such as Seattle, San Francisco, and New York. The puritanical authoritarians of the right are just as willing to get into the act, as egregious examples from “red states” such as Texas and Alabama illustrate.

Just imagine how many more intrusions upon individual choice and lifestyle will be coming if the U.S. opts for socialised medicine. It's enough to make you go out and order a Hamdog!

 Permalink

Holland, Tom. Rubicon. London: Abacus, 2003. ISBN 0-349-11563-X.
Such is the historical focus on the final years of the Roman Republic and the emergence of the Empire that it's easy to forget that the Republic survived for more than four and a half centuries prior to the chaotic events beginning with Caesar's crossing the Rubicon which precipitated its transformation into a despotism, preserving the form but not the substance of the republican institutions. When pondering analogies between Rome and present-day events, it's worth keeping in mind that representative self-government in Rome endured about twice as long as the history of the United States to date. This superb history recounts the story of the end of the Republic, placing the events in historical context and, to an extent I have never encountered in any other work, allowing the reader to perceive the personalities involved and their actions through the eyes and cultural assumptions of contemporary Romans, which were often very different from those of people today.

The author demonstrates how far-flung territorial conquests and the obligations they imposed, along with the corrupting influence of looted wealth flowing into the capital, undermined the institutions of the Republic which had, after all, evolved to govern just a city-state and limited surrounding territory. Whether a republican form of government could work on a large scale was a central concern of the framers of the U.S. Constitution, and this narrative graphically illustrates why their worries were well-justified and raises the question of whether a modern-day superpower can resist the same drift toward authoritarian centralism which doomed consensual government in Rome.

The author leaves such inference and speculation to the reader. Apart from a few comments in the preface, he simply recounts the story of Rome as it happened and doesn't draw lessons from it for the present. And the story he tells is gripping; it may be difficult to imagine, but this work of popular history reads like a thriller (I mean that entirely as a compliment—historical integrity is never sacrificed in the interest of storytelling), and he makes the complex and often contradictory characters of figures such as Sulla, Cato, Cicero, Mark Antony, Pompey, and Marcus Brutus come alive and the shifting alliances among them comprehensible. Source citations are almost entirely to classical sources although, as the author observes, ancient sources, though often referred to as primary, are not necessarily so: for example, Plutarch was born 90 years after the assassination of Caesar. A detailed timeline lists events from the foundation of Rome in 753 B.C. through the death of Augustus in A.D. 14.

A U.S. edition is now available.

 Permalink

Buckley, Christopher. Thank You for Smoking. New York: Random House, 1994. ISBN 0-8129-7652-5.
Nick Naylor lies for a living. As chief public “smokesman” for the Big Tobacco lobby in Washington, it's his job to fuzz the facts, deflect the arguments, and subvert the sanctimonious neo-prohibitionists, all with a smile. As in Buckley's other political farces, it seems to be an axiom that no matter how far down you are on the moral ladder in Washington D.C., there are always an infinite number of rungs below you, all occupied, mostly by lawyers. Nick's idea of how to sidestep government advertising bans and make cigarettes cool again raises his profile to such an extent that some of those on the rungs below him start grasping for him with their claws, tentacles, and end-effectors, with humourous and delightfully ironic (at least if you aren't Nick) consequences, and then when things have gotten just about as bad as they can get, the FBI jumps in to demonstrate that things are never as bad as they can get.

About a third of the way through reading this book, I happened to see the 2005 movie made from it on the illuminatus. I've never done this before—watch a movie based on a book I was currently reading. The movie was enjoyable and very funny, and seeing it didn't diminish my enjoyment of the book one whit; this is a wickedly hilarious book which contains dozens of laugh-out-loud episodes and subplots that didn't make it into the movie.

 Permalink

Chesterton, Gilbert K. What's Wrong with the World. San Francisco: Ignatius Press, [1910] 1994. ISBN 0-89870-489-8.
Writing in the first decade of the twentieth century in his inimitable riddle-like paradoxical style, Chesterton surveys the scene around him as Britain faced the new century and didn't find much to his satisfaction. A thorough traditionalist, he finds contemporary public figures, both Conservative and Progressive/Socialist, equally contemptible, essentially disagreeing only upon whether the common man should be enslaved and exploited in the interest of industry and commerce, or by an all-powerful monolithic state. He further deplores the modernist assumption, shared by both political tendencies, that once a change in society is undertaken, it must always be pursued: “You can't put the clock back”. But, as he asks, why not? “A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.” (p. 33). He urges us not to blindly believe in “progress” or “modernisation”, but rather to ask whether these changes have made things better or worse and, if worse, to undertake to reverse them.

In five sections, he surveys the impact of industrial society on the common man, of imperialism upon the colonisers and colonised, of feminism upon women and the family, of education upon children, and of collectivism upon individuality and the human spirit. In each he perceives the pernicious influence of an intellectual elite upon the general population who, he believes, are far more sensible about how to live their lives than those who style themselves their betters. For a book published almost a hundred years ago, this analysis frequently seems startlingly modern (although I'm not sure that's a word Chesterton would take as a compliment) and relevant to the present-day scene. While some of the specific issues (for example, women's suffrage, teaching of classical languages in the schools, and eugenics) may seem quaint, much of the last century has demonstrated the disagreeable consequences of the “progress” he discusses and accurately anticipated.

This reprint edition includes footnotes which explain Chesterton's many references to contemporary and historical figures and events which would have been familiar to his audience in 1910 but may be obscure to readers almost a century later. A free electronic edition (but without the explanatory footnotes) is available from Project Gutenberg.

 Permalink

Cadbury, Deborah. Space Race. London: Harper Perennial, 2005. ISBN 0-00-720994-0.
This is an utterly compelling history of the early years of the space race, told largely through the parallel lives of mirror-image principals Sergei Korolev (anonymous Chief Designer of the Soviet space program, and beforehand slave labourer in Stalin's Gulag) and Wernher von Braun, celebrity driving force behind the U.S. push into space, previously a Nazi party member, SS officer, and user of slave labour to construct his A-4/V-2 weapons. Drawing upon material not declassified by the United States until the 1980s and revealed after the collapse of the Soviet Union, the early years of these prime movers of space exploration are illuminated, along with how they were both exploited by their respective governments and how deftly they manipulated those governments in turn. I have never seen the story of the end-game between the British, Americans, and Soviets to spirit the V-2 hardware, technology, and team from Germany in the immediate post-surrender chaos told so well in a popular book. The extraordinary difficulties of trying to get things done in the Soviet command economy are also described superbly, and underline how inspired and indefatigable Korolev must have been to accomplish what he did.

Although the book covers the 1930s through the 1969 Moon landing, the main focus is on the competition between the U.S. and the Soviet Union between the end of World War II and the mid-1960s. Out of 345 pages of main text, the first 254 are devoted to the period ending with the flights of Yuri Gagarin and Alan Shepard in 1961. But then, that makes sense, given what we now know about the space race (and you'll know, if you don't already, after reading this book). Although nobody in the West knew at the time, the space race was really over when the U.S. made the massive financial commitment to Project Apollo and the Soviets failed to match it. Not only was Korolev compelled to work within budgets cut to half or less of his estimated requirements, the modest Soviet spending on space was divided among competing design bureaux whose chief designers engaged in divisive and counterproductive feuds. Korolev's N-1 Moon rocket used 30 first stage engines designed by a jet engine designer with modest experience with rockets because Korolev and supreme Soviet propulsion designer Valentin Glushko were not on speaking terms, and he was forced to test the whole grotesque lash-up for the first time in flight, as there wasn't the money for a ground test stand for the complete first stage. Unlike the “all-up” testing of the Apollo-Saturn program, where each individual component was exhaustively ground tested in isolation before being committed to flight, it didn't work. It wasn't just the Soviets who took risks in those wild and woolly days, however. When an apparent fuel leak threatened to delay the launch of Explorer-I, the U.S. reply to Sputnik, brass in the bunker asked for a volunteer “without any dependants” to go out and scope out the situation beneath the fully-fuelled rocket, possibly leaking toxic hydrazine (p. 175).

There are a number of factual goofs. I'm not sure the author fully understands orbital mechanics which is, granted, a pretty geeky topic, but one which matters when you're writing about space exploration. She writes that the Jupiter C re-entry experiment reached a velocity (p. 154) of 1600 mph (actually 16,000 mph), that Yuri Gagarin's Vostok capsule orbited (p. 242) at 28,000 mph (actually 28,000 km/h), and that if Apollo 8's service module engine had failed to fire after arriving at the Moon (p. 325), the astronauts “would sail on forever, lost in space” (actually, they were on a “free return” trajectory, which would have taken them back to Earth even if the engine failed—the critical moment was actually when they fired the same engine to leave lunar orbit on Christmas Day 1968, which success caused James Lovell to radio after emerging from behind the Moon after the critical burn, “Please be informed, there is a Santa Claus”). Orbital attitude (the orientation of the craft) is confused with altitude (p. 267), and retro-rockets are described as “breaking rockets” (p. 183)—let's hope not! While these and other quibbles will irk space buffs, they shouldn't deter you from enjoying this excellent narrative.

A U.S. edition is now available. The author earlier worked on the production of a BBC docu-drama also titled Space Race, which is now available on DVD. Note, however, that this is a PAL DVD with a region code of 2, and will not play unless you have a compatible DVD player and television; I have not seen this programme.

 Permalink

November 2007

Krakauer, Jon. Into Thin Air. New York: Anchor Books, [1997] 1999. ISBN 0-385-49478-5.
It's amazing how much pain and suffering some people will endure in order to have a perfectly awful time. In 1996, the author joined a guided expedition to climb Mount Everest, on assignment from Outside magazine to report on the growing commercialisation of Everest, with guides taking numerous people, many inexperienced in alpinism, up the mountain every season. On May 10th, 1996, he reached the summit where, exhausted and debilitated by hypoxia and other effects of extreme altitude (although using supplementary oxygen), he found “I just couldn't summon the energy to care” (p. 7). This feeling of “whatever” while standing on the roof of the world was, nonetheless, the high point of the experience, which quickly turned into a tragic disaster. While the climbers were descending from the summit to their highest camp, a storm, not particularly violent by Everest standards, reduced visibility to near zero and delayed progress until many climbers had exhausted their supplies of bottled oxygen. Of the six members of the expedition Krakauer joined who reached the summit, four died on the mountain, including the experienced leader of the team. In all, eight people died as a result of that storm, including the leader of another expedition which reached the summit that day.

Before joining the Everest expedition, the author had had extensive technical climbing experience but had never climbed as high as the Base Camp on Mount Everest: 17,600 feet. Most of the clients of his and other expeditions had far less mountaineering experience than the author. The wisdom of encouraging people with limited qualifications but large bank balances to undertake a potentially deadly adventure underlies much of the narrative: we encounter a New York socialite having a Sherpa haul a satellite telephone up the mountain to stay in touch from the highest camp. The supposed bond between climbers jointly confronting the hazards of a mountain at high altitude is called into question on several occasions: a Japanese expedition ascending from the Tibetan side via the Northeast Ridge passed three disabled climbers from an Indian expedition and continued on to the summit without offering to share food, oxygen, or water, nor to attempt a rescue: all of the Indians died on the mountain.

This is a disturbing account of adventure at the very edge of personal endurance, and the difficult life-and-death choices people make under such circumstances. A 1999 postscript in this paperback edition is a rebuttal to the alternative presentation of events in The Climb, which I have not read.

 Permalink

Siegel, Jerry and John Forte. Tales of the Bizarro World. New York: DC Comics, [1961, 1962] 2000. ISBN 1-56389-624-9.
In 1961, the almost Euclidean logic of the Superman comics went around a weird bend in reality, foretelling other events to transpire in that decade. Superman fans found their familiar axioms of super powers and kryptonite dissolving into pulsating phosphorescent Jello on the Bizarro World, populated by imperfect and uniformly stupid replicas of Superman, Lois Lane, and other denizens of Metropolis created by a defective duplicator ray. Everything is backwards, or upside-down, or inside-out on the Bizarro World, which itself is cubical, not spherical.

These stories ran in Adventure Comics in 1961 and 1962 and then disappeared into legend, remaining out of print for more than 35 years until this compilation was published. Not only are all of the Bizarro stories here, there are profiles of the people who created Bizarro, and even an interview with Bizarro himself.

I fondly remember the Bizarro stories from the odd comic books I came across in my youth, and looked forward to revisiting them, but I have to say that what seemed exquisitely clever in small doses to a twelve-year-old may seem a bit strained and tedious in a 190-page collection read by somebody, er…a tad more mature. Still, ya gotta chuckle at Bizarro starting a campfire (p. 170) by rubbing two boy scouts together—imagine the innuendos which would be read into that today!

 Permalink

Albrecht, Katherine and Liz McIntyre. Spychips. Nashville: Nelson Current, 2005. ISBN 0-452-28766-9.
Imagine a world in which every manufactured object, and even living creatures such as pets, livestock, and eventually people, had an embedded tag with a unique 96-bit code which uniquely identified it among all macroscopic objects on the planet and beyond. Further, imagine that these tiny, unobtrusive and non-invasive tags could be interrogated remotely, at a distance of up to several metres, by safe radio frequency queries which would provide power for them to transmit their identity. What could you do with this? Well, a heck of a lot. Imagine, for example, a refrigerator which sensed its entire contents, and was able to automatically place an order on the Internet for home delivery of whatever was running short, or warned you that the item you'd just picked up had passed its expiration date. Or think about breezing past the checkout counter at the Mall-Mart with a cart full of stuff without even slowing down—all of the goods would be identified by the portal at the door, and the total charged to the account designated by the tag in your customer fidelity card. When you're shopping, you could be automatically warned when you pick up a product which contains an ingredient to which you or a member of your family is allergic. And if a product is recalled, you'll be able to instantly determine whether you have one of the affected items, if your refrigerator or smart medicine cabinet hasn't already done so. The benefits just go on and on…imagine.

This is the vision of an “Internet of Things”, in which all tangible objects are, in a real sense, on-line in real-time, with their position and status updated by ubiquitous and networked sensors. This is not a utopian vision. In 1994 I sketched Unicard, a unified personal identity document, and explored its consequences; people laughed: “never happen”. But just five years later, the Auto-ID Labs were formed at MIT, dedicated to developing a far more ubiquitous identification technology. With the support of major companies such as Procter & Gamble, Philip Morris, Wal-Mart, Gillette, and IBM, and endorsement by organs of the United States government, technology has been developed and commercialised to implement tagging everything and tracking its every movement.

As I alluded to obliquely in Unicard, this has its downsides. In particular, the utter and irrevocable loss of all forms of privacy and anonymity. From the moment you enter a store, or your workplace, or any public space, you are tracked. When you pick up a product, the amount of time you look at it before placing it in your shopping cart or returning it to the shelf is recorded (and don't even think about leaving the store without paying for it and having it logged to your purchases!). Did you pick the bargain product? Well, you'll soon be getting junk mail and electronic coupons on your mobile phone promoting the premium alternative with a higher profit margin to the retailer. Walk down the street, and any miscreant with a portable tag reader can “frisk” you without your knowledge, determining the contents of your wallet, purse, and shopping bag, and whether you're wearing a watch worth snatching. And even when you discard a product, that's a public event: garbage voyeurs can drive down the street and correlate what you throw out by the tags of items in your trash and the tags on the trashbags they're in.

“But we don't intend to do any of that”, the proponents of radio frequency identification (RFID) protest. And perhaps they don't, but if it is possible and the data are collected, who knows what will be done with it in the future, particularly by governments already installing surveillance cameras everywhere. If they don't have the data, they can't abuse them; if they do, they may; who do you trust with a complete record of everywhere you go, and everything you buy, sell, own, wear, carry, and discard?

This book presents, in a form that non-specialists can understand, the RFID-enabled future which manufacturers, retailers, marketers, academics, and government are co-operating to foist upon their consumers, clients, marks, coerced patrons, and subjects respectively. It is not a pretty picture. Regrettably, this book could be much better than it is. It's written in a kind of breathy muckraking rant style, with numerous paragraphs like (p. 105):

Yes, you read that right, they plan to sell data on our trash. Of course. We should have known that BellSouth was just another megacorporation waiting in the wings to swoop down on the data revealed once its fellow corporate cronies spychip the world.
I mean, I agree entirely with the message of this book, having warned of modest steps in that direction eleven years before its publication, but prose like this makes me feel like I'm driving down the road in a 1964 Vance Packard getting all righteously indignant about things we'd be better advised to coldly and deliberately draw our plans against. This shouldn't be so difficult, in principle: polls show that once people grasp the potential invasion of privacy possible with RFID, between 2/3 and 3/4 oppose it. The problem is that it's being deployed via stealth, starting with bulk pallets in the supply chain and, once proven there, migrated down to the individual product level.

Visibility is a precious thing, and one of the most insidious properties of RFID tags is their very invisibility. Is there a remotely-powered transponder sandwiched into the sole of your shoe, linked to the credit card number and identity you used to buy it, which “phones home” every time you walk near a sensor which activates it? Who knows? See how the paranoia sets in? But it isn't paranoia if they're really out to get you. And they are—for our own good, naturally, and for the children, as always.

In the absence of a policy fix for this (and the extreme unlikelihood of any such being adopted given the natural alliance of business and the state in tracking every move of their customers/subjects), one extremely handy technical fix would be a broadband receiver, perhaps a software-defined radio, which listened on the frequency bands used by RFID tag readers and snooped on the transmissions of tags back to them. Passing the data stream to a package like RFDUMP would allow decoding the visible information in the RFID tags which were detected. First of all, this would allow people to know if they were carrying RFID tagged products unbeknownst to them. Second, a portable sniffer connected to a PDA would identify tagged products in stores, which customers could take to customer service desks and ask to be returned to the shelves because they were unacceptable for privacy reasons. After this happens several tens of thousands of times, it may have an impact, given the razor-thin margins in retailing. Finally, there are “active measures”. These RFID tags have large antennas which are connected to a super-cheap and hence fragile chip. Once we know the frequency it's talking on, why we could…. But you can work out the rest, and since these are all unlicensed radio bands, there may be nothing wrong with striking an electromagnetic blow for privacy.

EMP,
EMP!
Don't you put,
your tag on me!

 Permalink

[Audiobook] Bryson, Bill. A Short History of Nearly Everything (Audiobook, Unabridged). Westminster, MD: Books on Tape, 2003. ISBN 0-7366-9320-3.
What an astonishing achievement! Toward the end of the 1990s, Bill Bryson, a successful humorist and travel writer, found himself on a flight across the Pacific and, looking down on the ocean, suddenly realised that he didn't know how it came to be, how it affected the clouds above it, what lived in its depths, or hardly anything else about the world and universe he inhabited, despite having lived in an epoch in which science made unprecedented progress in understanding these and many other things. Shortly thereafter, he embarked upon a three year quest of reading popular science books and histories of science, meeting with their authors and with scientists in numerous fields all around the globe, and trying to sort it all out into a coherent whole.

The result is this stunning book, which neatly packages the essentials of human knowledge about the workings of the universe, along with how we came to know all of these things and the stories of the often fascinating characters who figured it all out, into one lucid, engaging, and frequently funny package. Unlike many popular works, Bryson takes pains to identify what we don't know, of which there is a great deal, not just in glamorous fields like particle physics but in stuffy endeavours such as plant taxonomy. People who find themselves in Bryson's position at the outset—entirely ignorant of science—can, by reading this single work, end up knowing more about more things than even most working scientists who specialise in one narrow field. The scope is encyclopedic: from quantum mechanics and particles to galaxies and cosmology, with chemistry, the origin of life, molecular biology, evolution, genetics, cell biology, paleontology and paleoanthropology, geology, meteorology, and much, much more, all delightfully told, with only rare errors, and with each put into historical context. I like to think of myself as reasonably well informed about science, but as I listened to this audiobook over a period of several weeks on my daily walks, I found that every day, in the 45 to 60 minutes I listened, there was at least one and often several fascinating things of which I was completely unaware.

This audiobook is distributed in three parts, totalling 17 hours and 48 minutes. The book is read by British narrator Richard Matthews, who imparts an animated and light tone appropriate to the text. He does, however, mispronounce the names of several scientists, for example physicists Robert Dicke (whose last name he pronounces “Dick”, as opposed to the correct “Dickey”) and Richard Feynman (“Fane-man” instead of “Fine-man”), and when he attempts to pronounce French names or phrases, his accent is fully as affreux as my own, but these are minor quibbles which hardly detract from an overall magnificent job. If you'd prefer to read the book, it's available in paperback now, and there's an illustrated edition, which I haven't seen. I would probably never have considered this book, figuring I already knew it all, had I not read Hugh Hewitt's encomium to it and excerpts therefrom he included (parts 1, 2, 3).

 Permalink

Walton, Jo. Farthing. New York: Tor, 2006. ISBN 0-7653-5280-X.
This is an English country house murder mystery in the classic mould, but set in an alternative history timeline in which the European war of 1939 ended in the “Peace with Honour”, when Britain responded to Rudolf Hess's flight to Scotland in May 1941 with a diplomatic mission which ended the war, with Hitler ceding the French colonies in Africa to Britain in return for a free hand to turn east and attack the Soviet Union. In 1949, when the story takes place, the Reich and the Soviets are still at war, in a seemingly endless and bloody stalemate. The United States, never drawn into the war, remains at peace, adopting an isolationist stance under President Charles Lindbergh; continental Europe has been consolidated into the Greater Reich.

When the architect of the peace between Britain and the Reich is found murdered with a yellow star of David fixed to his chest with a dagger, deep currents (political, familial, financial, racial, and sexual) converge to muddle a situation which a stolid although atypical Scotland Yard inspector must sort through under political pressure and a looming deadline.

The story is told in alternating chapters, the odd numbered being the first-person narrative of one of the people in the house at the time of the murder and the even numbered in the voice of an omniscient narrator following the inspector. We can place the story precisely in (alternative) time: on p. 185 the year is given as 1949, and on p. 182 we receive information which places the murder as on the night of 7–8 May of that year. I'm always impressed when an author makes the effort to get the days of the week right in an historical novel, and that's the case here. There is, however, a little bit of bad astronomy. On p. 160, as the inspector is calling it a day, we read, “It was dusk; the sky was purple and the air was cool. … Venus was just visible in the east.” Now, I'm impressed, because at dusk on that day Venus was visible near the horizon—that is admirable atmosphere and attention to detail! But Venus can never be visible in the East at dusk: it's an inner planet and never gets further than about 47° from the Sun, so in the evening sky it's always in the West; on that night, near Winchester England, it would be near the west-northwest horizon, with Mercury higher in the sky.
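For the geometrically inclined, the elongation limit is a one-line computation. Treating both orbits as circles (a simplification; orbital eccentricity nudges the true maximum up to about 47°), the greatest angle Venus can make with the Sun as seen from Earth is the arcsine of the ratio of the two orbital radii:

```python
import math

# Mean orbital radii in astronomical units (circular-orbit approximation)
r_venus = 0.723
r_earth = 1.000

# At maximum elongation the Earth-Venus sight line is tangent to Venus's
# orbit, so the Sun-Earth-Venus angle is arcsin(r_venus / r_earth).
max_elongation = math.degrees(math.asin(r_venus / r_earth))
print(f"Maximum elongation: {max_elongation:.1f} degrees")  # about 46 degrees
```

Since Venus can never stray farther from the Sun than this, an evening-sky Venus must always stand on the sunset (western) side of the heavens.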

The dénouement is surprising and chilling at the same time. The story illustrates how making peace with tyranny can lead to successive, seemingly well-justified, compromises which can inoculate the totalitarian contagion within even the freest and most civil of societies.

 Permalink

Sinclair, Upton. Dragon's Teeth. Vol. 1. Safety Harbor, FL: Simon Publications, [1942] 2001. ISBN 1-931313-03-2.
Between 1940 and 1953, Upton Sinclair published a massive narrative of current events, spanning eleven lengthy novels, in which real-world events between 1913 and 1949 were seen through the eyes of Lanny Budd, scion of a U.S. munitions-manufacturing family turned art dealer and playboy husband of an heiress whose fortune dwarfs his own. His extended family and contacts in the art and business worlds provide a window into the disasters and convulsive changes which beset Europe and America in two world wars and the period between them and afterward.

These books were huge bestsellers in their time, and this one won the Pulitzer Prize, but today they are largely forgotten. Simon Publications have made them available in facsimile reprint editions, with each original novel published in two volumes of approximately 300 pages each. This is the third novel in the saga, covering the years 1929–1934; this volume, comprising the first three books of the novel, begins shortly after the Wall Street crash of 1929 and ends with the Nazi consolidation of power in Germany after the Reichstag fire in 1933.

It's easy to understand both why these books were such a popular and critical success at the time and why they have since been largely forgotten. In each book, we see events of a few years before the publication date from the perspective of socialites and people in a position of power (in this book Lanny Budd meets “Adi” Hitler and gets to see both his attraction and irrationality first-hand), but necessarily the story is written without the perspective of knowing how it's going to come out, which makes it “current events fiction”, not historical fiction in the usual sense. Necessarily, that means it's going to be dated not long after the books scroll off the bestseller list. Also, the viewpoint characters are mostly rather dissipated and shallow idlers, wealthy dabblers in “pink” or “red” politics, who, with hindsight, seem not so dissimilar to the feckless politicians in France and Britain who did nothing as Europe drifted toward another sanguinary catastrophe.

Still, I enjoyed this book. You get the sense that this is how the epoch felt to the upper-class people who lived through it, and it was written so shortly after the events it chronicles that it avoids the simplifications that retrospection engenders. I will certainly read the second half of this reprint, which currently sits on my bookshelf, but I doubt if I'll read any of the others in the epic.

 Permalink

Bernstein, Jeremy. Plutonium. Washington: Joseph Henry Press, 2007. ISBN 0-309-10296-0.
When the Manhattan Project undertook to produce a nuclear bomb using plutonium-239, the world's inventory of the isotope was on the order of a microgram, all produced by bombarding uranium with neutrons produced in cyclotrons. It wasn't until August of 1943 that enough had been produced to be visible under a microscope. When, in that month, the go-ahead was given to build the massive production reactors and separation plants at the Hanford site on the Columbia River, virtually nothing was known of the physical properties, chemistry, and metallurgy of the substance they were undertaking to produce. In fact, it was only in 1944 that it was realised that the elements starting with thorium formed a second group of “rare earth” elements: the periodic table before World War II had uranium in the column below tungsten and predicted that the chemistry of element 94 would resemble that of osmium. When the large-scale industrial production of plutonium was undertaken, neither the difficulty of separating the element from the natural uranium matrix in which it was produced nor the contamination with Pu-240 which would necessitate an implosion design for the plutonium bomb were known. Notwithstanding, by the end of 1947 a total of 500 kilograms of the stuff had been produced, and today there are almost 2000 metric tons of it, counting both military inventories and that produced in civil power reactors, which crank out about 70 more metric tons a year.

These are among the fascinating details gleaned and presented in this history and portrait of the most notorious of artificial elements by physicist and writer Jeremy Bernstein. He avoids getting embroiled in the building of the bomb, which has been well-told by others, and concentrates on how scientists around the world stumbled onto nuclear fission and transuranic elements, puzzled out what they were seeing, and figured out the bizarre properties of what they had made. Bizarre is not too strong a word for the chemistry and metallurgy of plutonium, which remains an active area of research today with much still unknown. When you get that far down on the periodic table, both quantum mechanics and special relativity get into the act (as they start to do even with gold), and you end up with six allotropic phases of the metal (in two of which volume decreases with increasing temperature), a melting point of just 640° C and an anomalous atomic radius which indicates its 5f electrons are neither localised nor itinerant, but somewhere in between.

As the story unfolds, we meet some fascinating characters, including Fritz Houtermans, whose biography is such that, as the author notes (p. 86), “if one put it in a novel, no one would find it plausible.” We also meet stalwarts of the elite 26-member UPPU Club: wartime workers at Los Alamos whose exposure to plutonium was sufficient that it continues to be detectable in their urine. (An epidemiological study of these people which continues to this day has found no elevated rates of mortality, which is not to say that plutonium is not a hideously hazardous substance.)

The text is thoroughly documented in the end notes, and there is an excellent index; the entire book is just 194 pages. I have two quibbles. On p. 110, the author states of the Little Boy gun-assembly uranium bomb dropped on Hiroshima, “This is the only weapon of this design that was ever detonated.” Well, I suppose you could argue that it was the only such weapon of that precise design detonated, but the implication is that it was the first and last gun-type bomb to be detonated, and this is not the case. The U.S. W9 and W33 weapons, among others, were gun-assembly uranium bombs, which between them were tested three times at the Nevada Test Site. The price for plutonium-239 quoted on p. 155, US$5.24 per milligram, seems to imply that the plutonium for a critical mass of about 6 kg costs about 31 million dollars. But this is because the price quoted is for 99–99.99% isotopically pure Pu-239, which has been electromagnetically separated from the isotopic mix you get from the production reactor. Weapons-grade plutonium can have up to 7% Pu-240 contamination, which doesn't require the fantastically expensive isotope separation phase, just chemical extraction of plutonium from reactor fuel. In fact, you can build a bomb from so-called “reactor-grade” plutonium—the U.S. tested one in 1962.
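The arithmetic behind that 31-million-dollar figure is simple enough to check (taking the roughly 6 kg critical mass at face value):

```python
# Quoted price for isotopically pure Pu-239 (p. 155), in US dollars per milligram
price_per_mg = 5.24

# A critical mass of plutonium is roughly 6 kg; convert to milligrams
critical_mass_mg = 6 * 1_000_000

cost = price_per_mg * critical_mass_mg
print(f"${cost:,.0f}")  # about $31 million
```

which is, of course, precisely why weapons programmes settle for chemically-extracted plutonium with its few percent of Pu-240 rather than paying for electromagnetic isotope separation.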

 Permalink

December 2007

Johnson, Steven. The Ghost Map. New York: Riverhead Books, 2006. ISBN 1-59448-925-4.
From the dawn of human civilisation until sometime in the nineteenth century, cities were net population sinks—the increased mortality from infectious diseases, compounded by the unsanitary conditions, impure water, and food transported from the hinterland and stored without refrigeration so shortened the lives of city-dwellers (except for the ruling class and the wealthy, a small fraction of the population) that a city's population was maintained only by a constant net migration to it from the countryside. In densely-packed cities, not only does an infected individual come into contact with many more potential victims than in a rural environment, but highly virulent strains of infectious agents which would “burn out” due to rapidly killing their hosts in farm country or a small village can prosper in a city, since each infected host still has the opportunity to infect many others before succumbing. Cities can be thought of as Petri dishes for evolving killer microbes.

No civic culture medium was as hospitable to pathogens as London in the middle of the 19th century. Its population, 2.4 million in 1851, had exploded from just one million at the start of the century, and all of these people had been accommodated in a sprawling metropolis almost devoid of what we would consider a public health infrastructure. Sewers, where they existed, were often open and simply dumped into the Thames, whence other Londoners drew their drinking water, downstream. Other residences dumped human waste in cesspools, emptied occasionally (or maybe not) by “night-soil men”. Imperial London was a smelly and a deadly place. Observing it first-hand is what motivated Friedrich Engels to document and deplore The Condition of the Working Class in England (January 2003).

Among the diseases which cut down inhabitants of cities, one of the most feared was cholera. In 1849, an outbreak killed 14,137 in London, and nobody knew when or where it might strike next. The prevailing theory of disease at this epoch was that infection was caused by and spread through “miasma”: contaminated air. Given how London stank and how deadly it was to its inhabitants, this would have seemed perfectly plausible to people living before the germ theory of disease was propounded. Edwin Chadwick, head of the General Board of Health in London at the epoch, went so far as to assert (p. 114) “all smell is disease”. Chadwick was, in many ways, one of the first advocates and implementers of what we have come to call “big government”—that the state should take an active role in addressing social problems and providing infrastructure for public health. Relying upon the accepted “miasma” theory and empowered by an act of Parliament, he spent the 1840s trying to eliminate the stink of the cesspools by connecting them to sewers which drained their offal into the Thames. Chadwick was, by doing so, to provide one of the first demonstrations of that universal concomitant of big government, unintended consequences: “The first defining act of a modern, centralized public-health authority was to poison an entire urban population.” (p. 120).

When, in 1854, a singularly virulent outbreak of cholera struck the Soho district of London, physician and pioneer in anæsthesia John Snow found himself at the fulcrum of a revolution in science and public health toward which he had been working for years. Based upon his studies of the 1849 cholera outbreak, Snow had become convinced that the pathogen spread through contamination of water supplies by the excrement of infected individuals. He had published a monograph laying out this theory in 1849, but it swayed few readers from the prevailing miasma theory. He was continuing to document the case when cholera exploded in his own neighbourhood. Snow's mind was not only prepared to consider a waterborne infection vector, he was also one of the pioneers of the emerging science of epidemiology: he was a founding member of the London Epidemiological Society in 1850. Snow's real-time analysis of the epidemic caused him to believe that the vector of infection was contaminated water from the Broad Street pump, and his persuasive presentation of the evidence to the Board of Governors of St. James Parish caused them to remove the handle from that pump, after which the contagion abated. (As the author explains, the outbreak was already declining at the time, and in all probability the water from the Broad Street pump was no longer contaminated then. However, due to subsequent events and discoveries made later, had the handle not been removed there would have likely been a second wave of the epidemic, with casualties comparable to the first.)

Afterward, Snow, with the assistance of initially-sceptical clergyman Henry Whitehead, whose intimate knowledge of the neighbourhood and its residents allowed compiling the data which not only confirmed Snow's hypothesis but identified what modern epidemiologists would call the “index case” and “vector of contagion”, revised his monograph to cover the 1854 outbreak, illustrated by a map of its casualties which has become a classic of on-the-ground epidemiology and the graphical presentation of data. Most brilliant was Snow's use (and apparent independent invention) of a Voronoi diagram to show, street by street, the boundary of the area closer to the Broad Street pump than to any other in the neighbourhood, with distance measured not in Euclidean space but in walking time. (Oddly, the complete map with this crucial detail does not appear in the book: only a blow-up of the central section without the boundary. The full map is here; depending on your browser, you may have to click on the map image to display it at full resolution. The dotted and dashed line is the Voronoi cell enclosing the Broad Street pump.)
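Snow's walking-time partition is, at bottom, just a nearest-site assignment: every address belongs to the cell of the pump its residents can reach soonest. A toy sketch in Python (with invented coordinates, and straight-line distance standing in for Snow's walking time along the streets):

```python
import math

# Hypothetical pump locations; Snow worked with walking time, not coordinates
pumps = {
    "Broad Street":   (0.0, 0.0),
    "Rupert Street":  (3.0, 1.0),
    "Warwick Street": (-2.5, 2.0),
}

def nearest_pump(x, y):
    """Return the pump whose Voronoi cell contains the point (x, y)."""
    return min(pumps, key=lambda name: math.dist(pumps[name], (x, y)))

# An address at (0.5, 0.5) falls within the Broad Street pump's cell
print(nearest_pump(0.5, 0.5))  # Broad Street
```

Replace `math.dist` with shortest-path time along the street network and the cells become exactly the walking-time boundary Snow drew by hand.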

In the following years, London embarked upon a massive program to build underground sewers to transport the waste of its millions of residents downstream to the tidal zone of the Thames and later, directly to the sea. There would be one more cholera outbreak in London in 1866—in an area not yet connected to the new sewers and water treatment systems. Afterward, there has not been a single epidemic of cholera in London. Other cities in the developed world learned this lesson and built the infrastructure to provide their residents clean water. In the developing world, cholera continues to take its toll: in the 1990s an outbreak in South America infected more than a million people and killed almost 10,000. Fortunately, administration of rehydration therapy (with electrolytes) has drastically reduced the likelihood of death from a cholera infection. Still, you have to wonder why, in a world where billions of people lack access to clean water and third world mega-cities are drawing millions to live in conditions not unlike London in the 1850s, some believe that laptop computers are the top priority for children growing up there.

A paperback edition is now available.

 Permalink

Hoagland, Richard C. and Mike Bara. Dark Mission. Los Angeles: Feral House, 2007. ISBN 1-932595-26-0.
Author Richard C. Hoagland first came to prominence as an “independent researcher” and advocate of the claim that “the face on Mars” was an artificially-constructed monument built by an ancient extraterrestrial civilisation. Hoagland has established himself as one of the most indefatigable and imaginative pseudoscientific crackpots on the contemporary scene, and this œuvre pulls it all together into a side-splittingly zany compendium of conspiracy theories, wacky physics, imaginative image interpretation, and feuds within the “anomalist” community—a tempest in a crackpot, if you like.

Hoagland seems to possess a visual system which endows him with a preternatural ability, undoubtedly valuable for an anomalist, of seeing things that aren't there. Now you may look at a print of a picture taken on the lunar surface by an astronaut with a Hasselblad camera and see, in the black lunar sky, negative scratches, film smudges, lens flare, and, in contrast-stretched and otherwise manipulated digitally scanned images, artefacts of the image processing filters applied, but Hoagland immediately perceives “multiple layers of breathtaking ‘structural construction’ embedded in the NASA frame; multiple surviving ‘cell-like rooms,’ three-dimensional ‘cross-bracing,’ angled ‘stringers,’ etc… all following logical structural patterns for a massive work of shattered, but once coherent, glass-like mega-engineering” (p. 153, emphasis in the original). You can see these wonders for yourself on Hoagland's site, The Enterprise Mission. From other Apollo images Hoagland has come to believe that much of the near side of the Moon is covered by the ruins of glass and titanium domes, some of which still reach kilometres into the lunar sky and towered over some of the Apollo landing sites.

Now, you might ask, why did the Apollo astronauts not remark upon these prodigies, whether while presumably dodging them when landing and flying back to orbit, on the surface, or afterward? Well, you see, they must have been sworn to secrecy at the time and later (p. 176) hypnotised to cause them to forget the obvious evidence of a super-civilisation they were tripping over on the lunar surface. Yeah, that'll work.

Now, Occam's razor advises us not to unnecessarily multiply assumptions when formulating our hypotheses. On the one hand, we have the mainstream view that NASA missions have honestly reported the data they obtained to the public, and that these data, to date, include no evidence (apart from the ambiguous Viking biology tests on Mars) for extraterrestrial life nor artefacts of another civilisation. On the other, Hoagland argues:

  • NASA has been, from inception, ruled by three contending secret societies, all of which trace their roots to the gods of ancient Egypt: the Freemasons, unrepentant Nazi SS, and occult disciples of Aleister Crowley.
  • These cults have arranged key NASA mission events to occur at “ritual” times, locations, and celestial alignments. The Apollo 16 lunar landing was delayed due to a faked problem with the SPS engine so as to occur on Hitler's birthday.
  • John F. Kennedy was assassinated by a conspiracy including Lyndon Johnson and Congressman Albert Thomas of Texas because Kennedy was about to endorse a joint Moon mission with the Soviets, revealing to them the occult reasons behind the Apollo project.
  • There are two factions within NASA: the “owls”, who want to hide the evidence from the public, and the “roosters”, who are trying to get it out by covert data releases and cleverly coded clues.

    But wait, there's more!

  • The energy of the Sun comes, at least in part, from a “hyperdimensional plane” which couples to rotating objects through gravitational torsion (you knew that was going to come in sooner or later!). This energy expresses itself through a tetrahedral geometry, and explains, among other mysteries, the Great Red Spot of Jupiter, the Great Dark Spot of Neptune, Olympus Mons on Mars, Mauna Kea in Hawaii, and the precession of isolated pulsars.
  • The secrets of this hyperdimensional physics, glimpsed by James Clerk Maxwell in his quaternion (check off another crackpot checklist item) formulation of classical electrodynamics, were found by Hoagland to be encoded in the geometry of the “monuments” of Cydonia on Mars.
  • Mars was once the moon of a “Planet V”, which exploded (p. 362).

    And that's not all!

  • NASA's Mars rover Opportunity imaged a fossil in a Martian rock and then promptly ground it to dust.
  • The terrain surrounding the rover Spirit is littered with artificial objects.
  • Mars Pathfinder imaged a Sphinx on Mars.

    And if that weren't enough!

  • Apollo 17 astronauts photographed the head of an anthropomorphic robot resembling C-3PO lying in Shorty Crater on the Moon (p. 487).

It's like Velikovsky meets The Illuminatus! Trilogy, with some of the darker themes of “Millennium” thrown in for good measure.

Now, I'm sure, as always happens when I post a review like this, the usual suspects are going to write to demand whatever possessed me to read something like this and/or berate me for giving publicity to such hyperdimensional hogwash. Lighten up! I read for enjoyment, and to anybody with a grounding in the Actual Universe™, this stuff is absolutely hilarious: there's a chortle every few pages and a hearty guffaw or two in each chapter. The authors actually write quite well: this is not your usual semi-literate crank-case sludge, although like many on the far fringes of rationality they seem to be unduly challenged by the humble apostrophe. Hoagland is inordinately fond of the word “infamous”, but this becomes rather charming after the first hundred or so, kind of like the verbal tics of your crazy uncle, whom Hoagland rather resembles. It's particularly amusing to read the accounts of Hoagland's assorted fallings out and feuds with other “anomalists”; when Tom Van Flandern concludes you're a kook, then you know you're out there, and I don't mean hanging with the truth.

 Permalink

Gurstelle, William. Whoosh Boom Splat. New York: Three Rivers Press, 2007. ISBN 0-307-33948-3.
So you've read The Dangerous Book for Boys and now you're wondering, “Where's the dangerous book for adults?”. Well, here you go. Subtitled “The Garage Warrior's Guide to Building Projectile Shooters”, in just 160 pages with abundant illustrations, the author shows how with inexpensive materials, handyman tools, and only the most modest of tinkering skills, you can build devices including a potato cannon which can shoot a spud more than 200 metres powered by hairspray, a no-moving-parts pulse jet built from a mason jar and pipe fittings, a steam cannon, a “snap shooter” made from an ordinary spring-type wooden clothespin which can launch small objects across a room (or, should that not be deemed dangerous enough, flaming matches [outside, please!]), and more. The detailed instructions for building the devices and safety tips for operating them are accompanied by historical anecdotes and background on the science behind the gadgets. Ever-versatile PVC pipe is used in many of the projects, and no welding or metalworking skills (beyond drilling holes) are required.

If you find these projects still lacking that certain frisson, you might want to check out the author's Adventures from the Technology Underground (February 2006), which you can think of as The Absurdly Dangerous Book for Darwin Award Candidates, albeit without the detailed construction plans of the present volume. Enough scribbling—time to get back to work on that rail gun.

 Permalink

Edwards-Jones, Imogen. Fashion Babylon. London: Corgi Books, 2006. ISBN 0-552-15443-1.
This is a hard-to-classify but interesting and enjoyable book. I'm not even sure whether to call it fiction or nonfiction: the author has invented a notional co-author, “Anonymous”, who relates, condensed into a single six-month fashion season, anecdotes from a large collection of sources within the British fashion industry, all of which the author vouches for as authentic. Celebrities appear under their own names, and the stories involving them (often bizarre) are claimed to be genuine.

If you're looking for snark, cynicism, cocaine, cigarettes, champagne, anorexia, and other decadence and dissipation, you'll find it, but you'll also take away a thorough grounding in the economics of a business fully as bizarre as the software industry. The gross margin is almost as high and, except for the brand name and associated logos, there is essentially zero protection of intellectual property (as long as you don't counterfeit the brand, you can knock off any design, just as you can create a work-alike for almost any non-patent-protected software product and sell it for a tiny fraction of the price of the prototype). The vertiginous plunge from the gross margin to the meagre bottom line is mostly promotional hype: blow-outs to “build the brand”. So it may increasingly become in the software business as increases in functionality in products appeal to a smaller and smaller fraction of the customer base, or even reduce usability (Windows Vista, anybody?).

A U.S. Edition will be published in February 2008.

 Permalink

Zubrin, Robert. Energy Victory. Amherst, NY: Prometheus Books, 2007. ISBN 1-59102-591-5.
This is a tremendous book—jam-packed with nerdy data of every kind. The author presents a strategy aiming for the total replacement of petroleum as a liquid fuel and chemical feedstock with an explicit goal of breaking the back of OPEC and, as he says, rendering the Middle East's near-monopoly on oil as significant on the world economic stage as its near-monopoly on camel milk.

The central policy recommendation is a U.S. mandate that all new vehicles sold in the U.S. be “flex-fuel” capable: able to run on gasoline, ethanol, or methanol in any mix whatsoever. This is a proven technology; there are more than 6 million gasoline/ethanol vehicles on the road at present, more than five times the number of gasoline/electric hybrids (p. 27), and the added cost over a gas-only vehicle is negligible. Gasoline/ethanol flex-fuel vehicles are approaching 100% of all new sales in Brazil (pp. 165–167), and that without a government mandate. Present flex vehicles are either gasoline/ethanol or gasoline/methanol, not tri-fuel, but according to Zubrin that's just a matter of tweaking the exhaust gas sensor and reprogramming the electronic fuel injection computer.

Zubrin argues that methanol capability in addition to ethanol is essential because methanol can be made from coal or natural gas, which the U.S. has in abundance, and it enables utilisation of natural gas which is presently flared due to being uneconomical to bring to market in gaseous form. This means that it isn't necessary to wait for a biomass ethanol economy to come on line. Besides, even if you do produce ethanol from, say, maize, you can still convert the cellulose “waste” into methanol economically. You can also react methanol into dimethyl ether, an excellent diesel fuel that burns cleaner than petroleum-based diesel. Coal-based methanol production produces greenhouse gases, but less than burning the coal to make electricity, then distributing it and using it in plug-in hybrids, given the efficiencies along the generation and transmission chain.

With full-flex, the driver becomes a genuine market player: you simply fill up from whatever pump has the cheapest fuel among those available wherever you happen to be: the car will run fine on any mix you end up with in the tank. People in Brazil have been doing this for the last several years, and have been profiting from their flex-fuel vehicles now that domestic ethanol is cheaper than gasoline. Brazil, in fact, reduced its net petroleum imports to zero in 2005 (from 80% in 1974), and is now a net exporter of energy (p. 168), rendering the Brazilian economy entirely immune to the direct effects of OPEC price shocks.

Zubrin also demolishes the argument that ethanol is energy neutral or a sink: recent research indicates that corn ethanol multiplies the energy input by a factor between 6 and 20. Did you know that of the two authors of an oft-cited 2005 “ethanol energy sink” paper, one (David Pimentel) is a radical Malthusian who wants to reduce the world population by a factor of three and the other (Tadeusz Patzek) comes out of the “all bidness” (pp. 126–135)?

The geopolitical implications of energy dependence and independence are illustrated with examples from both world wars and the present era, and a hopeful picture sketched in which the world transitions from looting developed countries to fill the coffers of terror masters and kleptocrats to a future where the funds for the world's liquid fuel energy needs flow instead to farmers in the developing world who create sustainable, greenhouse-neutral fuel by their own labour and intellect, rather than pumping expendable resources from underground.

Here we have an optimistic, pragmatic, and open-ended view of the human prospect. The post-petroleum era could be launched on a global scale by a single act of the U.S. Congress which would cost U.S. taxpayers nothing and have negligible drag on the domestic or world economy. The technologies required date mostly from the 19th century and are entirely mature today, and the global future advocated has already been prototyped in a large, economically and socially diverse country, with stunning success. Perhaps people in the second half of the 21st century will regard present-day prophets of “peak oil” and “global warming” as quaint as the doomsayers who foresaw the end of civilisation when firewood supplies were exhausted, just years before coal mines began to fuel the industrial revolution.

 Permalink

Brown, Paul. The Rocketbelt Caper. Newcastle upon Tyne: Tonto Press, 2007. ISBN 0-9552183-7-3.
Few things are as iconic of the 21st century imagined by visionaries and science fictioneers of the 20th as the personal rocketbelt: just strap one on and take to the air, without complications such as wings, propellers, pilots, fuselage, or landing gear. Flying belts were a fixture of Buck Rogers comic strips and movie serials, and in 1965 Isaac Asimov predicted that by 1990 office workers would beat the traffic by commuting to work in their personal rocketbelts.

The possibilities of a personal flying machine did not escape the military, which imagined infantry soaring above the battlefield and outflanking antiquated tanks and troops on the ground. In the 1950s, engineers at the Bell Aircraft Corporation, builders of the X-1, the first plane to break the sound barrier, built prototypes of rocketbelts powered by monopropellant hydrogen peroxide, and eventually won a U.S. Army contract to demonstrate such a device. On April 20th, 1961, the first free flight occurred, and a public demonstration was performed the following June 8th. The rocketbelt was an immediate sensation. The Bell rocketbelt appeared in the James Bond film Thunderball, was showcased at the 1964 World's Fair in New York, at Disneyland, and at the first Super Bowl of American football in 1967. Although able to fly only twenty-odd seconds and reach an altitude of about 20 metres, here was Buck Rogers made real—certainly before long engineers would work out the remaining wrinkles and everybody would be taking to the skies.

And then a funny thing happened—nothing. Wendell Moore, creator of the rocketbelt at Bell, died in 1969 at age 51, and with no follow-up interest from the U.S. Army, the project was cancelled and the Bell rocketbelt never flew again. Enter Nelson Tyler, engineer and aerial photographer, who on his own initiative built a copy of the Bell rocketbelt which, under his ownership and that of subsequent proprietors, made numerous promotional appearances around the world, including at the opening ceremony of the 1984 Olympics in Los Angeles before a television audience estimated in excess of two billion.

All of this is prologue to the utterly bizarre story of the RB-2000 rocketbelt, launched by three partners in 1992, motivated both by their individual obsession with flying a rocketbelt and dreams of the fortune they'd make from public appearances: the owners of the Tyler rocketbelt were getting US$25,000 per flight at the time. Obsession is not a good thing to bring to a business venture, and things rapidly went from bad to worse to truly horrid. Even before the RB-2000's first and last public flight in June 1995 (which was a complete success), one of the partners had held a gun to the head of another, who in return assaulted him with a hammer, inflicting serious wounds. In July of 1998, the third partner was brutally murdered in his home, and to this day no charges have been made in the case. Not long thereafter one of the two surviving partners sued the other and won a judgement in excess of US$10 million and custody of the RB-2000, which had disappeared immediately after its sole public flight. When no rocketbelt or money was forthcoming, the plaintiff kidnapped the defendant and imprisoned him in a wooden box for eight days, until fortuitous circumstances permitted the victim to escape. The kidnapper was quickly apprehended and subsequently sentenced to life plus ten years for the crime (the sentence was later reduced to eight years). The kidnappee later spent more than five months in jail for contempt of court for failing to produce the RB-2000 in a civil suit. To this day, the whereabouts of the RB-2000, if it still exists, are unknown.

Now, you don't need to be a rocket scientist to figure out that flitting through the sky with a contraption powered by highly volatile and corrosive propellant, with total flight time of 21 seconds, and no backup systems of any kind is a perilous undertaking. But who would have guessed that trying to do so would entail the kinds of consequences the RB-2000 venture inflicted upon its principals?

A final chapter covers recent events in rocketbelt land, including the first International Rocketbelt Convention in 2006. The reader is directed to Peter Gijsberts' www.rocketbelt.nl site for news and additional information on present-day rocketbelt projects, including commercial ventures attempting to bring rocketbelts to market. One of the most remarkable things about the curious history of rocketbelts is that, despite occasional claims and ambitious plans, in the more than 45 years which have elapsed since the first flight of the Bell rocketbelt, nobody has substantially improved upon its performance.

A U.S. Edition was published in 2005, but is now out of print.

 Permalink

Lileks, James. Gastroanomalies. New York: Crown Publishers, 2007. ISBN 0-307-38307-5.
Should you find this delightful book under your tree this Christmas Day, let me offer you this simple plea. Do not curl up with it late at night after the festivities are over and you're winding down for bed. If you do:

  1. You will not get to sleep until you've finished it.
  2. Your hearty guffaws will keep everybody else awake as well.
  3. And finally, when you do drift off to sleep, visions of the culinary concoctions collected here may impede digestion of your holiday repast.

This sequel to The Gallery of Regrettable Food (April 2004) presents hundreds of examples of tasty treats from cookbooks and popular magazines from the 1930s through the 1960s. Perusal of these execrable entrées will make it immediately obvious why the advertising of the era featured so many patent remedies for each and every part of the alimentary canal. Most illustrations are in ghastly colour, with a few in merciful black and white. It wasn't just Americans who outdid themselves crafting dishes in the kitchen to do themselves in at the dinner table—a chapter is devoted to Australian delicacies, including some of the myriad ways to consume “baiycun”. There's something for everybody: mathematicians will savour the countably infinite beans-and-franks open-face sandwich (p. 95), goths will delight in discovering the dish Satan always brings to the pot luck (p. 21), political wonks need no longer wonder which appetiser won the personal endorsement of Earl Warren (p. 23), movie buffs will finally learn the favourite Bisquick recipes of Joan Crawford, Clark Gable, Bing Crosby, and Bette Davis (pp. 149–153), and all of the rest of us who've spent hours in the kitchen trying to replicate grandma's chicken feet soup will find the secret revealed here (p. 41). Revel in the rediscovery of aspic: the lost secret of turning unidentifiable food fragments into a gourmet treat by entombing them in jiggly meat-flavoured Jell-O. Bon appétit!

Many other vintage images of all kinds are available on the author's Web site.

 Permalink

Hellman, Hal. Great Feuds in Mathematics. Hoboken, NJ: John Wiley & Sons, 2006. ISBN 0-471-64877-9.
Since antiquity, many philosophers have looked upon mathematics as one thing, perhaps the only thing, that we can know for sure, “the last fortress of certitude” (p. 200). Certainly then, mathematicians must be dispassionate explorers of this frontier of knowledge, and mathematical research a grand collaborative endeavour, building upon the work of the past and weaving the various threads of inquiry into a seamless intellectual fabric. Well, not exactly….

Mathematicians are human, and mathematical research is a human activity like any other, so regardless of the austere crystalline perfection of the final product, the process of getting there can be as messy, contentious, and consequently entertaining as any other enterprise undertaken by talking apes. This book chronicles ten of the most significant and savage disputes in the history of mathematics. The bones of contention range from the tried-and-true question of priority (Tartaglia vs. Cardano on the solution of cubic polynomials, Newton vs. Leibniz on the origin of the differential and integral calculus), through the relation of mathematics to the physical sciences (Sylvester vs. Huxley), the legitimacy of the infinite in mathematics (Kronecker vs. Cantor, Borel vs. Zermelo), and the proper foundation for mathematics (Poincaré vs. Russell, Hilbert vs. Brouwer), to sibling rivalry (Jakob vs. Johann Bernoulli). A final chapter recounts the incessantly disputed question of whether mathematicians discover structures that are “out there” (as John D. Barrow puts it, “Pi in the Sky”) or invent what is ultimately as much a human construct as music or literature.

The focus is primarily on people and events, less so on the mathematical questions behind the conflict; if you're unfamiliar with the issues involved, you may want to look them up in other references. The stories presented here are an excellent antidote to the retrospective view of many accounts which present mathematical history as a steady march forward, with each generation building upon the work of the previous. The reality is much more messy, with the directions of inquiry chosen for reasons of ego and national pride as often as inherent merit, and the paths not taken often as interesting as those which were. Even if you believe (as I do) that mathematics is “out there”, the human struggle to discover and figure out how it all fits together is interesting and ultimately inspiring, and this book provides a glimpse into that ongoing quest.

 Permalink

  2008  

January 2008

Buckley, Christopher. No Way to Treat a First Lady. New York: Random House, 2002. ISBN 978-0-375-75875-1.
First Lady Beth MacMann knew she was in for a really bad day when she awakened to find her philandering war hero presidential husband dead in bed beside her, with the hallmark of the Paul Revere silver spittoon she'd hurled at him the night before as he'd returned from an assignation in the Lincoln Bedroom “etched, etched” upon his forehead. Before long, Beth finds herself charged with assassinating the President of the United States, and before the spectacle a breathless media are pitching as the “Trial of the Millennium” even begins, nearly convicted in the court of public opinion, with the tabloids referring to her as “Lady Bethmac”.

Enter superstar trial lawyer Boyce “Shameless” Baylor, the fiancé Beth dumped in law school, who, without the benefit of a courtroom dream team, mounts a defence involving “a conspiracy so vast…” that the world sits on the edge of its seats to see what will happen next. What happens next, and then, and later, and still later is side-splittingly funny even by Buckley's high standards, perhaps the most hilarious yarn ever spun around a capital murder trial. As in many of Buckley's novels, everything works out for the best (except, perhaps, for the deceased commander in chief, but he's not talking), and yet none of the characters is admirable in any way—welcome to Washington D.C.! Barbs at legacy media figures and celebrities abound, and Dan Rather's inane folksiness comes in for delicious parody on the eve of the ignominious end of his career. This is satire at its most wicked, one of the funniest of Buckley's novels I've read (Florence of Arabia [March 2006] is comparable, but a very different kind of story). This may be the last Washington farce of the “holiday from history” epoch—the author completed the acknowledgements page on September 9th, 2001.

 Permalink

Rex and Sparky [Garden, Joe et al.]. The Dangerous Book for Dogs. New York: Villard, 2007. ISBN 978-0-345-50370-1.
The Dangerous Book for Boys is all well and good, but what about a boy's inseparable companion in adventures great and small? This book comes to the rescue, with essential tips for the pooch who wants to experience their canine inheritance to the fullest. Packed cover to cover with practical advice on begging, swimming, picking a pill out of a ball of peanut butter, and treeing a raccoon; stories of heroic and resourceful dogs in history, from Mikmik the sabre-toothed sled dog who led the first humans to North America across the Bering Strait land bridge, to Pepper, the celebrated two-year-old Corgi who with her wits, snout, and stubby legs singlehandedly thwarted a vile conspiracy between the Sun and a rogue toaster to interfere with her nap; tips on dealing with tribulations of life such as cats, squirrels, baths, and dinner parties; and formal rules for timeless games such as “Fetch”. Given the proclivities of the species, there is a great deal more about poop here than in the books for boys and girls. I must take exception to the remarks on canine auditory performance on p. 105; dogs have superb hearing and perceive sounds well above the frequency range to which humans respond, but I've yet to meet the pooch able to hear “50,000 kHz”. Silent dog whistles notwithstanding, even the sharpest-eared cur doesn't pick up the six metre band!
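For readers without band plans committed to memory, the arithmetic behind the six-metre quip is quickly checked (a back-of-the-envelope aside of mine, not from the book): 50,000 kHz is 50 MHz, three orders of magnitude above the roughly 45–65 kHz ceiling of canine hearing, and as an electromagnetic frequency its wavelength λ = c / f lands squarely in the amateur-radio six metre band:

```python
# Unit check on the book's "50,000 kHz" figure.
f_hz = 50_000 * 1_000              # 50,000 kHz expressed in hertz (50 MHz)
c = 299_792_458                    # speed of light in vacuum, m/s
wavelength_m = c / f_hz            # wavelength of a 50 MHz radio wave
print(f"{wavelength_m:.1f} m")     # prints "6.0 m": the six metre band
```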

Dogs who master the skills taught here will want to download the merit badges from the book's Web site and display them proudly on their collars. Dog owners (or, for those living in the moonbat caves of western North America, “guardians”) who find their pet doesn't get as much out of this book as they'd hoped may wish to consult my forthcoming monograph Why Rover Can't Read.

 Permalink

Pratchett, Terry. Making Money. New York: HarperCollins, 2007. ISBN 978-0-06-116164-3.
Who'd have imagined that fractional reserve banking, fiat currency, and macroeconometric modelling could be so funny? When Lord Vetinari, tyrant of Ankh-Morpork, decides the economy needs more juice than the stodgy plutocrat-run banks provide, he immediately identifies the ideal curriculum vitæ of a central banker: confidence man, showman, and all-purpose crook. (In our world, mumbling and unparsable prose seem additional job requirements, but things are simpler on Discworld.)

Fortunately, the man for the job is right at hand when the hereditary chief of the Royal Bank goes to her reward: Moist von Lipwig, triumphant in turning around the Post Office in Going Postal, is persuaded (Lord Vetinari can be very persuasive, especially to civil servants he has already once hanged by the neck) to take the second-in-command position at the Bank, the Chairman's office having been assumed by Mr. Fusspot, a small dog who lives in the in-box on Lipwig's desk.

Moist soon finds himself introducing paper money, coping with problems in the gold vault, dealing with a model of the economy which may be more than a model (giving an entirely new meaning to “liquidity”), fending off a run on the bank, summoning the dead to gain control of a super-weapon, and finding a store of value which is better than gold. If you aren't into economics, this is a terrific Discworld novel; if you are, it's delightful on a deeper level.

The “Glooper” in the basement of the bank is based upon economist William Phillips's MONIAC hydraulic economic computer, of which a dozen or more were built. There is no evidence that fiddling with Phillips's device was able to influence the economy which it modelled, but perhaps this is because Phillips never had an assistant named “Igor”.

If you're new to Terry Pratchett and don't know where to start, here's a handy chart (home page and other language translations) which shows the main threads and their interconnections. Making Money does not appear in this map; it should be added to the right of Going Postal.

 Permalink

Buchanan, Patrick J. Day of Reckoning. New York: Thomas Dunne Books, 2007. ISBN 978-0-312-37696-3.
In the late 1980s, I decided to get out of the United States. Why? Because it seemed to me, for a multitude of reasons, many of which I had experienced directly as the founder of a job-creating company, resident of a state whose border the national government declined to defend, and investor who saw the macroeconomic realities piling up into an inevitable disaster, that the U.S. was going down, and I preferred to spend the remainder of my life somewhere which wasn't.

In 1992, the year I moved to Switzerland, Pat Buchanan mounted an insurgent challenge to George H. W. Bush for the Republican nomination for the U.S. presidency, gaining more than three million primary votes. His platform featured protectionism, immigration restriction, and rolling back the cultural revolution mounted by judicial activism. I opposed most of his agenda. He lost.

This book can be seen as a retrospective on the 15 years since, and is particularly poignant to me, as it's a reality check on whether I was wise in getting out when I did. Bottom line: I've no regrets whatsoever, and I'd counsel any productive individual in the U.S. to get out as soon as possible, even though it's harder than when I made my exit.

Is the best of the free life behind us now?
Are the good times really over for good?

Merle Haggard

Well, that's the way to bet. As usual, economics trumps just about everything. Just how plausible is it that a global hegemon can continue to exert its dominance when its economy is utterly dependent upon its ability to borrow two billion dollars a day from its principal rivals, China and Japan, while pumping more than three hundred billion dollars a year of those borrowed funds into the coffers of its enemies: Saudi Arabia, Venezuela, Iran, and others, to fund its addiction to petroleum?

The last chapter presents a set of policy prescriptions to reverse the imminent disasters facing the U.S. Even if these policies could be sold to an electorate in which two generations have been brainwashed by collectivist nostrums, it still seems like “too little, too late”—once you've shipped your manufacturing industries offshore and become dependent upon immigrants for knowledge workers, how precisely do you get back to first world status? Beats me.

Some will claim I am, along with the author, piling on recent headlines. I'd counsel taking a longer-term view, as I did when I decided to get out of the U.S. If you're into numbers, note the exchange rate of the U.S. dollar versus the Euro, and the price of gold and oil in U.S. dollars today, then compare them to the quotes five years hence. If the dollar has appreciated, then I'm wrong; if it's continuing its long-term slide into banana republic status, then maybe this rant wasn't as intemperate as you might have initially deemed it.

His detractors call Pat Buchanan a “paleoconservative”, but how many “progressives” publish manuscripts written in the future? The acknowledgements (p. 266) are dated October 2008, ten months after I read it, but then I'm cool with that.

 Permalink

[Audiobook] Churchill, Winston S. The Birth of Britain. (Audiobook, Unabridged). London: BBC Audiobooks, [1956] 2006. ISBN 978-0-304-36389-6.
This is the first book in Churchill's sprawling four-volume A History of the English-Speaking Peoples. Churchill began work on the history in the 1930s, and by the time he set it aside to go to the Admiralty in 1939, about half a million words had been delivered to his publisher. His wartime service as Prime Minister, postwar writing of the six-volume history The Second World War, and second term as Prime Minister from 1951 to 1955 caused the project to be postponed repeatedly, and it wasn't until 1956–1958, when Churchill was in his 80s, that the work was published. Even sections which existed as print proofs from the 1930s were substantially revised based upon scholarship in the intervening years.

The Birth of Britain covers the period from Julius Caesar's invasion of Britain in 55 B.C. through Richard III's defeat and death at the hands of Henry Tudor's forces at the Battle of Bosworth in 1485, bringing to an end both the Wars of the Roses and the Plantagenet dynasty. This is very much history in the “kings, battles, and dates” mould; there is little about cultural, intellectual, and technological matters—the influence of the monastic movement, the establishment and growth of universities, and the emergence of guilds barely figure at all in the narrative. But what a grand narrative it is, the work of one of the greatest masters of the language spoken by those whose history he chronicles. In accounts of early periods where original sources are scanty and it isn't necessarily easy to distinguish historical accounts from epics and legends, Churchill takes pains to note this and distinguish his own conclusions from alternative interpretations.

This audiobook is distributed in seven parts, totalling 17 hours. A print edition is available in the UK.

 Permalink

Mashaal, Maurice. Bourbaki: A Secret Society of Mathematicians. Translated by Anna Pierrehumbert. Providence, RI: American Mathematical Society, [2002] 2006. ISBN 978-0-8218-3967-6.
In 1934, André Weil and Henri Cartan, both young professors of mathematics at the University of Strasbourg, would frequently, when discussing the calculus courses they were teaching, deplore the textbooks available, all of which they considered antiquated and inadequate. Weil eventually suggested getting in touch with several of their fellow alumni of the École Normale Supérieure who were teaching similar courses in provincial universities around France, inviting them to collaborate on a new analysis textbook. The complete work was expected to total 1000 to 1200 pages, with the first volumes ready about six months after the project began.

Thus began one of the most flabbergasting examples of “mission creep” in human intellectual history, which set the style for much of mathematics publication and education in subsequent decades. Working collectively and publishing under the pseudonym “Nicolas Bourbaki” (after the French general in the Franco-Prussian War Charles Denis Bourbaki), the “analysis textbook” to be assembled by a small group over a few years grew into a project spanning more than six decades and ten books, most of multiple volumes, totalling more than seven thousand pages, systematising the core of mathematics in a relentlessly abstract and austere axiomatic form. Although Bourbaki introduced new terminology, some of which has become commonplace, there is no new mathematics in the work: it is a presentation of pre-existing mathematical work as a pedagogical tool and toolbox for research mathematicians. (This is not to say that the participants in the Bourbaki project did not do original work—in fact, they were among the leaders in mathematical research in their respective generations. But their work on the Bourbaki opus was a codification and grand unification of the disparate branches of mathematics into a coherent whole. In fact, so important was the idea that mathematics was a unified tree rooted in set theory that the Bourbaki group always used the word mathématique, not mathématiques.)

Criticisms of the Bourbaki approach were many: it was too abstract, emphasised structure over the content which motivated it, neglected foundational topics such as mathematical logic, excluded anything tainted with the possibility of application (including probability, automata theory, and combinatorics), and took an eccentric approach to integration, disdaining the Lebesgue integral. These criticisms are described in detail, with both sides fairly presented. While Bourbaki participants had no ambitions to reform secondary school mathematics education, it is certainly true that academics steeped in the Bourbaki approach played a part in the disastrous “New Math” episode, which is described in chapter 10.

The book is extravagantly illustrated, and has numerous boxes and marginal notes which describe details, concepts, and the dramatis personæ in this intricate story. An appendix provides English translations of documents which appear in French in the main text. There is no index.

La version française reste disponible.

 Permalink

[Audiobook] Lewis, C. S. The Screwtape Letters. (Audiobook, Unabridged). Ashland, OR: Blackstone Audiobooks, [1942, 1959, 1961] 2006. ISBN 978-0-7861-7279-5.
If you're looking for devilishly ironic satire, why not go right to the source? C. S. Lewis's classic is in the form of a series of letters from Screwtape, a senior demon in the “lowerarchy” of Hell, to his nephew Wormwood, a novice tempter on his first assignment on Earth: charged with securing the soul of an ordinary Englishman in the early days of World War II. Not only are the letters wryly funny, there is a great deal of wisdom and insight into the human condition and how the little irritations of life can present a greater temptation to flawed humans than extravagant sins. Also included in this audiobook is the 1959 essay “Screwtape Proposes a Toast”, which is quite different in nature: Lewis directly attacks egalitarianism, dumbing-down of education, and destruction of the middle class by the welfare state as making the tempter's task much easier (the original letters were almost entirely apolitical), plus the preface Lewis wrote for a new edition of Screwtape in 1961, in which he says the book almost wrote itself, but that he found the process of getting into Screwtape's head very unpleasant indeed.

The book is read by Ralph Cosham, who adopts a dry, largely uninflected tone which is appropriate for the ironic nature of the text. This audiobook is distributed in two parts, totalling 3 hours and 36 minutes. Audio CD and print editions are also available.

 Permalink

Goldberg, Jonah. Liberal Fascism. New York: Doubleday, 2007. ISBN 978-0-385-51184-1.
This is a book which has been sorely needed for a long, long time, and the author has done a masterful job of identifying, disentangling, and dismantling the mountain of disinformation and obfuscation which has poisoned so much of the political discourse of the last half century.

As early as 1946, George Orwell observed in his essay “Politics and the English Language” that “The word Fascism has now no meaning except in so far as it signifies ‘something not desirable’”. This situation has only worsened in the succeeding decades, and finally we have here a book which thoroughly documents the origins of fascism as a leftist, collectivist ideology, grounded in Rousseau's (typically mistaken and pernicious) notion of the “general will”, and the direct descendant of the God-state first incarnated in the French Revolution and manifested in the Terror.

I'd have structured this book somewhat differently, but then when you've spent the last fifteen years not far from the French border, you may adopt a more top-down rationalist view of things; call it “geographical hazard”. There is a great deal of discussion here about the definitions and boundaries among the categories “progressive”, “fascist”, “Nazi”, “socialist”, “communist”, “liberal”, “conservative”, “reactionary”, “social Darwinist”, and others, but it seems to me there's a top-level taxonomic divide which sorts out much of the confusion: collectivism versus individualism. Collectivists—socialists, communists, fascists—believe the individual to be subordinate to the state and subject to its will and collective goals, while individualists believe the state, to the limited extent it exists, is legitimate only as it protects the rights of the sovereign citizens who delegate to it their common defence and provision of public goods.

The whole question of what constitutes conservatism is ill-defined until we get to the Afterword where, on p. 403, there is a beautiful definition which would far better have appeared in the Introduction: that conservatism consists in conserving what is, and that consequently conservatives in different societies may have nothing whatsoever in common among what they wish to conserve. The fact that conservatives in the United States wish to conserve “private property, free markets, individual liberty, freedom of conscience, and the rights of communities to determine for themselves how they will live within these guidelines” in no way identifies them with conservatives in other societies bent on conserving monarchy, a class system, or a discredited collectivist regime.

Although this is a popular work, the historical scholarship is thorough and impressive: there are 54 pages of endnotes and an excellent index. Readers accustomed to the author's flamboyant humorous style from his writings on National Review Online will find this a much more subdued read, appropriate to the serious subject matter.

Perhaps the most important message of this book is that, while collectivists hurl imprecations of “fascist” or “Nazi” at defenders of individual liberty, it is the latter who have carefully examined the pedigree of their beliefs and renounced those tainted by racism, authoritarianism, or other nostrums accepted uncritically in the past. Meanwhile, the self-described progressives (well, yes, but progress toward what?) have yet to subject their own intellectual heritage to a similar scrutiny. If and when they do so, they'll discover that both Mussolini's Fascist and Hitler's Nazi parties were considered movements of the left by almost all of their contemporaries before Stalin deemed them “right wing”. (But then Stalin called everybody who opposed him “right wing”, even Trotsky.) Woodrow Wilson's World War I socialism was, in many ways, the prototype of fascist governance and a major inspiration of the New Deal and Great Society. Admiration for Mussolini in the United States was widespread, and H. G. Wells, the socialist's socialist and one of the most influential figures in collectivist politics in the first half of the twentieth century, said in a speech at Oxford in 1932, “I am asking for a Liberal Fascisti, for enlightened Nazis.”

If you're interested in understanding the back-story of the words and concepts in the contemporary political discourse which are hurled back and forth without any of their historical context, this is a book you should read. Fortunately, lots of people seem to be doing so: it's been in the top ten on Amazon.com for the last week. My only quibble may actually be a contributor to its success: there are many references to current events, in particular the 2008 electoral campaign for the U.S. presidency; these will cause the book to be dated when the page is turned on these ephemeral events, and it shouldn't be—the historical message is essential to anybody who wishes to decode the language and subtexts of today's politics, and this book should be read by those who've long forgotten the runners-up and issues of the moment.

A podcast interview with the author is available.

 Permalink

February 2008

Weiner, Tim. Legacy of Ashes. New York: Doubleday, 2007. ISBN 978-0-385-51445-3.
I've always been amused by those overwrought conspiracy theories which paint the CIA as the spider at the centre of a web of intrigue, subversion, skullduggery, and ungentlemanly conduct stretching from infringements of the rights of U.S. citizens at home to covert intrusion into internal affairs in capitals around the globe. What this outlook, however entertaining, seemed to overlook, in my opinion, is that the CIA is a government agency, and millennia of experience demonstrate that long-established instruments of government (the CIA having begun operations in 1947) rapidly converge upon the intimidating, machine-like, and ruthless efficiency of the Post Office or the Department of Motor Vehicles. How probable was it that a massive bureaucracy, especially one which operated with little Congressional oversight and was able to bury its blunders by classifying documents for decades, was actually able to implement its cloak and dagger agenda, as opposed to the usual choke and stagger one expects from other government agencies of similar staffing and budget? Defenders of the CIA and those who feared its menacing, malign competence would argue that while we find out about the CIA's blunders when operations are blown, stings end up getting stung, and moles and double agents are discovered, we never know about the successes, because they remain secret forever, lest the CIA's sources and methods be disclosed.

This book sets the record straight. The Pulitzer prize-winning author has covered U.S. intelligence for twenty years, most recently for the New York Times. Drawing on a wealth of material declassified since the end of the Cold War, most from the latter half of the 1990s and afterward, and extensive interviews with every living Director of Central Intelligence and numerous other agency figures, this is the first comprehensive history of the CIA based on the near-complete historical record. It is not a pretty picture.

Chartered to collect and integrate information, both from its own sources and those of other intelligence agencies, thence to present senior decision-makers with the data they need to formulate policy, from inception the CIA neglected its primary mission in favour of ill-conceived and mostly disastrous paramilitary and psychological warfare operations deemed “covert”, but which all too often became painfully overt when they blew up in the faces of those who ordered them. The OSS heritage of many of the founders of the CIA, combined with the proclivity of U.S. presidents to order covert operations which stretched the CIA's charter to its limits and occasionally beyond, created a litany of blunders and catastrophe which would be funny were it not so tragic for those involved, and did it not in many cases cast long shadows upon the present-day world.

While the clandestine service was tripping over its cloaks and impaling itself upon its daggers, the primary intelligence gathering mission was neglected and bungled to such an extent that the agency provided no warning whatsoever of Stalin's atomic bomb, the Korean War, the Chinese entry into that conflict, the Suez crisis, the Hungarian uprising, the building of the Berlin Wall, the Yom Kippur war of 1973, the Iranian revolution, the Soviet invasion of Afghanistan, the Iran/Iraq War, the fall of the Berlin Wall, the collapse of the Soviet Union, Iraq's invasion of Kuwait, the nuclear tests by India and Pakistan in 1998, and more. The spider at the centre of the web appears to have been wearing a blindfold and earplugs. (Oh, they did predict both the outbreak and outcome of the Six Day War—well, that's one!)

Not only have the recently-declassified documents shone a light onto the operations of the CIA, they provide a new perspective on the information from which decision-makers were proceeding in many of the pivotal events of the latter half of the twentieth century including Korea, the Cuban missile crisis, Vietnam, and the past and present conflicts in Iraq. This book completely obsoletes everything written about the CIA before 1995; the source material which has become available since then provides the first clear look into what was previously shrouded in secrecy. There are 154 pages of end notes in smaller type—almost a book in itself—which expand, often at great length, upon topics in the main text; don't pass them up. Given the nature of the notes, I found it more convenient to read them as an appendix rather than as annotations.

 Permalink

[Audiobook] Suetonius [Gaius Suetonius Tranquillus]. The Twelve Cæsars. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [A.D. 121, 1957] 2004. ISBN 978-1-929718-39-9.
Anybody who thinks the classics are dull, or that the cult of celebrity is a recent innovation, evidently must never have encountered this book. Suetonius was a member of the Roman equestrian order who became director of the Imperial archives under the emperor Trajan and then personal secretary to his successor, Hadrian. He took advantage of his access to the palace archives and other records to recount the history of Julius Cæsar and the 11 emperors who succeeded him, through Domitian, who was assassinated in A.D. 96, by which time Suetonius was an adult.

Not far into this book, I exclaimed to myself, “Good grief—this is like People magazine!” A bit further on, it became apparent that this Roman bureaucrat had penned an account of his employer's predecessors which was way too racy even for that down-market venue. Suetonius was a prolific writer (most of his work has not survived), and his style and target audience may be inferred from the titles of some of his other books: Lives of Famous Whores, Greek Terms of Abuse, and Physical Defects of Mankind.

Each of the twelve Cæsars is sketched in a quintessentially Roman systematic fashion: according to a template as consistent as a PowerPoint presentation (abbreviated for those whose reigns were short and inconsequential). Unlike his friend and fellow historian of the epoch Tacitus, whose style is, well, taciturn, Suetonius dives right into the juicy gossip and describes it in the most explicit and sensational language imaginable. If you thought the portrayal of Julius and Augustus Cæsar in the television series “Rome” was over the top, if Suetonius is to be believed, it was, if anything, airbrushed.

Whether Suetonius can be believed is a matter of some dispute. From his choice of topics and style, he clearly savoured scandal and intrigue, and may have embroidered upon the historical record in the interest of titillation. He certainly took omens, portents, prophecies, and dreams as seriously as battles and relates them, even those as dubious as marble statues speaking, as if they were documented historical events. (Well, maybe they were—perhaps back then the people running the simulation we're living in intervened more often, before they became bored and left it to run unattended. But I'm not going there, at least here and now….) Since this is the only extant complete history of the reigns of Caligula and Claudius, the books of Tacitus covering that period having been lost, some historians have argued that the picture of the decadence of those emperors may have been exaggerated due to Suetonius's proclivity for purple prose.

This audiobook is distributed in two parts, totalling 13 hours and 16 minutes. The 1957 Robert Graves translation is used, read by Charlton Griffin, whose narration of Julius Cæsar's Commentaries (August 2007) I so enjoyed. The Graves translation gives dates in B.C. and A.D. along with the dates by consulships used in the original Latin text. Audio CD and print editions of the same translation are available. The Latin text and a public domain English translation dating from 1913–1914 are available online.

 Permalink

Rutler, George William. Coincidentally. New York: Crossroad Publishing, 2006. ISBN 978-0-8245-2440-1.
This curious little book is a collection of the author's essays on historical coincidences originally published in Crisis Magazine. Each explores coincidences around a general theme. “Coincidence” is defined rather loosely and generously. Consider (p. 160), “Two years later in Missouri, the St. Louis Municipal Bridge was dedicated concurrently with the appointment of England's poet laureate, Robert Bridges. The numerical sum of the year of his birth, 1844, multiplied by 10, is identical to the length in feet of the Philadelphia-Camden Bridge over the Delaware River.”

Here is a paragraph from p. 138 which illustrates what's in store for you in these essays.

Odd and tragic coincidences in maritime history render a little more plausible the breathless meters of James Elroy Flecker (1884–1915): “The dragon-green, the luminous, the dark, the serpent-haunted sea.” That sea haunts me too, especially with the realization that Flecker died in the year of the loss of 1,154 lives on the Lusitania. More odd than tragic is this: the United States Secretary of State William Jennings Bryan (in H. L. Mencken's estimation “The National Tear-Duct”) officially protested the ship's sinking on May 13, 1915 which was the 400th anniversary, to the day, of the marriage of the Duke of Suffolk to Mary, the widow of Louis XII and sister of Henry VIII, after she had spurned the hand of the Archduke Charles. There is something ominous even in the name of the great hydrologist of the Massachusetts Institute of Technology who set the standards for water purification: Thomas Drown (1842–1904). Swinburne capitalized on the pathos: “… the place of the slaying of Itylus / The feast of Daulis, the Thracian sea.” And a singularly melancholy fact about the sea is that Swinburne did not end up in it.
I noted several factual errors. For example, on p. 169, Chuck Yeager is said to have flown a “B-51 Mustang” in World War II (the correct designation is P-51). Such lapses make you wonder about the reliability of other details, which are far more arcane and difficult to verify.

The author is opinionated and not at all hesitant to share his acerbic perspective: on p. 94 he calls Richard Wagner a “master of Nazi elevator music”. The vocabulary will send almost all readers other than William F. Buckley (who contributed a cover blurb to the book) to the dictionary from time to time. This is not a book you'll want to read straight through—your head will end up spinning with all the details and everything will dissolve into a blur. I found a chapter or two a day about right. I'd sum it up with Abraham Lincoln's observation “Well, for those who like that sort of thing, I should think it is just about the sort of thing they would like.”

 Permalink

March 2008

Minogue, Kenneth. Alien Powers. New Brunswick, NJ: Transaction Publishers, [1985] 2007. ISBN 978-0-7658-0365-8.
No, this isn't a book about Roswell. Subtitled “The Pure Theory of Ideology”, it is a challenging philosophical exploration of ideology, ideological politics, and ideological arguments and strategies in academia and the public arena. By “pure theory”, the author means to explore what is common to all ideologies, regardless of their specifics. (I should note here, as does the author, that in sloppy contemporary discourse “ideology” is often used simply to denote a political viewpoint. In this work, the author restricts it to closed intellectual systems which ascribe a structural cause to events in the world, posit a mystification which prevents people from understanding what is revealed to the ideologue, and predict an inevitable historical momentum [“progress”] toward liberation from the unperceived oppression of the present.)

Despite the goal of seeking a pure theory, independent of any specific ideology, a great deal of time is necessarily spent on Marxism, since although the roots of modern ideology can be traced (like so many other pernicious things) to Rousseau and the French Revolution, it was Marx and Engels who elaborated the first complete ideological system, providing the intellectual framework for those that followed. Marxism, Fascism, Nazism, racism, nationalism, feminism, environmentalism, and many other belief systems are seen as instantiations of a common structure of ideology. In essence, this book can be seen as a “Content Wizard” for cranking out ideological creeds: plug in the oppressor and oppressed, the supposed means of mystification and path to liberation, and out pops a complete ideological belief system ready for an enterprising demagogue to start peddling. The author shows how ideological arguments, while masquerading as science, are the cuckoo's egg in the nest of academia, as they subvert and shortcut the adversarial process of inquiry and criticism with a revelation not subject to scrutiny. The attractiveness of such bogus enlightenment to second-rate minds and indolent intellects goes a long way to explaining the contemporary prevalence in the academy of ideologies so absurd that only an intellectual could believe them.

The author writes clearly, and often with wit and irony so dry it may go right past unless you're paying attention. But this is nonetheless a difficult book: it is written at such a level of philosophical abstraction and with so many historical and literary references that many readers, including this one, find it heavy going indeed. I can't recall any book on a similar topic this formidable since chapters two through the end of Allan Bloom's The Closing of the American Mind. If you want to really understand the attractiveness of ideology to otherwise intelligent and rational people, and how ideology corrupts the academic and political spheres (with numerous examples of how slippery ideological arguments can be), this is an enlightening read, but you're going to have to work to make the most of it.

This book was originally published in 1985. This edition includes a new introduction by the author, and two critical essays reflecting upon the influence of the book and its message from a contemporary perspective where the collapse of the Soviet Union and the end of the Cold War have largely discredited Marxism in the political arena, yet left its grip and that of other ideologies upon humanities and the social sciences in Western universities, if anything, only stronger.

 Permalink

[Audiobook] Twain, Mark [Samuel Langhorne Clemens]. The Adventures of Tom Sawyer. (Audiobook, Unabridged). Auburn, CA: Audio Partners, [1876] 1995. ISBN 978-1-57270-307-0.
Having read this book as a kid, I never imagined how much more there was to it, both because of the depth of Mark Twain's prose as perceived by an adult, and due to reading his actual words, free of abridgement for a “juvenile edition”. (Note that the author, in the introduction, explicitly states that he is writing for young people and hence expects his words to reach them unexpurgated, and that they will understand them. I've no doubt that in the epoch in which he wrote them they would. Today, I have my doubts, but there's no question that the more people who are exposed to this self-reliant and enterprising view of childhood, the brighter the future will be for the children of the kids who experience the freedom of a childhood like Tom's, as opposed to those I frequently see wearing crash helmets when riding bicycles with training wheels.)

There is nothing I can possibly add to the existing corpus of commentary on one of the greatest of American novels. Well, maybe this: if you've read an abridged version (and if you read it in grade school, you probably did), then give the original a try. There's a lot of material here which can be easily cut by somebody seeking the “essence” with no sense of the art of story-telling. You may remember the proper way to get rid of warts given a dead cat and a graveyard at midnight, but do you remember all of the other ways of getting rid of warts, their respective incantations, and their merits and demerits? Savour the folklore.

This audiobook is produced and performed by voice actor Patrick Fraley, who adopts a different timbre and dialect for each of the characters in the novel. The audio programme is distributed as a single file, running 7 hours and 42 minutes, with original music between the chapters. Audio CD and numerous print editions are available, of which this one looks like a good choice.

 Permalink

Ferrigno, Robert. Sins of the Assassin. New York: Scribner, 2008. ISBN 978-1-4165-3765-6.
Here we have the eagerly awaited sequel to the author's compelling thriller Prayers for the Assassin (March 2006), now billed as the second volume in the eventual Assassin Trilogy. The book in the middle of a trilogy is often the most difficult to write. Readers are already acquainted with the setting, scenario, and many of the main characters, and aren't engaged by the novelty of discovering something entirely new. The plot usually involves ramifying the events of the first installment, while further developing characters and introducing new ones, but the reader knows at the outset that, while there may be subplots which are resolved, the book will end with the true climax of the story reserved for the final volume. These considerations tend to box in an author, and pulling off a volume two which is satisfying even when you know you're probably going to have to wait another two years to see how it all comes out is a demanding task, and one which Robert Ferrigno accomplishes magnificently in this novel.

Set three years after Prayers, the former United States remains divided into a coast-to-coast Islamic Republic, with the Christian fundamentalist Bible Belt in Texas and the old South, Mormon Territories and the Nevada Free State in the West, and the independent Nuevo Florida in the southeast, with low intensity warfare and intrigue at the borders. Both northern and southern frontiers are under pressure from green technology secular Canada and the expansionist Aztlán Empire, which is chipping away at the former U.S. southwest.

Something is up in the Bible Belt, and retired Fedayeen shadow warrior Rakkim Epps returns to his old haunts in the Belt to find out what's going on and prevent a potentially destabilising discovery from shifting the balance of power on the continent. He is accompanied by one of the most unlikely secret agents ever, whose story of self-discovery and growth is a delightful theme throughout. This may be a dystopian future, but it is populated by genuine heroes and villains, all of whom are believable human beings whose character and lives have made them who they are. There are foul and despicable characters to be sure, but also those you're inclined to initially dismiss as evil but discover through their honour and courage to be good people making the best of bad circumstances.

This novel is substantially more “science fiction-y” than Prayers—a number of technological prodigies figure in the tale, some of which strike this reader as implausible for a world less than forty years from the present, absent a technological singularity (which has not happened in this timeline), and especially with the former United States and Europe having turned into technological backwaters. I am not, however, going to engage in my usual quibbling: most of the items in question are central to the plot and mysteries the reader discovers as the story unfolds, and simply to cite them would be major spoilers. Even if I put them inside a spoiler warning, you'd be tempted to read them anyway, which would detract from your enjoyment of the book, which I don't want to do, given how much I enjoyed it. I will say that one particular character has what may be potentially the most itchy bioenhancement in all of modern fiction, and perhaps that contributes to his extravagantly foul disposition. In addition to the science fictional aspects, the supernatural appears to enter the story on several occasions—or maybe not—we'll have to wait until the next book to know for sure.

One thing you don't want to do is to read this book before first reading Prayers for the Assassin. There is sufficient background information mentioned in passing for the story to be comprehensible and enjoyable stand-alone, but if you don't understand the character and history of Redbeard, the dynamics of the various power centres in the Islamic Republic, or the fragile social equilibrium among the various communities within it, you'll miss a great deal of the richness of this future history. Fortunately, a mass market paperback edition of the first volume is now available.

You can read the first chapter of this book online at the author's Web site.

 Permalink

D'Souza, Dinesh. What's So Great About Christianity. Washington: Regnery Publishing, 2007. ISBN 978-1-59698-517-9.
I would almost certainly never have picked up a book with this title had I not happened to listen to a podcast interview with the author last October. In it, he says that his goal in writing the book was to engage the contemporary intellectually militant atheists such as Richard Dawkins, Sam Harris, Christopher Hitchens, Daniel Dennett, and Victor Stenger on their own turf: mounting a rational argument in favour of faith in general and Christianity in particular; demonstrating that there are no serious incompatibilities between the Bible and scientific theories such as evolution and the big bang; debunking overblown accounts of wrongs perpetrated in the name of religion such as the crusades, the inquisition, the persecution of Galileo, witch hunts, and religious wars in Europe; and arguing that the great mass murders of the twentieth century can be laid at the feet not of religion, but of atheist regimes bent on building heaven on Earth. All this is a pretty tall order, especially for a book of just 304 pages of main text, but the author does a remarkably effective job of it. While I doubt the arguments presented here will sway those who have made a belligerent atheism central to their self-esteem, many readers may be surprised to discover that the arguments of the atheists are nowhere near as one-sided as their propaganda would suggest.

Another main theme of the book is identifying how many of the central components of Western civilisation (limited government, religious toleration, individualism, separation of church and state, respect for individual human rights, and the scientific method) have their roots in the Judeo-Christian tradition, and how atheism and materialism can corrode these pillars supporting the culture which (rightly) allows the atheists the freedom to attack it. The author is neither a fundamentalist nor one who believes the Bible is true in a literal sense: he argues that when the scriptures are read, as most Christian scholars have understood them over two millennia, as using a variety of literary techniques to convey their message, there is no conflict between biblical accounts and modern science and that, in some cases, the Bible seems to have anticipated recent discoveries. D'Souza believes that Darwinian evolution is not in conflict with the Bible and, while respectful of supporters of intelligent design, sees no need to invoke it. He zeroes in precisely on the key issue: that evolution cannot explain the origin of life, since evolution can only operate on already-living organisms upon which variation and selection can occur.

A good deal of the book can be read as a defence of religion in general against the arguments of atheism. Only in the last two chapters does he specifically make the case for the exceptionalism of Christianity. While polemicists such as Dawkins and Hitchens come across as angry, this book is written in a calm, self-confident tone and with such a limpid clarity that it is a joy to read. As one who has spent a good deal of time pondering the possibility that we may be living in a simulation created by an intelligent designer (“it isn't a universe; it's a science fair project”), this book surprised me as being 100% compatible with that view and provided several additional insights to expand my work in progress on the topic.

 Permalink

Abadzis, Nick. Laika. New York: First Second, 2007. ISBN 978-1-59643-101-0.
The first living creature to orbit the Earth (apart, perhaps, from bacterial stowaways aboard Sputnik 1) was a tough, even-tempered, former stray dog from the streets of Moscow, named Kudryavka (Little Curly), who was renamed Laika (Barker) shortly before being sent on a one-way mission largely motivated by propaganda concerns and with only the most rudimentary biomedical monitoring in a slapdash capsule thrown together in less than a month.

This comic book (or graphic novel, if you prefer) tells the story through parallel narratives of the lives of Sergei Korolev, a former inmate of Stalin's gulag in Siberia who rose to be Chief Designer of the Soviet space program, and Kudryavka, a female part-Samoyed stray who was captured and consigned to the animal research section of the Soviet Institute of Aviation Medicine (IMBP). While obviously part of the story is fictionalised, for example Kudryavka's origin and life on the street, those parts of the narrative which are recorded in history are presented with scrupulous attention to detail. The author goes so far as to show the Moon in the correct phase in events whose dates are known precisely (although he does admit frankly to playing fast and loose with the time of moonrise and moonset for dramatic effect). This is a story of survival, destiny, ambition, love, trust, betrayal, empathy, cruelty, and politics, for which the graphic format works superbly—often telling the story entirely without words. For decades Soviet propaganda spread deception and confusion about Laika's fate. It was only in 2002 that Russian sources became available which revealed what actually happened, and the account here presents the contemporary consensus based upon that information.

 Permalink

April 2008

Siddiqi, Asif A. Challenge to Apollo. Washington: National Aeronautics and Space Administration, 2000. NASA SP-2000-4408.
Prior to the collapse of the Soviet Union, accounts of the Soviet space program were a mix of legend, propaganda, and speculation by Western analysts, all based upon a scanty collection of documented facts. The 1990s saw a wealth of previously secret information come to light (although many primary sources remain unavailable), making it possible for the first time to write an authoritative scholarly history of Soviet space exploration from the end of World War II through the mid-1970s; this book, published by the NASA History Division in 2000, is that history.

Whew! Many readers are likely to find that reading this massive (1011 pages, 1.9 kg) book cover to cover tells them far, far more about the Soviet space effort than they ever wanted to know. I bought the book from the U.S. Government Printing Office when it was published in 2000 and have been using it as a reference since then, but decided finally, as the bloggers say, to “read the whole thing”. It was a chore (it took me almost three weeks to chew through it), but ultimately rewarding and enlightening.

Back in the 1960s, when observers in the West pointed out the failure of the communist system to feed its own people or provide them with the most basic necessities, apologists would point to the successes of the Soviet space program as evidence that central planning and national mobilisation in a military-like fashion could accomplish great tasks more efficiently than the chaotic, consumer-driven market economies of the West. Indeed, with the first satellite, the first man in space, long duration piloted flights, two simultaneous piloted missions, the first spacecraft with a crew of more than one, and the first spacewalk, the Soviets racked up an impressive list of firsts. The achievements were real, but based upon what we now know from documents released in the post-Soviet era which form the foundation of this history, the interpretation of these events in the West was a stunning propaganda success by the Soviet Union backed by remarkably little substance.

Indeed, in the 1945–1974 time period covered here, one might almost say that the Soviet Union never actually had a space program at all, in the sense one uses those words to describe the contemporary activities of NASA. The early Soviet space achievements were all spin-offs of ballistic missile technology driven by Army artillery officers turned rocket men. Space projects, and especially piloted flight, interested the military very little, and the space spectaculars were sold to senior political figures for their propaganda value, especially after the unanticipated impact of Sputnik on world opinion. But there was never a roadmap for the progressive development of space capability, such as NASA had for projects Mercury, Gemini, and Apollo. Instead, in most cases, it was only after a public success that designers and politicians would begin to think of what they could do next to top it.

Not only did this supposedly centrally planned economy not have a plan, the execution of its space projects was anything but centralised. Throughout the 1960s, there were constant battles among independent design bureaux run by autocratic chief designers, each angling for political support and funding at the expense of the others. The absurdity of this is perhaps best illustrated by the fact that on November 17th, 1967, six days after the first flight of NASA's Saturn V, the Central Committee issued a decree giving the go-ahead to the Chelomey design bureau to develop the UR-700 booster and LK-700 lunar spacecraft to land two cosmonauts on the Moon, notwithstanding having already spent millions of rubles on Korolev's already-underway N1-L3 project, which had not yet performed its first test flight. Thus, while NASA was checking off items in its Apollo schedule, developed years before, the Soviet Union, spending less than half of NASA's budget, found itself committed to two completely independent and incompatible lunar landing programs, with a piloted circumlunar project based on still different hardware simultaneously under development (p. 645).

The catastrophes which ensued from this chaotic situation are well documented, as well as how effective the Soviets were in concealing all of this from analysts in the West. Numerous “out there” proposed projects are described, including Chelomey's monster UR-700M booster (45 million pounds of liftoff thrust, compared to 7.5 million for the Saturn V), which would send a crew of two cosmonauts on a two-year flyby of Mars in an MK-700 spacecraft with a single launch. The little-known Soviet spaceplane projects are documented in detail.

This book is written in the same style as NASA's own institutional histories, which is to say that much of it is heroically boring and dry as the lunar regolith. Unless you're really into reorganisations, priority shifts, power grabs, and other manifestations of gigantic bureaucracies doing what they do best, you may find this tedious. This is not the fault of the author, but of the material he so assiduously presents. Regrettably, the text is set in a light sans-serif font in which (at least to my eyes) the letter “l” and the digit “1” are indistinguishable, and differ from the letter “I” in a detail I can spot only with a magnifier. This, in a book bristling with near-meaningless Soviet institutional names such as the Ministry of General Machine Building and impenetrable acronyms such as NII-1, TsKBEM (not to be confused with TsKBM) and 11F615, only compounds the reader's confusion. There are a few typographical errors, but none are serious.

This NASA publication was never assigned an ISBN, so looking it up at online booksellers will generally turn up only used copies. You can order new copies from the NASA Information Center at US$79 each. As with all NASA publications, the work is in the public domain, and a scanned online edition (PDF) is available. This is a 64 megabyte download, so unless you have a fast Internet connection, you'll need to be patient. Be sure to download it to a local file as opposed to viewing it in your browser, because otherwise you'll have to download the whole thing each time you open the document.

 Permalink

Ministry of Information. What Britain Has Done. London: Atlantic Books, [1945] 2007. ISBN 978-1-84354-680-1.
Here is government propaganda produced by the organisation upon which George Orwell (who worked there in World War II) based the Ministry of Truth in his novel Nineteen Eighty-Four. This slim volume (126 pages in this edition) was originally published in May of 1945, after the surrender of Germany, but with the war against Japan still underway. (Although there are references to Germany's capitulation, some chapters appear to have been written before the end of the war in Europe.)

The book is addressed to residents of the United Kingdom, and seeks to show how important their contributions were to the overall war effort, seemingly to dispel the notion that the U.S. and Soviet Union bore the brunt of the effort. To that end, it is as craftily constructed a piece of propaganda as you're likely to encounter. While subtitled “1939–1945: A Selection of Outstanding Facts and Figures”, it might equally as well be described as “Total War: Artfully Chosen Factoids”. Here is an extract from pp. 34–35 to give you a flavour.

Between September 1939 and February 1943, HM Destroyer Forester steamed 200,000 miles, a distance equal to nine times round the world.

In a single year the corvette Jonquil steamed a distance equivalent to more than three times round the world.

In one year and four months HM Destroyer Wolfhound steamed over 50,000 miles and convoyed 3,000 ships.

The message of British triumphalism is conveyed in part by omission: you will find only the barest hints in this narrative of the disasters of Britain's early efforts in the war, the cataclysmic conflict on the Eastern front, or the Pacific war waged by the United States against Japan. (On the other hand, the title is “What Britain Has Done”, so one might argue that tasks which Britain either didn't do or failed to accomplish do not belong here.) But this is not history, but propaganda, and as the latter it is a masterpiece. (Churchill's history, The Second World War, although placing Britain at the centre of the story, treats all of these topics candidly, except those relating to matters still secret, such as the breaking of German codes during the war.)

This reprint edition includes a new introduction which puts the document into historical perspective and seven maps which illustrate operations in various theatres of the war.

 Permalink

May 2008

Richelson, Jeffrey T. Spying on the Bomb. New York: W. W. Norton, [2006] 2007. ISBN 978-0-393-32982-7.
I had some trepidation about picking up this book. Having read the author's The Wizards of Langley (May 2002), expecting an account of “Q Branch” spy gizmology and encountering instead a tedious (albeit well-written and thorough) bureaucratic history of the CIA's Directorate of Science and Technology, I was afraid this volume might also reduce one of the most critical missions of U.S. intelligence in the post World War II era to another account of interagency squabbling and budget battles. Not to worry—although such matters are discussed where appropriate (especially when they led to intelligence failures), the book not only does not disappoint, it goes well beyond the mission of its subtitle, “American Nuclear Intelligence from Nazi Germany to Iran and North Korea” in delivering not just an account of intelligence activity but also a comprehensive history of the nuclear programs of each of the countries upon which the U.S. has focused its intelligence efforts: Nazi Germany, the Soviet Union, China, France, Israel, South Africa, India, Pakistan, Taiwan, Libya, Iraq, North Korea, and Iran.

The reader gets an excellent sense of just how difficult it is, even in an age of high-resolution optical and radar satellite imagery, communications intelligence, surveillance of commercial and financial transactions, and active efforts to recruit human intelligence sources, to determine the intentions of states intent (or maybe not) on developing nuclear weapons. The ease with which rogue regimes seem to be able to evade IAEA safeguards and inspectors, and manipulate diplomats loath to provoke a confrontation, is illustrated on numerous occasions. An entire chapter is devoted to the enigmatic double flash incident of September 22nd, 1979 whose interpretation remains in dispute today. This 2007 paperback edition includes a new epilogue with information on the October 2006 North Korean “fissile or fizzle” nuclear test, and recent twists and turns in the feckless international effort to restrain Iran's nuclear program.

 Permalink

Winograd, Morley and Michael D. Hais. Millennial Makeover. New Brunswick, NJ: Rutgers University Press, 2008. ISBN 978-0-8135-4301-7.
This is a disturbing book on a number of different levels. People, especially residents of the United States or those subject to its jurisdiction, who cherish individual liberty and economic freedom should obtain a copy of this work (ideally, by buying a used copy to avoid putting money in the authors' pockets), put a clothespin on their noses, and read the whole thing (it only takes a day or so), being warned in advance that it may induce feelings of nausea and make them want to take three or four showers when they're done.

The premise of the book is taken from Strauss and Howe's Generations, which argues that American history is characterised by a repeating pattern of four kinds of generations, alternating between “idealistic” and “civic” periods on a roughly forty year cycle (two generations in each period). These periods have nothing to do with the notions of “right” and “left”—American history provides examples of periods of both types identified with each political tendency.

The authors argue that the United States are approaching the end of an idealistic period with a rightward tendency which began in 1968 with the election of Richard Nixon, which supplanted the civic leftward period which began with the New Deal and ended in the excesses of the 1960s. They argue that the transition between idealistic and civic periods is signalled by a “realigning election”, in which the coalitions supporting political parties are remade, defining a new alignment and majority party which will dominate government for the next four decades or so.

These realignment elections usually mark the entrance of a new generation into the political arena (initially as voters and activists, only later as political figures), and the nature of the coming era can be limned, the authors argue, by examining the formative experiences of the rising generation and the beliefs they take into adulthood. Believing that a grand realignment is imminent, if not already underway, and that its nature will be determined by what they call the “Millennial Generation” (the cohort born from 1982 through 2003: a group larger in numbers than the Baby Boom generation), the authors examine the characteristics and beliefs of this generation, the eldest members of which are now entering the electorate, to divine the nature of the post-realignment political landscape. If they are correct in their conclusions, it is a prospect to induce fear, if not despair, in lovers of liberty. Here are some quotes.

The inevitable loss in privacy and freedom that has been a constant characteristic of the nation's reaction to any crisis that threatens America's future will more easily be accepted by a generation that willingly opts to share personal information with advertisers just for the sake of earning a few “freebies.” After 9/11 and the massacres at Columbine and Virginia Tech, Millennials are not likely to object to increased surveillance and other intrusions into their private lives if it means increased levels of personal safety. The shape of America's political landscape after a civic realignment is thus more likely to favor policies that involve collective action and individual accountability than the libertarian approaches so much favored by Gen-Xers. (p. 200)
Note that the authors applaud these developments. Digital Imprimatur, here we come!
As the newest civic realignment evolves, the center of America's public policy will continue to shift away from an emphasis on individual rights and public morality toward a search for solutions that benefit the entire community in as equitable and orderly way as possible. Majorities will coalesce around ideas that involve the entire group in the solution and downplay the right of individuals to opt out of the process. (p. 250)
Millennials favor environmental protection even at the cost of economic growth by a somewhat wider margin than any other generation (43% for Millennials vs. 40% for Gen-Xers and 38% for Baby Boomers), hardly surprising, given the emphasis this issue received in their favorite childhood television programs such as “Barney” and “Sesame Street” (Frank N. Magid Associates, May 2007). (p. 263)
Deep thinkers, those millennials! (Note that these “somewhat wider” margins are within the statistical sampling error of the cited survey [p. xiv].)
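
The “within the statistical sampling error” point is easy to check with a back-of-the-envelope calculation: the approximate 95% margin of error on the difference between two independent sample proportions is 1.96·√(p₁(1−p₁)/n₁ + p₂(1−p₂)/n₂). A minimal sketch in Python—note that the sample sizes here are my assumption for illustration, since the survey's actual n is not quoted:

```python
import math

def moe_diff(p1, p2, n1, n2, z=1.96):
    """Approximate 95% margin of error for the difference of two
    independent sample proportions (normal approximation)."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Millennials 43% vs. Gen-Xers 40%; n = 1000 per group is an assumed,
# typical survey size -- the book does not state the actual samples.
margin = moe_diff(0.43, 0.40, 1000, 1000)
print(round(margin, 3))  # prints 0.043
```

With assumed samples of 1,000 per group, the margin comes to roughly ±4.3 percentage points, wider than the 3-point Millennial/Gen-X gap, consistent with the caveat above.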

The whole scheme of alternating idealist and civic epochs is presented with a historicist inevitability worthy of Hegel or Marx. While one can argue that this kind of cycle is like the oscillation between crunchy and soggy, it seems to me that the authors must be exceptionally stupid, oblivious to facts before their faces, or guilty of a breathtaking degree of intellectual dishonesty to ignore the influence of the relentless indoctrination of this generation with collectivist dogma in government schools and the legacy entertainment and news media—and I do not believe the authors are either idiots or imperceptive. What they are, however, are long-term activists (since the 1970s) in the Democratic party, who welcome the emergence of a “civic” generation which they view as the raw material for advancing the agenda which FDR launched with the aid of the previous large civic generation in the 1930s.

Think about it. A generation which has been inculcated with the kind of beliefs illustrated by the quotations above, and which is largely ignorant of history (and much of the history they've been taught is bogus, agenda-driven propaganda), whose communications are mostly “peer-to-peer”—with other identically-indoctrinated members of the same generation, is the ideal putty in the hands of a charismatic leader bent on “unifying” a nation by using the coercive power of the state to enforce the “one best way”.

The authors make an attempt to present the Millennials as a pool of potential voters in search of a political philosophy and a party embodying it which, once chosen, they will likely continue to identify with for the rest of their lives (party allegiance, they claim, is much stronger in civic than in idealist eras). But it's clear that the book is, in fact, a pitch to the Democratic party to recruit these people: Republican politicians and conservative causes are treated with thinly veiled contempt.

This is entirely a book about political strategy aimed at electoral success. There is no discussion whatsoever of the specific policies upon which campaigns will be based, how they are to be implemented, or what their consequences will be for the nation. The authors almost seem to welcome catastrophes such as a “major terrorist attack … major environmental disaster … chronic, long-lasting war … hyperinflation … attack on the U.S. with nuclear weapons … major health catastrophe … major economic collapse … world war … and/or a long struggle like the Cold War” as being “events of significant magnitude to trigger a civic realignment” (p. 201).

I've written before about my decision to get out of the United States in the early 1990s, which decision I have never regretted. That move was based largely upon economic fundamentals, which I believed, and continue to believe, are not sustainable and will end badly. Over the last decade, I have been increasingly unsettled by my interactions with members of the tail-end of Generation X and the next generation, whatever you call it. If the picture presented in this book is correct (and I have no way to know whether it is), and their impact upon the U.S. political scene is anything like that envisioned by the authors, anybody still in the U.S. who values their liberty and autonomy has an even more urgent reason to get out, and quickly.

 Permalink

Brooks, Max. World War Z. New York: Three Rivers Press, 2006. ISBN 978-0-307-34661-2.
Few would have believed in the early years of the twenty-first century, as people busied themselves with their various concerns and little affairs, while their “leaders” occupied themselves with “crises” such as shortages of petroleum, mountains of bad debt, and ManBearPig, that in rural China a virus had mutated, replicating and spreading among the human population like creatures that swarm and multiply in a drop of water, slowly at first, with early outbreaks covered up to avoid bad publicity before the Chicom Olympics, soon thereafter to explode into a global contagion that would remake the world, rewrite human history, and sweep away all of the prewar concerns of mankind as trivialities while eliminating forever the infinite complacency humans had of their empire over matter and dominion over nature.

This book is an oral history of the Zombie War, told in the words of those who survived, fought, and ultimately won it. Written just ten years after victory was declared in China, with hotspots around the globe remaining to be cleared, it is a story of how cultures across the world came to terms with a genuine existential threat, and how people and societies rise to a challenge inconceivable to a prewar mentality. Reading much like Studs Terkel's The Good War, the individual voices, including civilians, soldiers, researchers, and military and political leaders, trace how unthinkable circumstances require unthinkable responses, and how ordinary people react under extraordinary stress. The emergence of the Holy Russian Empire, the evacuation and eventual reconquest of Japan, the rise of Cuba to a global financial power, the climactic end of the Second Chinese Revolution, and the enigma of the fate of North Korea are told in the words of eyewitnesses and participants.

Now, folks, this is a zombie book, so if you're someone inclined to ask, “How, precisely, does this work?”, or to question the biological feasibility of the dead surviving in the depths of the ocean or freezing in the arctic winter and reanimating come spring, you're going to have trouble with this story. Suspending your disbelief and accepting the basic premise is the price of admission, but if you're willing to pay it, this is an enjoyable, unsettling, and ultimately rewarding read—even inspiring in its own strange way. It is a narrative of an apocalyptic epoch which works, and is about ten times better than Stephen King's The Stand. The author is a recognised global authority on the zombie peril.

(Yes, the first paragraph of these remarks is paraphrased from this; I thought it appropriate.)

 Permalink

[Audiobook] Thucydides. The Peloponnesian War. Vol. 1. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [c. 400 B.C.] 2005.
Not only is The Peloponnesian War the first true work of history to have come down to us from antiquity, in writing it Thucydides essentially invented the historical narrative as it is presently understood. Although he served as a general (στρατηγός) on the Athenian side in the war, he adopts a scrupulously objective viewpoint and presents the motivations, arguments, and actions of all sides in the conflict in an even-handed manner. Perhaps his having been exiled from Athens for arriving too late to save Amphipolis from falling to the Spartans contributed both to his dispassionate recounting of the war and to the leisure in which to write it. Thucydides himself wrote:
It was also my fate to be an exile from my country for twenty years after my command at Amphipolis; and being present with both parties, and more especially with the Peloponnesians by reason of my exile, I had leisure to observe affairs somewhat particularly.

Unlike earlier war narratives in epic poetry, Thucydides based his account purely upon the actions of the human participants involved. While he includes the prophecies of oracles and auguries, he considers them important only to the extent they influenced decisions made by those who gave them credence. Divine intervention plays no part whatsoever in his description of events, and in his account of the Athenian Plague he even mocks how prophecies are interpreted to fit subsequent events. In addition to military and political affairs, Thucydides was a keen observer of natural phenomena: his account of the Athenian Plague reads like that of a modern epidemiologist, including his identifying overcrowding and poor sanitation as contributing factors and the observation that surviving the disease (as he did himself) conferred immunity. He further observes that solar eclipses appear to occur only at the new Moon, and may have been the first to identify earthquakes as the cause of tsunamis.

In the text, Thucydides includes lengthy speeches made by figures on all sides of the conflict, both in political assemblies and those of generals exhorting their troops to battle. He admits in the introduction that in many cases no contemporary account of these speeches exists and that he simply made up what he believed the speaker would likely have said given the circumstances. While this is not a technique modern historians would employ, Greeks, from their theatre and poetry, were accustomed to narratives presented in this form and Thucydides, inventing the concept of history as he wrote it, saw nothing wrong with inventing words in the absence of eyewitness accounts. What is striking is how modern everything seems. There are descriptions of the strategy of a sea power (Athens) confronted by a land power (Sparta), the dangers of alliances which invite weaker allies to take risks that involve their guarantors in unwanted and costly conflicts, the difficulties in mounting an amphibious assault on a defended shore, the challenge a democratic society has in remaining focused on a long-term conflict with an authoritarian opponent, and the utility of economic warfare (or, as Thucydides puts it [over and over again], “ravaging the countryside”) in sapping the adversary's capacity and will to resist. Readers with stereotyped views of Athens and Sparta may be surprised that many at the time of the war viewed Sparta as a liberator of independent cities from the yoke of the Athenian empire, and that Thucydides, an Athenian, often seems sympathetic to this view. Many of the speeches could have been given by present-day politicians and generals, except they would be unlikely to be as eloquent or argue their case so cogently. 

One understands why Thucydides's history was not only read over the centuries (at least prior to the present Dark Time, when the priceless patrimony of Western culture has been jettisoned and largely forgotten) for its literary excellence, but is still studied in military academies for its timeless insights into the art of war and the dynamics of societies at war. While modern readers may find the actual campaigns sporadic and the battles on a small scale by present-day standards, from the Hellenic perspective, which saw their culture of city-states as “civilisation” surrounded by a sea of barbarians, this was a world war, and Thucydides records it as such a momentous event.

This is Volume 1 of the audiobook, which includes the first four of the eight books into which Thucydides's text is conventionally divided, covering the prior history of Greece and the first nine years of the war, through the Thracian campaigns of the Spartan Brasidas in 423 B.C. (Here is Volume 2, with the balance.) The audiobook is distributed in two parts, totalling 14 hours and 50 minutes, with more than an hour of introductory essays, including a biography of Thucydides and an overview of the work. The Benjamin Jowett translation is used, read by the versatile Charlton Griffin. A print edition of this translation is available.

 Permalink

Thornton, Bruce. Decline and Fall. New York: Encounter Books, 2007. ISBN 978-1-59403-206-6.
This slim volume (135 pages of main text, 161 pages in its entirety—the book is erroneously listed on Amazon.com as 300 pages in length) is an epitaph for the postwar European experiment. The author considers Europe, as defined by the post-Christian, post-national “EUtopia” envisioned by proponents of the European Union, to have already irretrievably failed, facing collapse in the coming decades due to economic sclerosis from bloated and intrusive statist policies, unsustainable welfare state expenditures, a demographic death spiral already beyond recovery, and transformation by a burgeoning Islamic immigrant population which Europeans lack the will to confront and compel to assimilate as a condition of residence. The book is concise, well-argued, and persuasive, but I'm not sure why it is ultimately necessary.

The same issues are discussed at greater length, more deeply, and with abundant documentation in recent books such as Mark Steyn's America Alone (November 2006), Claire Berlinski's Menace in Europe (July 2006), and Bruce Bawer's While Europe Slept (June 2007), all of which are cited as sources in this work. If you're looking for a very brief introduction and overview of Europe's problems, this book provides one, but readers interested in details of the present situation and prospects for the future will be better served by one of the books mentioned above.

A video interview with the author is available.

 Permalink

Niven, Larry, Jerry Pournelle, and Michael Flynn. Fallen Angels. New York: Baen Books, 1991. ISBN 978-0-7434-7181-7.
I do not have the slightest idea what the authors were up to in writing this novel. All three are award-winning writers of “hard” science fiction, and the first two are the most celebrated team working in that genre of all time. I thought I'd read all of the Niven and Pournelle (and assorted others) collaborations, but I only discovered this one when the 2004 reprint edition was mentioned on Jerry Pournelle's Web log.

The premise is interesting, indeed delicious: neo-Luddite environmentalists have so crippled the U.S. economy (and presumably that of other industrialised nations, although they do not figure in the novel) that an incipient global cooling trend due to solar inactivity has tipped over into an ice age. Technologists are actively persecuted, and the U.S. and Soviet space stations and their crews have been marooned in orbit, left to fend for themselves without support from Earth. (The story is set in an unspecified future era in which the orbital habitats accommodate a substantially larger population than space stations envisioned when the novel was published, and have access to lunar resources.)

The earthbound technophobes, huddling in the cold and dark as the glaciers advance, and the orbiting technophiles, watching their meagre resources dwindle despite their cleverness, are forced to confront one another when a “scoop ship” harvesting nitrogen from Earth's atmosphere is shot down by a missile and makes a crash landing on the ice cap descending on the upper midwest of the United States. The two “angels”—spacemen—are fugitives sought by the Green enforcers, and figures of legend to that small band of Earthlings who preserve the dream of a human destiny in the stars.

And who would they be? Science fiction fans, of course! Sorry, but you just lost me, right about when I almost lost my lunch. By “fans”, we aren't talking about people like me, and probably many readers of this chronicle, whose sense of wonder was kindled in childhood by science fiction and who, even as adults, find it almost unique among contemporary literary genera in being centred on ideas, and exploring “what if” scenarios that other authors do not even imagine. No, here we're talking about the subculture of “fandom”, a group of people, defying parody by transcending the most outrageous attempts, who invest much of their lives into elaborating their own private vocabulary, writing instantly forgotten fan fiction and fanzines, snarking and sniping at one another over incomprehensible disputes, and organising conventions whose names seem ever so clever only to other fans, where they gather to reinforce their behaviour. The premise here is that when the mainstream culture goes South (literally, as the glaciers descend from the North), “who's gonna save us?”—the fans!

I like to think that more decades of reading science fiction than I'd like to admit to have exercised my ability to suspend disbelief to such a degree that I'm willing to accept just about any self-consistent premise as the price of admission to an entertaining yarn. Heck, last week I recommended a zombie book! But for the work of three renowned hard science fiction writers, there are a lot of serious factual flubs here. (Page numbers are from the mass market paperback edition cited above.)

  • The Titan II (not “Titan Two”) uses Aerozine 50 and nitrogen tetroxide as propellants, not RP-1 (kerosene) and LOX. One could not fuel a Titan II with RP-1 and LOX, not only because the sizes of the propellant tanks would be incorrect for the mixture ratio of the propellants, but because the Titan II lacks the ignition system for non-hypergolic propellants. (pp. 144–145)
  • “Sheppard reach in the first Mercury-Redstone?” It's “Shepard”, and it was the third Mercury-Redstone flight. (p. 151)
  • “Schirra's Aurora 7”. Please: Aurora 7 was Carpenter's capsule (which is in the Chicago museum); Schirra's was Sigma 7. (p. 248)
  • “Dick Rhutan”. It's “Rutan”. (p. 266)
  • “Just hydrogen. But you can compress it, and it will liquify. It is not that difficult.” Well, actually, it is. The critical point for hydrogen is 32.97° K, so regardless of how much you compress it, you still need to refrigerate it to a temperature less than half that of liquid nitrogen to obtain the liquid phase. For liquid hydrogen at one atmosphere, you need to chill it to 20.28° K. You don't just need a compressor, you need a powerful cryostat to liquefy hydrogen.
    “…letting the O² boil off.” Oxygen squared? Please, it's O₂. (p. 290)
  • “…the jets were brighter than the dawn…”. If this had been in verse, I'd have let it stand as metaphorical, but it's descriptive prose and dead wrong. The Phoenix is fueled with liquid hydrogen and oxygen, which burn with an almost invisible flame. There's no way the rocket exhaust would have been brighter than the dawn.

Now it seems to me there are three potential explanations for this story's numerous lapses from the grounded-in-reality attention to detail one expects in hard science fiction.

  1. The authors deliberately wished to mock science fiction fans who, while able to reel off the entire credits of 1950s B movie creature features from memory, pay little attention to the actual history and science of the real world, and hence they get all kinds of details wrong while spouting off authoritatively.
  2. The story is set in an alternative universe, just a few forks from the one we inhabit. Consequently, the general outline is the same, but the little details differ. Like, for example, science fiction fans being able to work together to accomplish something productive.
  3. This manuscript, of which the authors “suspect that few books have ever been delivered this close to a previously scheduled publication date” (p. 451), was never subjected to the intensive fact-checking scrutiny which the better kind of obsessive-compulsive fan will contribute out of a sense that even fiction must be right where it intersects reality.

I'm not gonna fingo any hypotheses here. If you have no interest whatsoever in the world of science fiction fandom, you'll probably, like me, consider this the “Worst Niven and Pournelle—Ever”. On the other hand, if you can reel off every Worldcon from the first Boskone to the present and pen Feghoots for the local 'zine on days you're not rehearsing with the filk band, you may have a different estimation of this novel.

 Permalink

Paul, Ron. The Revolution. New York: Grand Central, 2008. ISBN 978-0-446-53751-3.
Ron Paul's campaign for the 2008 Republican presidential nomination has probably done more to expose voters in the United States to the message of limited, constitutional governance, individual liberty, non-interventionist foreign policy, and sound money than any political initiative in decades. Although largely ignored by the collectivist legacy media, the stunning fund-raising success of the campaign, even if not translated into corresponding success at the polls, is evidence that this essentially libertarian message (indeed, Dr. Paul ran for president in 1988 as the standard bearer of the Libertarian Party) resonates with a substantial part of the American electorate, even among the “millennial generation”, which conventional wisdom believes thoroughly indoctrinated with collectivist dogma and poised to vote away the last vestiges of individual freedom in the United States. In the concluding chapter, the candidate observes:
The fact is, liberty is not given a fair chance in our society, neither in the media, nor in politics, nor (especially) in education. I have spoken to many young people during my career, some of whom had never heard my ideas before. But as soon as I explained the philosophy of liberty and told them a little American history in light of that philosophy, their eyes lit up. Here was something they'd never heard before, but something that was compelling and moving, and which appealed to their sense of idealism. Liberty had simply never been presented to them as a choice. (p. 158)
This slender (173 page) book presents that choice as persuasively and elegantly as anything I have read. Further, the case for liberty is anchored in the tradition of American history and the classic conservatism which characterised the Republican party for the first half of the 20th century. The author repeatedly demonstrates just how recent much of the explosive growth in government has been, and observes that people seemed to get along just fine, and the economy prospered, without the crushing burden of intrusive regulation and taxation. One of the most striking examples is the discussion of abolishing the personal income tax. “Impossible”, as other politicians would immediately shout? Well, the personal income tax accounts for about 40% of federal revenue, so eliminating it would require reducing the federal budget by the same 40%. How far back would you have to go in history to discover an epoch where the federal budget was 40% below that of 2007? Why, you'd have to go all the way back to 1997! (p. 80)

The big government politicians who dominate both major political parties in the United States dismiss the common-sense policies advocated by Ron Paul in this book by saying “you can't turn back the clock”. But as Chesterton observed, why not? You can turn back a clock, and you can replace disastrous policies which are bankrupting a society and destroying personal liberty with time-tested policies which have delivered prosperity and freedom for centuries wherever adopted. Paul argues that the debt-funded imperial nanny state is doomed in any case by simple economic considerations. The only question is whether it is deliberately and systematically dismantled by the kinds of incremental steps he advocates here, or eventually collapses Soviet-style due to bankruptcy and/or hyperinflation. Should the U.S., as many expect, lurch dramatically in the collectivist direction in the coming years, it will only accelerate the inevitable debacle.

Anybody who wishes to discover alternatives to the present course and that limited constitutional government is not a relic of the past but the only viable alternative for a free people to live in peace and prosperity will find this book an excellent introduction to the libertarian/constitutionalist perspective. A five page reading list cites both classics of libertarian thought and analyses of historical and contemporary events from a libertarian viewpoint.

 Permalink

Upton, Jim. Lockheed F-104 Starfighter. North Branch, MN: Specialty Press, 2003. ISBN 978-1-58007-069-0.
In October 1951, following a fact-finding trip to Korea where he heard fighter pilots demand a plane with more speed and altitude capability than anything in existence, Kelly Johnson undertook the design of a fighter that would routinely operate at twice the speed of sound and altitudes in excess of 60,000 feet. Note that this was just four years after Chuck Yeager first flew at Mach 1 in the rocket-powered X-1, and two years before the Douglas Skyrocket research plane first achieved Mach 2. Kelly Johnson was nothing if not ambitious. He was also a man to deliver on his promises: in December 1952 he presented the completed design to the Air Force, which in March 1953 awarded a contract to build two experimental prototypes. On March 4, 1954, just a year later, the first XF-104 Starfighter made its first flight, and within another year it had flown at Mach 1.79. (The prototypes used a less powerful engine than the production model, and were consequently limited in speed.) In April 1956 the YF-104 production prototype reached Mach 2, and production models routinely operated at that speed thereafter. (In fact, the F-104 had the thrust to go faster: it was limited to Mach 2 by thermal limits on its aluminium construction and engine inlet temperature.)

The F-104 became one of the most successful international military aircraft programs of all time. A total of 2578 planes were manufactured in seven countries, and served in the air forces of 14 nations. The F-104 remained in service with the Italian Air Force until 2004, half a century after the flight of the first prototype.

Looking at a history like this, you begin to think that the days must have been longer in the 1950s, so compressed were the schedules for unprecedentedly difficult and complex engineering projects. Compare the F-104's development history with that of the current U.S. air superiority fighter, the F-22, for which a Pentagon requirement was issued in 1981, contractor proposals were solicited in 1986, and the winner of the design competition (Lockheed, erstwhile builder of the F-104) selected in 1991. And when did the F-22 enter squadron service with the Air Force? Well, that would be December 2005, twenty-four years after the Air Force launched the program. The comparable time for the F-104 was a little more than six years. Now, granted, the F-22 is a fantastically more complicated and capable design, but also consider that Kelly Johnson's team designed the F-104 with slide rules, mechanical calculators, and drawing boards, while present day aircraft use modeling and simulation tools which would have seemed like science fiction to designers of the fifties.

This prolifically illustrated book, written by a 35 year veteran of flight test engineering at Lockheed with a foreword by a former president of Lockheed-California who was the chief aerodynamicist of the XF-104 program, covers all aspects of this revolutionary airplane, from design concepts and flight testing through weapons systems, evolution of the design over the years, and international manufacturing and deployment, to modifications and research programs. Readers interested in the history and technical details of one of Kelly Johnson's greatest triumphs, and a peek into the hands-on cut and try engineering of the 1950s will find this book a pure delight.

 Permalink

June 2008

Bauerlein, Mark. The Dumbest Generation. New York: Tarcher/Penguin, 2008. ISBN 978-1-58542-639-3.
The generation born roughly between 1980 and 2000, sometimes dubbed “Generation Y” or the “Millennial Generation”, now entering the workforce, the public arena, and exerting an ever-increasing influence in electoral politics, is the first generation in human history to mature in an era of ubiquitous computing, mobile communications, boundless choice in entertainment delivered by cable and satellite, virtual environments in video games, and the global connectivity and instant access to the human patrimony of knowledge afforded by the Internet. In the United States, it is the largest generational cohort ever, outnumbering the Baby Boomers who are now beginning to scroll off the screen. Generation Y is the richest (in terms of disposable income), most ethnically diverse, best educated (measured by years of schooling), and the most comfortable with new technologies and the innovative forms of social interactions they facilitate. Books like Millennials Rising sing the praises of this emerging, plugged-in, globally wired generation, and Millennial Makeover (May 2008) eagerly anticipates the influence they will have on politics and the culture.

To those of us who interact with members of this generation regularly through E-mail, Web logs, comments on Web sites, and personal Web pages, there seems to be a dissonant chord in this symphony of technophilic optimism. To be blunt, the kids are clueless. They may be able to multi-task, juggling mobile phones, SMS text messages, instant Internet messages (E-mail is so Mom and Dad!), social networking sites, Twitter, search engines, peer-to-peer downloads, surfing six hundred cable channels with nothing on while listening to an iPod and playing a video game, but when you scratch beneath the monomolecular layer of frantic media consumption and social interaction with their peers, there's, as we say on the Web, no content—they appear to be almost entirely ignorant of history, culture, the fine arts, civics, politics, science, economics, mathematics, and all of the other difficult yet rewarding aspects of life which distinguish a productive and intellectually engaged adult from a superannuated child. But then one worries that one's just muttering the perennial complaints of curmudgeonly old fogies and that, after all, the kids are all right. There are, indeed, those who argue that Everything Bad Is Good for You: that video games and pop culture are refining the cognitive, decision-making, and moral skills of youth immersed in them to never before attained levels.

But why are they so clueless, then? Well, maybe they aren't really, and Burgess Shale relics like me have simply forgotten how little we knew about the real world at that age. Errr…actually, no—this book, written by a professor of English at Emory University and former director of research and analysis for the National Endowment for the Arts, who experiences first-hand the cognitive capacities and intellectual endowment of those Millennials who arrive in his classroom every year, draws upon a wealth of recent research (the bibliography is 18 pages long) by government agencies, foundations, and market research organisations, all without any apparent agenda to promote, which documents the abysmal levels of knowledge and the ability to apply it among present-day teenagers and young adults in the U.S. If there is good news, it is that the new media technologies have not caused a precipitous collapse in objective measures of outcomes overall (although there are disturbing statistics in some regards, including book reading and attendance at performing arts events). But, on the other hand, the unprecedented explosion in technology and the maturing generation's affinity for it and facility in using it have produced absolutely no objective improvement in their intellectual performance on a wide spectrum of metrics. Further, absorption in these new technologies has further squeezed out time which youth of earlier generations spent in activities which furthered intellectual development such as reading for enjoyment, visiting museums and historical sites, attending and participating in the performing arts, and tinkering in the garage or basement. This was compounded by the dumbing down and evisceration of traditional content in the secondary school curriculum.

The sixties generation's leaders didn't anticipate how their claim of exceptionalism would affect the next generation, and the next, but the sequence was entirely logical. Informed rejection of the past became uninformed rejection of the past, and then complete and unworried ignorance of it. (p. 228)
And it is the latter which is particularly disturbing: as documented extensively, Generation Y knows they're clueless and they're cool with it! In fact, their expectations for success in their careers are entirely discordant with the qualifications they're packing as they venture out to slide down the razor blade of life (pp. 193–198). Or not: on pp. 169–173 we meet the “Twixters”, urban and suburban middle class college graduates between 22 and 30 years old who are still living with their parents and engaging in an essentially adolescent lifestyle: bouncing between service jobs with no career advancement path and settling into no long-term relationship. These sad specimens who refuse to grow up even have their own term of derision: “KIPPERS”: Kids In Parents' Pockets Eroding Retirement Savings.

In evaluating the objective data and arguments presented here, it's important to keep in mind that correlation does not imply causation. One cannot run controlled experiments on broad-based social trends: only try to infer from the evidence available what might be the cause of the objective outcomes one measures. Many of the characteristics of Generation Y described here might be explained in large part simply by the immersion and isolation of young people in the pernicious peer culture described by Robert Epstein in The Case Against Adolescence (July 2007), with digital technologies simply reinforcing a dynamic in effect well before their emergence, and visible to some extent in the Boomer and Generation X cohorts who matured earlier, without being plugged in 24/7. For another insightful view of Generation Y (by another professor at Emory!), see I'm the Teacher, You're the Student (January 2005).

If Millennial Makeover is correct, the culture and politics of the United States are soon to be redefined by the generation now coming of age. This book presents a disturbing picture of what that may entail: a generation with little or no knowledge of history or of the culture of the society they've inherited, and unconcerned with their ignorance, making decisions not in the context of tradition and their intellectual heritage, but of peer popular culture. Living in Europe, it is clear that things have not reached such a dire circumstance here, and in Asia the intergenerational intellectual continuity appears to remain strong. But then, the U.S. was the first adopter of the wired society, and hence may simply be the first to arrive at the scene of the accident. Observing what happens there in the near future may give the rest of the world a chance to change course before their own dumbest generations mature. Paraphrasing Ronald Reagan, the author notes that “Knowledge is never more than one generation away from oblivion.” (p. 186) In an age where a large fraction of all human knowledge is freely accessible to anybody in a fraction of a second, what a tragedy it would be if the “digital natives” ended up, like the pejoratively denigrated “natives” of the colonial era, surrounded by a wealth of culture but ignorant of and uninterested in it.

The final chapter is a delightful and stirring defence of culture wars and culture warriors, which argues that only those grounded in knowledge of their culture and equipped with the intellectual tools to challenge accepted norms and conventional wisdom can (for better or worse) change society. Those who lack the knowledge and reasoning skills to be engaged culture warriors are putty in the hands of marketeers and manipulative politicians, which is perhaps why so many of them are salivating over the impending Millennial majority.

 Permalink

Dewar, James A. To the End of the Solar System. 2nd ed. Burlington, Canada: Apogee Books, [2004] 2007. ISBN 978-1-894959-68-1.
If you're seeking evidence that entrusting technology development programs such as space travel to politicians and taxpayer-funded bureaucrats is a really bad idea, this is the book to read. Shortly after controlled nuclear fission was achieved, scientists involved with the Manhattan Project and the postwar atomic energy program realised that a rocket engine using nuclear fission instead of chemical combustion to heat a working fluid of hydrogen would have performance far beyond anything achievable with chemical rockets and could be the key to opening up the solar system to exploration and eventual human settlement. (The key figure of merit for rocket propulsion is “specific impulse”, expressed in seconds, which [for rockets] is simply an odd way of expressing the exhaust velocity. The best chemical rockets have specific impulses of around 450 seconds, while early estimates for solid core nuclear thermal rockets were between 800 and 900 seconds. Note that this does not mean that nuclear rockets were “twice as good” as chemical: because the rocket equation gives the mass ratio [mass of fuelled rocket versus empty mass] as exponential in the specific impulse, doubling that quantity makes an enormous difference in the missions which can be accomplished and drastically reduces the mass which must be lifted from the Earth to mount them.)
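The exponential leverage of specific impulse can be made concrete with the Tsiolkovsky rocket equation, Δv = Isp·g₀·ln(m₀/mf). Here is a minimal sketch of the arithmetic; the 9 km/s mission delta-v and the 850 second nuclear figure are illustrative assumptions of mine (the midpoint of the 800–900 second range quoted above), not numbers from the book:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2, converts Isp in seconds to exhaust velocity


def mass_ratio(delta_v, isp):
    """Tsiolkovsky rocket equation, solved for the full/empty mass ratio
    m0/mf required to achieve delta_v (m/s) at specific impulse isp (s)."""
    return math.exp(delta_v / (isp * G0))


# A representative 9 km/s mission (roughly the delta-v of launch to low Earth orbit)
dv = 9000.0
chemical = mass_ratio(dv, 450)  # best chemical rockets, ~450 s
nuclear = mass_ratio(dv, 850)   # solid-core nuclear thermal, assumed ~850 s

print(f"chemical: {chemical:.2f}")  # ≈ 7.7 kg of fuelled rocket per kg delivered
print(f"nuclear:  {nuclear:.2f}")   # ≈ 2.9 kg per kg delivered
```

Because the specific impulse sits inside the exponential, not quite doubling it cuts the required mass ratio from about 7.7 to about 2.9, which is why the nuclear rocket promised to transform what missions were affordable rather than merely improve them at the margin.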

Starting in 1955, a project began, initially within the U.S. Air Force and the two main weapons laboratories, Los Alamos and Livermore, to explore near-term nuclear rocket propulsion, initially with the goal of an ICBM able to deliver the massive thermonuclear bombs of the epoch. The science was entirely straightforward: build a nuclear reactor able to operate at a high core temperature, pump liquid hydrogen through it at a large rate, expel the hot gaseous hydrogen through a nozzle, and there's your nuclear rocket. Figure out the temperature of the exhaust and the weight of the entire nuclear engine, and you can work out the precise performance and mission capability of the system. The engineering was another matter entirely. Consider: a modern civil nuclear reactor generates about a gigawatt, and is a massive structure enclosed in a huge containment building with thick radiation shielding. It operates at a temperature of around 300° C, heating pressurised water. The nuclear rocket engine, by comparison, might generate up to five gigawatts of thermal power, with a core operating around 2000° C (compared to the 1132° C melting point of its uranium fuel), in a volume comparable to a 55 gallon drum. In operation, massive quantities of liquid hydrogen (a substance whose bulk properties were little known at the time) would be pumped through the core by a turbopump, which would have to operate in an almost indescribable radiation environment which might flash the hydrogen into foam and would certainly reduce all known lubricants to sludge within seconds. And this was supposed to function for minutes, if not hours (later designs envisioned a 10 hour operating lifetime for the reactor, with 60 restarts after being refuelled for each mission).

But what if it worked? Well, that would throw open the door to the solar system. Instead of absurd, multi-hundred-billion dollar Mars programs that land a few civil servant spacemen for footprints, photos, and a few rocks returned, you'd end up, for an ongoing budget comparable to that of today's grotesque NASA jobs program, with colonies on the Moon and Mars working their way toward self-sufficiency, regular exploration of the outer planets and moons with mission durations of years, not decades, and the ability to permanently expand the human presence off this planet and simultaneously defend the planet and its biosphere against the kind of Really Bad Day that did in the dinosaurs (and a heck of a lot of other species nobody ever seems to mention).

Between 1955 and 1973, the United States funded a series of projects, usually designated as Rover and NERVA, with the potential of achieving all of this. This book is a thoroughly documented (65 pages of end notes) and comprehensive narrative of what went wrong. As is usually the case when government gets involved, almost none of the problems were technological. The battles, and the eventual defeat of the nuclear rocket were due to agencies fighting for turf, bureaucrats seeking to build their careers by backing or killing a project, politicians vying to bring home the bacon for their constituents or kill projects of their political opponents, and the struggle between the executive and legislative branches and the military for control over spending priorities.

What never happened among all of the struggles and ups and downs documented here is an actual public debate over the central rationale of the nuclear rocket: should there be, or should there not be, an expansive program (funded within available discretionary resources) to explore, exploit the resources, and settle the solar system? Because if no such program were contemplated, then a nuclear rocket would not be required and funds spent on it squandered. But if such a program were envisioned and deemed worthy of funding, a nuclear rocket, if feasible, would reduce the cost and increase the capability of the program to such an extent that the research and development cost of nuclear propulsion would be recouped shortly after the resulting technology were deployed.

But that debate was never held. Instead, the nuclear rocket program was a political football which bounced around for 18 years, consuming 1.4 billion (p. 207) then-year dollars (something like 5.3 billion in today's incredible shrinking greenbacks). Goals were redefined, milestones changed, management shaken up and reorganised, all at the behest of politicians, yet through it all virtually every single technical goal was achieved on time and often well ahead of schedule. Indeed, when the ball finally bounced out of bounds and the 8000 person staff was laid off, dispersing forever their knowledge of the “black art” of fuel element, thermal, and neutronic design constraints for such an extreme reactor, it was not because the project was judged infeasible, but the opposite. The green eyeshade brigade considered the project too likely to succeed, and feared the funding requests for the missions which this breakthrough technological capability would enable. And so ended the possibility of human migration into the solar system for my generation. So it goes. When the rock comes down, the few transient survivors off-planet will perhaps recall their names; they are documented here.

There are many things to criticise about this book. It is cheaply made: the text is set in painfully long lines in a small font with narrow margins, which require milliarcsecond-calibrated eye muscles to track from the end of a line to the start of the next. The printing lops off the descenders from the last line of many pages, leaving the reader to puzzle over words like “hvdrooen” and phrases such as “Whv not now?”. The cover seems to incorporate some proprietary substance made of kangaroo hair and discarded slinkies which makes it curl into a tube once you've opened it and read a few pages. Now, these are quibbles which do not detract from the content, but then this is a 300 page paperback without a single colour plate with a cover price of USD26.95. There are a number of factual errors in the text, but none which seriously distort the meaning for the knowledgeable reader; there are few, if any, typographical errors. The author is clearly an enthusiast for nuclear rocket technology, and this sometimes results in over-the-top hyperbole where a dispassionate recounting of the details should suffice. He is a big fan of New Mexico senator Clinton Anderson, a stalwart supporter of the nuclear rocket from its inception through its demise (which coincided with his retirement from the Senate due to health reasons), but only on p. 145 does the author address the detail that the programme was a multi-billion dollar (in an epoch when a billion dollars was real money) pork barrel project for Anderson's state.

Flawed—yes, but if you're interested in this little-known backstory of the space program of the last century, whose tawdry history and shameful demise largely explain the sorry state of the human presence in space today, this is the best source of which I'm aware to learn what happened and why. Given the cognitive collapse in the United States (Want to clear a room of Americans? Just say “nuclear!”), I can't share the author's technologically deterministic optimism, “The potential foretells a resurgence at Jackass Flats…” (p. 195), that the legacy of Rover/NERVA will be redeemed by the descendants of those who paid for it only to see it discarded. But those who use this largely forgotten and, in the demographically imploding West, forbidden knowledge to make the leap off our planet toward our destiny in the stars will find the experience summarised here, and the sources cited, an essential starting point for the technologies they'll require to get there.

 ‘Und I'm learning Chinese,’ says Wernher von Braun.

 Permalink

Raspail, Jean. Le Camp des Saints. Paris: Robert Laffont, [1973, 1978, 1985] 2006. ISBN 978-2-221-08840-1.
This is one of the most hauntingly prophetic works of fiction I have ever read. Although not a single word has been changed from its original publication in 1973 to the present edition, it is at times simply difficult to believe you're reading a book which was published thirty-five years ago. The novel is a metaphorical, often almost surreal exploration of the consequences of unrestricted immigration from the third world into the first world: Europe and France in particular, and how the instincts of openness, compassion, and generosity which characterise first world countries can sow the seeds of their destruction if they result in developed countries being submerged in waves of immigration of those who do not share their values and culture, and who, by their sheer numbers and rate of arrival, cannot be assimilated into the society which welcomes them.

The story is built around a spontaneous, almost supernatural, migration of almost a million desperate famine-struck residents from the Ganges on a fleet of decrepit ships, to the “promised land”, and the reaction of the developed countries along their path and in France as they approach and debark. Raspail has perfect pitch when it comes to the prattling of bien pensants, feckless politicians, international commissions chartered to talk about a crisis until it turns into catastrophe, humanitarians bent on demonstrating their good intentions whatever the cost to those they're supposed to be helping and those who fund their efforts, media and pundits bent on indoctrination instead of factual reporting, post-Christian clerics, and the rest of the intellectual scum which rises to the top and suffocates the rationality which has characterised Western civilisation for centuries and created the prosperity and liberty which makes it a magnet for people around the world aspiring to individual achievement.

Frankly addressing the roots of Western exceptionalism and the internal rot which imperils it, especially in the context of mass immigration, is a sure way to get yourself branded a racist, and that has, of course, been the case with this book. There are, to be sure, many mentions of “whites” and “blacks”, but I perceive no evidence that the author imputes superiority to the first or inferiority to the second: they are simply labels for the cultures from which those actors in the story hail. One character, Hamadura, identified as a dark-skinned “Français de Pondichéry” says (p. 357, my translation), “To be white, in my opinion, is not a colour of skin, but a state of mind”. Precisely—anybody, whatever their race or origin, can join the first world, but the first world has a limited capacity to assimilate new arrivals knowing nothing of its culture and history, and risks being submerged if too many arrive, particularly if well-intentioned cultural elites encourage them not to assimilate but instead work for political power and agendas hostile to the Enlightenment values of the West. As Jim Bennett observed, “Democracy, immigration, multiculturalism. Pick any two.”

Now, this is a novel from 1973, not a treatise on immigration and multiculturalism in present-day Europe, and the voyage of the fleet of the Ganges is a metaphor for the influx of immigrants into Europe which has already provoked many of the cringing compromises of fundamental Western values prophesied, of which I'm sure most readers in the 1970s would have said, “It can't happen here”. Imagine an editor fearing for his life for having published a cartoon (p. 343), or Switzerland being forced to cede the values which have kept it peaceful and prosperous by the muscle of those who surround it and the intellectual corruption of its own elites. It's all here, and much more. There's even a Pope Benedict XVI (albeit very unlike the present occupant of the throne of St. Peter).

This is an ambitious literary work, and challenging for non-mother-tongue readers. The vocabulary is enormous, including a number of words you won't find even in the Micro Bob. Idioms, many quite obscure (for example “Les carottes sont cuites”—all is lost), abound, and references to them appear obliquely in the text. The apocalyptic tone of the book (whose title is taken from Rev. 20:9) is reinforced by many allusions to that Biblical prophecy. This is a difficult read, which careens among tragedy, satire, and farce, forcing the reader to look beyond political nostrums about the destiny of the West and seriously ask what the consequences of mass immigration without assimilation and the accommodation by the West of values inimical to its own are likely to be. And when you think that Jean Raspail saw all of this coming more than three decades ago, it almost makes you shiver. I spent almost three weeks working my way through this book, but although it was difficult, I always looked forward to picking it up, so rewarding was it to grasp the genius of the narrative and the masterful use of the language.

An English translation is available. Given the language, idioms, wordplay, and literary allusions in the original French, this work would be challenging to faithfully render into another language. I have not read the translation and cannot comment upon how well it accomplished this formidable task.

For more information about the author and his works, visit his official Web site.

 Permalink

Biggs, Barton. Wealth, War, and Wisdom. Hoboken, NJ: John Wiley & Sons, 2008. ISBN 978-0-470-22307-9.
Many people, myself included, who have followed financial markets for an extended period of time, have come to believe what may seem, to those who have not, a very curious and even mystical thing: that markets, aggregating the individual knowledge and expectations of their multitude of participants, have an uncanny way of “knowing” what the future holds. In retrospect, one can often look at a chart of broad market indices and see that the market “called” grand turning points by putting in a long-term bottom or top, even when those turning points were perceived by few if any people at the time. One of the noisiest buzzwords of the “Web 2.0” hype machine is “crowdsourcing”, yet financial markets have been doing precisely that for centuries, and in an environment in which the individual participants are not just contributing to some ratty, ephemeral Web site, but rather putting their own net worth on the line.

In this book the author, who has spent his long career as a securities analyst and hedge fund manager, and was a pioneer of investing in emerging global markets, looks at the greatest global cataclysm of the twentieth century—World War II—and explores how well financial markets in the countries involved identified the key trends and turning points in the conflict. The results persuasively support the “wisdom of the market” viewpoint and are a convincing argument that “the market knows”, even when its individual participants, media and opinion leaders, and politicians do not. Consider: the British stock market put in an all-time historic bottom in June 1940, just as Hitler toured occupied Paris and, in retrospect, Nazi expansionism in the West reached its peak. Many Britons expected a German invasion in the near future, and the Battle of Britain and the Blitz were still to come, and yet the market rallied throughout these dark days. Somehow the market seems to have known that with the successful evacuation of the British Expeditionary Force from Dunkerque and the fall of France, the situation, however dire, was as bad as it was going to get.

In the United States, the Dow Jones Industrial Average declined throughout 1941 as war clouds darkened, fell further after Pearl Harbor and the fall of the Philippines, but put in an all-time bottom in 1942 coincident with the battles of the Coral Sea and Midway which, in retrospect, but not at the time, were seen as the key inflection point of the Pacific war. Note that at this time the U.S. was also at war with Germany and Italy but had not engaged either in a land battle, and yet somehow the market “knew” that, whatever the sacrifices to come, the darkest days were behind.

The wisdom of the markets was also apparent in the ultimate losers of the conflict, although government price-fixing and disruption of markets as things got worse obscured the message. The German CDAX index peaked precisely when the Barbarossa invasion of the Soviet Union was turned back within sight of the spires of the Kremlin. At this point the German army was intact, the Soviet breadbasket was occupied, and the Red Army was in disarray, yet somehow the market knew that this was the high point. The great defeat at Stalingrad and the roll-back of the Nazi invaders were all in the future, but despite propaganda, censorship of letters from soldiers at the front, and all the control of information a totalitarian society can employ, once again the market called the turning point. In Italy, where rampant inflation obscured nominal price indices, the inflation-adjusted BCI index put in its high at precisely the moment Mussolini made his alliance with Hitler, and it was all downhill from there, both for Italy and its stock market, despite rampant euphoria at the time. In Japan, the market was heavily manipulated by the Ministry of Finance and tight control of war news denied investors information to independently assess the war situation, but by 1943 the market had peaked in real terms and declined into a collapse thereafter.

In occupied countries, where markets were allowed to function, they provided insight into the sympathies of their participants. The French market is particularly enlightening. Clearly, the investor class was completely on-board with the German occupation and Vichy. In real terms, the market soared after the capitulation of France and peaked with the defeat at Stalingrad, then declined consistently thereafter, with only a little blip with the liberation of Paris. But then the French stock market wouldn't be French if it weren't perverse, would it?

Throughout, the author discusses how individuals living in both the winners and losers of the war could have best preserved their wealth and selves, and this is instructive for folks interested in saving their asses and assets the next time the Four Horsemen sortie from Hell's own stable. Interestingly, according to Biggs's analysis, so-called “defensive” investments such as government and top-rated corporate bonds and short-term government paper (“Treasury Bills”) performed poorly as stores of wealth in the victor countries and disastrously in the vanquished. In those societies where equity markets survived the war (obviously, this excludes those countries in Eastern Europe occupied by the Soviet Union), stocks were the best financial instrument in preserving value, although in many cases they did decline precipitously over the period of the war. How do you ride out a cataclysm like World War II? There are three key ways: diversification, diversification, and diversification. You need to diversify across financial and real assets, including (diversified) portfolios of stocks, bonds, and bills, as well as real assets such as farmland, real estate, and hard assets (gold, jewelry, etc.) for really hard times. You further need to diversify internationally: not just in the assets you own, but where you keep them. Exchange controls can come into existence with the stroke of a pen, and that offshore bank account you keep “just in case” may be all you have if the worst comes to pass. Thinking about it in that way, do you have enough there? Finally, you need to diversify your own options in the world and think about what you'd do if things really start to go South, and you need to think about it now, not then. As the author notes in the penultimate paragraph:

…the rich are almost always too complacent, because they cherish the illusion that when things start to go bad, they will have time to extricate themselves and their wealth. It never works that way. Events move much faster than anyone expects, and the barbarians are on top of you before you can escape. … It is expensive to move early, but it is far better to be early than to be late.
This is a quirky book, and not free of flaws. Biggs is a connoisseur of amusing historical anecdotes and sprinkles them throughout the text. I found them a welcome leavening of a narrative filled with human tragedy, folly, and destruction of wealth, but some may consider them a distraction and out of place. There are far more copy-editing errors in this book (including dismayingly many difficulties with the humble apostrophe) than I would expect in a Wiley main catalogue title. But that said, if you haven't discovered the wisdom of the markets for yourself, and are worried about riding out the uncertainties of what appears to be a bumpy patch ahead, this is an excellent place to start.

 Permalink

July 2008

Bryson, Bill. Shakespeare. London: Harper Perennial, 2007. ISBN 978-0-00-719790-3.
This small, thin (200 page) book contains just about every fact known for certain about the life of William Shakespeare, which isn't very much. In fact, if the book restricted itself only to those facts, and excluded descriptions of Elizabethan and Jacobean England, Shakespeare's contemporaries, actors and theatres of the time, and the many speculations about Shakespeare and the deliciously eccentric characters who sometimes promoted them, it would probably be a quarter of its present length.

For a figure whose preeminence in English literature is rarely questioned today, and whose work shaped the English language itself—2035 English words appear for the first time in the works of Shakespeare, of which about 800 continue in common use today, including critical, frugal, horrid, vast, excellent, lonely, leapfrog, and zany (pp. 112–113)—very little is known apart from the content of his surviving work. We know the dates of his birth, marriage, and death, something of his parents, siblings, wife, and children, but nothing of his early life, education, travel, reading, or any of the other potential sources of the extraordinary knowledge and insight into the human psyche which informs his work. Between the years 1585 and 1592 he drops entirely from sight: no confirmed historical record has been found, then suddenly he pops up in London, at the peak of his powers, writing, producing, and performing in plays and quickly gaining recognition as one of the preeminent dramatists of his time. We don't even know (although there is no shortage of speculation) which plays were his early works and which were later: there is no documentary evidence for the dates of the plays nor the order in which they were written, apart from a few contemporary references which allow placing a play as no later than the mention of it. We don't even know how he spelt or pronounced his name: of six extant signatures believed to be in his hand, no two spell his name the same way, and none uses the “Shakespeare” spelling in use today.

Shakespeare's plays brought him fame and a substantial fortune during his life, but plays were regarded as ephemeral things at the time, and were the property of the theatrical company which commissioned them, not the author, so no authoritative editions of the plays were published during his life. Had it not been for the efforts of his colleagues John Heminges and Henry Condell, who published the “First Folio” edition of his collected works seven years after his death, it is probable that the eighteen plays which first appeared in print in that edition would have been lost to history, with subsequent generations deeming Shakespeare, based upon surviving quarto editions of uneven (and sometimes laughable) quality of a few plays, one of a number of Elizabethan playwrights but not the towering singular figure he is now considered to be. (One wonders if there were others of Shakespeare's stature who were not as lucky in the dedication of their friends, of whose work we shall never know.) Nobody really knows how many copies of the First Folio were printed, but guesses run between 750 and 1000. Around 300 copies in various states of completeness have survived to the present, and around eighty copies are in a single room at the Folger Shakespeare Library in Washington, D.C., about two blocks from the U.S. Capitol. Now maybe decades of computer disasters have made me obsessively preoccupied with backup and geographical redundancy, but that just makes me shudder. Is there anybody there who wonders whether this is really a good idea? After all, the last time I was a few blocks from the U.S. Capitol, I spotted an ACME MISSILE BOMB right in plain sight!

A final chapter is devoted to theories that someone other than the scantily documented William Shakespeare wrote the works attributed to him. The author points out the historical inconsistencies and implausibilities of most frequently proffered claimants, and has a good deal of fun with some of the odder of the theorists, including the exquisitely named J. Thomas Looney, Sherwood E. Silliman, and George M. Battey.

Bill Bryson fans who have come to cherish his lighthearted tone and quirky digressions on curious details and personalities from such works as A Short History of Nearly Everything (November 2007) will not be disappointed. If one leaves the book not knowing a great deal about Shakespeare, because so little is actually known, it is with a rich sense of having been immersed in the England of his time and the golden age of theatre to which he so mightily contributed.

A U.S. edition is available, but at this writing only in hardcover.

 Permalink

Hirshfeld, Alan. The Electric Life of Michael Faraday. New York: Walker and Company, 2006. ISBN 978-0-8027-1470-1.
Of post-Enlightenment societies, one of the most rigidly structured by class and tradition was that of Great Britain. Those aspiring to the life of the mind were overwhelmingly the well-born, educated in the classics at Oxford or Cambridge, with the wealth and leisure to pursue their interests on their own. The career of Michael Faraday stands as a monument to what can be accomplished, even in such a stultifying system, by the pure power of intellect, dogged persistence, relentless rationality, humility, endless fascination with the intricacies of creation, and confidence that it was ultimately knowable through clever investigation.

Faraday was born in 1791, the third child of a blacksmith who had migrated to London earlier that year in search of better prospects, which he never found due to fragile health. In his childhood, Faraday's family occasionally got along only thanks to the charity of members of the fundamentalist church to which they belonged. At age 14, Faraday was apprenticed to a French émigré bookbinder, setting himself on the path to a tradesman's career. But Faraday, while almost entirely unschooled, knew how to read, and read he did—as many of the books which passed through the binder's shop as he could manage. As with many who read widely, Faraday eventually came across a book that changed his life, The Improvement of the Mind by Isaac Watts, and from the pragmatic and inspirational advice in that volume, along with the experimental approach to science he learned from Jane Marcet's Conversations on Chemistry, Faraday developed his own philosophy of scientific investigation and began to do his own experiments with humble apparatus in the bookbinder's shop.

Faraday seemed to be on a trajectory which would frustrate his curiosity forever amongst the hammers, glue, and stitches of bookbindery when, thanks to his assiduous note-taking at science lectures, his employer passing on his notes, and a providential vacancy, he found himself hired as the assistant to the eminent Humphry Davy at the Royal Institution in London. Learning chemistry and the emerging field of electrochemistry at the side of the master, he developed the empirical experimental approach which would inform all of his subsequent work.

Faraday at first remained very much in Davy's shadow, even serving as his personal valet as well as scientific assistant on an extended tour of the Continent, but slowly (and over Davy's opposition) rose to become a Fellow of the Royal Society and director of the Royal Institution's laboratory. Seeking to shore up the shaky finances of the Institution, in 1827 he launched the Friday Evening Discourses, public lectures on a multitude of scientific topics by Faraday and other eminent scientists, which he would continue to supervise until 1862.

Although trained as a chemist, and having made his reputation in that field, his electrochemical investigations with Davy had planted in his mind the idea that electricity was not a curious phenomenon demonstrated in public lectures involving mysterious “fluids”, but an essential component in understanding the behaviour of matter. In 1831, he turned his methodical experimental attention to the relationship between electricity and magnetism, and within months had discovered electromagnetic induction: that an electric current was induced in a conductor only by a changing magnetic field: the principle used by every electrical generator and transformer in use today. He built the first dynamo, using a spinning copper disc between the poles of a strong magnet, and thereby demonstrated the conversion of mechanical energy into electricity for the first time. Faraday's methodical, indefatigable investigations, failures along with successes, were chronicled in a series of papers eventually collected into the volume Experimental Researches in Electricity, which is considered to be one of the best narratives ever written of science as it is done.
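The principle Faraday discovered is today written compactly, in mathematical notation he himself never used, as Faraday's law of induction:

```latex
% Faraday's law of induction: the electromotive force around a closed
% circuit equals the negative rate of change of the magnetic flux
% threading it.
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}
```

A steady field, however strong, induces nothing; only a changing flux does—which is why his dynamo had to keep the copper disc spinning between the magnet's poles.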

Knowing little mathematics, Faraday expressed the concepts he discovered in elegant prose. His philosophy of science presaged that of Karl Popper and the positivists of the next century—he considered all theories as tentative, advocated continued testing of existing theories in an effort to falsify them and thereby discover new science beyond them, and he had no use whatsoever for the unobservable: he detested concepts such as “action at a distance”, which he considered mystical obfuscation. If some action occurred, there must be some physical mechanism which causes it, and this led him to formulate what we would now call field theory: that physical lines of force extend from electrically charged objects and magnets through apparently empty space, and it is the interaction of objects with these lines of force which produces the various effects he had investigated. This flew in the face of the scientific consensus of the time, and while universally admired for his experimental prowess, many regarded Faraday's wordy arguments as verging on the work of a crank. It wasn't until 1857 that the ageing Faraday made the acquaintance of the young James Clerk Maxwell, who had sent him a copy of a paper in which Maxwell made his first attempt to express Faraday's lines of force in rigorous mathematical form. By 1864 Maxwell had refined his model into his monumental field theory, which demonstrated that light was simply a manifestation of the electromagnetic field, something that Faraday had long suspected (he wrote repeatedly of “ray-vibrations”) but had been unable to prove.
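Maxwell's identification of light with electromagnetism follows from a number his equations made inevitable: in vacuum they predict waves propagating at a speed fixed entirely by the electric and magnetic constants,

```latex
% Speed of electromagnetic waves in vacuum, determined by the
% permeability and permittivity of free space:
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.00 \times 10^{8}\ \mathrm{m/s}
```

a value agreeing with the measured speed of light—quantitative confirmation of the “ray-vibrations” Faraday could only suspect.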

The publication of Maxwell's theory marked a great inflection point between the old physics of Faraday and the new, emerging, highly mathematical style of Maxwell and his successors. While discovering the mechanism through experiment was everything to Faraday, correctly describing the behaviour and correctly predicting the outcome of experiments with a set of equations was all that mattered in the new style, which made no effort to explain why the equations worked. As Heinrich Hertz said, “Maxwell's theory is Maxwell's equations” (p. 190). Michael Faraday lived in an era in which a humble-born person with no formal education or knowledge of advanced mathematics could, purely through intelligence, assiduous self-study, clever and tireless experimentation with simple apparatus he made with his own hands, make fundamental discoveries about the universe and rise to the top rank of scientists. Those days are now forever gone, and while we now know vastly more than those of Faraday's time, one also feels we've lost something. Aldous Huxley once remarked, “Even if I could be Shakespeare, I think I should still choose to be Faraday.” This book is an excellent way to appreciate how science felt when it was all new and mysterious, acquaint yourself with one of the most admirable characters in its history, and understand why Huxley felt as he did.

 Permalink

Gurstelle, William. Backyard Ballistics. Chicago: Chicago Review Press, 2001. ISBN 978-1-55652-375-5.
Responsible adults who have a compelling need to launch potatoes 200 metres downrange at high velocity, turn common paper matches into solid rockets, fire tennis balls high into the sky with duct-taped-together potato chip cans (potatoes again!) and a few drops of lighter fluid, launch water balloons against the aggressor with nothing more than surgical tubing and a little muscle power, engender UFO reports with shimmering dry cleaner bag hot air balloons, and more, will find the detailed instructions they need for such diversions in this book. As in his subsequent Whoosh Boom Splat (December 2007), the author provides detailed directions for fabricating these engines of entertainment from, in most cases, PVC pipe, and the scientific background for each device and suggestions for further study by the intrepid investigator who combines the curiosity of the intuitive experimentalist with the native fascination of the third chimpanzee for things that go flash and bang.

If you live in Southern California, I'd counsel putting the Cincinnati Fire Kite and Dry Cleaner Bag Balloon experiments on hold until after the next big rain.

 Permalink

Podhoretz, Norman. World War IV. New York: Doubleday, 2007. ISBN 978-0-385-52221-2.
Whether you agree with it or not, here is one of the clearest expositions of the “neoconservative” (a term the author, who is one of the type specimens, proudly uses to identify himself) case for the present conflict between Western civilisation and the forces of what he identifies as “Islamofascism”, an aggressive, expansionist, and totalitarian ideology which is entirely distinct from Islam, the religion. The author considers the Cold War to have been World War III, and hence the present and likely as protracted a conflict, as World War IV. He deems it to be as existential a struggle for civilisation against the forces of tyranny as any of the previous three wars.

If you're sceptical of such claims (as am I, being very much an economic determinist who finds it difficult to believe a region of the world whose exports, apart from natural resources discovered and extracted largely by foreigners, are less than those of Finland, can truly threaten the fountainhead of the technologies and products without which its residents would remain in the seventh century utopia they seem to idolise), read Chapter Two for the contrary view: it is argued that since 1970, a series of increasingly provocative attacks were made against the West, not in response to Western actions but due to irreconcilably different world-views. Each indication of weakness by the West only emboldened the aggressors and escalated the scale of subsequent attacks.

The author argues the West is engaged in a multi-decade conflict with its own survival at stake, in which the wars in Afghanistan and Iraq are simply campaigns. This war, like the Cold War, will be fought on many levels: not just military, but also proxy conflicts, propaganda, covert action, economic warfare, and promotion of the Western model as the solution to the problems of states imperiled by Islamofascism. There is some discussion in the epilogue of the risk posed to Europe by the radicalisation of its own burgeoning Muslim population while its indigenes are in a demographic death spiral, but for the most part the focus is on democratising the Middle East, not the creeping threat to democracy in the West by an unassimilated militant immigrant population which a feckless, cringing political class is unwilling to confront.

This book is well written and argued, but colour me unpersuaded. Instead of spending decades spilling blood and squandering fortune in a region of the world which has been trouble for every empire foolish enough to try to subdue it over the last twenty centuries, why not develop domestic energy sources to render the slimy black stuff in the ground there impotent and obsolete, secure the borders against immigration from there (except those candidates who demonstrate themselves willing to assimilate to the culture of the West), and build a wall around the place and ignore what happens inside? Works for me.

 Permalink

Gingrich, Newt. Real Change. Washington: Regnery Publishing, 2008. ISBN 978-1-59698-053-2.
Conventional wisdom about the political landscape in the United States is that it's split right down the middle (evidenced by the last two extremely close Presidential elections), with partisans of the Left and Right increasingly polarised, unwilling and/or unable to talk to one another, both committed to a “no prisoners” agenda of governance should they gain decisive power. Now, along comes Newt Gingrich who argues persuasively in this book, backed by extensive polling performed on behalf of his American Solutions organisation (results of these polls are freely available to all on the site), that the United States have, in fact, a centre-right majority which agrees, at levels in excess of 70%, on many supposedly controversial issues, with a vocal hard-left minority using its dominance of the legacy media, academia, and the activist judiciary and trial lawyer cesspits to advance its agenda through non-democratic means.

Say what you want about Newt, but he's one of the brightest intellects to come onto the political stage in any major country in the last few decades. How many politicians can you think of who write what-if alternative history novels? I think Newt is onto something here. Certainly there are genuinely divisive issues upon which the electorate is split down the middle. But on the majority of questions, there is a consensus on the side of common sense which only the legacy media's trying to gin up controversy obscures in a fog of bogus conflict.

In presenting solutions to supposedly intractable problems, the author contrasts “the world that works”: free citizens and free enterprise solving problems for the financial rewards from doing so, with “the world that fails”: bureaucracies seeking to preserve and expand their claim upon the resources of the productive sector of the economy. Government, as it has come to be understood in our foul epoch, exclusively focuses upon the latter. All of this can be seen as consequences of Jerry Pournelle's Iron Law of Bureaucracy, which states that in any bureaucratic organisation there will be two kinds of people: those who work to further the actual goals of the organisation, and those who work for the organisation itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who seek to protect and augment the compensation of all teachers, including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organisation, and will thence write the rules under which the organisation functions, to the detriment of those who are coerced to fund it.

Bureaucracy and bureaucratic government can be extremely efficient and effective, as long as its ends are understood! Gingrich documents how the Detroit school system, for example, delivers taxpayer funds to the administrators, union leaders, and unaccountable teachers who form its political constituency. Educating the kids? Well, that's not on the agenda! The world that fails actually works quite well for those it benefits—the problem is that without the market feedback which obtains in the world that works, the supposed beneficiaries of the system have no voice in obtaining the services they are promised.

This is a book so full of common sense that I'm sure it will be considered “outside the mainstream” in the United States. But those who live there, and residents of other industrialised countries facing comparable challenges as demographics collide with social entitlement programs, should seriously ponder the prescriptions here which, if presented by a political leader willing to engage the population on an intellectual level, might command majorities which remake the political map.

 Permalink

August 2008

Netz, Reviel and William Noel. The Archimedes Codex. New York: Da Capo Press, 2007. ISBN 978-0-306-81580-5.
Sometimes it is easy to forget just how scanty is the material from which we know the origins of Western civilisation. Archimedes was one of the singular intellects of antiquity, with contributions to mathematics, science, and engineering which foreshadowed achievements not surpassed until the Enlightenment. And yet all we know of the work of Archimedes in the original Greek (as opposed to translations into Arabic and Latin, which may have lost information due to translators' lack of comprehension of Archimedes's complex arguments) can be traced to three manuscripts: one which disappeared in 1311, another which vanished in the 1550s, and a third: the Archimedes Palimpsest, which surfaced in Constantinople at the start of the 20th century, and was purchased at an auction for more than USD 2 million by an anonymous buyer who deposited it for conservation and research with the Walters Art Museum in Baltimore. (Note that none of these manuscripts was the original work of Archimedes: all were copies made by scribes, probably around the tenth century. But despite being copies, their being in the original Greek means they are far more likely to preserve the sense of the original text of Archimedes, even if the scribe did not understand what he was copying.)

History has not been kind to this work of Archimedes. Only two centuries after the copy of his work was made, the parchment on which it was written was scrubbed of its original content and re-written with the text of a Christian prayer book, which to the unaided eye appears to completely obscure the Archimedes text in much of the work. To compound the insult, sometime in the 20th century four full-page religious images in Byzantine style were forged over pages of the book, apparently in an attempt to increase its market value. This, then, was a bogus illustration painted on top of the prayer book text, which was written on top of the precious words of Archimedes. In addition to these depredations of mankind, many pages had been attacked by mold, and an ill-advised attempt to conserve the text, apparently in the 1960s, had gummed up the binding, including the gutter of the page where Archimedes's text was less obscured, with an intractable rubbery glue.

But from what could be read, even in fragments, it was clear that the text, if it could be extracted, would be of great significance. Two works, “The Method” and “Stomachion”, have their only known copies in this text, and the only known Greek text of “On Floating Bodies” appears here as well. Fortunately, the attempt to extract the Archimedes text was made in the age of hyperspectral imaging, X-ray fluorescence, and other nondestructive technologies, not with the crude and often disastrous chemical potions applied to attempt to recover such texts a century before.

This book, with alternating chapters written by the curator of manuscripts at the Walters and a Stanford professor of Classics and Archimedes scholar, tells the story of the origin of the manuscript, how it came to be what it is and where it resides today, and the painstaking efforts at conservation and technological wizardry (including time on the synchrotron light source beamline at SLAC) which allowed teasing the work of Archimedes from the obscuration of centuries.

What has been found so far has elevated the reputation of Archimedes even above the exalted position he already occupied in the pantheon of science. Analysis of “The Method” shows that Archimedes anticipated the use of infinitesimals and hence the calculus in his proof of the volume of curved solids. The “Stomachion”, originally thought to be a puzzle devoid of serious mathematical interest, turns out to be the first and only known venture of Greek mathematics into the realm of combinatorics.

If you're interested in rare books, the origins of mathematical thought, applications of imaging technology to historical documents, and the perilous path the words of the ancients traverse to reach us across the ages, there is much to fascinate in this account. Special thanks to frequent recommender of books Joe Marasco, who not only brought this book to my attention but mailed me a copy! Joe played a role in the discovery of the importance of the “Stomachion”, which is chronicled in the chapter “Archimedes at Play”.

 Permalink

[Audiobook] Thucydides. The Peloponnesian War. Vol. 2. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [c. 400 B.C.] 2005.
This is the second volume of the audiobook edition of Thucydides's epic history of what was, for Hellenic civilisation, a generation-long world war, describing which the author essentially invented historical narrative as it has been understood ever since. For general comments about the work, see my notes for Volume I.

Although a work of history (albeit with the invented speeches Thucydides acknowledges as a narrative device), this is as much a Greek tragedy as any of the Athenian plays. The war, which began, like so many, over a peripheral conflict between two regional hegemonies, transformed both Athens and Sparta into “warfare states”, where every summer was occupied in military campaigns, and every winter in planning for the next season's conflict. The Melian dialogue, which appears in Book V of the history, is one of the most chilling exemplars of raw power politics ever expressed—even more than two millennia later, it makes the soul shiver and, considering its consequences, makes one sympathetic to those, then and now, who decry the excesses of direct democracy.

Perhaps the massacre of the Melians offended the gods (although Thucydides would never suggest divine influence in the affairs of men), or maybe it was just a symptom of imperial overreach heading directly for the abyss, but not long afterward Athens launched the disastrous Sicilian Expedition, which ultimately resulted in a defeat which, on the scale of classical conflict, was on the order of Stalingrad and resulted in the end of democracy in Athens and its ultimate subjugation by Sparta.

Weapons, technologies, and political institutions change, but the humans who invent them are invariant under time translation. There is wisdom in this narrative of a war fought so very long ago which contemporary decision makers on the global stage ignore only at the peril of the lives and fortune entrusted to them by their constituents. If I could put up a shill at the “town hall” meetings of aspiring politicians, I'd like to ask them “Have you read Thucydides?”, and when they predictably said they had, then “Do you approve of the Athenian democracy's judgement as regards the citizens of Melos?”

This recording includes the second four of the eight books into which Thucydides's text is conventionally divided. The audiobook is distributed in two parts, totalling 11 hours and 29 minutes with an epilogue describing the events which occurred after the extant text of Thucydides ends in mid-paragraph whilst describing events of 410 B.C., six years before the end of the war. The Benjamin Jowett translation is used, read by Charlton Griffin. A print edition of this translation is available.

 Permalink

Mailer, Norman. Miami and the Siege of Chicago. New York: New York Review Books, [1968] 2008. ISBN 978-1-59017-296-4.
In the midst of the societal, political, and cultural chaos which was 1968 in the United States, Harper's magazine sent Norman Mailer to report upon the presidential nominating conventions in August of that year: first the Republicans in Miami Beach and then the Democrats in Chicago. With the prospect, forty years later, of two U.S. political conventions in which protest and street theatre may play a role not seen since 1968 (although probably nowhere near as massive or violent, especially since the political establishments of both parties appear bent upon presenting the appearance of unity), and a watershed election which may change the direction of the United States, New York Review Books have reissued this long out-of-print classic of “new journalism” reportage of the 1968 conventions. As with the comparable, but edgier, account of the 1972 campaign by Hunter S. Thompson, a good deal of this book is not about the events but rather “the reporter”, who identifies himself as such in the narrative.

If you're looking for detailed documentation of what transpired at the conventions, this is not the book to read. Much of Mailer's reporting took place in bars, in the streets, in front of the television, and on two occasions, in custody. This is an impressionistic account of events which leaves you with the feeling of what it was like to be there (at least if you were there and Norman Mailer), not what actually happened. But, God, that man could write! As reportage (the work was completed shortly after the conventions and long before the 1968 election) and not history, there is no sense of perspective, just immersion in the events. If you're old enough to recall them, as I am, you'll probably agree that he got it right, and that this recounting both stands the test of time and summons memories of the passions of that epoch.

On the last page, there are two phrases which have a particular poignancy four decades hence. Mailer, encountering Eugene McCarthy's daughter just before leaving Chicago thinks of telling her “Dear Miss, we will be fighting for forty years.” And then he concludes the book by observing, “We yet may win, the others are so stupid. Heaven help us when we do.” Wise words for the partisans of hope and change in the 2008 campaign!

 Permalink

Pournelle, Jerry. Exile—and Glory. Riverdale, NY: Baen Publishing, 2008. ISBN 978-1-4165-5563-6.
This book collects all of Jerry Pournelle's stories of Hansen Enterprises and other mega-engineering projects, which were originally published in Analog, Galaxy, and Fantasy and Science Fiction between 1972 and 1977. The stories were previously published in two books: High Justice and Exiles to Glory, which are now out of print—if you have those books, don't buy this one unless you want to upgrade to hardcover or can't resist the delightfully space-operatic cover art by Jennie Faries.

The stories take place in a somewhat dystopian future in which the “malaise” of the 1970s never ended. Governments worldwide are doing what governments do best: tax the productive, squander the revenue and patrimony of their subjects, and obstruct innovation and progress. Giant international corporations have undertaken the tasks needed to bring prosperity to a world teeming with people in a way which will not wreck the Earth's environment. But as these enterprises implement their ambitious projects on the sea floor, in orbit, and in the asteroid belt, the one great invariant, human nature, manifests itself and they find themselves confronted with the challenges which caused human societies to institute government in the first place. How should justice be carried out on the technological frontier? And, more to the point, how can it be achieved without unleashing the malign genie of coercive government? These stories are thoughtful explorations of these questions without ever ceasing to be entertaining yarns with believable characters. And you have to love what happens to the pesky lawyer on pp. 304–305!

I don't know if these stories have been revised between the time they were published in the '70s and this edition; there is no indication that they have either in this book or on Jerry Pournelle's Web site. If not, then the author was amazingly prescient about a number of subsequent events which few would have imagined probable thirty years ago. It's a little disheartening to think that one of the reasons these stories have had such a long shelf life is that none of the great projects we expected to be right around the corner in the Seventies have come to pass. As predicted here, not only have governments failed to undertake the challenges and been an active impediment to those trying to solve them, but the business culture has also become so risk-averse and oriented toward the short term that there appears to be no way to raise the capital needed to, for example, deploy solar power satellites, even though such capital is modest compared to that consumed in military adventures in Mesopotamia.

The best science fiction makes you think. The very best science fiction makes you think all over again when you re-read it three decades afterward. This is the very best, and just plain fun as well.

 Permalink

Bernstein, Peter L. Against the Gods. New York: John Wiley & Sons, [1996] 1998. ISBN 978-0-471-29563-1.
I do not use the word “masterpiece” lightly, but this is what we have here. What distinguishes the modern epoch from all of the centuries during which humans identical to us trod this Earth? The author, a distinguished and erudite analyst and participant in the securities markets over his long career, argues that one essential invention of the modern era, enabling the vast expansion of economic activity and production of wealth in Western civilisation, is the ability to comprehend, quantify, and ultimately mitigate risk, either by commingling independent risks (as does insurance), or by laying risk off from those who would otherwise bear it onto speculators willing to assume it in the interest of financial gains (for example, futures, options, and other financial derivatives). If, as in the classical world, everyone bears the entire risk of any undertaking, then all market players will be risk-averse for fear of ruin. But if risk can be shared, then the society as a whole will be willing to run more risks, and it is risks voluntarily assumed which ultimately lead (after the inevitable losses) to net gain for all.

So curious and counterintuitive are the notions associated with risk that understanding them took centuries. The ancients, who made such progress in geometry and other difficult fields of mathematics, were, while avid players of games of chance, inclined to attribute the outcome to the will of the gods. It was not until the Renaissance and Enlightenment that thinkers such as Cardano, Pascal, the many Bernoullis, and others worked out the laws of probability, bringing the inherent randomness of games of chance into a framework which predicted, not the outcome of any given event (that was unknowable in principle), but the aggregate result of a large number of plays, with arbitrary precision as the number of trials increased. Next was the understanding of the importance of uncertainty in decision making. It's one thing not to know whether a coin will come up heads or tails. It's entirely another to invest in a stock and realise that however accurate your estimation of the probabilistic unknowns affecting its future (for example, the cost of raw materials), it's the “unknown unknowns” (say, overnight bankruptcy due to a rogue trader in an office half way around the world) that can really sink your investment. Finally, classical economics always assumed that participants in the market behave rationally, but they don't. Anybody who thinks their fellow humans are rational need only visit a casino or watch them purchasing lottery tickets; they are sure in the long term to lose, and yet they still line up to make the sucker bet.
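
The law-of-large-numbers point is easy to see in a few lines of Python. This is a sketch of my own, not anything from Bernstein's book, and all the names in it are illustrative: the outcome of any single coin flip or lottery ticket is unknowable, but the average payoff over many plays converges on the expected value, which is why the lottery player is sure in the long term to lose.

```python
import random

def average_payoff(play, trials, seed=12345):
    """Average net payoff of `trials` independent plays of a wager.

    `play` takes a random.Random instance and returns the net payoff
    of one play.  A fixed seed makes the simulation repeatable.
    """
    rng = random.Random(seed)
    return sum(play(rng) for _ in range(trials)) / trials

def fair_coin(rng):
    # Win 1 on heads, lose 1 on tails: expected value exactly 0.
    return 1 if rng.random() < 0.5 else -1

def lottery_ticket(rng):
    # Pay 1 for a 1-in-1000 chance at a 500 prize:
    # expected value 0.001 * 500 - 1 = -0.5 per ticket, a sucker bet.
    return 499 if rng.random() < 0.001 else -1

if __name__ == "__main__":
    # A single play is unpredictable, but the average settles down
    # toward the expected value as the number of trials grows.
    for n in (100, 10_000, 1_000_000):
        print(f"fair coin, {n:>9} plays: {average_payoff(fair_coin, n):+.4f}")
    print(f"lottery, 1_000_000 tickets: {average_payoff(lottery_ticket, 10**6):+.4f}")
```

With a million plays the fair coin's average hovers near zero while the lottery's average sits near minus half the ticket price, even though no individual outcome could have been foreseen.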

Somehow, I'd gotten it into my head that this was a “history of insurance”, and as a result this book sat on my shelf quite some time before I read it. It is much, much more than that. If you have any interest at all in investing, risk management in business ventures, or in the history of probability, statistics, game theory, and investigations of human behaviour in decision making, this is an essential book. Chapter 18 is one of the clearest expositions for its length that I've read of financial derivatives and both the benefits they have for prudent investors as well as the risks they pose to the global financial system. The writing is delightful, and sources are well documented in end notes and an extensive bibliography.

 Permalink

Hodges, Michael. AK47: The Story of the People's Gun. London: Sceptre, 2007. ISBN 978-0-340-92106-7.
The AK-47 (the author uses “AK47” in this book, except for a few places in the last chapter; I will use the more common hyphenated designation here) has become an iconic symbol of rebellion in the six decades since Mikhail Kalashnikov designed this simple (just 8 moving parts), rugged, inexpensive to manufacture, and reliable assault rifle. Iconic? Yes, indeed—for example the flag and coat of arms of Mozambique feature this weapon which played such a large and tragic rôle in its recent history. Wherever violence erupts around the world, you'll probably see young men brandishing AK-47s or one of its derivatives. The AK-47 has become a global brand as powerful as Coca-Cola, but symbolising insurgency and rebellion, and this book is an attempt to recount how that came to be.

Toward that end it is a total, abject, and utter failure. In a total of 225 pages, only about 35 are devoted to Mikhail Kalashnikov, the history of the weapon he invented, its subsequent diffusion and manufacture around the world, and its derivatives. Instead, what we have is a collection of war stories from Vietnam, Palestine, the Sudan, Pakistan, Iraq, and New Orleans (!), all told from a relentlessly left-wing, anti-American, and anti-Israel perspective, in which the AK-47 figures only peripherally. The author, as a hard leftist, believes, inter alia, in the bizarre notion that an inanimate object made of metal and wood can compel human beings to behave in irrational and ultimately self-destructive ways. You think I exaggerate? Well, here's an extended quote from p. 131.

The AK47 moved from being a tool of the conflict to the cause of the conflict, and by the mid-1990s it had become the progenitor of indiscriminate terror across huge swaths of the continent. How could it be otherwise? AKs were everywhere, and their ubiquity made stability a rare commodity as even the smallest groups could bring to bear a military pressure out of proportion to their actual size.
That's right—the existence of weapons compels human beings, who would presumably otherwise live together in harmony, to murder one another and rend their societies into chaotic, blood-soaked Hell-holes. Yup, and why do the birds always nest in the white areas? The concept that one should look at the absence of civil society as the progenitor of violence never enters the picture here. It is the evil weapon which is at fault, not the failed doctrines to which the author clings, which have wrought such suffering across the globe. Homo sapiens is a violent species, and our history has been one of constant battles. Notwithstanding the horrific bloodletting of the twentieth century, on a per-capita basis, death from violent conflict has fallen to an all-time low in the nation-state era, despite the advent of weapons such as General Kalashnikov's. When bad ideas turn murderous, machetes will do.

A U.S. edition is now available, but as of this date only in hardcover.

 Permalink

September 2008

Kurlansky, Mark. Cod. New York: Penguin Books, 1997. ISBN 978-0-14-027501-8.
There is nothing particularly glamorous about a codfish. It swims near the bottom of the ocean in cold continental shelf waters with its mouth open, swallowing whatever comes along, including smaller cod. While its white flesh is prized, the cod provides little sport for the angler: once hooked, it simply goes limp and must be hauled from the bottom to the boat. And its rather odd profusion of fins and blotchy colour lacks the elegance of marlin or swordfish or the menace of a shark. But the cod has, since the middle ages, played a part not only in the human diet but also in human history, being linked to the Viking exploration of the North Atlantic, the Basque nautical tradition, long-distance voyages in the age of exploration, transatlantic commerce, the Caribbean slave trade, the U.S. war of independence, the expansion of territorial waters from three to twelve and now 200 miles, conservation and the emerging international governance of the law of the sea, and more.

This delightful piece of reportage brings all of this together, from the biology and ecology of the cod, to the history of its exploitation by fishermen over the centuries, the commerce in cod and the conflicts it engendered, the cultural significance of cod in various societies and the myriad ways they have found to use it, and the shameful overfishing which has depleted what was once thought to be an inexhaustible resource (and should give pause to any environmentalist who believes government regulation is the answer to stewardship). But cod wouldn't have made so much history if people didn't eat them, and the narrative is accompanied by dozens of recipes from around the world and across the centuries (one dates from 1393), including many for parts of the fish other than its esteemed white flesh. Our ancestors could afford to let nothing go to waste, and their cleverness in turning what many today would consider offal into delicacies still cherished by various cultures is admirable. Since codfish has traditionally been sold salted and dried (in which form it keeps almost indefinitely, even in tropical climates, if kept dry, and is almost 80% protein by weight—a key enabler of long ocean voyages before the advent of refrigeration), you'll also want to read the author's work on Salt (February 2005).

 Permalink

[Audiobook] Kafka, Franz. Metamorphosis. (Audiobook, Unabridged). Hong Kong: Naxos Audiobooks, [1915] 2003. ISBN 978-962-634-286-2.
If you're haunted by that recurring nightmare about waking up as a giant insect, this is not the book to read. Me, I have other dreams (although, more recently, mostly about loading out from trade shows and Hackers' conferences that never end—where could those have come from?), so I decided to plunge right into this story. It's really a novella, not a novel—about a hundred pages in a mass-market paperback print edition, but one you won't soon forget. The genius of Kafka is his ability to relate extraordinary events in the most prosaic, deadpan terms. He's not just an omniscient narrator; he is an utterly dispassionate recorder of events, treating banal, bizarre, and impassioned scenes like a camcorder—just what happened. Perhaps Kafka's day job, filling out industrial accident reports for an insurance company, helped to instill the “view from above” so characteristic of his work.

This works extraordinarily well for this dark, dark story. I guess it's safe to say that the genre of people waking up as giant insects and the consequences of that happening was both created and mined out by Kafka in this tale. There are many lessons one can draw from the events described here, some of which do not reflect well upon our species, and others which show that sometimes, even in happy families, what appears to be the most disastrous adversity may actually, even in the face of tragedy, be ultimately liberating. I could write four or five prickly paragraphs about the lessons here for self-reliance, but that's not why you come here. Read the story and draw your own conclusions. I'm amazed that younger sister Grete never agonised over whether she'd inherited the same gene as Gregor. Wouldn't you? And when she stretches her young body in the last line, don't you wonder?

Kafka is notoriously difficult to translate. He uses the structure of the German language to assemble long sentences with a startling surprise in the last few words when you encounter the verb. This is difficult to render into English and other languages which use a subject-verb-object construction in most sentences. Kafka also exploits ambiguities in German which are not translatable to other languages. My German is not (remotely) adequate to read, no less appreciate, Kafka in the original, so translation will have to do for me. Still, even without the nuances in the original, this is a compelling narrative. The story is read by British actor Martin Jarvis, who adopts an ironic tone which is perfect for Kafka's understated prose. Musical transitions separate the chapters.

The audible.com audiobook edition is sold as a single download of 2 hours and 11 minutes, 31 megabytes at MP3 quality. An Audio CD edition is available. A variety of print editions are available, as well as this free online edition, which seems to be closer to the original German than the translation used in this audiobook, although, perhaps inevitably, more clumsy in English.

 Permalink

Crichton, Michael. Timeline. New York: Ballantine Books, 1999. ISBN 978-0-345-46826-0.
Sometimes books, even those I'm sure I'll love, end up sitting on my bookshelf for a long time before I get to them. This novel, originally published in 1999, not only sat on my bookshelf for almost a decade, it went to Africa and back in 2001 before I finally opened it last week and predictably devoured it in a few days.

Crichton is a master storyteller, and this may be the best of the many of his books I've read. I frequently remark that Crichton's work often reads like a novelisation of a screenplay, where you can almost see the storyboards for each chapter as you read it, and that's certainly the case here. This story just begs to be made into a great movie. Regrettably, it was subsequently made into an awful one. So skip the movie and enjoy the book, which is superb.

There's a price of admission, which is accepting some high octane quantum flapdoodle which enables an eccentric billionaire (where would stories like this be without eccentric billionaires?) to secretly develop a time machine which can send people back to historical events, all toward the end of creating perfectly authentic theme parks on historical sites researched through time travel and reconstructed as tourist attractions. (I'm not sure that's the business plan I would come up with if I had a time machine, but it's the premise it takes to make the story work.)

But something goes wrong, and somebody finds himself trapped in 14th century France, and an intrepid band of historians must go back into that world to rescue their team leader. This sets the stage for adventures in the middle ages, based on the recent historical view that the period was not a Dark Age but rather a time of intellectual, technological, and cultural ferment. The story is both an adventurous romp and a story of personal growth which makes one ask the question, “In which epoch would I prosper best?”.

Aside from the necessary suspension of disbelief and speculation about life in the 14th century (about which there remain many uncertainties), there are a few goofs. For example, in the chapter titled “26:12:01” (you'll understand the significance when you read the book), one character discovers that once dark-adapted he can see well by starlight. “Probably because there was no air pollution, he thought. He remembered reading that in earlier centuries, people could see the planet Venus during the day as we can now see the moon. Of course, that had been impossible for hundreds of years.” Nonsense—at times near maximum elongation, anybody who has a reasonably clear sky and knows where to look can spot Venus in broad daylight. I've seen it on several occasions, including from the driveway of my house in Switzerland and 20 kilometres from downtown San Francisco. But none of these detract from the fact that this is a terrific tale which will keep you turning the pages until the very satisfying end.

Spoiler warning: Plot and/or ending details follow.  
The explanation for how the transmitted people are reassembled at the destination in the next to last chapter of the “Black Rock” section (these chapters have neither titles nor numbers) seems to me to miss a more clever approach which would not affect the story in any way (as the explanation never figures in subsequent events). Instead of invoking other histories in the multiverse which are able to reconstitute the time travellers (which raises all kinds of questions about identity and continuity of consciousness), why not simply argue that unitarity is preserved only across the multiverse as a whole, and that when the quantum state of the transmitted object is destroyed in this universe, it is necessarily reassembled intact in the destination universe, because failure to do so would violate unitarity and destroy the deterministic evolution of the wave function?

This is consistent with arguments for what happens to quantum states which fall into a black hole or wormhole (on the assumption that the interior is another universe in the multiverse), and also fits nicely with David Deutsch's view of the multiverse and my own ideas toward a general theory of paranormal phenomena.

Spoilers end here.  

 Permalink

Veronico, Nicholas A. Boeing 377 Stratocruiser. North Branch, MN: Specialty Press, [2001] 2002. ISBN 978-1-58007-047-8.
The Boeing 377 Stratocruiser, launched in November 1945, with its first flight in July 1947 and entry into airline revenue service with Pan Am in April 1949, embodied the vision of luxurious postwar air travel based on the technological advances made in aviation during the war. (Indeed, the 377 inherited much from the Boeing B-29 and was a commercial derivative of the XC-97 prototype cargo aircraft.) The Stratocruiser, along with its contemporaries, the Lockheed Constellation and Douglas DC-7, represented the apogee of piston powered airliner design. This was an era in which air travel was a luxury indulged in by the elite, and passengers were provided amenities difficult to imagine in our demotic days of flying cattle cars. There was a luxury compartment seating up to eight people with private sleeping berths and (in some configurations) a private bathroom. First class passengers could sleep in seats that reclined into beds more than six feet long, or in upper berths which folded out at nighttime. Economy passengers were accommodated in reclining “sleeperette” seats with a sixty-inch seat pitch (about twice that of present day economy class). Men and women had their own separate dressing rooms and toilets, and a galley allowed serving multi-course meals on china with silverware as well as buffet snacks. Downstairs on the cargo deck was a lounge seating as many as 14 with a full bar and card tables. One of the reasons for all of these creature comforts was that at a typical cruising speed of 300–340 miles per hour passengers on long haul flights had plenty of time to appreciate them: eleven hours on a flight from Seattle to Honolulu, for example.

Even in the 1950s “flying was the safest way to fly”, but nonetheless taking to the air was much more of an adventure than it is today, hence all those flight insurance vending machines in airports of the epoch. Of a total of 56 Boeing 377s built, no fewer than 10 were lost in accidents, costing a total of 135 crew and passenger lives. Three ditched at sea, including Pan Am 943, which went down in mid-Pacific with all onboard rescued by a Coast Guard weather ship with only a few minor injuries. In addition to crashes, on two separate occasions the main cabin door sprang open in flight, in each case causing one person to be sucked out to their death.

The advent of jet transports brought the luxury piston airliner era to an abrupt end. Stratocruiser airframes, sold to airlines in the 1940s for around US$1.3 million each, were offered in a late 1960 advert in Aviation Week, “14 aircraft from $75,000.00, flyaway”—how the mighty had fallen. Still, the book was not yet closed on the 377. One former Pan Am plane was modified into the Pregnant Guppy airlifter, used to transport NASA's S-IV and S-IVB upper stages for the Saturn I, IB, and V rockets from the manufacturer in California to the launch site in Florida. Later other 377 and surplus C-97 airframes were used to assemble Super Guppy cargo planes, one of which remains in service with NASA.

This book provides an excellent look into a long-gone era of civil aviation at the threshold of the jet age. More than 150 illustrations, including eight pages in colour, complement the text, which is well written with only a few typographical and factual errors. An appendix provides pictures of all but one 377 (which crashed into San Francisco Bay on a routine training flight in 1950, less than a month after being delivered to the airline), with a complete operational history of each.

 Permalink

Sowell, Thomas. Basic Economics. 2nd. ed. New York: Basic Books, [2004] 2007. ISBN 978-0-465-08145-5.
Want to know my idea of a financial paradise? A democratic country where the electorate understands the material so lucidly explained in this superb book. Heck, I'd settle for a country where even a majority of the politicians grasped these matters. In fewer than four hundred pages, without a single graph or equation, the author explains the essentials of economics, which he defines as “the study of the use of scarce resources which have alternative uses”. While economics is a large and complex field with many different points of view, he argues that there are basic economic principles upon which virtually all economists agree, across the spectrum from libertarians to Marxists, that these fundamentals apply to all forms of economic and social organisation—feudalism, capitalism, fascism, socialism, communism, whatever—and in all times: millennia of human history provide abundant evidence for the functioning of these basic laws in every society humans have ever created.

But despite these laws being straightforward (if perhaps somewhat counterintuitive until you learn to “think like an economist”), the sad fact is that few citizens and probably an even smaller fraction of politicians comprehend them. In their ignorance, they confuse intentions and goals (however worthy) with incentives and their consequences, and the outcomes of their actions, however predictable, only serve to illustrate the cost when economic principles are ignored. As the author concludes on the last page:

Perhaps the most important distinction is between what sounds good and what works. The former may be sufficient for purposes of politics or moral preening, but not for the economic advancement of people in general or the poor in particular. For those willing to stop and think, basic economics provides some tools for evaluating policies and proposals in terms of their logical implications and empirical consequences.

And this is precisely what the intelligent citizen needs to know in these times of financial peril. I know of no better source to acquire such knowledge than this book.

I should note that due to the regrettably long bookshelf latency at Fourmilab, I read the second edition of this work after the third edition became available. Usually I wouldn't bother to mention such a detail, but while the second edition I read was 438 pages in length, the third is a 640 page ker-whump on the desktop. Now, my experience in reading the works of Thomas Sowell over the decades is that he doesn't waste words and that every paragraph encapsulates wisdom that's worth taking away, even if you need to read it four or five times over a few days to let it sink in. But still, I'm wary of books which grow to such an extent between editions. I read the second edition, and my unconditional endorsement of it as something you absolutely have to read as soon as possible is based upon the text I read. In all probability the third edition is even better—Dr. Sowell understands the importance of reputation in a market economy better than almost anybody, but I can neither evaluate nor endorse something I haven't yet read. That said, I'm confident that regardless of which edition of this book you read, you will close it as a much wiser citizen of a civil society and participant in a free economy than when you opened the volume.

 Permalink

October 2008

Corsi, Jerome L. The Obama Nation. New York: Threshold Editions, 2008. ISBN 978-1-4165-9806-0.
The author of this book was co-author, with John O'Neill, of the 2004 book about John Kerry, Unfit for Command (October 2004), which resulted in the introduction of the verb “to swiftboat” into the English language. In this outing, the topic is Barack Obama, whose enigmatic origin, slim paper trail, and dubious associates are explored here. Unlike the earlier book, where his co-author had first-hand experience with John Kerry, this book is based almost entirely on secondary sources, well documented in end notes, with many from legacy media outlets, in particular investigative reporting by the Chicago Sun-Times and Chicago Tribune.

The author concludes that behind Obama's centrist and post-partisan presentation is a thoroughly radical agenda, with long-term associations with figures on the extreme left-wing fringe of American society. He paints an Obama administration, especially if empowered by a filibuster-proof majority in the Senate and a House majority, as likely to steer American society toward a European-like social democratic agenda in the greatest veer to the left since the New Deal.

Is this, in fact, likely? Well, there are many worrisome, well-sourced, items here, but then one wonders about the attention to detail of an author who believes that Germany is a permanent member of the United Nations Security Council (p. 262). Lapses like this and a strong partisan tone undermine the persuasiveness of the case made here. I hear that David Freddoso's The Case Against Barack Obama makes a better-argued case, grounded in Obama's roots in Chicago machine politics rather than ideology, but I haven't read that book and I probably won't as the election will surely have gone down before I'd get to it.

If you have no idea where Obama came from or what he believes, there are interesting items here to follow up, but I wouldn't take the picture presented here as valid without independently verifying the source citations and making my own judgement as to their veracity.

 Permalink

Bean, Alan and Andrew Chaikin. Apollo. Shelton, CT: The Greenwich Workshop, 1998. ISBN 978-0-86713-050-8.
On November 19th, 1969, Alan Bean became the fourth man to walk on the Moon, joining Apollo 12 commander Pete Conrad on the surface of Oceanus Procellarum. He was the first person to land on the Moon on his very first space flight. He later commanded the Skylab 3 mission in 1973, spending more than 59 days in orbit.

Astronauts have had a wide variety of second careers after retiring from NASA: executives, professors, politicians, and many others. Among the Apollo astronauts, only Alan Bean set out, after leaving NASA in 1981, to become a professional artist, an endeavour at which he has succeeded, both artistically and commercially. This large format coffee table book collects many of his paintings completed before its publication in 1998, with descriptions by the artist of the subject material of each and, in many cases, what he was trying to achieve artistically. The companion text by space writer Andrew Chaikin (A Man on the Moon) provides an overview of Bean's career and the Apollo program.

Bean's art combines scrupulous attention to technical detail (for example, the precise appearance of items reflected in the curved visor of spacesuit helmets) with impressionistic brushwork and use of colour, intended to convey how the lunar scenes felt, as opposed to the drab, near monochrome appearance of the actual surface. This works for some people, while others find it grating—I like it very much. Visit the Alan Bean Gallery and make up your own mind.

This book is out of print, but used copies are available. (While mint editions can be pricey, non-collector copies for readers just interested in the content are generally available at modest cost).

 Permalink

Phillips, Kevin. Bad Money. New York: Viking, 2008. ISBN 978-0-670-01907-6.
I was less than impressed by the author's last book, American Theocracy (March 2007), so I was a little hesitant about picking up this volume—but I'm glad I did. This is, for its length, the best resource for understanding the present financial mess I've read. While it doesn't explain everything, and necessarily skips over much of the detail, it correctly focuses on the unprecedented explosion of debt in recent decades; the dominance of finance (making money by shuffling money around) over manufacturing (making stuff) in the United States; the emergence of a parallel, unregulated, fantasy-land banking system based on arcane financial derivatives; politicians bent on promoting home ownership whatever the risk to the financial system; and feckless regulators and central bankers who abdicated their responsibility and became “serial bubblers” instead. The interwoven fate of the dollar and petroleum prices, the near-term impact of a global peak in oil production and the need to rein in carbon emissions, and their potential consequences for an already deteriorating economic situation are discussed in detail. You will also learn why government economic statistics (inflation rate, money supply, etc.) should be treated with great scepticism.

The thing about financial bubbles, and why such events are perennial in human societies, is that everybody wins—as long as the bubble continues to inflate and more suckers jump on board. Asset owners see their wealth soar, speculators make a fortune, those producing the assets enjoy ever-increasing demand, lenders earn more and more financing the purchase of appreciating assets, brokers earn greater and greater fees, and government tax revenues from everybody in the loop continue to rise—until the bubble pops. Then everybody loses, as reality reasserts itself. That's what we're beginning to see occur in today's financial markets: a grand-scale deleveraging of which events as of this writing (mid-October 2008) are just the opening act (or maybe the overture).

The author sketches possible scenarios for how the future may play out. On the whole, he's a bit more optimistic than I (despite the last chapter's being titled “The Global Crisis of American Capitalism”), but then that isn't difficult. The speculations about the future seem plausible to me, but I can imagine things developing in far different ways than those envisioned here, many of which would seem far-fetched today. There are a few errors (for example, Vladimir Putin never “headed the KGB” [p. 192]: in fact he retired from the KGB in 1991 after returning from having served as an agent in Dresden), but none seriously affects the arguments presented.

I continue to believe the author overstates the influence of the evangelical right in U.S. politics, and understates the culpability of politicians of both parties in creating the moral hazard which has now turned into the present peril. But these quibbles do not detract from this excellent primer on how the present crisis came to be, and what the future may hold.

 Permalink

Darling, Kev. De Havilland Comet. North Branch, MN: Specialty Press, 2001. ISBN 978-1-58007-036-2.
If the Boeing 377 was the epitome and eventual sunset of the piston powered airliner, the De Havilland Comet was the dawn, or perhaps the false dawn, of the jet age. As World War II was winding down, the British Government convened a commission to explore how the advances in aviation during the war could be translated into commercial aircraft in the postwar era, and how the British aviation industry could transition from military production to a leadership position in postwar aviation. Among the projects proposed, the most daring was the “Type 4”, which eventually became the De Havilland Comet. Powered by British-invented turbojet engines, it would be a swept-wing, four engine aircraft with a cruising speed in excess of 500 miles per hour and a stage length of 1500 miles. Despite these daunting technological leaps, the British aviation industry rose to the challenge, and in July 1949, the prototype De Havilland Comet took to the air. After extensive testing, the Comet entered revenue service in May 1952, the first commercial jet-powered passenger service. Surely the jet age was dawning, and Britannia would rule it.

And then disaster struck. First, three aircraft were lost due to the Comet's tetchy handling qualities and cockpit crews' unfamiliarity with the need to maintain speed in takeoff and landing with swept-wing aircraft. Another Comet was lost with all on board flying into a tropical storm in India. Analysis of the wreckage indicated that metal fatigue cracks at the corners of the square windows may have contributed to the structural failures, but this was not considered the definitive cause of the crash and Comets were permitted to continue to fly. Next, a Comet departed Rome and disintegrated in mid-air above the island of Elba, killing all on board. BOAC (the operator of the Comet in question) grounded their fleet voluntarily pending an investigation, but then reinstated flights 10 weeks later, as no probable cause had been determined for the earlier crashes. Just three days later, another BOAC aircraft, also departing Rome, disintegrated in the air near Naples, with no survivors. The British Civil Aviation Authority withdrew the Permit to Fly for the Comet, grounding all of the aircraft in operation.

Assiduous investigation determined that the flaw in the Comet had nothing to do with its breakthrough jet propulsion, or the performance it permitted, but rather structural failure due to metal fatigue, which started at the aerial covers at the top of the fuselage, then disastrously propagated to cracks originating at the square corners of the windows in the passenger cabin. Reinforcement of the weak points of the fuselage and replacement of the square windows with oval ones completely solved this problem, but only after precious time had been lost and, with it, the Comet's chance to define the jet age.

The subsequent Comets were a great success. The Comet 2 served with distinction with the Royal Air Force in a variety of capacities, and the Comet 4 became the flagship of numerous airlines around the globe. On October 4th, 1958, a Comet 4 inaugurated transatlantic jet passenger service, just 22 days before the entry into service of the Boeing 707. The 707, with much greater passenger capacity (I remember the first time I saw one—I drew in my breath and said “It's so big”—the 747 actually had less impact on me than the 707 compared to earlier prop airliners) rapidly supplanted the Comet on high traffic city pairs.

But the Comet lived on. In the aftermarket, it was the jet fleet leader of numerous airlines, and the flagship of British airtour operator Dan-Air. The Comet 4 was the basis for the Nimrod marine patrol aircraft, which has served with the Royal Air Force since 1971 and remains in service today. With lifetime extensions, it is entirely possible that Nimrod aircraft will remain on patrol a century after their progenitor, the Comet, first took to the air.

This thorough, well-written, and lavishly illustrated (8 pages in colour) book provides comprehensive coverage of the Comet and Nimrod programmes, from concept through development, test, entry into service, tragedy, recovery, and eventual success (short-lived for the Comet 4, continuing for its Nimrod offspring).

 Permalink

Klavan, Andrew. Empire of Lies. New York: Harcourt, 2008. ISBN 978-0-15-101223-7.
One perfect October Saturday afternoon, Jason Harrow, successful businessman, happily married father of three, committed Christian whose religion informs his moral sense, is sharing a lazy day with his family when the phone rings and sets in motion an inexorable sequence of events which forces him to confront his dark past, when he was none of those things. Drawn from his prosperous life in the Midwest to the seamy world of Manhattan, he finds himself enmeshed in an almost hallucinatory web of evil and deceit which makes him doubt his own perception of reality, fearing that the dementia which consumed his mother is beginning to manifest itself in him, and that his moral sense is nothing but a veneer over the dark passions of his past.

This is a thriller that thrills. Although the story is unusual for these days in having a Christian protagonist who is not a caricature, this is no Left Behind hymn-singing tract; in fact, the language and situations are quite rough and unsuitable for the less than mature. The author, two of whose earlier books have been adapted into the films True Crime and Don't Say a Word, has a great deal of fun at the expense of the legacy media, political correctness, and obese, dissipated, staccato-speaking actors who once portrayed dashing spacefarers. If you fall into any of those categories, you may be intensely irritated by this book, but otherwise you'll probably, like me, devour it in a few sittings. I just finished it this perfect October Saturday afternoon, and it's one of the most satisfying thrillers I've read in years.

A spoiler-free podcast interview with the author is available.

 Permalink

West, Diana. The Death of the Grown-Up. New York: St. Martin's Griffin, 2007. ISBN 978-0-312-34049-0.
In The Case Against Adolescence (July 2007), Robert Epstein argued that the concept of adolescence as a distinct phase of life is a recently-invented social construct which replaced the traditional process of childhood passing into an apprenticeship to adulthood around the time of puberty. In this book, acid-penned author Diana West, while not discussing Epstein's contentions, suggests that the impact of adolescence upon the culture is even greater and more pernicious, and that starting with the Boomer generation, the very goal of maturing into an adult has been replaced by a “forever young” narcissism which elevates the behaviour of adolescence into the desideratum of people who previously would have been expected to put such childish things behind them and assume the responsibilities of adults.

What do you get when you have a society full of superannuated adolescents? An adolescent culture, of course, addicted to instant gratification (see the debt crisis), lack of respect for traditional virtues and moderation, a preference for ignoring difficult problems in favour of trivial distractions, and for euphemisms instead of unpleasant reality. Such a society spends so much time looking inward that it forgets who it is or where it has come from, and becomes as easily manipulated as an adolescent at the hands of a quick-talking confidence man. And there are, as always, no shortage of such predators ready to exploit it.

This situation, the author argues, crossing the line from cultural criticism into red meat territory, becomes an existential threat when faced with what she calls “The Real Culture War”: the challenge to the West from Islam (not “Islamists”, “Islamofascists”, “Islamic terrorists”, “militant fundamentalists” or the like, but Islam—the religion, in which she contends the institutions of violent jihad and dhimmitude for subjected populations which do not convert have been established from its early days). Islam, she says, is a culture which, whatever its shortcomings, does know what it is, exhorts its adherents to propagate it, and has no difficulty proclaiming its superiority over all others or working toward a goal of global domination. Now this isn't, of course, the first time the West has faced such a threat: in just the last century the equally aggressive and murderous ideologies of fascism and communism were defeated, but they were defeated by an adult society, not a bunch of multiculturally indoctrinated, reflexively cringing, ignorant or disdainful of their own culture, clueless about history, parents and grandparents whose own process of maturation stopped somewhere in their teens.

This is a polemic, and sometimes reads like a newspaper op-ed piece which has to punch its message through in limited space as opposed to the more measured development of an argument appropriate to the long form. I also think the author really misses a crucial connection in not citing the work of Epstein and others on the damage wrought by the concept of adolescence itself—when you segregate young adults by age and cut them off from the contact with adults which traditionally taught them what adulthood meant and how and why they should aspire to it, is it any surprise that you end up with a culture filled with people who have never figured out how to behave as adults?

 Permalink

November 2008

Buckley, Christopher. Supreme Courtship. New York: Twelve, 2008. ISBN 978-0-446-57982-7.
You know you're about to be treated to the highest level of political farce by a master of the genre when you open a book which begins with the sentence:
Supreme Court Associate Justice J. Mortimer Brinnin's deteriorating mental condition had been the subject of talk for some months now, but when he showed up for oral argument with his ears wrapped in aluminum foil, the consensus was that the time had finally come for him to retire.
The departure of Mr. Justice Brinnin created a vacancy which embattled President Donald Vanderdamp attempted to fill with two distinguished jurists boasting meagre paper trails, both of whom were humiliatingly annihilated in hearings before the Senate Judiciary Committee, whose chairman, loquacious loose cannon and serial presidential candidate Dexter Mitchell, coveted the seat for himself.

After rejection of his latest nominee, the frustrated president was channel surfing at Camp David when he came across the wildly popular television show Courtroom Six featuring television (and former Los Angeles Superior Court) judge Pepper Cartwright dispensing down-home justice with her signature Texas twang and dialect. Let detested Senator Mitchell take on that kind of popularity, thought the Chief Executive, chortling at the prospect, and before long Judge Pepper is rolled out as the next nominee, and prepares for the confirmation fight.

I kind of expected this story to be about how an authentic straight-talking human being confronts the “Borking” judicial nominees routinely receive in today's Senate, but it's much more and goes way beyond that, which I shall refrain from discussing to avoid spoilers. I found the latter half of the book less satisfying than the first—it seemed like once on the court Pepper lost some of her spice, but I suppose that's realistic (yet who expects realism in farces?). Still, this is a funny book, with hundreds of laugh out loud well-turned phrases and Buckley's customary delightfully named characters. The fractured Latin and snarky footnotes are an extra treat. This is not a roman à clef, but you will recognise a number of Washington figures upon which various characters were modelled.

 Permalink

Kimball, Roger. Tenured Radicals. 3rd. ed. Chicago: Ivan R. Dee, [1990, 1991, 1998] 2008. ISBN 978-1-56663-796-1.
If you want to understand what's happening in the United States today, and how the so-called millennial generation (May 2008) came to be what it is, there's no better place to start than this book, originally published eighteen years ago, which has just been released in a new paperback edition with an introduction and postscript totalling 65 pages which update the situation as of 2008. The main text has been revised as well, and a number of footnotes added to update matters which have changed since earlier editions.

Kimball's thesis is that, already by 1990, and far more and broadly diffused today, the humanities departments (English, Comparative Literature, Modern Languages, Philosophy, etc.) of prestigious (and now almost all) institutions of higher learning have been thoroughly radicalised by politically-oriented academics who have jettisoned the traditional canon of literature, art, and learning and rejected the traditional mission of a liberal arts education in favour of indoctrinating students in a nominally “multicultural” but actually anti-Western ideology which denies the existence of objective truth and the meaning of text, and inculcates the Marxist view that all works can be evaluated only in terms of their political context and consequences. These pernicious ideas, which have been discredited by their disastrous consequences in the last century and laughed out of public discourse everywhere else, have managed to achieve an effective hegemony in the American academy, with tenured radicals making hiring and tenure decisions based upon adherence to their ideology as opposed to merit in disinterested intellectual inquiry.

Now, putting aside this being disastrous to a society which, like all societies, is never more than one generation away from losing its culture, and catastrophic to a country which now has a second generation of voters entering the electorate who are ignorant of the cultural heritage they inherited and the history of the nation whose leadership they are about to assume, this spectacle can also be quite funny if observed with special goggles which only transmit black humour. For the whole intellectual tommyrot of “deconstruction” and “postmodernism” has become so trendy that intellectuals in other fields one would expect to be more immune to such twaddle are getting into the act, including the law (“Critical Legal Studies”) and—astoundingly—architecture. An entire chapter is devoted to “Deconstructivist Architecture”, which by its very name seems to indicate you wouldn't want to spend much time in buildings “deconstructed” by its proponents. And yet, it has a bevy of earnest advocates, including Peter Eisenman, one of the most distinguished of U.S. architects, who advised those wishing to move beyond the sterility of modernism to seek

a theory of the center, that is, a theory which occupies the center. I believe that only when such a theory of the center is articulated will architecture be able to transform itself as it always has and as it always will…. But the center that I am talking about is not a center that can be the center that we know is in the past, as a nostalgia for center. Rather, this not new but other center will be … an interstitial one—but one with no structure, but one also that embraces as periphery in its own centric position. … A center no longer sustained by nostalgia and no longer sustained by univocal discourse. (p. 187)
Got that? I'd hate to be a client explaining to him that I want the main door to be centred between these two windows.

But seriously, apart from the zaniness, intellectual vapidity and sophistry, and obscurantist prose (all of which are on abundant display here), what we're seeing is what Italian Communist Antonio Gramsci called the “long march through the institutions” arriving at the Marxist promised land: institutions of higher education funded with taxpayer money and onerous tuition payments paid by hard-working parents and towering student loans disgorging class after class of historically and culturally ignorant, indoctrinated, and easily influenced individuals into the electorate, just waiting for a charismatic leader who knows how to eloquently enunciate the trigger words they've been waiting for.

In the 2008 postscript the author notes that a common reaction to the original 1990 edition of the book was the claim that he had cherry-picked for mockery a few of the inevitably bizarre extremes you're sure to find in a vibrant and diverse academic community. But with all the news in subsequent years of speech codes, jackboot enforcing of “diversity”, and the lockstep conformity of much of academia, this argument is less plausible today. Indeed, much of the history of the last two decades has been the diffusion of the new deconstructive and multicultural orthodoxy from elite institutions into the mainstream and its creeping into the secondary school curriculum as well. What happens in academia matters, especially in a country in which an unprecedented percentage of the population passes through what style themselves as institutions of higher learning. The consequences of this should begin to be manifest in the United States over the next few years.

 Permalink

Anderson, Brian C. and Adam D. Thierer. A Manifesto for Media Freedom. New York: Encounter Books, 2008. ISBN 978-1-59403-228-8.
In the last decade, the explosive growth of the Internet has allowed a proliferation of sources of information and opinion unprecedented in the human experience. As humanity's first ever many-to-many mass medium, the Internet has essentially eliminated the barriers to entry for anybody who wishes to address an audience of any size in any medium whatsoever. What does it cost to start your own worldwide television or talk radio show? Nothing—and the more print-inclined can join the more than a hundred million blogs competing for the global audience's attention. In the United States, the decade prior to the great mass-market pile-on to the Internet saw an impressive (by pre-Internet standards) broadening of radio and television offerings as cable and satellite distribution removed the constraints of over-the-air bandwidth and limited transmission range, and abolition of the “Fairness Doctrine” freed broadcasters to air political and religious programming of every kind.

Fervent believers in free speech found these developments exhilarating and, if they had any regrets, they were only that it didn't happen more quickly or go as far as it might. One of the most instructive lessons of this epoch has been that prominent among the malcontents of the new media age have been politicians who mouth their allegiance to free speech while trying to muzzle it, and legacy media outlets who wrap themselves in the First Amendment while trying to construe it as a privilege reserved for themselves, not a right with which the general populace is endowed as individuals.

Unfortunately for the cause of liberty, while technologists, entrepreneurs, and new media innovators strive to level the mass communication playing field, it's the politicians who make the laws and write the regulations under which everybody plays, and the legacy media which support politicians inclined to tilt the balance back in their favour, reversing (or at least slowing) the death spiral in their audience and revenue figures. This thin volume (just 128 pages: even the authors describe it as a “brief polemic”) sketches the four principal threats they see to the democratisation of speech we have enjoyed so far and hope to see broadened in unimagined ways in the future. Three have suitably Orwellian names: the “Fairness Doctrine” (content-based censorship of broadcast media), “Network Neutrality” (allowing the FCC's camel nose into the tent of the Internet, with who knows what consequences as Fox Charlie sweeps Internet traffic into the regulatory regime it used to stifle innovation in broadcasting for half a century), and “Campaign Finance Reform” (government regulation of political speech, often implemented in such a way as to protect incumbents from challengers and shut out insurgent political movements from access to the electorate). The fourth threat to new media is what the authors call “neophobia”: fear of the new. To the neophobe, the very fact of a medium's being innovative is presumptive proof that it is dangerous and should be subjected to regulation from which pre-existing media are exempt. Just look at the political entrepreneurs salivating over regulating video games, social networking sites, and even enforcing “balance” in blogs and Web news sources to see how powerful a force this is. And we have a venerable precedent in broadcasting being subjected, almost from its inception unto the present, to regulation unthinkable for print media.

The actual manifesto presented here occupies all of a page and a half, and can be summarised as “Don't touch! It's working fine and will evolve naturally to get better and better.” As I agree with that 100%, my quibbles with the book are entirely minor items of presentation and emphasis. The chapter on network neutrality doesn't completely close the sale, in my estimation, on how something as innocent-sounding as “no packet left behind” can open the door to intrusive content regulation of the Internet and the end of privacy, but then it's hard to explain concisely: when I tried five years ago, more than 25,000 words spilt onto the page. Also, perhaps because the authors' focus is on political speech, I think they've underestimated the extent to which, in regulation of the Internet, ginned-up fear of what I call the unholy trinity (terrorists, drug dealers, and money launderers) can be exploited by politicians to put in place content regulation which they can then turn to their own partisan advantage.

This is a timely book, especially for readers in the U.S., as the incoming government seems more inclined to these kinds of regulations than the one it supplants. (I am on record as of July 10th, 2008, as predicting that an Obama administration would re-impose the “fairness doctrine”, enact “network neutrality”, and [an issue not given the attention I think it merits in this book] adopt “hate speech” legislation, all with the effect of stifling [mostly due to precautionary prior restraint] free speech in all new media.) For a work of advocacy, this book is way too expensive given its length: it would reach far more of the people who need to be apprised of these threats to their freedom of expression and to access to information were it available as an inexpensive paperback pamphlet or on-line download.

A podcast interview with one of the authors is available.

 Permalink

Macintyre, Ben. Agent Zigzag. New York: Three Rivers Press, 2007. ISBN 978-0-307-35341-2.
I'm not sure I'd agree with the cover blurb by the Boston Globe reviewer who deemed this “The best book ever written”, but it's a heck of a great read and will keep you enthralled from start to finish. Imagine the best wartime espionage novel you've ever read, stir in exploits from a criminal caper yarn, leaven with an assortment of delightfully eccentric characters, and then make the whole thing totally factual, exhaustively documented from archives declassified decades later by MI5, and you have this compelling story.

The protagonist, Eddie Chapman was, over his long and convoluted career, a British soldier; deserter; safecracker; elite criminal; prisoner of His Majesty, the government of the Isle of Jersey, and the Nazi occupation in Paris; volunteer spy and saboteur for the German Abwehr; parachute spy in Britain; double agent for MI5; instructor at a school for German spies in Norway; spy once again in Britain, deceiving the Germans about V-1 impact locations; participant in fixed dog track races; serial womaniser married to the same woman for fifty years; and for a while an “honorary crime correspondent” to the Sunday Telegraph. That's a lot to fit into even a life as long as Chapman's, and a decade after his death, those who remember him still aren't sure where his ultimate allegiance lay or even if the concept applied to him. If you simply look at him as an utterly amoral person who managed to always come up standing, even after intensive interrogations by MI5, the Abwehr, Gestapo, and SS, you miss his engaging charm, whether genuine or feigned, which engendered deeply-felt and long-lasting affection among his associates, both British and Nazi, criminal and police, all of whom describe him as a unique character.

Information on Chapman's exploits has been leaking out ever since he started publishing autobiographical information in 1953. Dodging the Official Secrets Act, in 1966 he published a more detailed account of his adventures, which was made into a very bad movie starring Christopher Plummer as Eddie Chapman. Since much of this information came from Chapman, it's not surprising that a substantial part of it was bogus. It is only with the release of the MI5 records, and through interviews with surviving participants in Chapman's exploits that the author was able to piece together an account which, while leaving many questions of motivation uncertain, at least pins down the facts and chronology.

This is a thoroughly delightful story of a totally ambiguous character: awarded the Iron Cross for his services to the Nazi Reich, having mistresses simultaneously supported in Britain and Norway by MI5 and the Abwehr, covertly pardoned for his high-profile criminal record for his service to the Crown, and unreconstructed rogue in his long life after the war. If published as spy fiction, this would be considered implausible in the extreme; the fact that it really happened makes this one of the most remarkable wartime stories I've read and an encounter with a character few novelists could invent.

 Permalink

Miller, Ron and Fredrick C. Durant III. The Art of Chesley Bonestell. London: Paper Tiger, 2001. ISBN 978-1-85585-884-8.
If you're interested in astronomy and space, you're almost certainly familiar with the space art of Chesley Bonestell, who essentially created the genre of realistic depictions of extraterrestrial scenes. But did you know that Bonestell also:

  • Was a licensed architect in the State of California, who contributed to the design of a number of buildings erected in Northern California in the aftermath of the 1906 earthquake?
  • Chose the site for the 1915 Panama-Pacific International Exposition (of which the San Francisco Palace of Fine Arts remains today)?
  • Laid out the Seventeen Mile Drive in Pebble Beach on the Monterey Peninsula?
  • Did detailed design of the ornamentation of the towers of the Golden Gate Bridge, and illustrated pamphlets explaining the engineering of the bridge?
  • Worked for years in Hollywood doing matte paintings for films including Citizen Kane?
  • Not only did the matte paintings, but designed the buildings of Howard Roark for the film version of The Fountainhead?
  • Painted the Spanish missions of California as they would have appeared in their heyday?

Although Bonestell always considered himself an illustrator, not an artist, and for much of his career took no particular care to preserve the originals of his work, here was a polymath with a paintbrush who brought genius as well as precision to every subject he rendered. He was, like his collaborator on Destination Moon, Robert A. Heinlein (the two admired each other's talents, but Bonestell thought Heinlein somewhat of a nut in his political views; their relationship got off to a rocky start when Bonestell visited Heinlein's self-designed dream house and pronounced his architectural judgement that it looked like a gas station), a businessman first—he would take the job that paid best and quickest, and produced a large volume of commercial art to order, all with the attention to detail of his more artistically ambitious creations.

While Bonestell was modest about his artistic pretensions, he had no shortage of self-esteem: in 1974 he painted a proposed redesign of the facade of St. Peter's Basilica more in keeping with his interpretation of Michelangelo's original intent and arranged to have it sent to the Pope, who responded, in essence, “Thanks, but no thanks”.

This resplendent large-format coffee table book tells the story of Bonestell's long and extraordinarily creative career in both text and hundreds of full-colour illustrations of his work. To open this book to almost any page is to see worlds unknown at the time, rendered through the eye of an artist whose mind transported him there and sparked the dream of exploration in the generations which expanded the human presence and quest to explore beyond the home planet.

This book is out of print and used copies command a frightful premium; I bought this book when it was for sale at the cover price and didn't get around to reading all the text for seven years, hence its tardy appearance here.

 Permalink

Kauffman, Bill. Forgotten Founder, Drunken Prophet. Wilmington: ISI Books, 2008. ISBN 978-1-933859-73-6.
It is a cliché to observe that history is written by the victors, but rarely is it as evident as in the case of the drafting and ratification of the United States Constitution, where the proponents of a strong national government, some of whom, including Alexander Hamilton, wished to “annihilate the State distinctions and State operations” (p. 30), conducted the proceedings in secret, carefully managed the flow of information to the public, and concealed their nationalist, nay imperial, ambitions from the state conventions which were to vote on ratification. Indeed, just like modern-day collectivists in the U.S. who have purloined the word “liberal”, which used to mean a champion of individual freedom, the covert centralisers at the Constitutional Convention styled themselves “Federalists”, while promoting a supreme government which was anything but federal in nature. The genuine champions of a federal structure allowed themselves to be dubbed “Anti-Federalists” and, as always, were slandered as opposing “progress” (but toward what?). The Anti-Federalists counted among their ranks men such as Samuel Adams, Patrick Henry, George Mason, Samuel Chase, and Elbridge Gerry: these were not reactionary bumpkins but heroes, patriots, and intellectuals the equal of any of their opponents. And then there was Luther Martin, fervent Anti-Federalist and perhaps the least celebrated of the Founding Fathers.

Martin's long life was a study in contradictions. He was considered one of the most brilliant trial lawyers of his time, and yet his courtroom demeanour was universally described as long-winded, rambling, uncouth, and ungrammatical. He often appeared in court obviously inebriated, was slovenly in appearance and dress, when excited would flick spittle from his mouth, and let's not get into his table manners. At the Constitutional Convention he was a fierce opponent of the Virginia Plan which became the basis of the Constitution and, with Samuel Adams and Mason, urged the adoption of a Bill of Rights. He argued vehemently for the inclusion of an immediate ban on the importation of slaves and a plan to phase out slavery while, as of 1790, owning six slaves himself yet serving as Honorary-Counselor to a Maryland abolitionist society.

After the Constitution was adopted by the convention (Martin had walked out by the time and did not sign the document), he led the fight against its ratification by Maryland. Maryland ratified the Constitution over his opposition, but he did manage to make the ratification conditional upon the adoption of a Bill of Rights.

Martin was a man with larger than life passions. Although philosophically close to Thomas Jefferson in his view of government, he detested the man because he believed Jefferson had slandered one of his wife's ancestors as a murderer of Indians. When Jefferson became President, Martin the Anti-Federalist became Martin the ardent Federalist, bent on causing Jefferson as much anguish as possible. When a law student studying with him eloped with and married his daughter, Martin turned incandescent, then wrote and self-published a 163-page full-tilt tirade against the bounder titled Modern Gratitude.

Lest Martin come across as a kind of buffoon, bear in mind that after his singular performance at the Constitutional Convention, he went on to serve as Attorney General of the State of Maryland for thirty years (a tenure never equalled in all the years which followed), argued forty cases before the U.S. Supreme Court, and appeared for the defence in two of the epochal trials of early U.S. jurisprudence: the impeachment trial of Supreme Court Justice Samuel Chase before the U.S. Senate, and the treason trial of Aaron Burr—and won acquittals on both occasions.

The author is an unabashed libertarian, and considers Martin's diagnosis of how the Constitution would inevitably lead to the concentration of power in a Federal City (which his fellow Anti-Federalist George Clinton foresaw, “would be the asylum of the base, idle, avaricious, and ambitious” [p. xiii]) to the detriment of individual liberty as prescient. One wishes that Martin had been listened to, while sympathising with those who actually had to endure his speeches.

The author writes with an exuberantly vast vocabulary which probably would have sent the late William F. Buckley to the dictionary on several occasions: every few pages you come across a word like “roorback”, “eftsoons”, “sennight”, or “fleer”. For a complete list of those which stumped me, open the vault of the spoilers.

Spoiler warning: Plot and/or ending details follow.  
Here are the delightfully obscure words used in this book. To avoid typographic fussiness, I have not quoted them. Each is linked to its definition. Vocabulary ho!

malison, exordium, eristic, roorback, tertium quid, bibulosity, eftsoons, vendue, froward, pococurante, disprized, toper, cerecloth, sennight, valetudinarian, variorum, concinnity, plashing, ultimo, fleer, recusants, scrim, flagitious, indurated, truckling, linguacious, caducity, prepotency, natheless, dissentient, placemen, lenity, burke, plangency, roundelay, hymeneally, mesalliance, divagation, parti pris, anent, comminatory, descry, minatory
Spoilers end here.  

This is a wonderful little book which, if your view of the U.S. Constitution has been solely based on the propaganda of those who promulgated it, is an excellent and enjoyable antidote.

 Permalink

December 2008

Sheckley, Robert. The People Trap and Mindswap. New York: Ace Books, [1952–1966, 1968] 1981. ISBN 978-0-441-65874-9.
This “Ace Double” (albeit not in the classic dos-à-dos format, but simply concatenated) contains a collection of mostly unrelated short stories by Robert Sheckley, and the short novel Mindswap, which is an extraordinarily zany story even by the standards of the year in which it was written, 1966, which was a pretty zany year—perhaps Sheckley foresaw just how weird the next few years would get.

I bought this book because it contained a story I've remembered ever since I first read it four decades ago (and, even then, a decade after it was first published in Galaxy in 1953), “The Laxian Key”. In a century and a half of science fiction, this is the only exemplar of which I'm aware of a story based upon economics which is also riotously funny. I won't give away the plot, but just imagine the ultimate implications of “it's free!”.

These stories are gems from the era in which science fiction was truly the “literature of ideas”—it's the ideas that matter; don't look for character development or introspection: the characters are props for the idea that underlies each story. If you like this kind of thing, which I do enormously, here is a master at work at the apogee of the genre, when you could pick up any one of the science fiction magazines and find several stories that made you look at the world through glasses which presented reality in a very different light.

This book is long out of print, but used copies are readily available, often for less than the 1981 reprint cover price.

 Permalink

Rawles, James Wesley. Patriots. Philadelphia: Clearwater Press, 2006. ISBN 978-1-4257-3407-7.

A human being should be able to change a diaper, plan an invasion, butcher a hog, design a building, conn a ship, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve an equation, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.

Robert A. Heinlein

In this compelling novel, which is essentially a fictionalised survival manual, the author tracks a small group of people who have banded together and prepared themselves to ride out total societal collapse in the United States, and who are eventually forced by circumstances to do all of these things and more. I do not have high expectations for self-published works by first-time authors, but I started to read this book whilst scanning documents for one of my other projects and found it so engrossing that the excellent book I was reading at the time (a review of which will appear here shortly) was set aside as I scarfed up this book in a few days.

Our modern, technological civilisation has very much a “just in time” structure: interrupt electrical power and water supplies and sewage treatment fail in short order. Disrupt the fuel supply (in any number of ways), and provision of food to urban centres fails in less than a week, with food riots and looting the most likely outcome. As we head into what appears to be an economic spot of bother, it's worth considering just how bad it may get, and how well you and yours are prepared to ride out the turbulence. This book, which one hopes profoundly exaggerates the severity of what is to come, is an excellent way to inventory your own preparations and skills for a possible worst case scenario. For a sense of the author's perspective, and for a wealth of background information only alluded to in passing in the book, visit the author's SurvivalBlog.com site.

Sploosh, splash, inky squirt! Ahhhh…, it's Apostrophe Squid trying to get my attention. What is it about self-published authors who manifest encyclopedic knowledge across domains as diverse as nutrition, military tactics, medicine, economics, agriculture, weapons and ballistics, communications security, automobile and aviation mechanics, and many more difficult to master fields, yet who stumble over the humble apostrophe like their combat bootlaces were tied together? Our present author can tell you how to modify a common amateur radio transceiver to communicate on the unmonitored fringes of the Citizens' Band and how to make your own improvised Claymore mines, but can't seem to form the possessive of a standard plural English noun, and hence writes “Citizen's Band” and the equivalent in all instances. (Just how useful would a “Citizen's Band” radio be, with only one citizen transmitting and receiving on it?)

Despite the punctuational abuse and the rather awkward commingling of a fictional survival scenario with a catalogue of preparedness advice and sources of things you'll need when the supply chain breaks, I found this a compulsive page-turner. It will certainly make you recalibrate your ability to ride out that bad day when you go to check the news and find there's no Internet, and think again about just how much food you should store in the basement and (more importantly), how skilled you are in preparing what you cached many years ago, not to mention what you'll do when that supply is exhausted.

 Permalink

Drury, Allen. Come Nineveh, Come Tyre. New York: Avon, 1973. ISBN 978-0-380-00126-2.
This novel is one of the two alternative conclusions the author wrote for the series which began with his Pulitzer Prize winning Advise and Consent. As the series progressed, Drury became increasingly over the top (some would say around the bend) in skewering the media, academia, and the Washington liberal establishment of the 1960s and 1970s with wickedly ironic satire apt to make the skulls of contemporary bien pensants explode.

The story is set in a time in which the U.S. is involved in two protracted and broadly unpopular foreign wars, one seemingly winding down, the other an ongoing quagmire, both launched by a deeply despised president derided by the media and opposition as a warmonger. Due to a set of unexpected twists and turns in an electoral campaign like no other, a peace candidate emerges as the nominee of his party—a candidate with no foreign policy experience but supreme self-confidence, committed to engaging America's adversaries directly in one-on-one diplomacy, certain the outstanding conflicts can be thus resolved and, with multilateral good will, world peace finally achieved. This eloquent, charismatic, almost messianic candidate mobilises the support of a new generation, previously disengaged from politics, who not only throw their youthful vigour behind his campaign but enter the political arena themselves and support candidates aligned with the presidential standard bearer. Around the world, the candidate is praised as heralding a new era in America. The media enlist themselves on his side in an unprecedented manner, passing, not just on editorial pages but in supposedly objective news coverage, from artful bias to open partisanship. Worrisome connections between the candidate and radicals unwilling to renounce past violent acts, anti-American demagogues, and groups which resort to thuggish tactics against opponents and critics do not figure in the media's adulatory coverage of their chosen one. The media find themselves easily intimidated by even veiled threats of violence, and quietly self-censor criticism of those who oppose liberty for fear of “offending.” The candidate, inspiring the nation with hope for peace and change for the better, wins a decisive victory, sweeping in strong majorities in both the House and Senate, including many liberal freshmen aligned with the president-elect and owing their seats to the coattails of his victory. 
Bear in mind that this novel was published in 1973!

This is the story of what happens after the candidate of peace, change, and hope takes office, gives a stunningly eloquent, visionary, and bold inaugural address, and basks in worldwide adulation while everything goes swimmingly—for about twelve hours. Afterward, well, things don't, and a cataclysmic set of events are set into motion which threaten to change the U.S. in ways other than were hoped by those who elected the new man.

Now, this book was published three and a half decades ago, and much has changed in the intervening time, which doubtless explains why all of the books in the series are now long out of print. But considering the précis above, and how prophetic many of its elements were of the present situation in the U.S., maybe there's some wisdom here relevant to the changes underway there. Certainly one hopes that used booksellers aren't getting a lot of orders for this volume from buyers in Moscow, Beijing, Pyongyang, and Tehran. I had not read this book since its initial publication (when, despite almost universal disdain from the liberal media, it sold almost 200,000 copies in hardcover), and found in re-reading it that the story, while obviously outdated in some regards (the enemy of yore, the Soviet Bear, is no more, but who knows where Russia's headed?), especially as regards the now-legacy media, stands up better than I remembered it from the first reading. The embrace of media content regulation by a “liberal” administration is especially chilling at a time when talk of re-imposing the “Fairness Doctrine” and enforcing “network neutrality” is afoot in Washington.

All editions of this book are out of print, but used copies of the mass-market paperback are presently available for little more than the shipping cost. Get yours before the bad guys clean out the shelves!

 Permalink

Shlaes, Amity. The Forgotten Man. New York: Harper Perennial, [2007] 2008. ISBN 978-0-06-093642-6.
The conventional narrative of the Great Depression and New Deal is well-defined, and generations have been taught the story of how financial hysteria and lack of regulation led to the stock market crash of October 1929, which tipped the world economy into depression. The do-nothing policies of Herbert Hoover and his Republican majority in Congress allowed the situation to deteriorate until thousands of banks had failed, unemployment rose to around a quarter of the work force, collapsing commodity prices bankrupted millions of farmers, and world trade and credit markets froze, exporting the Depression from the U.S. to developed countries around the world. Upon taking office in 1933, Franklin Roosevelt embarked on an aggressive program of government intervention in the economy, going off the gold standard, devaluing the dollar, increasing government spending and tax rates on corporations and the wealthy by breathtaking amounts, imposing comprehensive regulation on every aspect of the economy, promoting trade unions, and launching public works and job creation programs on a massive scale. Although neither financial markets nor unemployment recovered to pre-crash levels, and full recovery did not occur until war production created demand for all that industry could produce, at least FDR's New Deal kept things from getting much worse, kept millions from privation and starvation, and just possibly, by interfering with the free market in ways never before imagined in America, preserved it, and democracy, from the kind of revolutionary upheaval seen in the Soviet Union, Italy, Japan, and Germany. The New Deal pitted plutocrats, big business, and Wall Street speculators against the “forgotten man”—the people who farmed their land, toiled in the factories, and strove to pay their bills and support their families; for once, allied with the Federal Government, the little guys won.

This is a story whose key points almost any student who has completed an introductory course in American history can recount. It is a tidy story, an inspiring one, and both a justification for an activist government and a demonstration that such intervention can work, even in the most dire of economic situations. But is it accurate? In this masterful book, based largely on primary and often contemporary sources, the author makes a forceful argument that it is not—she does not dispute the historical events, most of which did indeed occur as described above, but rather the causal narrative which has been erected, largely after the fact, to explain them. Looking at what actually happened and when, the tidily wrapped up package begins to unravel and discordant pieces fall out.

For example, consider the crash of 1929. Prior to the crash, unemployment was around three percent (the Federal Government did not compile unemployment figures at the time, and available sources differ in methodology and hence in the precise figures). Following the crash, unemployment began to rise steeply and had reached around 9% by the end of 1929. But then the economy began to recover and unemployment fell. President Hoover was anything but passive: the Great Engineer launched a flurry of initiatives, almost all disastrously misguided. He signed the Hawley-Smoot Tariff (over the objection of an open letter signed by 1,028 economists and published in the New York Times). He raised taxes and, diagnosing the ills of the economy as due to inflation, encouraged the Federal Reserve to contract the money supply. To counter falling wages, he jawboned industry leaders to maintain wage levels which predictably resulted in layoffs instead of reduced wages. It was only after these measures took hold that the economy, which before seemed to be headed into a 1921-like recession, nosed over and began to collapse toward the depths of the Depression.

There was a great deal of continuity between the Hoover and early Roosevelt administrations. Roosevelt did not rescind Hoover's disastrous policies, but rather piled on intrusive regulation of agriculture and industry, vastly increased Federal spending (he almost doubled the Federal budget in his first term), increased taxes to levels before unimaginable in peacetime, and directly attacked private enterprise in sectors such as electrical power generation and distribution, which he felt should be government enterprises. Investment, the author contends, is the engine of economic recovery, and Roosevelt's policies resulted in a “capital strike” (a phrase used at the time), as investors weighed their options and decided to sit on their money. Look at it this way: suppose you're a plutocrat and have millions at your disposal. You can invest them in a business, knowing that if the business fails you're out your investment, but that if it generates a profit the government will tax away more than 75% of your gains. Or, you can put your money in risk- and tax-free government bonds and be guaranteed a return. Which would you choose?

The story of the Great Depression is told largely by following a group of individuals through the era. Many of the bizarre aspects of the time appear here: Father Divine; businesses and towns printing their own scrip currency; the Schechter Brothers kosher poultry butchers taking on FDR's NRA and utterly defeating it in the Supreme Court; the prosecution of Andrew Mellon, Treasury Secretary to three Presidents, for availing himself of tax deductions the government admitted were legal; and utopian “planned communities” such as Casa Grande in Arizona, where displaced farmers found themselves little more than tenants in a government operation resembling Stalin's collective farms.

From the tone of some of the reaction to the original publication of this book, you might think it a hard-line polemic longing to return to the golden days of the Coolidge administration. It is nothing of the sort. This is a fact-based re-examination of the Great Depression and the New Deal which, better than any other book I've read, re-creates the sense of those living through it, when nobody really understood what was happening and people acting with the best of intentions (and the author imputes nothing else to either Hoover or Roosevelt) could not see what the consequences of their actions would be. In fact, Roosevelt changed course so many times that it is difficult to discern a unifying philosophy from his actions—sadly, this very pragmatism created an uncertainty in the economy which quite likely lengthened and deepened the Depression. This paperback edition contains an afterword in which the author responds to the principal criticisms of the original work.

It is hard to imagine a more timely book. Since this book was published, the U.S. have experienced a debt crisis, real estate bubble collapse, sharp stock market correction, rapidly rising unemployment and economic contraction, with an activist Republican administration taking all kinds of unprecedented actions to try to avert calamity. A Democratic administration, radiating confidence in itself and the power of government to make things better, is poised to take office, having promised programs in its electoral campaign which are in many ways reminiscent of those enacted in FDR's “hundred days”. Apart from the relevance of the story to contemporary events, this book is a pure delight to read.

 Permalink

  2009  

January 2009

Taleb, Nassim Nicholas. The Black Swan. New York: Random House, 2007. ISBN 978-1-4000-6351-2.
If you are interested in financial markets, investing, the philosophy of science, modelling of socioeconomic systems, theories of history and historicism, or the rôle of randomness and contingency in the unfolding of events, this is a must-read book. The author largely avoids mathematics (except in the end notes) and makes his case in quirky and often acerbic prose (there's something about the French that really gets his goat) which works effectively.

The essential message of the book, explained by example in a wide variety of contexts, is (and I'll be rather more mathematical here in the interest of concision) that while many (but certainly not all) natural phenomena can be well modelled by a Gaussian (“bell curve”) distribution, phenomena in human society (for example, the distribution of wealth, population of cities, book sales by authors, casualties in wars, performance of stocks, profitability of companies, frequency of words in language, etc.) are best described by scale-invariant power law distributions. While Gaussian processes converge rapidly upon a mean and standard deviation and rare outliers have little impact upon these measures, in a power law distribution the outliers dominate.

Consider this example. Suppose you wish to determine the mean height of adult males in the United States. If you go out and pick 1000 men at random and measure their height, then compute the average, absent sampling bias (for example, picking them from among college basketball players), you'll obtain a figure which is very close to that you'd get if you included the entire male population of the country. If you replaced one of your sample of 1000 with the tallest man in the country, or with the shortest, his inclusion would have a negligible effect upon the average, as the difference from the mean of the other 999 would be divided by 1000 when computing the average. Now repeat the experiment, but try instead to compute mean net worth. Once again, pick 1000 men at random, compute the net worth of each, and average the numbers. Then, replace one of the 1000 by Bill Gates. Suddenly Bill Gates's net worth dwarfs that of the other 999 (unless one of them randomly happened to be Warren Buffett, say)—the one single outlier dominates the result of the entire sample.
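The contrast in that example is easy to verify numerically. Here is a minimal sketch (the height and net worth figures are illustrative assumptions, not data from the book): heights are drawn from a Gaussian, net worths from a fat-tailed log-normal, and in each case one sample is replaced by an extreme outlier.

```python
import random

random.seed(42)

# Thin-tailed case: heights in cm, roughly Gaussian.
heights = [random.gauss(177, 7) for _ in range(1000)]
mean_heights = sum(heights) / 1000
heights[0] = 272                       # swap in the tallest man on record
height_shift = abs(sum(heights) / 1000 - mean_heights) / mean_heights

# Fat-tailed case: net worths in dollars, log-normally spread.
worths = [random.lognormvariate(11, 2) for _ in range(1000)]
mean_worths = sum(worths) / 1000
worths[0] = 60e9                       # swap in a Bill Gates-scale fortune
worth_ratio = (sum(worths) / 1000) / mean_worths

print(f"height mean moved by {height_shift:.4%}")        # a tiny fraction of a percent
print(f"net worth mean multiplied by {worth_ratio:.0f}x")  # the one outlier dominates
```

The single tall man barely nudges the mean height, while the single fortune multiplies the mean net worth many times over, regardless of how large the original sample was.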

Power laws are everywhere in the human experience (heck, I even found one in AOL search queries), and yet so-called “social scientists” (Thomas Sowell once observed that almost any word is devalued by preceding it with “social”) blithely assume that the Gaussian distribution can be used to model the variability of the things they measure, and that extrapolations from past experience are predictive of the future. The entry of many people trained in physics and mathematics into the field of financial analysis has swelled the ranks of those who naïvely assume human action behaves like inanimate physical systems.

The problem with a power law is that as long as you haven't yet seen the very rare yet stupendously significant outlier, it looks pretty much like a Gaussian, and so your model based upon that (false) assumption works pretty well—until it doesn't. The author calls these unimagined and unmodelled rare events “Black Swans”—you can see a hundred, a thousand, a million white swans and consider each as confirmation of your model that “all swans are white”, but it only takes a single black swan to falsify your model, regardless of how much data you've amassed and how long it has correctly predicted things before it utterly failed.

Moving from ornithology to finance, one of the most common causes of financial calamities in the last few decades has been the appearance of Black Swans, wrecking finely crafted systems built on the assumption of Gaussian behaviour and extrapolation from the past. Much of the current calamity in hedge funds and financial derivatives comes directly from strategies for “making pennies by risking dollars” which never took into account the possibility of the outlier which would wipe out the capital at risk (not to mention that of the lenders to these highly leveraged players who thought they'd quantified and thus tamed the dire risks they were taking).

The Black Swan need not be a destructive bird: for those who truly understand it, it can point the way to investment success. The original business concept of Autodesk was a bet on a Black Swan: I didn't have any confidence in our ability to predict which product would be a success in the early PC market, but I was pretty sure that if we fielded five products or so, one of them would be a hit on which we could concentrate after the market told us which was the winner. A venture capital fund does the same thing: because the upside of a success can be vastly larger than what you lose on a dud, you can win, and win big, while writing off 90% of all of the ventures you back. Investors can fashion a similar strategy using options and option-equivalent investments (for example, resource stocks with a high cost of production), diversifying a small part of their portfolio across a number of extremely high risk investments with unbounded upside while keeping the bulk in instruments (for example sovereign debt) as immune as possible to Black Swans.

There is much more to this book than the matters upon which I have chosen to expound here. What you need to do is lay your hands on this book, read it cover to cover, think it over for a while, then read it again—it is so well written and entertaining that this will be a joy, not a chore. I find it beyond charming that this book was published by Random House.

 Permalink

Hendrickx, Bart and Bert Vis. Energiya-Buran. Chichester, UK: Springer Praxis, 2007. ISBN 978-0-387-69848-9.
This authoritative history chronicles one of the most bizarre episodes of the Cold War. When the U.S. Space Shuttle program was launched in 1972, the Soviets, unlike the majority of journalists and space advocates in the West who were bamboozled by NASA's propaganda, couldn't make any sense of the economic justification for the program. They worked the numbers, and they just didn't work—the flight rates, cost per mission, and most of the other numbers were obviously not achievable. So, did the Soviets chuckle at this latest folly of the capitalist, imperialist aggressors and continue on their own time-proven path of mass-produced low-technology expendable boosters? Well, of course not! They figured that even if their wisest double-domed analysts were unable to discern the justification for the massive expenditures NASA had budgeted for the Shuttle, there must be some covert military reason for its existence to which they hadn't yet twigged, and hence they couldn't tolerate a shuttle gap and consequently had to build their own, however pointless it looked on the surface.

And that's precisely what they did, as this book so thoroughly documents, with a detailed history, hundreds of pictures, and technical information which has only recently become available. Reasonable people can argue about the extent to which the Soviet shuttle was a copy of the American (and since the U.S. program started years before and placed much of its design data into the public domain, any wise designer would be foolish not to profit by using it), but what is not disputed is that (unlike the U.S. Shuttle) Energiya was a general purpose heavy-lift launcher which had the orbiter Buran as only one of its possible payloads and was one of the most magnificent engineering projects of the space programs of any nation, involving massive research and development, manufacturing, testing, integrated mission simulation, crew training, and flight testing programs.

Indeed, Energiya-Buran was in many ways a better-conceived program for space access than the U.S. Shuttle program: it integrated a heavy payload cargo launcher with the shuttle program, never envisioned replacing less costly expendable boosters with the shuttle, and forecast a development program which would encompass full reusability of boosters and core stages and both unmanned cargo and manned crew changeout missions to Soviet space stations.

The program came to a simultaneously triumphant and tragic end: the Energiya booster and the Energiya-Buran shuttle system performed flawless missions (the first Energiya launch failed to put its payload into orbit, but this was due to a software error in the payload: the launcher performed nominally from ignition through payload separation).

In the one and only flight of Buran (launch and landing video, other launch views) the orbiter was placed into its intended orbit and landed on the cosmodrome runway at precisely the expected time.

And then, in the best tradition not only of the Communist Party of the Soviet Union but of the British Labour Party of the 1970s, this singular success was rewarded by cancellation of the entire program. As an engineer, I have almost unlimited admiration for my ex-Soviet and Russian colleagues who did such masterful work and who will doubtless advance technology in the future to the benefit of us all. We should celebrate the achievement of those who created this magnificent space transportation system, while encouraging those inspired by it to open the high frontier to all of those who exulted in its success.

 Permalink

Sinclair, Upton. Dragon's Teeth. Vol. 2. Safety Harbor, FL: Simon Publications, [1942] 2001. ISBN 978-1-931313-15-5.
This is the second half of the third volume in Upton Sinclair's grand-scale historical novel covering the years from 1913 through 1949. Please see my notes on the first half for details on the series and this novel. The second half, comprising books four through six of the original novel (this is a print on demand facsimile edition, in which each of the original novels is split into two parts due to constraints of the publisher), covers the years 1933 and 1934, as Hitler tightens his grip on Germany and persecution of the Jews begins in earnest.

The playboy hero Lanny Budd finds himself in Germany trying to arrange the escape of Jewish relatives from the grasp of the Nazi tyranny, meets Goebbels, Göring, and eventually Hitler, and discovers the depth of the corruption and depravity of the Nazi regime, and then comes to experience it directly when he becomes caught up in the Night of the Long Knives.

This book was published in January 1942, less than a month after Pearl Harbor. It is remarkable to read a book written in a time when the U.S. and Nazi Germany were at peace and the swastika flag flew from the German embassy in Washington which got the essence of the Nazis so absolutely correct (especially the corruption of the regime, which was overlooked by so many until Albert Speer's books decades later). This is very much a period piece, and enjoyable in giving a sense of how people saw the events of the 1930s not long after they happened. I'm not, however, inclined to slog on through the other novels in the saga—one suffices for me.

 Permalink

Butterfield, Jeremy. Damp Squid. Oxford: Oxford University Press, 2008. ISBN 978-0-19-923906-1.
Dictionaries attempt to capture how language (or at least the words of which it is composed) is used, or in some cases should be used according to the compiler of the dictionary, and in rare examples, such as the monumental Oxford English Dictionary (OED), to trace the origin and history of the use of words over time. But dictionaries are no better than the source material upon which they are based, and even the OED, with its millions of quotations contributed by thousands of volunteer readers, can only sample a small fraction of the written language. Further, there is much more to language than the definitions of words: syntax, grammar, regional dialects and usage, changes due to the style of writing (formal, informal, scholarly, etc.), associations of words with one another, differences between spoken and written language, and evolution of all of these matters and more over time. Before the advent of computers and, more recently, the access to large volumes of machine-readable text afforded by the Internet, research into these aspects of linguistics was difficult, extraordinarily tedious, and its accuracy suspect due to the small sample sizes necessarily used in studies.

Computer linguistics sets out to study how a language is actually used by collecting a large quantity of text (called a corpus), tagged with identifying information useful for the intended studies, and permitting measurement of the statistics of the content of the text. The first computerised corpus was created in 1961, containing the then-staggering number of one million words. (Note that since a corpus contains extracts of text, the word count refers to the total number of words, not the number of unique words—as we'll see shortly, a small number of words accounts for a large fraction of the text.) The preeminent research corpus today is the Oxford English Corpus which, in 2006, surpassed two billion words and is presently growing at the rate of 350 million words a year—ain't the Web grand, or what?

This book, which is a pure delight, compelling page turner, and must-have for all fanatic “wordies”, is a light-hearted look at the state of the English language today: not what it should be, but what it is. Traditionalists and fussy prescriptivists (among whom I count myself) will be dismayed at the battles already lost: “miniscule” and “straight-laced” already outnumber “minuscule” and “strait-laced”, and many other barbarisms and clueless coinages are coming on strong. Less depressing and more fascinating are the empirical research on word frequency (Zipf's Law is much in evidence here, although it is never cited by name)—the ten most frequent words make up 25% of the corpus, and the top one hundred account for fully half of the text—word origins, mutation of words and terms, association of words with one another, idiomatic phrases, and the way context dictates the choice of words which most English speakers would find almost impossible to distinguish by definition alone. This amateur astronomer finds it heartening to discover that the most common noun modified by the adjective “naked” is “eye” (1398 times in the corpus; “body” is second at 1144 occurrences). If you've ever been baffled by the origin of the idiom “It's raining cats and dogs” in English, just imagine how puzzled the Welsh must be by “Bwrw hen wragedd a ffyn” (“It's raining old women and sticks”).
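The “top ten words make up 25%, top hundred make up half” observation is close to what an ideal Zipf distribution predicts. A back-of-envelope sketch (the vocabulary size here is an assumption for illustration, not a figure from the book): if the rank-r word occurs with frequency proportional to 1/r, the share of text covered by the top k words out of a vocabulary of n is H(k)/H(n), where H is the harmonic number.

```python
# Share of text covered by the top-k words under an ideal Zipf law:
# frequency of rank r is proportional to 1/r, so the top-k share out of
# a vocabulary of n words is H(k)/H(n).

def harmonic(n):
    return sum(1.0 / r for r in range(1, n + 1))

vocab = 100_000                        # assumed vocabulary size, for illustration
top10 = harmonic(10) / harmonic(vocab)
top100 = harmonic(100) / harmonic(vocab)
print(f"top 10 words cover about {top10:.0%} of the text")   # roughly a quarter
print(f"top 100 words cover about {top100:.0%} of the text")  # approaching half
```

With these assumptions the top ten come out near a quarter of the text and the top hundred near half, in rough agreement with the corpus figures the book reports.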

The title? It's an example of an “eggcorn” (pp. 58–59): a common word or phrase which mutates into a similar sounding one as speakers who can't puzzle out its original, now obscure, meaning try to make sense of it. Now that the safetyland culture has made most people unfamiliar with explosives, “damp squib” becomes “damp squid” (although, if you're a squid, it's not being damp that's a problem). Other eggcorns marching their way through the language are “baited breath”, “preying mantis”, and “slight of hand”.

 Permalink

Smith, L. Neil, Rex F. May, Scott Bieser, and Jen Zach. Roswell, Texas. Round Rock, TX: Big Head Press, [2007] 2008. ISBN 978-0-9743814-5-9.
I have previously mentioned this story and even posted a puzzle based upon it. This was based upon the online edition, which remains available for free. For me, reading anything, including a comic book (sorry—“graphic novel”), online a few pages a week doesn't count as reading worthy of inclusion in this list, so I deferred listing it until I had time to enjoy the trade paperback edition, which has been sitting on my shelf for several months after its June 2008 release.

This rollicking, occasionally zany, alternative universe story is set in the libertarian Federated States of Texas, where, as in our own timeline, something distinctly odd happens on July 4th, 1947 on a ranch outside the town of Roswell. As rumours spread around the world, teams from the Federated States, the United States, the California Republic, the Franco-Mexican Empire, Nazi Britain, and others set out to discover the truth and exploit the information for their own benefit. Involved in the scheming and race to the goal are this universe's incarnations of Malcolm Little, Meir Kahane, Marion Morrison, Eliot Ness, T. E. Lawrence, Walt Disney, Irène Joliot-Curie, Karol Wojtyla, Gene Roddenberry, and Audie Murphy, among many others. We also encounter a most curious character from an out of the way place L. Neil Smith fans will recall fondly.

The graphic format works very well with the artfully-constructed story. Be sure to scan each panel for little details—there are many, and easily missed if you focus only on the text. The only disappointment in this otherwise near-perfect entertainment is that readers of the online edition will be dismayed to discover that all of the beautiful colour applied by Jen Zach has been flattened out (albeit very well) into grey scale in the print edition. Due to the higher resolution of print, you can still make out things in the book edition which aren't discernible online, but it's a pity to lose the colour. The publisher has explained the economic reasons which compelled this decision, which make perfect sense. Should a “premium edition” come along, I'll be glad to part with US$40 for a full colour copy.

 Permalink

Trevor-Roper, Hugh. Hitler's War Directives. Edinburgh: Birlinn, [1964] 2004. ISBN 978-1-84341-014-0.
This book, originally published in 1964, contains all of Adolf Hitler's official decrees on the prosecution of the European war, from preparations for the invasion of Poland in 1939 to his final exhortation to troops on the Eastern Front of 15th April 1945 to stand in place or die. The author introduces each of the translated orders with an explanation of the situation at the time, and describes subsequent events. A fifteen page introduction explains the context of these documents and the structure of the organisations to which they were directed.

For those familiar with the history of the period, there are few revelations to be gained from these documents. It is interesting to observe the extent to which Hitler was concerned with creating and substantiating the pretexts for his aggression in both the East and West, and also how when the tide turned and the Wehrmacht was rolled back from Stalingrad to Berlin, he focused purely upon tactical details, never seeming to appreciate (at least in these orders to the military, state, and party) the inexorable disaster consuming them all.

As these are decrees at the highest level, they are largely composed of administrative matters and only occasionally discuss operational items; as such one's eyes may glaze over reading too much in one sitting. The bizarre parallel structure of state and party created by Hitler is evident in a series of decrees issued during the defensive phase of the war in which essentially the same orders were independently issued to state and party leaders, subordinating each to military commanders in battle areas. As the Third Reich approached collapse, the formal numbering of orders was abandoned, and senior military commanders issued orders in Hitler's name. These are included here using a system of numbering devised by the author. Appendices include lists of code names for operations, abbreviations, and people whose names appear in the orders.

If you aren't well-acquainted with the history of World War II in Europe, you'll take away little from this work. While the author sketches the history of each order, you really need to know the big picture to understand the situation the Germans faced and what they knew at the time to comprehend the extent to which Hitler's orders evidenced cunning or denial. Still, one rarely gets the opportunity to read the actual operational orders issued during a major conflict which ended in annihilation for the person giving them and the nation which followed him, and this book provides a way to understand how ambition, delusion, and blind obedience can lead to tragic catastrophe.

 Permalink

February 2009

Suprynowicz, Vin. The Ballad of Carl Drega. Reno: Mountain Media, 2002. ISBN 978-0-9670259-2-6.
I was about to write “the author is the most prominent libertarian writing for the legacy media today”, but in fact, to my knowledge, he is the only genuine libertarian employed by a major metropolitan newspaper (the Las Vegas Review-Journal), where he writes editorials and columns, the latter syndicated to a number of other newspapers. This book, like his earlier Send In The Waco Killers, is a collection of these writings, plus letters from readers and replies, along with other commentary. This volume covers the period from 1994 through the end of 2001, and contains his columns reacting to the terrorist attacks of September 11th, 2001, which set him at odds with a number of other prominent libertarians.

Suprynowicz is not one of those go-along, get-along people L. Neil Smith describes as “nerf libertarians”. He is a hard-edged lover of individual liberty, and defends it fiercely in all of its aspects here. As much of the content of the book was written as columns to be published weekly, collected by topic rather than chronologically, it may occasionally seem repetitive if you read the whole book cover to cover. It is best enjoyed a little at a time, which is why it did not appear here until years after I started to read it. If you're a champion of liberty who is prone to hypertension, you may want to increase your blood pressure medication before reading some of the stories recounted here. The author's prognosis for individual freedom in the U.S. seems to verge upon despair; in this I concur, which is why I no longer live there, but still it's depressing for people everywhere. Chapter 9 (pp. 441–476) is a collection of the “Greatest Hits from the Mailbag”, a collection of real mail (and hilarious replies) akin to Fourmilab's own Titanium Cranium Awards.

This book is now out of print, and used copies currently sell at almost twice the original cover price.

 Permalink

War Department. Instructions for American Servicemen in Britain. Oxford: Bodleian Library, [1942] 2004. ISBN 978-1-85124-085-2.
Shortly after the entry of the United States into the European war following the attack on Pearl Harbor, U.S. troops began to arrive in Britain in 1942. Although more than two years would elapse before the D-Day invasion of Normandy, an ever-increasing number of “overpaid, oversexed, and over here” American troops would establish air bases, build logistics for the eventual invasion, and provide liaison with the British command.

This little (31 page, small format) book reproduces a document originally furnished to U.S. troops embarking for Britain as seven pages of typescript. It provides a delightful look at how Americans perceived the British at the epoch, and also how they saw themselves—there's even an admonishment to soldiers of Irish ancestry not to look upon the English as their hereditary enemies, and a note that the American colloquialism “I look like a bum” means something much different in an English pub. A handy table helps Yanks puzzle out the bewildering British money.

Companion volumes were subsequently published for troops bound for Iraq (yes, in 1943!) and France; I'll get to them in due course.

 Permalink

Klemperer, Victor. I Will Bear Witness. Vol. 1. New York: Modern Library, [1933–1941, 1995] 1998. ISBN 978-0-375-75378-7.
This book is simultaneously tedious, depressing, and profoundly enlightening. The author (a cousin of the conductor Otto Klemperer) was a respected professor of Romance languages and literature at the Technical University of Dresden when Hitler came to power in 1933. Although the son of a Reform rabbi, Klemperer had been baptised in a Christian church and considered himself a protestant Christian and entirely German. He volunteered for the German army in World War I and served at the front in the artillery and later, after recovering from a serious illness, in the army book censorship office on the Eastern front. As a fully assimilated German, he opposed all appeals to racial identity politics, Zionist as well as Nazi.

Despite his conversion to protestantism, military service to Germany, exalted rank as a professor, and decades of marriage to a woman deemed “Aryan” under the racial laws promulgated by the Nazis, Klemperer was considered a “full-blooded Jew” and was subject to ever-escalating harassment, persecution, humiliation, and expropriation as the Nazis tightened their grip on Germany. As civil society spiralled toward barbarism, Klemperer lost his job, his car, his telephone, his house, his freedom of movement, the right to shop in “Aryan stores”, access to public and lending libraries, and even the typewriter on which he continued to write in the hope of maintaining his sanity. His world shrank from that of a cosmopolitan professor fluent in many European languages to a single “Jews' house” in Dresden, shared with other once-prosperous families similarly evicted from their homes. His family and acquaintances dwindle as, one after another, they opt for emigration, leaving only the author and his wife still in Germany (due to lack of opportunities, but also to an inertia and sense of fatalism evident in the narrative). Slowly the author's sense of Germanness dissipates as he comes to believe that what is happening in Germany is not an aberration but somehow deeply rooted in the German character, and that Hitler embodies beliefs widespread among the population which were previously invisible before becoming so starkly manifest. Klemperer is imprisoned for eight days in 1941 for a blackout violation for which a non-Jew would have received a warning or a small fine, and his prison journal, written a few days after his release, is a matter of fact portrayal of how an encounter with the all-powerful and arbitrary state reduces the individual to a mental servitude more pernicious than physical incarceration.

I have never read any book which provides such a visceral sense of what it is like to live in a totalitarian society and how quickly all notions of justice, rights, and human dignity can evaporate when a charismatic leader is empowered by a mob in thrall to his rhetoric. Apart from the description of the persecution the author's family and acquaintances suffered themselves, he turns a keen philologist's eye on the language of the Third Reich, and observes how the corruption of the regime is reflected in the corruption of the words which make up its propaganda. Ayn Rand's fictional (although to some extent autobiographical) We the Living provides a similar sense of life under tyranny, but this is the real thing, written as events happened, with no knowledge of how it was all going to come out, and is, as a consequence, uniquely compelling. Klemperer wrote these diaries with no intention of their being published: they were, at most, the raw material for an autobiography he hoped eventually to write, so when you read these words you're perceiving how a Jew in Nazi Germany perceived life day to day, and how what historians consider epochal events in retrospect are quite naturally interpreted by those hearing of them for the first time in the light of “What does this mean for me?”

The author was a prolific diarist who wrote thousands of pages from the early 1900s throughout his long life. The original 1995 German publication of the 1933–1945 diaries as Ich will Zeugnis ablegen bis zum letzten was a substantial abridgement of the original document and even so ran to almost 1700 pages. This English translation further abridges the diaries and still often seems repetitive. End notes provide historical context, identify the many people who figure in the diary, and translate the foreign phrases the author liberally sprinkles among the text.

I will certainly read Volume 2, which covers the years 1942–1945, but probably not right away—after this powerful narrative, I'm inclined toward lighter works for a while.

 Permalink

Smith, Edward E. Masters of the Vortex. New York: Pyramid Books, [1960] 1968. ISBN 978-0-515-02230-8.
This novel is set in the Galactic Patrol universe, but is not part of the Lensman saga—the events take place an unspecified time after the conclusion of that chronicle. Galactic civilisation depends upon atomic power, but as Robert A. Heinlein (to whom this book is dedicated) observed, “Blowups Happen”, and for inexplicable reasons atomic power stations randomly erupt into deadly self-sustaining nuclear vortices, threatening to ultimately consume the planets they ravage. (Note that in the technophilic and optimistic universe of the Galactic Patrol, and the can-do society its creator inhabited, the thought that such a downside of an energy technology essential to civilisation would cause its renunciation never enters the mind.)

When a freak vortex accident kills ace nucleonicist Neal Cloud's family, he swears a personal vendetta against the vortices and vows to destroy them or be destroyed trying. This mild-mannered scientist who failed the Lensman entry examination re-invents himself as “Storm Cloud, the Vortex Blaster”, and in his eponymous ship flits off to rid the galaxy of the atomic plague. This is Doc Smith space opera, so you can be sure there are pirates, zwilniks, crooked politicians, blasters, space axes, and aliens of all persuasions in abundance—not to mention timeless dialogue like:

“Eureka! Good evening, folks.”
“Eureka? I hope you rot in hell, Graves…”
“This isn't Graves. Cloud. Storm Cloud, the Vortex Blaster, investigating…”
“Oh, Bob, the patrol!” the girl screamed.

It wouldn't be Doc Smith if it weren't prophetic, and in this book published in the year in which the Original Nixon was to lose the presidential election to John F. Kennedy, we catch a hint of a “New Nixon” as the intrepid Vortex Blaster visits the planet Nixson II on p. 77. While not as awe inspiring in scope as the Lensman novels, this is a finely crafted yarn which combines a central puzzle with many threads exploring characteristics of alien cultures (never cross an adolescent cat-woman from Vegia!), the ultimate power of human consciousness, and the eternal question never far from the mind of the main audience of science fiction: whether a nerdy brainiac can find a soulmate somewhere out there in the spacelanes.

If you're unacquainted with the Lensman universe, this is not the place to start, but once you've worked your way through, it's a delightful lagniappe to round out the epic. Unlike the Lensman series, this book remains out of print. Used copies are readily available although sometimes pricey. For those with access to the gizmo, a Kindle edition is available.

 Permalink

Simon, Roger L. Blacklisting Myself. New York: Encounter Books, 2008. ISBN 978-1-59403-247-9.
The author arrived in Hollywood in the tumultuous year of 1968, fired by his allegiance to the New Left and experience in the civil rights struggle in the South to bring his activism to the screen and, at the same time, driven by his ambition to make it big in the movie business. Unlike the multitudes who arrive starry-eyed in tinseltown only to be frustrated trying to “break in”, Simon succeeded, both as a screenwriter (he was nominated for an Oscar for his screen adaptation of Enemies: A Love Story) and as a novelist, best known for his Moses Wine detective fiction. One of the Moses Wine novels, The Big Fix, made it to the screen, with Simon also writing the screenplay. Such has been his tangible success that the author today lives in the Hollywood Hills house once shared by Joe DiMaggio and Marilyn Monroe.

This is in large part a memoir of a life in Hollywood, with pull-no-punches anecdotes about the celebrities and players in the industry, and the often poisonous culture of the movie business. But it is also the story of the author's political evolution from the New Left through Hollywood radical chic (he used to hang with the Black Panthers) and eventual conversion to neo-conservatism which has made him a “Hollywood apostate” and which he describes on the first page of the book as “the ideological equivalent of a sex change operation”. He describes how two key events—the O. J. Simpson trial and the terrorist attacks of 2001—caused him to question assumptions he'd always taken as received wisdom and how, once he did start to think for himself instead of nodding in agreement with the monolithic leftist consensus in Hollywood, he began to perceive and be appalled by the hypocrisy not only in the beliefs of his colleagues but between their lifestyles and the values they purported to champion. (While Simon has become a staunch supporter of efforts, military and other, to meet the threat of Islamic aggression and considers himself a fiscal conservative, he remains as much on the left as ever when it comes to social issues. But, as he describes, any dissent whatsoever from the Hollywood leftist consensus is enough to put one beyond the pale among the smart set, and possibly injure the career of even somebody as well-established as he.)

While never suggesting that he or anybody else has been the victim of a formal blacklist like that of suspected Communist sympathisers in the 1940s and 1950s, he does describe how those who dissent often feign support for leftist causes or simply avoid politically charged discussions to protect their careers. Simon was one of the first Hollywood figures to jump in as a blogger, and has since reinvented himself as a New Media entrepreneur, founding Pajamas Media and its associated ventures; he continues to actively blog. An early adopter of technology since the days of the Osborne 1 and CompuServe forums, he believes that new technology provides the means for an end-run around Hollywood groupthink, but by itself is insufficient (p. 177):

The answer to the problem of Hollywood for those of a more conservative or centrist bent is to go make movies of their own. Of course, to do so means finding financing and distribution. Today's technologies are making that simpler. Cameras and editing equipment cost a pittance. Distribution is at hand for the price of a URL. All that's left is the creativity. Unfortunately, that's the difficult part.

A video interview with the author is available.

 Permalink

March 2009

Birmingham, John. Without Warning. New York: Del Rey, 2009. ISBN 978-0-345-50289-6.
One of the most common counsels offered to authors by agents and editors is to choose a genre and remain within it. A book which spans two or more of the usual categories runs the risk of “falling into the crack”, with reviewers not certain how to approach it and, on the marketing side, retailers unsure of where in the store it should be displayed. This is advice which the author of this work either never received or laughingly disdained. The present volume combines a political/military techno-thriller in the Tom Clancy tradition with alternative history as practiced by Harry Turtledove, but wait—there's more: relativistic arm-waving apocalyptic science fiction in the vein of the late Michael Crichton. This is an ambitious combination, and one which the author totally bungles in this lame book, which is a complete waste of paper, ink, time, and money.

The premise is promising. What would happen if there were no United States (something we may, after all, effectively find out over the next few years, if not in the manner posited here)? In particular, wind the clock back to just before the start of the 2003 invasion of Iraq, and assume the U.S. vanished—what would the world look like in the aftermath? You ask, “what do you mean by the U.S. vanishing?” Well, you see, an interdimensional portal opens to a fifth-dimensional braneworld, which disgorges 500,000 flying saucers which spread out over North America, from which tens of millions of 10 metre tall purple and green centipedes emerge to hunt down and devour every human being in the United States and most of Canada and Mexico, leaving intact only the airheads in western Washington State and Hawaii and the yahoos in Alaska. No—not really—in fact what is proposed here is even more preposterously implausible than the saucers and centipedes, and is never explained in the text. It is simply an absurd plot device which defies about as many laws of physics as rules of thumb for authors of thrillers.

So the U.S. goes away, and mayhem erupts all around the world. The story is told by tracking with closeups of various people in the Middle East, Europe, on the high seas, Cuba, and the surviving remnant of the U.S. The way things play out isn't implausible, but since the precipitating event is absurd on the face of it, it's difficult to care much about the consequences as described here. I mean, here we have a book in which Bill Gates has a cameo rôle providing a high-security communications device which is competently implemented and works properly the first time—bring on the saucers and giant centipedes!

As the pages dwindle toward the end, it seems like nothing is being resolved. Then you turn the last page and discover that you've been left in mid-air and are expected to buy After America next year to find out how it all comes out. Yeah, right—fool me once, shame on you; fool me twice, not gonna happen!

Apart from the idiotic premise, transgenred plot, and side-splitting goofs like the mention of “UCLA's Berkeley campus” (p. 21), the novel drips with gratuitous obscenity. Look, one expects soldiers and sailors to cuss, and having them speak that way conveys a certain authenticity. But here, almost everybody, from mild-mannered city engineers to urbane politicians, seems unable to utter two sentences without dropping one or more F-bombs. Aside from the absurdity of the plot, this coarsens the reading experience. Perhaps that is how people actually speak in this post-Enlightenment age; if so, I do not wish to soil my recreational reading by being reminded of it.

If we end up in the kind of post-apocalyptic world described here, we'll probably have to turn to our libraries once the hoard of toilet paper in the basement runs out. I know which book will be first on the list.

 Permalink

Wilczek, Frank. The Lightness of Being. New York: Basic Books, 2008. ISBN 978-0-465-00321-1.
For much of its history as a science, physics has been about mass and how it behaves in response to various forces, but until very recently physics had little to say about the origin of mass: it was simply a given. Some Greek natural philosophers explained matter as being made up of identical atoms, but then just assumed that the atoms somehow had their own intrinsic mass. Newton endowed all matter with mass, but considered its origin beyond the scope of observation and experiment and thus outside the purview of science. As the structure of the atom was patiently worked out in the twentieth century, it became clear that the overwhelming majority of the mass of atoms resides in a nucleus which makes up a minuscule fraction of its volume, later that the nucleus is composed of protons and neutrons, and still later that those particles were made up of quarks and gluons, but still physicists were left with no explanation for why these particles had the masses they did or, for that matter, any mass at all.

In this compelling book, Nobel Physics laureate and extraordinarily gifted writer Frank Wilczek describes how one of the greatest intellectual edifices ever created by the human mind: the drably named “standard model” of particle physics, combined with what is almost certainly the largest scientific computation ever performed to date (teraflop massively parallel computers running for several months on a single problem), has finally produced a highly plausible explanation for the origin of the mass of normal matter (ourselves and everything we have observed in the universe), or at least about 95% of it—these matters, and matter itself, always seem to have some more complexity to tease out.

And what's the answer? Well, the origin of mass is the vacuum, and its interaction with fields which fill all of the space in the universe. The quantum vacuum is a highly dynamic medium, seething with fluctuations and ephemeral virtual particles which come and go in instants which make even the speed of present-day computers look like geological time. The interaction of this vacuum with massless quarks produces, through processes explained so lucidly here, around 95% of the mass of the nucleus of atoms, and hence what you see when stepping on the bathroom scale. Hey, if you aren't happy with that number, just remember that 95% of it is just due to the boiling of the quantum vacuum. Or, you could go on a diet.
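The scale of that vacuum contribution is easy to check with back-of-the-envelope arithmetic. The sketch below is my own illustration, not from the book; the quark masses are rounded, commonly cited approximate values. It compares the rest masses of a proton's three valence quarks with the proton's measured mass:

```python
# How much of a proton's mass comes from the rest masses of its
# constituent quarks, and how much from QCD field dynamics?
# Approximate current-quark masses in MeV/c^2 (rounded, illustrative).
m_up = 2.2
m_down = 4.7
m_proton = 938.3  # measured proton mass, MeV/c^2

# A proton is two up quarks and one down quark (uud).
quark_sum = 2 * m_up + m_down
fraction_from_quarks = quark_sum / m_proton
fraction_from_dynamics = 1 - fraction_from_quarks

print(f"Valence quark rest masses: {quark_sum:.1f} MeV "
      f"({fraction_from_quarks:.1%} of the proton)")
print(f"From quantum field dynamics: {fraction_from_dynamics:.1%}")
```

For the proton alone the dynamical share comes out even higher than the roughly 95% figure for ordinary matter as a whole: the quark rest masses account for only about one percent of what you weigh.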

This spectacular success of the standard model, along with its record over the last three decades in withstanding every experimental test to which it has been put, inspires confidence that, as far as it goes, it's on the right track. But just as the standard model was consolidating this triumph, astronomers produced powerful evidence that everything it explains: atoms, ourselves, planets, stars, and galaxies—everything we observe and the basis of all sciences from antiquity to the present—makes up less than 5% of the total mass of the universe. This discovery, and the conundrum of how the standard model can be reconciled with the equally-tested yet entirely mathematically incompatible theory of gravitation, general relativity, leads the author into speculation on what may lie ahead, how what we presently know (or think we know) may be a piece in a larger puzzle, and how experimental tests expected within the next decade may provide clues and open the door to these larger theories. All such speculation is clearly labeled, but it is proffered in keeping with what he calls the Jesuit Credo, “It is more blessed to ask forgiveness than permission.”

This is a book for the intelligent layman, and a superb twenty page glossary is provided for terms used in the text with which the reader may be unfamiliar. In fact, the glossary is worth reading in its own right, as it expands on many subjects and provides technical details absent in the main text. The end notes are also excellent and shouldn't be missed. One of the best things about this book, in my estimation, is what is missing from it. Unlike so many physicists writing for a popular audience, Wilczek feels no need whatsoever to recap the foundations of twentieth century science. He assumes, and I believe wisely, that somebody who picks up a book on the origin of mass by a Nobel Prize winner probably already knows the basics of special relativity and quantum theory and doesn't need to endure a hundred pages recounting them for the five hundredth time before getting to the interesting stuff. For the reader who has wandered in without this background knowledge, the glossary will help, and also direct the reader to introductory popular books and texts on the various topics.

 Permalink

Pipes, Richard. Communism: A History. New York: Doubleday, [2001] 2003. ISBN 978-0-8129-6864-4.
This slim volume (just 175 pages) provides, for its size, the best portrait I have encountered of the origins of communist theory, the history of how various societies attempted to implement it in the twentieth century, and the tragic consequences of those grand scale social experiments and their aftermath. The author, a retired professor of history at Harvard University, is one of the most eminent Western scholars of Russian and Soviet history. The book examines communism as an ideal, a program, and its embodiment in political regimes in various countries. Based on the ideals of human equality and subordination of the individual to the collective which date at least back to Plato, communism, first set out as a program of action by Marx and Engels, proved itself almost infinitely malleable in the hands of subsequent theorists and political leaders, rebounding from each self-evident failure (any one of which should, in a rational world, have sufficed to falsify a theory which proclaims itself “scientific”), morphing into yet another infallible and inevitable theory of history. In the words of the immortal Bullwinkle J. Moose, “This time for sure!”

Regardless of the nature of the society in which the communist program is undertaken and the particular variant of the theory adopted, the consequences have proved remarkably consistent: emergence of an elite which rules through violence, repression, and fear; famine and economic stagnation; and collapse of the individual enterprise and innovation which are the ultimate engine of progress of all kinds. There is no better example of this than the comparison of North and South Korea on p. 152. Here are two countries which started out identically devastated by Japanese occupation in World War II and then by the Korean War, with identical ethnic makeup, which diverged in the subsequent decades to such an extent that famine killed around two million people in North Korea in the 1990s, at which time the GDP per capita in the North was around US$900 versus US$13,700 in the South. Male life expectancy at birth in the North was 48.9 years compared to 70.4 years in the South, with an infant mortality rate in the North more than ten times that of the South. This appalling human toll was modest compared to the famines and purges of the Soviet Union and Communist China, or the apocalyptic fate of Cambodia under Pol Pot. The Black Book of Communism puts the total death toll due to communism in the twentieth century as between 85 and 100 million, which is half again greater than that of both world wars combined. To those who say “One cannot make an omelette without breaking eggs”, the author answers, “Apart from the fact that human beings are not eggs, the trouble is that no omelette has emerged from the slaughter.” (p. 158)

So effective were communist states in their “big lie” propaganda, and so receptive were many Western intellectuals to its idealistic message, that many in the West were unaware of this human tragedy as it unfolded over the better part of a century. This book provides an excellent starting point for those unaware of the reality experienced by those living in the lands of communism and those for whom that epoch is distant, forgotten history, but who remain, like every generation, susceptible to idealistic messages and unaware of the suffering of those who attempted to put them into practice in the past.

Communism proved so compelling to intellectuals (and, repackaged, remains so) because it promised hope for a new way of living together and change to a rational world where the best and the brightest—intellectuals and experts—would build a better society, shorn of all the conflict and messiness which individual liberty unavoidably entails. The author describes this book as “an introduction to Communism and, at the same time, its obituary.” Maybe—let's hope so. But this book can serve an even more important purpose: as a cautionary tale of how the best of intentions can lead directly to the worst of outcomes. When, for example, one observes in the present-day politics of the United States the creation, deliberate exacerbation, and exploitation of crises to implement a political agenda; use of engineered financial collapse to advance political control over the economy and pauperise and render dependent upon the state classes of people who would otherwise oppose it; the creation, personalisation, and demonisation of enemies replacing substantive debate over policy; indoctrination of youth in collectivist dogma; and a number of other strategies right out of Lenin's playbook, one wonders if the influence of that evil mummy has truly been eradicated, and wishes that the message in this book were more widely known there and around the world.

 Permalink

Post, David G. In Search of Jefferson's Moose. New York: Oxford University Press, 2009. ISBN 978-0-19-534289-5.
In 1787, while serving as Minister to France, Thomas Jefferson took time out from his diplomatic duties to arrange to have shipped from New Hampshire across the Atlantic Ocean the complete skeleton, skin, and antlers of a bull moose, which was displayed in his residence in Paris. Jefferson was involved in a dispute with the Comte de Buffon, who argued that the fauna of the New World were degenerate compared to those of Europe and Asia. Jefferson concluded that no verbal argument or scientific evidence would be as convincing of the “structure and majesty of American quadrupeds” as seeing a moose in the flesh (or at least the bone), so he ordered one up for display.

Jefferson was a passionate believer in the exceptionality of the New World and the prospects for building a self-governing republic in its expansive territory. If it took hauling a moose all the way to Paris to convince Europeans disdainful of the promise of his nascent nation, then so be it—bring on the moose! Among Jefferson's voluminous writings, perhaps none expressed these beliefs as strongly as his magisterial Notes on the State of Virginia. The present book, subtitled “Notes on the State of Cyberspace” takes Jefferson's work as a model and explores this new virtual place which has been built based upon a technology which simply sends packets of data from place to place around the world. The parallels between the largely unexplored North American continent of Jefferson's time and today's Internet are strong and striking, as the author illustrates with extensive quotations from Jefferson interleaved in the text (set in italics to distinguish them from the author's own words) which are as applicable to the Internet today as the land west of the Alleghenies in the late 18th century.

Jefferson believed in building systems which could scale to arbitrary size without either losing their essential nature or becoming vulnerable to centralisation and the attendant loss of liberty and autonomy. And he believed that free individuals, living within such a system and with access to as much information as possible and the freedom to communicate without restrictions would self-organise to perpetuate, defend, and extend such a polity. While Europeans, notably Montesquieu, believed that self-governance was impossible in a society any larger than a city-state, and organised their national and imperial governments accordingly, Jefferson's 1784 plan for the government of new Western territory set forth an explicitly power law fractal architecture which, he believed, could scale arbitrarily large without depriving citizens of local control of matters which directly concerned them. This architecture is stunningly similar to that of the global Internet, and the bottom-up governance of the Internet to date (which Post explores in some detail) is about as Jeffersonian as one can imagine.

As the Internet has become a central part of global commerce and the flow of information in all forms, the eternal conflict between the decentralisers and champions of individual liberty (with confidence that free people will sort things out for themselves)—the Jeffersonians—and those who believe that only strong central authority and the vigorous enforcement of rules can prevent chaos—Hamiltonians—has emerged once again in the contemporary debate about “Internet governance”.

This is a work of analysis, not advocacy. The author, a law professor and regular contributor to The Volokh Conspiracy Web log, observes that, despite being initially funded by the U.S. Department of Defense, the development of the Internet to date has been one of the most Jeffersonian processes in history, and has scaled from a handful of computers in 1969 to a global network with billions of users and a multitude of applications never imagined by its creators, and all through consensual decision making and contractual governance with nary a sovereign gun-wielder in sight. So perhaps before we look to “fix” the unquestioned problems and challenges of the Internet by turning the Hamiltonians loose upon it, we should listen well to the wisdom of Jefferson, who has much to say which is directly applicable to exploring, settling, and governing this new territory which technology has opened up. This book is a superb way to imbibe the wisdom of Jefferson, while learning the basics of the Internet architecture and how, in many ways, it parallels aspects of Jefferson's time. Jefferson even spoke to intellectual property issues which read like today's news, railing against a “rascal” using an abusive patent of a long-existing device to extort money from mill owners (p. 197), and creating and distributing “freeware” including a design for a uniquely efficient plough blade based upon Newton's Principia which he placed in the public domain, having “never thought of monopolizing by patent any useful idea which happened to offer itself to me” (p. 196).

So astonishing was Jefferson's intellect that as you read this book you'll discover that he has a great deal to say about this new frontier we're opening up today. Good grief—did you know that the Oxford English Dictionary even credits Jefferson with being the first person to use the words “authentication” and “indecipherable” (p. 124)? The author's lucid explanations, deft turns of phrase, and agile leaps between the eighteenth and twenty-first centuries are worthy of the forbidding standard set by the man so extensively quoted here. Law professors do love their footnotes, and this is almost two books in one: the focused main text and the more rambling but fascinating footnotes, some of which span several pages. There is also an extensive list of references and sources for all of the Jefferson quotations in the end notes.

 Permalink

Niven, Larry and Jerry Pournelle. Escape from Hell. New York: Tor Books, 2009. ISBN 978-0-7653-1632-5.
Every now and then you read a novel where you're absolutely certain as you turn the pages that the author(s) had an absolute blast writing it, and when that's the case the result is usually superbly entertaining. That is certainly true here. How could two past masters of science fiction and fantasy not delight in a scenario in which they can darn to heck anybody they wish, choosing the particular torment for each and every sinner?

In this sequel to the authors' 1976 novel Inferno, the protagonist of the original novel, science fiction writer Allen Carpenter, makes a second progress through Hell. This time, after an unfortunate incident on the Ice in the Tenth Circle, he starts out back in the Vestibule, resolved that this time he will escape from Hell himself and, as he progresses ever downward toward the exit described by Dante, to determine if it is possible for any damned soul to escape and to aid those willing to follow him.

Hell is for eternity, but that doesn't mean things don't change there. In the decades since Carpenter's first traverse, there have been many modifications in the landscape of the underworld. We meet many newly-damned souls as well as revisiting those encountered before. Carpenter recounts his story to Sylvia Plath, who, as a suicide, has been damned as a tree in the Wood of the Suicides in the Seventh Circle and who, rescued by him, accompanies him downward to the exit. The ice cream stand in the Fiery Desert is a refreshing interlude from justice without mercy! The treatment of one particular traitor in the Ice is sure to prove controversial; the authors explain their reasoning for his being there in the Notes at the end. A theme which runs throughout is how Hell is a kind of Heaven to many of those who belong there and, having found their niche in Eternity, aren't willing to gamble it for the chance of salvation. I've had jobs like that—got better.

I'll not spoil the ending, but will close by observing that the authors have provided a teaser for a possible Paradiso somewhere down the road. Should that come to pass, I'll look forward to devouring it as I did this thoroughly rewarding yarn. I'll wager that if that work comes to pass, Pournelle's Iron Law of Bureaucracy will be found to apply as Below, so Above.

 Permalink

Forstchen, William R. One Second After. New York: Forge, 2009. ISBN 978-0-7653-1758-2.
Suppose, one fine spring day, with no warning or evident cause, the power went out. After a while, when it didn't come back on, you might try to telephone the power company, only to discover the phone completely dead. You pull out your mobile phone, and it too is kaput—nothing happens at all when you try to turn it on. You get the battery powered radio you keep in the basement in case of storms, and it too is dead; you swap in the batteries from the flashlight (which works) but that doesn't fix the radio. So, you decide to drive into town and see if anybody there knows what's going on. The car doesn't start. You set out on foot, only to discover when you get to the point along the lane where you can see the highway that it's full of immobile vehicles with their drivers wandering around on foot as in a daze.

What's happening—The Day the Earth Stood Still? Is there a saucer on the ground in Washington? Nobody knows: all forms of communication are down, all modes of transportation halted. You might think this yet another implausible scenario for a thriller, but what I've just described (in a form somewhat different than the novel) is pretty much what the sober-sided experts of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack sketch out in their April 2008 Critical National Infrastructures report and 2004 Executive Report as the consequences of the detonation of a single nuclear weapon in space high above the continental United States. There would be no thermal, blast, or radiation effects on the ground (although somebody unlucky enough to be looking toward the location of the detonation in the sky might suffer vision damage, particularly if it occurred at night), but a massive electromagnetic pulse (EMP) would be created as prompt gamma rays from the nuclear detonation liberate free electrons in the upper atmosphere through the Compton effect; these electrons spiral along the lines of force of Earth's magnetic field and emit an intense electric field pulse, in three phases, which reaches the ground and affects electrical and electronic equipment in a variety of ways, none good. As far as is known, the electromagnetic pulse is completely harmless to humans and other living organisms and would not even be perceived by them.

But it's Hell on electronics. The immediate (E1) pulse arrives at the speed of light everywhere within the line of sight of the detonation, and with a rise time of at most a few nanoseconds, gets into all kinds of electronics much faster than any form of transient protection can engage; this is what kills computer and communications gear and any other kind of electronics with exposed leads or antennas which the pulse can excite. The second phase (E2) pulse is much like the effects of a local lightning strike, and would not cause damage to equipment with proper lightning protection except that in many cases the protection mechanisms may have been damaged or disabled by the consequences of the E1 pulse (which has no counterpart in lightning, and hence lightning mitigation gear is not tested to withstand it). Finally, the E3 pulse arrives, lasting tens to hundreds of seconds, which behaves much like the fields created during a major solar/geomagnetic storm (although the EMP effect may be larger), inducing large currents in long distance electrical transmission lines and other extended conductive structures. The consequences of this kind of disruption are well documented from a number of incidents such as the 1989 geomagnetic storm which caused the collapse of the Quebec Hydro power distribution grid. But unlike a geomagnetic storm, the EMP E3 pulse can affect a much larger area, hit regions in latitudes rarely vulnerable to geomagnetic storms, and will have to be recovered from in an environment where electronics and communications are down due to the damage from the E1 and E2 pulses.
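The reach of the prompt E1 pulse follows from simple geometry: it arrives everywhere within line of sight of the burst, so the higher the detonation, the larger the affected footprint. Here is a quick sketch of my own (not from the commission reports), using the standard spherical-Earth horizon formula, with burst altitudes chosen purely for illustration:

```python
import math

R_EARTH_KM = 6371.0  # mean radius of the Earth

def los_radius_km(burst_alt_km: float) -> float:
    """Ground radius within line of sight of a burst at the given
    altitude, from spherical geometry: d = sqrt(2*R*h + h^2)."""
    return math.sqrt(2 * R_EARTH_KM * burst_alt_km + burst_alt_km ** 2)

# Illustrative altitudes only: the footprint grows with burst height.
for h in (100, 250, 500):
    print(f"Burst at {h:>3} km altitude -> "
          f"E1 footprint radius ~{los_radius_km(h):,.0f} km")
```

A burst a few hundred kilometres up thus places a region thousands of kilometres across, potentially an entire continent, within the prompt pulse's line of sight.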

If you attribute much of the technological and economic progress of the last century and a half to the connection of the developed world by electrical, transportation, communication, and computational networks which intimately link all parts of the economy and interact with one another in complex and often non-obvious ways, you can think about the consequences of the detonation of a single nuclear weapon launched by a relatively crude missile (which need not be long range if fired, say, from a freighter outside the territorial waters of the target country) by imagining living in the 21st century, seeing the lights flicker and go out and hearing the air conditioner stop, and two minutes later you're living in 1860. None of this is fantasy—all of the EMP effects were documented in nuclear tests in the 1960s and hardening military gear against EMP has been an active area of research and development for decades: this book, which sits on my own shelf, was published 25 years ago. Little or no effort has been expended on hardening the civil infrastructure or commercial electronics against this threat.

This novel looks at what life might be like in the year following an EMP attack on the United States, seen through the microcosm of a medium sized college town in North Carolina where the protagonist is a history professor. Unlike many thrillers, the author superbly describes the sense of groping in the dark when communication is cut and rumours begin to fly, the realisation that with the transportation infrastructure down the ready food supply is measured in days (especially after the losses due to failure of refrigeration), and the consequences to those whose health depends upon medications produced at great distance and delivered on a just in time basis. It is far from a pretty picture, but given the premises of the story (about which I shall natter a bit below), entirely plausible in my opinion. This story has the heroes and stolid get-things-done people who come to the fore in times of crisis, but it also shows how thin the veneer of civilisation is when the food starts to run out and the usual social constraints and sanctions begin to fail. There's no triumphant ending: what is described is a disaster and the ensuing tragedy, with survival for some the best which can be made of the situation. The message is that this, or something like it although perhaps not so extreme, could happen, and that the time to take the relatively modest and inexpensive (at least compared to recent foreign military campaigns) steps to render an EMP attack less probable and, should one occur, to mitigate its impact on critical life-sustaining infrastructure and prepare for recovery from what damage does occur, is now, not the second after the power goes out—all across the continent.

This is a compelling page-turner, which I devoured in just a few days. I do believe the author overstates the total impact of an EMP attack. The scenario here is that essentially everything which incorporates solid state electronics or is plugged into the power grid is fried at the instant of the attack, and that only vacuum tube gear, vehicles without electronic ignition or fuel injection, and other museum pieces remain functional. All airliners en route fall from the sky when their electronics are hit by the pulse. But the EMP Commission report is relatively sanguine about equipment not connected to the power grid which doesn't have vulnerable antennas. They discuss aircraft at some length, and conclude that since all commercial and military aircraft are currently tested and certified to withstand direct lightning strikes, and all but the latest fly-by-wire planes use mechanical and hydraulic control linkages, they are unlikely to be affected by EMP. They may lose communication, and the collapse of the air traffic control system will pose major problems and doubtless lead to some tragedies, but all planes aloft raining from the sky doesn't seem to be in the cards. Automobiles and trucks were tested by the commission (see pp. 115–116 of the Critical Infrastructures report), and no damage whatsoever occurred to vehicles not running when subjected to a simulated pulse; some which were running stopped, but all but a few immediately restarted and none required more than routine garage repairs. Having the highways open and trucks on the road makes a huge difference in a disaster recovery scenario. But let me qualify these quibbles by noting that nobody knows what will actually happen: with non-nuclear EMP and other electromagnetic weapons a focus of current research, doubtless much of the information on vulnerability of various systems remains under the seal of secrecy. 
And besides, in a cataclysmic situation, it's usually the things you didn't think of which cause the most dire problems.

One language note: the author seems to believe that the word “of” is equivalent to “have” when used in a phrase such as “You should've” or “I'd have”—instead, he writes “You should of” and “I'd of”. At first I thought this was a dialect affectation of a single character, but it's used all over the place, by characters of all kinds of regional and cultural backgrounds. Now, this usage is grudgingly sanctioned (or at least acknowledged) by the descriptive Merriam-Webster's Dictionary of English Usage (p. 679, item 2), but it just drives me nuts; if you consider the definitions of the individual words, what can “should of” possibly mean?

This novel focuses on the human story of people caught entirely by surprise trying to survive in a situation beyond their imagining one second before. If reading this book makes you ponder what steps you might take beforehand to protect your family in such a circumstance, James Wesley Rawles's Patriots (December 2008), which is being issued in a new, expanded edition in April 2009, is an excellent resource, as is Rawles's SurvivalBlog.

A podcast interview with William R. Forstchen about One Second After is available.

 Permalink

April 2009

Levin, Mark R. Liberty and Tyranny. New York: Threshold Editions, 2009. ISBN 978-1-4165-6285-6.
Even at this remove, I can recall the precise moment when my growing unease that the world wasn't turning into the place I'd hoped to live in as an adult became concrete and I first began to comprehend the reasons for the trends which worried me. It was October 27th, 1964 (or maybe a day or so later, if the broadcast was tape delayed) when I heard Ronald Reagan's speech “A Time for Choosing”, given in support of Barry Goldwater's U.S. presidential campaign. Notwithstanding the electoral disaster of the following week, many people consider Reagan's speech (often now called just “The Speech”) a pivotal moment both in the rebirth of conservatism in the United States and in Reagan's own political career. I know that I was never the same afterward: I realised that the vague feelings of things going the wrong way were backed up by the facts Reagan articulated and, further and more important, that there were alternatives to the course the country and society were presently steering. That speech, little appreciated at the time, changed the course of American history and changed my life.

Here is a book with the potential to do the same for people today who, like me in 1964, are disturbed at the way things are going, particularly young people who, indoctrinated in government schools and the intellectual monoculture of higher education, have never heard the plain and yet eternal wisdom the author so eloquently and economically delivers here. The fact that this book has recently shot up to the number one rank in Amazon.com book sales indicates not only that the message is powerful, but that an audience receptive to it exists.

The author admirably cedes no linguistic ground to the enemies of freedom. At the very start he dismisses the terms “liberal” (How is it liberal to advocate state coercion as the answer to every problem?) and “progressive” (How can a counter-revolution against the inherent, unalienable rights of individual human beings in favour of the state possibly be deemed progress?) for “Statist”, which is used consistently thereafter. He defines a “Conservative” not as one who cherishes the past or desires to return to it, but rather as a person who wishes to conserve the individual liberty proclaimed by the Declaration of Independence and supposedly protected by the Constitution (the author and I disagree about the wisdom of the latter document and the motives of those who promoted it). A Conservative is not one who, in the 1955 words of William F. Buckley, “stands athwart history, yelling Stop”, but rather believes in incremental, prudential reform, informed by the experience of those who went before, from antiquity up until yesterday, with the humility to judge every policy not by its intentions but rather by the consequences it produces, and always ready to reverse any step which proves, on balance, detrimental.

The Conservative doesn't believe in utopia, nor in the perfectibility or infinite mutability of human nature. Any aggregate of flawed humans will be inevitably flawed; that which is least flawed and allows individuals the most scope to achieve the best within themselves is as much as can be hoped for. The Conservative knows from history that every attempt by Statists to create heaven on Earth by revolutionary transformation and the hope of engendering a “new man” has ended badly, often in tragedy.

For its length, this book is the best I've encountered at delivering the essentials of the conservative (or, more properly termed, but unusable due to corruption of the language, “classical liberal”) perspective on the central issues of the time. For those who have read Burke, Adam Smith, de Tocqueville, the Federalist Papers, Hayek, Bastiat, Friedman, and other classics of individual and economic liberty (the idea that these are anything but inseparable is another Statist conceit), you will find little that is new in the foundations, although all of these threads are pulled together in a comprehensible and persuasive way. For people who have never heard of any of the above, or have been taught to dismiss them as outdated, obsolete, and inapplicable to our age, this book may open the door to a new, clearer way of thinking, and through its abundant source citations (many available on the Web) invites further exploration by those who, never having thought of themselves before as “conservative”, find their heads nodding in agreement with many of the plain-spoken arguments presented here.

As the book progresses, there is less focus on fundamentals and more on issues of the day such as the regulatory state, environmentalism, immigration, welfare dependency, and foreign relations and military conflicts. This was, to me, less satisfying than the discussion of foundational principles. These issues are endlessly debated in a multitude of venues, and those who call themselves conservatives and agree on the basics nonetheless come down on different sides of many of these issues. (And why not? Conservatives draw on the lessons of the past, and there are many ways of interpreting the historical record.) The book concludes with “A Conservative Manifesto”; while I concur that almost every point mentioned would be a step in the right direction for the United States, I cannot envision how, in the present environment, almost any of the particulars could be adopted. The change that is needed is not the election of one set of politicians to replace another—there is precious little difference between them—but rather the slow rediscovery and infusion into the culture of the invariant principles, founded in human nature rather than the theories of academics, which are so lucidly explained here. As the author notes, the Statists have taken more than eight decades on their long march through the institutions to arrive at the present situation. Champions of liberty must expect to be as patient and persistent if they are to prevail. The question is whether they will enjoy the same freedom of action their opponents did, or fall victim as the soft tyranny of the providential state becomes absolute tyranny, as has so often been the case.

 Permalink

Dunn, Robin MacRae. Vickers Viscount. North Branch, MN: Specialty Press, 2003. ISBN 978-1-58007-065-2.
Post World War II Britain had few technological and industrial successes of which to boast: as government-administered industrial policy, sweeping nationalisations, and ascendant unions gripped the economy, “brain drain” became the phrase for the era. One bright spot in this dingy landscape was the world's first turboprop powered airliner, the Vickers Viscount. Less ambitious than its contemporary, the turbojet powered De Havilland Comet, it escaped the tragic fate which befell early models of that design and caused it to lose out to competitors which entered the market much later.

Despite its conventional appearance and being equipped with propellers, the Viscount represented a genuine revolution in air transport. Its turbine engines were vastly more reliable than the finicky piston powerplants of contemporary airliners, and provided its passengers a much quieter ride, faster speed, and the ability to fly above much of the bumpy weather. Its performance combined efficiency in the European short hop market for which it was intended with a maximum range (as much as 2,450 miles for some models with optional fuel tanks) which allowed it to operate on many intercontinental routes.

From the first flight of the prototype in July 1948 through entry into regular scheduled airline service in April 1953, the Viscount pioneered and defined turboprop powered air transport. From the start, the plane was popular with airlines and their passengers, with a total of 445 being sold. Some airlines ended up buying other equipment simply because demand for Viscounts meant they could not obtain delivery positions as quickly as they required. The Viscount flew for a long list of operators in the primary and secondary market, and was adapted as a freighter, high-density holiday charter plane, and VIP and corporate transport. Its last passenger flight in the U.K. took place on April 18th, 1996, the 43rd anniversary of its entry into service.

This lavishly illustrated book tells the story of the Viscount from concept through retirement of the last exemplars. A guide helps sort through the bewildering list of model numbers assigned to variants of the basic design, and comparative specifications of the principal models are provided. Although every bit as significant a breakthrough in propulsion as the turbojet, the turboprop powered Viscount never had the glamour of the faster planes without propellers. But the Viscount got its passengers to their destinations quickly and safely, and made money for the airlines delivering them there, which is all one can ask of an airliner, and which made it a milestone in British aeronautical engineering.

 Permalink

Lane, Nick. Power, Sex, Suicide. Oxford: Oxford University Press, 2005. ISBN 978-0-19-920564-6.
When you start to look in detail at the evolution of life on Earth, it appears to be one mystery after another. Why did life appear so quickly after the Earth became hospitable to it? Why did life spend billions of years exclusively in the form of single-celled organisms without a nucleus (bacteria and archaea)? Why are all complex cells (eukaryotes) apparently descended from a single ancestral cell? Why did it take so long for complex multicellular organisms to evolve? (I've taken a crack [perhaps crackpot] shot at that one myself.) Why did evolution favour sexual reproduction, where two parents are required to produce offspring, while clonal reproduction is twice as efficient? Why just two sexes (among the vast majority of species) and not more? What drove the apparent trend toward greater size and complexity in multicellular organisms? Why are the life spans of organisms so accurately predicted by a power law based upon their metabolic rate? Why and how does metabolic rate fall with the size of an organism? Why did evolution favour warm-bloodedness (endothermy) when it increases an organism's requirement for food by more than an order of magnitude? Why do organisms age, and why is the rate of ageing and the appearance of degenerative diseases so closely correlated with metabolic rate? Conversely, why do birds and bats live so long: a pigeon has about the same mass and metabolic rate as a rat, yet lives ten times as long?

I was intensely interested in molecular biology and evolution of complexity in the early 1990s, but midway through that decade I kind of tuned it out—there was this “Internet” thing going on which captured my attention…. While much remains to be discovered, and many of the currently favoured hypotheses remain speculative, there has been enormous progress toward resolving these conundra in recent years, and this book is an excellent way to catch up on this research frontier.

Quite remarkably, a common thread pulling together most of these questions is one of the most humble and ubiquitous components of eukaryotic life: the mitochondria. Long recognised as the power generators of the cell (“Power”), they have been subsequently discovered to play a key rôle in the evolution of sexual reproduction (“Sex”), and in programmed cell death (apoptosis—“Suicide”). Bacteria and archaea are constrained in size by the cube/square law: they power themselves by respiratory mechanisms embedded in their cellular membranes, which grow as the square of their diameter, but consume energy within the bulk of the cell, which grows as the cube. Consequently, evolution selects for small size, since a larger bacterium generates proportionally less energy relative to its internal needs. Further, bacteria compete for scarce resources purely by replication rate: a bacterium which divides even a small fraction more rapidly will quickly come to predominate in the population versus its more slowly reproducing competitors. In cell division, the most energetically costly and time consuming part is copying the genome's DNA. As a result, evolution ruthlessly selects for the shortest genome, which results in the arcane overlapping genes in bacterial DNA which look like the work of those byte-shaving programmers you knew back when computers had 8 Kb RAM. All of this conspires to keep bacteria small and simple and indeed, they appear to be as small and simple today as they were three billion years and change ago. But that isn't to say they aren't successful—you may think of them as pond scum, but if you read the bacterial blogs, they think of us as an ephemeral epiphenomenon. “It's the age of bacteria, and it always has been.”
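
The cube/square argument is easy to make concrete. Here is a toy calculation (mine, not the book's) for an idealised spherical cell whose power generation is proportional to membrane area and whose power consumption is proportional to volume:

```python
from math import pi

def membrane_area(d):
    """Surface area of a spherical cell of diameter d; power generation scales with this."""
    return pi * d ** 2

def cell_volume(d):
    """Volume of the cell; energy consumption scales with this."""
    return pi * d ** 3 / 6

def power_per_volume(d):
    """Power available per unit of cytoplasm: works out to 6/d, falling as the cell grows."""
    return membrane_area(d) / cell_volume(d)

# Doubling the diameter halves the energy available per unit volume,
# so selection pushes membrane-respiring cells toward smaller sizes.
print(power_per_volume(1.0) / power_per_volume(2.0))   # → 2.0
```

Eukaryotes escape this bind precisely because mitochondria fold vast amounts of respiratory membrane inside the cell, decoupling power generation from the outer surface.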

Most popular science books deliver one central idea you'll take away from reading them. This one has a forehead slapper about every twenty pages. It is not a particularly easy read: nothing in biology is unambiguous, and you find yourself going down a road and nodding in agreement, only to find out a few pages later that a subsequent discovery has falsified the earlier conclusion. While this may be confusing, it gives a sense of how science is done, and encourages the reader toward scepticism of all “breakthroughs” reported in the legacy media.

One of the most significant results of recent research into mitochondrial function is the connection between free radical production in the respiratory chain and ageing. While there is a power law relationship between metabolic rate and lifespan, there are outliers (including humans, who live about twice as long as they “should” based upon their size), and a major discrepancy for birds which, while obeying the same power law, are offset toward lifespans from three to ten times as long. Current research offers a plausible explanation for this: avians require aerobic power generation much greater than that of mammals, and consequently have more mitochondria in their tissues and more respiratory complexes in their mitochondria. This results in lower free radical production, which retards the onset of ageing and the degenerative diseases associated with it. Maybe before long there will be a pill which amplifies the mitochondrial replication factor in humans and, even if it doesn't extend our lifespan, retards the onset of the symptoms of ageing and degenerative diseases until the very end of life (old birds are very much like young adult birds, so there's an existence proof of this). I predict that the ethical questions associated with the creation of this pill will evaporate within about 24 hours of its availability on the market. Oh, it may have side-effects, such as increasing the human lifespan to, say, 160 years. Okay, science fiction authors, over to you!

If you are even remotely interested in these questions, this is a book you'll want to read.

 Permalink

Moffat, John W. Reinventing Gravity. New York: Collins, 2008. ISBN 978-0-06-117088-1.
In the latter half of the nineteenth century, astronomers were confronted by a puzzling conflict between their increasingly precise observations and the predictions of Newton's time-tested theory of gravity. The perihelion of the elliptical orbit of the planet Mercury was found to precess by the tiny amount of 43 arc seconds per century more than could be accounted for by the gravitational influence of the Sun and the other planets. While small, the effect was unambiguously measured, and indicated that something was missing in the analysis. Urbain Le Verrier, coming off his successful prediction of the subsequently discovered planet Neptune by analysis of the orbit of Uranus, calculated that Mercury's anomalous precession could be explained by the presence of a yet unobserved planet he dubbed Vulcan. Astronomers set out to observe the elusive inner planet in transit across the Sun or during solar eclipses, and despite several sightings by respectable observers, no confirmed observations were made. Other astronomers suggested a belt of asteroids too small to observe within the orbit of Mercury could explain its orbital precession. For more than fifty years, dark matter—gravitating body or bodies so far unobserved—was invoked to explain a discrepancy between the regnant theory of gravitation and the observations of astronomers. Then, in 1915, Einstein published his General Theory of Relativity which predicted that orbits in strongly curved spacetime would precess precisely the way Mercury's orbit was observed to, and that no dark matter was needed to reconcile the theory of gravitation with observations. So much for planet Vulcan, notwithstanding the subsequent one with all the pointy-eared logicians.
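
The 43 arc second figure is easy to check against Einstein's leading-order formula for perihelion advance per orbit, Δφ = 6πGM/(a(1−e²)c²). Here is a back-of-the-envelope verification (standard published values for the Sun and Mercury's orbit, not taken from the book):

```python
from math import pi

# Physical constants and Mercury's orbital elements (SI units)
GM_SUN = 1.32712440018e20   # gravitational parameter of the Sun, m^3/s^2
C      = 2.99792458e8       # speed of light, m/s
A      = 5.7909e10          # semi-major axis of Mercury's orbit, m
E      = 0.2056             # orbital eccentricity
PERIOD = 87.969             # orbital period, days

# General Relativity's perihelion advance per orbit, in radians
dphi = 6 * pi * GM_SUN / (A * (1 - E ** 2) * C ** 2)

# Accumulate over a Julian century and convert to arc seconds
orbits_per_century = 36525.0 / PERIOD
arcsec = dphi * orbits_per_century * (180 / pi) * 3600
print(round(arcsec, 1))   # → 43.0
```

The per-orbit shift is a mere half-microradian; only because it accumulates over 415 orbits per century was it measurable by nineteenth-century astronomers.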

In the second half of the twentieth century, a disparate collection of observations on the galactic scale and beyond (the speed of rotation of stars in the discs of spiral galaxies, the velocities of galaxies in galactic clusters, gravitational lensing of distant objects by foreground galaxy clusters, the apparent acceleration of the expansion of the universe, and the power spectrum of the anisotropies in the cosmic background radiation) has yielded results grossly at variance with the predictions of General Relativity. The only way to make the results fit the theory is to assume that everything we observe in the cosmos makes up less than 5% of its total mass, and that the balance is “dark matter” and “dark energy”, neither of which has yet been observed or detected apart from their imputed gravitational effects. Sound familiar?

In this book, John Moffat, a distinguished physicist who has spent most of his long career exploring extensions to Einstein's theory of General Relativity, dares to suggest that history may be about to repeat itself, and that the discrepancy between what our theories predict and what we observe may not be due to something we haven't seen, but rather limitations in the scope of validity of our theories. Just as Newton's theory of gravity, exquisitely precise on the terrestrial scale and in the outer solar system, failed when applied to the strong gravitational field close to the Sun in which Mercury orbits, perhaps Einstein's theory also requires corrections over the very large distances involved in the galactic and cosmological scales. The author recounts his quest for such a theory, and eventual development of Modified Gravity (MOG), a scalar/tensor/vector field theory which reduces to Einstein's General Relativity when the scalar and vector fields are set to zero.

This theory is claimed to explain all of these large scale discrepancies without invoking dark matter, and to do so, after calibration of the static fields from observational data, with no free parameters (“fudge factors”). Unlike some other speculative theories, MOG makes a number of predictions which it should be possible to test in the next decade. MOG predicts a very different universe in the strong field regime than General Relativity: there are no black holes, no singularities, and the Big Bang is replaced by a universe which starts out with zero matter density and zero entropy, and decays because, as we all know, nothing is unstable.

The book is fascinating, but in a way unsatisfying. The mathematical essence of the theory is never explained: you'll have to read the author's professional publications to find it. There are no equations, not even in the end notes, which nonetheless contain prose such as (p. 235):

Wilson loops can describe a gauge theory such as Maxwell's theory of electromagnetism or the gauge theory of the standard model of particle physics. These loops are gauge-invariant observables obtained from the holonomy of the gauge connection around a given loop. The holonomy of a connection in differential geometry on a smooth manifold is defined as the measure to which parallel transport around closed loops fails to preserve the geometrical data being transported. Holonomy has nontrivial local and global features for curved connections.
I know that they say you lose half the audience for every equation you include in a popular science book, but this is pretty forbidding stuff for anybody who wanders into the notes. For a theory like this, the fit to the best available observational data is everything, and this is discussed almost everywhere only in qualitative terms. Let's see the numbers! Although there is a chapter on string theory and quantum gravity, these topics are dropped in the latter half of the book: MOG is a purely classical theory, and there is no discussion of how it might lead toward the quantisation of gravitation or be an emergent effective field theory of a lower level quantum substrate.

There aren't many people with the intellect, dogged persistence, and self-confidence to set out on the road to deepen our understanding of the universe at levels far removed from those of our own experience. Einstein struggled for ten years getting from Special to General Relativity, and Moffat has worked for three times as long arriving at MOG and working out its implications. If it proves correct, it will be seen as one of the greatest intellectual achievements by a single person (with a small group of collaborators) in recent history. Should that be the case (and several critical tests which may knock the theory out of the box will come in the near future), this book will prove a unique look into how the theory was so patiently constructed. It's amusing to reflect, if it turns out that dark matter and dark energy end up being epicycles invoked to avoid questioning a theory never tested in the domains in which it was being applied, how historians of science will look back at our age and wryly ask, “What were they thinking?”.

I have a photo credit on p. 119 for a vegetable.

 Permalink

Flynn, Vince. Transfer of Power. New York: Pocket Books, 1999. ISBN 978-0-671-02320-1.
No one would have believed in the last years of the twentieth century that Islamic terrorists could make a successful strike on a high-profile symbol of U.S. power. Viewed from a decade later, this novel, the first featuring counter-terrorism operative Mitch Rapp (who sometimes makes Jack Bauer seem like a bureaucrat), is astonishingly prescient. It is an almost perfect thriller—one of the most difficult to put down books I've read in quite some time. Apart from the action, which is abundant, the author has a pitch-perfect sense of the venality and fecklessness of politicians and skewers them with a gusto reminiscent of the early novels of Allen Drury.

I was completely unaware of this author and his hugely popular books (six of which, to date, have made the New York Times bestseller list) until I heard an extended interview (transcript; audio parts 1, 2, 3) with the author, after which I immediately ordered this book. It did not disappoint, and I shall be reading more in the series.

I don't read thrillers in a hyper-critical mode unless they transgress to such an extent that I begin to exclaim “oh, come on”. Still, this novel is carefully researched, and the only goof I noticed is in the Epilogue on p. 545 where “A KH-12 Keyhole satellite was moved into geosynchronous orbit over the city of Sao Paulo and began recording phone conversations”. The KH-12 (a somewhat ambiguous designation for an upgrade of the KH-11 reconnaissance satellite) operates in low Earth orbit, not geosynchronous orbit, and is an imaging satellite, not a signals intelligence satellite equipped to intercept communications. The mass market edition I read includes a teaser for Protect and Defend, the eighth novel in the series. This excerpt contains major spoilers for the earlier books, and if you're one of those people (like me) who likes to follow the books in a series in order, give it a miss.

 Permalink

Orlov, Dmitry. Reinventing Collapse. Gabriola Island, BC, Canada: New Society Publishers, 2008. ISBN 978-0-86571-606-3.
The author was born in Leningrad and emigrated to the United States with his family in the mid-1970s at the age of 12. He experienced the collapse of the Soviet Union and the subsequent events in Russia on a series of extended visits between the late 1980s and mid 1990s. In this book he describes firsthand what happens when a continental scale superpower experiences economic and societal collapse, what it means to those living through it, and how those who survived managed to do so, in some cases prospering amid the rubble.

He then goes on to pose the question of whether the remaining superpower, the United States, is poised to experience a collapse of the same magnitude. This he answers in the affirmative, with only the timing uncertain (these events tend to happen abruptly and with little warning—in 1985 virtually every Western analyst assumed the Soviet Union was a permanent fixture on the world stage; six years later it was gone). He presents a U.S. collapse scenario in the form of the following theorem on p. 3, based upon the axioms of “Peak Oil” and the unsustainability of the debt the U.S. is assuming to finance its oil imports (as well as much of the rest of its consumer economy and public sector).

Oil powers just about everything in the US economy, from food production and distribution to shipping, construction and plastics manufacturing. When less oil becomes available, less is produced, but the amount of money in circulation remains the same, causing the prices for the now scarcer products to be bid up, causing inflation. The US relies on foreign investors to finance its purchases of oil, and foreign investors, seeing high inflation and economic turmoil, flee in droves. Result: less money with which to buy oil and, consequently, less oil with which to produce things. Lather, rinse, repeat; stop when you run out of oil. Now look around: Where did that economy disappear to?
Now if you believe in Peak Oil (as the author most certainly does, along with most of the rest of the catechism of the environmental left), this is pretty persuasive. But even if you don't, you can make the case for a purely economic collapse, especially with the unprecedented deficits and money creation as the present process of deleveraging accelerates into debt liquidation (either through inflation or outright default and bankruptcy). The ultimate trigger doesn't make a great deal of difference to the central argument: the U.S. runs on oil (and has no near-term politically and economically viable substitute) and depends upon borrowed money both to purchase oil and to service its ever-growing debt. At the moment creditors begin to doubt they're ever going to be repaid (as happened with the Soviet Union in its final days), it's game over for the economy, even if the supply of oil remains constant.

Drawing upon the Soviet example, the author examines what an economic collapse on a comparable scale would mean for the U.S. Ironically, he concludes that many of the weaknesses which were perceived as hastening the fall of the Soviet system—lack of a viable cash economy, hoarding and self-sufficiency at the enterprise level, failure to produce consumer goods, lack of consumer credit, no private ownership of housing, and a huge and inefficient state agricultural sector which led many Soviet citizens to maintain their own small garden plots—resulted, along with the fact that the collapse was from a much lower level of prosperity, in mitigating the effects of collapse upon individuals. In the United States, which has outsourced much of its manufacturing capability, depends heavily upon immigrants in the technology sector, and has optimised its business models around high-velocity cash transactions and just in time delivery, the consequences post-collapse may be more dire than in the “primitive” Soviet system. If you're going to end up primitive, you may be better off starting out primitive.

The author, although a U.S. resident for all of his adult life, did not seem to leave his dark Russian cynicism and pessimism back in the USSR. Indeed, on numerous occasions he mocks the U.S. and finds it falls short of the Soviet standard in areas such as education, health care, public transportation, energy production and distribution, approach to religion, strength of the family, and durability and repairability of capital and the few consumer goods produced. These are indicative of what he terms a “collapse gap”, which will leave the post-collapse U.S. in much worse shape than ex-Soviet Russia: in fact he believes it will never recover and after a die-off and civil strife, may fracture into a number of political entities, all reduced to a largely 19th century agrarian lifestyle. All of this seems a bit much, and is compounded by offhand remarks about the modern lifestyle which seem to indicate that his idea of a “sustainable” world would be one largely depopulated of humans in which the remainder lived in communities much like traditional African villages. That's what it may come to, but I find it difficult to see this as desirable. Sign me up for L. Neil Smith's “freedom, immortality, and the stars” instead.

The final chapter proffers a list of career opportunities which proved rewarding in post-collapse Russia and may be equally attractive elsewhere. Former lawyers, marketing executives, financial derivatives traders, food chemists, bank regulators, university administrators, and all the other towering overhead of drones and dross whose services will no longer be needed in post-collapse America may have a bright future in the fields of asset stripping, private security (or its mirror image, violent racketeering), herbalism and medical quackery, drugs and alcohol, and even employment in what remains of the public sector. Hit those books!

There are some valuable insights here into the Soviet collapse as seen from the perspective of citizens living through it and trying to make the best of the situation, and there are some observations about the U.S. which will make you think and question assumptions about the stability and prospects for survival of the economy and society on its present course. But there are so many extreme statements that you come away from the book feeling like you've endured an “end is nigh” rant by a wild-eyed eccentric, which dilutes the valuable observations the author makes.

 Permalink

Susskind, Leonard. The Black Hole War. New York: Little, Brown, 2008. ISBN 978-0-316-01640-7.
I hesitated buying this book for some months after its publication because of a sense there was something “off” in the author's last book, The Cosmic Landscape (March 2006). I should learn to trust my instincts more; this book treats a fascinating and important topic on the wild frontier between general relativity and quantum mechanics in a disappointing, deceptive, and occasionally infuriating manner.

The author is an eminent physicist who has made major contributions to string theory, the anthropic string landscape, and the problem of black hole entropy and the fate of information which is swallowed by a black hole. The latter puzzle is the topic of the present book, which is presented as a “war” between Stephen Hawking and his followers, mostly general relativity researchers, and Susskind and his initially small band of quantum field and string theorists who believed that information must be preserved in black hole accretion and evaporation lest the foundations of physics (unitarity and the invertibility of the S-matrix) be destroyed.

Here is a simple way to understand one aspect of this apparent paradox. Entropy is a measure of the hidden information in a system. The entropy of gas at equilibrium is very high because there are a huge number of microscopic configurations (position and velocity) of the molecules of the gas which result in the same macroscopic observables: temperature, pressure, and volume. A perfect crystal at absolute zero, on the other hand, has (neglecting zero-point energy), an entropy of zero because there is precisely one arrangement of atoms which exactly reproduces it. A classical black hole, as described by general relativity, is characterised by just three parameters: mass, angular momentum, and electrical charge. (The very same basic parameters as elementary particles—hmmmm….) All of the details of the mass and energy which went into the black hole: lepton and baryon number, particle types, excitations, and higher level structure are lost as soon as they cross the event horizon and cause it to expand. According to Einstein's theory, two black holes with the same mass, spin, and charge are absolutely indistinguishable even if the first was made from the collapse of a massive star and the second by crushing 1975 Ford Pintos in a cosmic trash compactor. Since there is a unique configuration for a given black hole, there is no hidden information and its entropy should therefore be zero.

But consider this: suppose you heave a ball of hot gas or plasma—a star, say—into the black hole. Before it is swallowed, it has a very high entropy, but as soon as it is accreted, you have only empty space and the black hole with entropy zero. You've just lowered the entropy of the universe, and the Second Law of Thermodynamics says that cannot ever happen. Some may argue that the Second Law is “transcended” in a circumstance like this, but it is a pill which few physicists are willing to swallow, especially since in this case it occurs in a completely classical context on a large scale where statistical mechanics obtains. It was this puzzle which led Jacob Bekenstein to propose that black holes did, in fact, have an entropy which was proportional to the area of the event horizon in units of Planck length squared. Black holes not only have entropy, they have a huge amount of it, and account for the overwhelming majority of entropy in the universe. Stephen Hawking subsequently reasoned that if a black hole has entropy, it must have temperature and radiate, and eventually worked out the mechanism of Hawking radiation and the evaporation of black holes.
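The relations described in this paragraph have compact standard forms (textbook results, not quotations from the book): the Bekenstein entropy is one quarter of the horizon area measured in units of the Planck length squared, and the Hawking temperature falls inversely with mass.

```latex
S_{\mathrm{BH}} = \frac{k_B A}{4\,\ell_P^2},
\qquad
\ell_P = \sqrt{\frac{G\hbar}{c^3}},
\qquad
T_H = \frac{\hbar c^3}{8\pi G M k_B}
```

For a solar-mass black hole, $S_{\mathrm{BH}}$ works out to around $10^{77}\,k_B$, vastly more than the entropy of the star which collapsed to form it, which is why black holes dominate the entropy budget of the universe.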

But if a black hole can evaporate, what happens to the information (more precisely, the quantum state) of the material which collapsed into the black hole in the first place? Hawking argued that it was lost: the evaporation of the black hole was a purely thermal process which released none of the information lost down the black hole. But one of the foundations of quantum mechanics is that information is never lost; it may be scrambled in complex scattering processes to such an extent that you can't reconstruct the initial state, but in principle if you had complete knowledge of the state vector you could evolve the system backward and arrive at the initial configuration. If a black hole permanently destroys information, this wrecks the predictability of quantum mechanics and with it all of microscopic physics.

This book chronicles the author's quest to find out what happens to information that falls into a black hole and discover the mechanism by which information swallowed by the black hole is eventually restored to the universe when the black hole evaporates. The reader encounters string theory, the holographic principle, D-branes, anti de Sitter space, and other arcana, and is eventually led to the explanation that a black hole is really just an enormous ball of string, which encodes in its structure and excitations all of the information of the individual fundamental strings swallowed by the hole. As the black hole evaporates, little bits of this string slip outside the event horizon and zip away as fundamental particles, carrying away the information swallowed by the hole.

The story is told largely through analogies and is easy to follow if you accept the author's premises. I found the tone of the book quite difficult to take, however. The word which kept popping into my head as I made my way through was “smug”. The author opines on everything and anything, and comes across as scornful of anybody who disagrees with his opinions. He is bemused and astonished when he discovers that somebody who is a Republican, an evangelical Christian, or some other belief at variance with the dogma of the academic milieu he inhabits can, nonetheless, actually be a competent scientist. He goes on for two pages (pp. 280–281) making fun of Mormonism and then likens Stephen Hawking to a cult leader. The physics is difficult enough to explain; who cares about what Susskind thinks about everything else? Sometimes he goes right over the top, resulting in unseemly prose like the following.

Although the Black Hole War should have come to an end in early 1998, Stephen Hawking was like one of those unfortunate soldiers who wander in the jungle for years, not knowing that the hostilities have ended. By this time, he had become a tragic figure. Fifty-six years old, no longer at the height of his intellectual powers, and almost unable to communicate, Stephen didn't get the point. I am certain that it was not because of his intellectual limitations. From the interactions I had with him well after 1998, it was obvious that his mind was still extremely sharp. But his physical abilities had so badly deteriorated that he was almost completely locked within his own head. With no way to write an equation and tremendous obstacles to collaborating with others, he must have found it impossible to do the things physicists ordinarily do to understand new, unfamiliar work. So Stephen went on fighting for some time. (p. 419)
Or, Prof. Susskind, perhaps it's that the intellect of Prof. Hawking makes him sceptical of arguments based upon a “theory” which is, as you state yourself on p. 384, “like a very complicated Tinkertoy set, with lots of different parts that can fit together in consistent patterns”; for which not a single fundamental equation has yet been written down; in which no model that remotely describes the world in which we live has been found; whose mathematical consistency and finiteness in other than toy models remains conjectural; whose results regarding black holes are based upon another conjecture (AdS/CFT) which, even if proven, operates in a spacetime utterly unlike the one we inhabit; which seems to predict a vast “landscape” of possible solutions (vacua) which make it not a theory of everything but rather a “theory of anything”; which is formulated in a flat Minkowski spacetime, neglecting the background independence of general relativity; and which, after three decades of intensive research by some of the most brilliant thinkers in theoretical physics, has yet to make a single experimentally-testable prediction, while demonstrating its ability to wiggle out of almost any result (for example, failure of the Large Hadron Collider to find supersymmetric particles).

At the risk of attracting the scorn the author vents on pp. 186–187 toward non-specialist correspondents, let me say that the author's argument for “black hole complementarity” makes absolutely no sense whatsoever to this layman. In essence, he argues that matter infalling across the event horizon of a black hole, if observed from outside, is disrupted by the “extreme temperature” there, and is excited into its fundamental strings which spread out all over the horizon, preserving the information accreted in the stringy structure of the horizon (whence it can be released as the black hole evaporates). But for a co-moving observer infalling with the matter, nothing whatsoever happens at the horizon (apart from tidal effects whose magnitude depends upon the mass of the black hole). Susskind argues that since you have to choose your frame of reference and cannot simultaneously observe the event from both outside the horizon and falling across it, there is no conflict between these two descriptions, and hence they are complementary in the sense Bohr described quantum observables.

But, unless I'm missing something fundamental, the whole thing about the “extreme temperature” at the black hole event horizon is simply nonsense. Yes, if you lower a thermometer from a space station at some distance from a black hole down toward the event horizon, it will register a diverging temperature as it approaches the horizon. But this is because it is moving near the speed of light with respect to spacetime falling through the horizon and is seeing the cosmic background radiation blueshifted by a factor which reaches infinity at the horizon. Further, being suspended above the black hole, the thermometer is in a state of constant acceleration (it might as well have a rocket keeping it at a specified distance from the horizon as a tether), and is thus in a Rindler spacetime and will measure black body radiation even in a vacuum due to the Unruh effect. But note that due to the equivalence principle, all of this will happen precisely the same even with no black hole. The same thermometer, subjected to the identical acceleration and velocity with respect to the cosmic background radiation frame, will read precisely the same temperature in empty space, with no black hole at all (and will even observe a horizon due to its hyperbolic motion).
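The temperature registered by the suspended thermometer is the standard Unruh result (not specific to black holes), which depends only on the thermometer's proper acceleration $a$:

```latex
T_U = \frac{\hbar a}{2\pi c k_B}
```

At an acceleration of one Earth gravity this is a minuscule ${\sim}4\times10^{-20}$ K; it diverges near the horizon only because the proper acceleration needed to hover there grows without bound, which is the point of the equivalence-principle argument: the same acceleration produces the same reading with no black hole present.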

The “lowering the thermometer” is a completely different experiment from observing an object infalling to the horizon. The fact that the suspended thermometer measures a high temperature in no way implies that a free-falling object approaching the horizon will experience such a temperature or be disrupted by it. A co-moving observer with the object will observe nothing as it crosses the horizon, while a distant observer will see the object appear to freeze and wink out as it reaches the horizon and the time dilation and redshift approaches infinity. Nowhere is there this legendary string blowtorch at the horizon spreading out the information in the infalling object around a horizon which, observed from either perspective, is just empty space.

The author concludes, in a final chapter titled “Humility”, “The Black Hole War is over…”. Well, maybe, but for this reader, the present book did not make the sale. The arguments made here are based upon aspects of string theory which are, at the moment, purely conjectural and models which operate in universes completely different from the one we inhabit. What happens to information that falls into a black hole? Well, Stephen Hawking has now conceded that it is preserved and released in black hole evaporation (but this assumes an anti de Sitter spacetime, which we do not inhabit), but this book just leaves me shaking my head at the arm-waving arguments and speculative theorising presented as definitive results.

 Permalink

May 2009

Cochran, Gregory and Henry Harpending. The 10,000 Year Explosion. New York: Basic Books, 2009. ISBN 978-0-465-00221-4.
“Only an intellectual could believe something so stupid” most definitely applies to the conventional wisdom among anthropologists and social scientists that human evolution somehow came to an end around 40,000 years ago with the emergence of modern humans and that differences among human population groups today are only “skin deep”: the basic physical, genetic, and cognitive toolkit of humans around the globe is essentially identical, with only historical contingency and cultural inheritance responsible for different outcomes.

To anybody acquainted with evolutionary theory, this should have been dismissed as ideologically motivated nonsensical propaganda on the face of it. Evolution is driven by changes and new challenges faced by a species as it moves into new niches and environments, adapts to environmental change, migrates and encounters new competition, and is afflicted by new diseases which select for those with immunity. Modern humans, in their expansion from Africa to almost every habitable part of the globe, have endured changes and challenges which dwarf those of almost any other metazoan species. It stands to reason, then, that the pace of human evolution, far from coming to a halt, would in fact accelerate dramatically, as natural selection was driven by the coming and going of ice ages, the development of agriculture and domestication of animals, spread of humans into environments inhospitable to their ancestors, trade and conquest resulting in the mixing of genes among populations, and numerous other factors.

Fortunately, we live in an age in which we need no longer speculate upon such matters. The ability to sequence the human genome and compare the lineage of genes in various populations has created the field of genetic anthropology, which is in the process of transforming what was once a “soft science” into a thoroughly quantitative discipline where theories can be readily falsified by evidence in the genome. This book has the potential of creating a phase transition in anthropology: it is a manifesto for the genomic revolution, and a few years from now anthropologists who ignore the kind of evidence presented here will be increasingly forgotten, publishing papers nobody reads because they neglect the irrefutable evidence of human history we carry in our genes.

The authors are very ambitious in their claims, and I'm sure that some years from now they will be seen to have overreached in some of them. But the central message will, I am confident, stand: human evolution has dramatically accelerated since the emergence of modern humans, and is being driven at an ever faster pace by the cultural and environmental changes humans are incessantly confronting. Further, human history cannot be understood without first acknowledging that the human populations which were the actors in it were fundamentally different. The conquest of the Americas by Europeans may well not have happened had not Europeans carried genes which protected them against the infectious diseases they also carried on their voyages of exploration and conquest. (By some estimates, indigenous populations in the Americas fell to 10% of their pre-contact levels, precipitating societal collapse.) Why do about half of all humans on Earth speak languages of the Indo-European group? Well, it may be because the obscure cattle herders from the steppes who spoke the ur-language happened to evolve a gene which made them lactose tolerant throughout adulthood, and hence were able to raise cattle for dairy products, which is five times as productive (measured by calories per unit area) as raising cattle for meat. While Europeans' immunity to disease served them well in their conquest of the Americas, their lack of immunity to diseases endemic in sub-Saharan Africa (in particular, falciparum malaria) rendered initial attempts to colonise that region disastrous.

The authors do not hesitate to speculate on possible genetic influences on events in human history, but their conjectures are based upon published genetic evidence, cited from primary sources in the extensive end notes. A number of these discussions may lead to the sound of skulls exploding among those wedded to the dominant academic dogma. The authors suggest that some of the genes which allowed modern humans emerging from Africa to prosper in northern climes were the result of cross-breeding with Neanderthals; that just as domestication of animals results in neoteny, the domestication of humans in agricultural societies and the state societies which grew from them has induced neotenous changes in “domesticated humans” which result in populations with a long history of living in agricultural societies adapting better to modern civilisation than those without that selection in their genetic heritage; and that the unique experience of selection for success in intellectually demanding professions and lack of interbreeding resulted in the emergence of the Ashkenazi Jews as a population whose mean intelligence exceeds that of all other human populations (as well as a prevalence of genetic diseases which appear linked to biochemical factors related to brain function).

There's an odd kind of doublethink present among many champions of evolutionary theory. While invoking evolution to explain even those aspects of the history of life on Earth where doing so involves what can only be called a “leap of faith”, they dismiss the self-evident consequences of natural selection on populations of their own species. Certainly, all humans constitute a single species: we can interbreed, and that's the definition. But all dogs and wolves can interbreed, yet nobody would say that there is no difference between a Great Dane and a Dachshund. Largely isolated human populations have been subjected to unique selective pressures from their environment, diet, diseases, conflict, culture, and competition, and it's nonsense to argue that these challenges did not drive selection of adaptive alleles among the population.

This book is a welcome shot across the bow of the “we're all the same” anthropological dogma, and provides a guide to the discoveries to be made as comparative genetics lays a firm scientific foundation for anthropology.

 Permalink

Grant, Rob. Fat. London: Gollancz, 2006. ISBN 978-0-575-07820-8.
Every now and then, you have a really bad day. If you're lucky, you actually experience such days less frequently than you have nightmares about them (mine almost always involve trade shows, which demonstrates how traumatic that particular form of torture can be). The only remedy is to pick up the work of a master who shows you that whatever's happened to you is nothing compared to how bad a day really can be—this is such a yarn. This farce is in the fine tradition of Evelyn Waugh and Tom Sharpe, and is set in a future in which the British nanny state finally decides to do something about the “epidemic of obesity” which is bankrupting the National Health Service by establishing Well Farms, modelled upon that earlier British innovation, the concentration camp.

The story involves several characters, all of whom experience their own really bad days and come to interact in unexpected ways (you really begin to wonder how the author is going to pull it all together as the pages dwindle, but he does, and satisfyingly). And yet, as is usually the case in the genre, everything ends well for everybody.

This is a thoroughly entertaining romp, but there's also a hard edge here. The author skewers a number of food fads and instances of bad science and propaganda in the field of diet and nutrition and even provides a list of resources for those interested in exploring the facts behind the nonsense spouted by the “studies”, “reports”, and “experts” quoted in the legacy media.

 Permalink

Berenson, Alex. The Silent Man. New York: G.P. Putnam's Sons, 2009. ISBN 978-0-399-15538-3.
This is a compelling page-turner in which the nightmare scenario of “loose nukes” falling into the hands of jihadi terrorists raises the risk of a nuclear detonation on U.S. soil. Only intrepid CIA agent (and loose cannon—heroes in books like this never seem to be of the tethered cannon variety) John Wells can put the pieces together before disaster strikes and possibly provokes consequences even worse than a nuclear blast.

The author has come up with a very clever scenario to get around many of the obvious objections to most plots of this kind. The characters of the malefactors are believable, and the suspense as the story unfolds is palpable; this is a book I did not want to either put down or have come to an end too quickly. Still, I have some major quibbles with the details, which I'll describe in the spoiler block below (I don't consider anything discussed a major plot spoiler, but better safe than sorry).

Spoiler warning: Plot and/or ending details follow.  
There are a number of factual goofs (or invocations of artistic license, if you prefer). The Russian nuclear warheads stolen by the terrorists are said to be from SS-26 Iskander tactical missiles. Yet according to both the Russian military and NATO, this missile uses only conventional warheads. The warheads are said to have been returned for refurbishing due to damage from the missile's corrosive liquid fuel, but the Iskander is, in fact, a solid-fuel missile.

The premise of constructing an improvised gun assembly nuclear weapon from material from the secondary of a thermonuclear warhead seems highly implausible to me. (They couldn't use the fissile material from the primary because it is plutonium, which would predetonate in a gun design, and they can't fire the implosion mechanism of the primary without the permissive action link code, without which the implosion system will misfire, resulting in no nuclear yield.) Anyway, the terrorists plan to use highly enriched U-235 from the secondary in their gun bomb. The problem is that, unless I'm mistaken or the Russians use a very odd design in their bombs, there is no reason for a fusion secondary to contain anywhere near a critical mass of U-235 or, for that matter, any U-235 at all. In a Teller-Ulam design the only fissile material in the secondary is the uranium or plutonium “spark plug” used to heat the lithium deuteride fuel to a temperature where fusion can begin, but, even if U-235 is used, the quantity is much less than that required for a gun assembly bomb.

Even if the terrorists did manage to obtain sufficient U-235, I'm far from certain the bomb would have worked. They planned to use a gun assembly with a Russian SPG-9 recoilless rifle propelling the projectile into the target. They weld the tube of the bazooka directly to the steel tamper surrounding the target. But that won't work! The SPG-9 projectile is ejected from the tube by a small charge, but its rocket motor, which accelerates it to full velocity, does not ignite until the projectile is about twenty metres from the tube. So the projectile in the bomb would be accelerated only by the initial charge, which wouldn't impart anything like the velocity needed to avoid predetonation. Finally, the terrorists have no initiator: they just hope background radiation will generate a neutron to get things going. But if they aren't lucky, the whole assembly will be blown apart by the explosive charge of the SPG-9 round before nuclear detonation begins.

Now, if you don't know these details, or you're willing to ignore them (as I was), they don't in any way detract from what is a gripping story. There's no question that a small group of terrorists who came into possession of a sufficient quantity of highly enriched uranium could construct a simple gun bomb which would have a high probability of working on the first try. It's just that the scenario in the novel doesn't explain how they obtained a sufficient quantity, nor does it describe a weapon design which is likely to work.

Spoilers end here.  

 Permalink

Ciszek, Walter J. with Daniel L. Flaherty. He Leadeth Me. San Francisco: Ignatius Press, [1973] 1995. ISBN 978-0-89870-546-1.
Shortly after joining the Jesuit order in 1928, the author volunteered for the “Russian missions” proclaimed by Pope Pius XI. Consequently, he received most of his training at a newly-established centre in Rome, where in addition to the usual preparation for the Jesuit priesthood, he mastered the Russian language and the sacraments of the Byzantine rite in addition to those of the Latin. At the time of his ordination in 1937, Stalin's policy prohibited the entry of priests of all kinds to the Soviet Union, so Ciszek was assigned to a Jesuit mission in eastern Poland (as the Polish-American son of first-generation immigrants, he was acquainted with the Polish language). When Germany and the Soviet Union invaded Poland in 1939 at the outbreak of what was to become World War II, he found himself in the Soviet-occupied region and subject to increasingly stringent curbs on religious activities imposed by the Soviet occupation.

The Soviets began to recruit labour brigades in Poland to work in factories and camps in the Urals, and the author and another priest from the mission decided to volunteer for one of these brigades, concealing their identity as priests, so as to continue their ministry to the Polish labourers and the ultimate goal of embarking on their intended mission to Russia. Upon arriving at a lumbering camp, the incognito priests found that the incessant, backbreaking work and intense scrutiny by the camp bosses made it impossible to minister to the other labourers.

When Hitler double crossed Stalin and invaded the Soviet Union in 1941, the Red Army was initially in disarray and Stalin apparently paralysed, but the NKVD (later to become the KGB) did what it has always done best with great efficiency: Ciszek, along with hundreds of other innocents, was rounded up as a “German spy” and thrown in prison. When it was discovered that he was, in fact, a Catholic priest, the charge was changed to “Vatican spy”, and he was sent to the Lubyanka, where he was held throughout the entire war—five years—most of it in solitary confinement, and subjected to the relentless, incessant, and brutal interrogations for which the NKVD never seemed to lack resources even as the Soviet Union was fighting for its survival.

After refusing to be recruited as a spy, he was sentenced to 15 years hard labour in Siberia and shipped in a boxcar filled with hardened criminals to the first of a series of camps where only the strongest in body and spirit could survive. He served the entire 15 years less only three months, and was then released with a restricted internal passport which only permitted him to live in specific areas and required him to register with the police everywhere he went. In 1947, the Jesuit order listed him as dead in a Soviet prison, but he remained on the books of the KGB, and in 1963 was offered as an exchange to the U.S. for two Soviet spies in U.S. custody, and arrived back in the U.S. after twenty-three years in the Soviet Union.

In this book, as in his earlier With God in Russia, he recounts the events of his extraordinary life and provides a first-hand look at the darkest parts of a totalitarian society. Unlike the earlier book, which is more biographical, in the present volume the author uses the events he experienced as the point of departure for a very Jesuit exploration of topics including the body and soul, the priesthood, the apostolate, the kingdom of God on Earth, humility, and faith. He begins the chapter on the fear of death by observing, “Facing a firing squad is a pretty good test, I guess, of your theology of death” (p. 143).

As he notes in the Epilogue, on the innumerable occasions he was asked, after his return to the U.S., “How did you manage to survive?” and replied along the lines explained herein: by consigning his destiny to the will of God and accepting whatever came as God's will for him, many responded that “my beliefs in this matter are too simple, even naïve; they may find that my faith is not only childlike but childish.” To this he replies, “I am sorry if they feel this way, but I have written only what I know and what I have experienced. … My answer has always been—and can only be—that I survived on the basis of the faith others may find too simple and naïve” (p. 199).

Indeed, to this reader, it seemed that Ciszek's ongoing discovery that fulfillment and internal peace lay in complete submission to the will of God as revealed in the events one faces from day to day sometimes verged upon a fatalism I associate more with Islam than Catholicism. But this is the philosophy developed by an initially proud and ambitious man which permitted him not only to survive the almost unimaginable, but to achieve, to some extent, his mission to bring the word of God to those living in the officially atheist Soviet Union.

A more detailed biography with several photographs of Father Ciszek is available. Since 1990, he has been a candidate for beatification and sainthood.

 Permalink

Lauer, Heather. Bacon: A Love Story. New York: William Morrow, 2009. ISBN 978-0-06-170428-4.
The author, who operates the Bacon Unwrapped Web site, just loves bacon. But who doesn't? I've often thought that a principal reason the Middle East produces so much more trouble than it consumes is that almost nobody there ever mellows out in that salty, fat-metabolising haze of having consumed a plate-full of The Best Meat Ever.

Bacon (and other salt-cured pork products) has been produced for millennia, and the process (which is easy to do at home and explained here, if you're so inclined) is simple. And yet the result is so yummy that there are innumerable ways to use this meat in all kinds of meals. This book traces the history of bacon, its use in the cuisine of cultures around the world, and its recent breakout from breakfast food to a gourmet item in main courses and even dessert.

The author is an enthusiast, and her passion is echoed in the prose. But what would be amusing in an essay comes across as a bit too precious and tedious in a 200 page book—how many times do we need to be reminded that bacon is The Best Meat Ever? There are numerous recipes for baconlicious treats you might not have ever imagined. I'm looking forward to trying the macaroni and blue cheese with bacon from p. 153. I'm not so sure about the bacon peanut brittle or the bacon candy floss. Still, the concept of bacon as candy (after all, bacon has been called “meat candy”) has its appeal: one customer's reaction upon tasting a maple bacon lollipop was “Jesus got my letter!” For those who follow Moses, there's no longer a need to forgo the joys of bacon: thanks to the miracles of twenty-first century chemistry, 100% kosher Bacon Salt (in a rainbow of flavours) aims to accomplish its mission statement: “Everything should taste like bacon.” Try it on popcorn—trust me.

If you're looking for criticism of the irrational love of bacon, you've come to the wrong place. I don't eat a lot of bacon myself—when you only have about 2000 calories a day to work with, there's only a limited amount of porky ambrosia you can admit into your menu plan. This is a superb book which will motivate you to explore other ways to incorporate preserved pork bellies into your diet, and if that isn't happiness, what is? You will learn a great deal here about the history of pork products: now I finally understand the distinction between bacon, pancetta, and prosciutto.

Bacon lovers should be sure to bookmark The Bacon Show, a Web site which promises “One bacon recipe per day, every day, forever” and has been delivering just that for more than four years.

 Permalink

June 2009

Flynn, Vince. The Third Option. New York: Pocket Books, 2000. ISBN 978-0-671-04732-0.
This is the second novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. Unlike the previous episode, Transfer of Power (April 2009), which involved a high-profile terrorist strike, this is much more of a grudge match conducted in the shadows, with Rapp as much prey as hunter and uncertain of whom he can trust. Flynn demonstrates he can pull off this kind of ambiguous espionage story as well as the flash-bang variety, and, while closing the present story in a satisfying way, sets the stage for the next round of intrigue without resorting to a cliffhanger.

Rapp's character becomes increasingly complex as the saga unfolds, and while often conflicted he is mission-oriented and has no difficulty understanding his job description. Here he's reluctantly describing it to a congressman who has insisted he be taken into confidence (p. 296):

“… I'm what you might call a counterterrorism specialist.”

“Okay … and what, may I ask, does a counterterrorism specialist do?”

Rapp was not well versed in trying to spin what he did, so he just blurted out the hard, cold truth. “I kill terrorists.”

“Say again?”

“I hunt them down, and I kill them.”

No nuance for Mr. Mitch!

This is a superbly crafted thriller which will make you hunger for the next. Fortunately, there are seven sequels already published and more on the way. See my comments on the first installment for additional details and a link to an interview with the author. The montage on the cover of the paperback edition I read uses a biohazard sign (☣) as its background—I have no idea why—neither disease nor biological weapons figure in the story in any way. Yes, I've been reading a lot of thrillers recently—summer's comin' and 'tis the season for light and breezy reading. I'll reserve Quantum Field Theory in a Nutshell for the dwindling daylight of autumn, if you don't mind.

 Permalink

[Audiobook] Twain, Mark [Samuel Langhorne Clemens]. Adventures of Huckleberry Finn. (Audiobook, Unabridged). Auburn, CA: Audio Partners, [1884] 1999. ISBN 978-0-393-02039-7.
If you've read an abridged or bowdlerised edition of this timeless classic as a child or been deprived of it due to its being deemed politically incorrect by the hacks and morons in charge of education in recent decades, this audiobook is a superb way (better in some ways than a print edition) to appreciate the genius of one of the greatest storytellers of all time. This is not your typical narration of a print novel. Voice actor Patrick Fraley assumes a different pitch, timbre, and dialect for each of the characters, making this a performance, not a reading; his wry, ironic tone for Huck's first person narration is spot on.

I, like many readers (among them Ernest Hemingway), found the last part of the book set on the Phelps farm less satisfying than the earlier story, but so great is Mark Twain's genius that, by themselves, these chapters would be a masterwork of the imagination of childhood.

The audio programme is distributed in two files, running 11 hours and 17 minutes, with original music between the chapters and plot interludes. An Audio CD edition is available. If you're looking for a print edition, this is the one to get; it can also serve as an excellent resource to consult as you're listening to the audiobook.

 Permalink

Caplan, Bryan. The Myth of the Rational Voter. Princeton: Princeton University Press, 2007. ISBN 978-0-691-13873-2.
Every survey of the electorate in Western democracies shows it to be woefully uninformed: few can name their elected representatives or identify their party affiliation, nor answer the most basic questions about the political system under which they live. Economists and political scientists attribute this to “rational ignorance”: since there is a vanishingly small probability that the vote of a single person will be decisive, it is rational for that individual to ignore the complexities of the issues and candidates and embrace the cluelessness which these polls make manifest.

But, the experts contend, there's no problem—even if a large majority of the electorate consists of ignorant knuckle-walkers, it doesn't matter because they'll essentially vote at random. Their uninformed choices will cancel out, and the small informed minority will be decisive. Hence the “miracle of aggregation”: stir in millions of ignoramuses and thousands of political junkies and diligent citizens and out pops true wisdom.
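The aggregation argument, and the failure mode examined in the book, are easy to see in a toy simulation (the function, the 10% informed share, and the bias figures below are my own illustrative assumptions, not from the book): random uninformed errors cancel and the informed minority decides, but a bias shared across the uninformed does not cancel.

```python
import random

def vote_share_a(n_voters=100_000, informed_share=0.10,
                 p_uninformed=0.5, seed=0):
    """Fraction of votes for candidate A: informed voters always pick A,
    uninformed voters pick A with probability p_uninformed."""
    rng = random.Random(seed)
    n_informed = int(n_voters * informed_share)
    votes_a = n_informed  # the informed bloc votes A unanimously
    votes_a += sum(rng.random() < p_uninformed
                   for _ in range(n_voters - n_informed))
    return votes_a / n_voters

# Random errors cancel: the 10% informed minority carries the day.
print(vote_share_a(p_uninformed=0.5))   # about 0.55
# A shared systematic bias does not cancel: it swamps the minority.
print(vote_share_a(p_uninformed=0.4))   # about 0.46
```

The contrast between the two calls is the crux of the argument: aggregation rescues democracy from random ignorance, but not from biases shared by a large portion of the electorate.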

Or maybe not—this book looks beyond the miracle of aggregation, which assumes that the errors of the uninformed are random, to examine whether there are systematic errors (or biases) among the general population which cause democracies to choose policies which are ultimately detrimental to the well-being of the electorate. The author identifies four specific biases in the field of economics and documents, through a detailed analysis of the Survey of Americans and Economists on the Economy, that economists, reputed to always disagree amongst themselves, are in fact almost unanimous on the issues Thomas Sowell terms Basic Economics (September 2008), yet widely at variance from the views of the general public and the representatives they elect.

Many economists assume that the electorate votes what economists call its “rational choice”, yet empirical data presented here shows that democratic electorates behave very differently. The key insight is that choice in an election is not a preference in a market, where the choice directly affects the purchaser, but rather an allocation in a commons, where the consequences of an individual vote have negligible results upon the voter who casts it. And we all know how commons inevitably end.

The individual voter in a large democratic polity bears a vanishingly small cost in voting their ideology or beliefs, even if they are ultimately damaging to their own well-being, because the probability their own single vote will decide the election is infinitesimal. As a result, the voter is liberated to vote based upon totally irrational beliefs, based upon biases shared by a large portion of the electorate, insulated by the thought, “At least my vote won't decide the election, and I can feel good for having cast it this way”.

You might think that voters would be restrained from indulging their feel-good inclinations by considering their self interest, but studies of voter behaviour and the preferences of subgroups of voters demonstrate that in most circumstances voters support policies and candidates they believe are best for the polity as a whole, not their narrow self interest. Now, this would be a good thing if their beliefs were correct, but at least in the field of economics, they aren't, as defined by the near-unanimous consensus of professional economists. This means that there is a large, consistent, systematic bias in policies preferred by the uninformed electorate, whose numbers dwarf the small fraction who comprehend the issues in contention. And since, once again, there is no cost to an individual voter in expressing his or her erroneous beliefs, the voter can be “rationally irrational”: the possibility of one vote being decisive vanishes next to the cost of becoming informed on the issues, so it is rational to unknowingly vote irrationally. The reason democracies so often pursue irrational policies such as protectionism is not unresponsive politicians or influence of special interests, but instead politicians giving the electorate what it votes for, which is regrettably ultimately detrimental to its own self-interest.

Although the discussion here is largely confined to economic issues, there is no reason to believe that this inherent failure of democratic governance is confined to that arena. Indeed, one need only peruse the daily news to see abundant evidence of democracies committing folly with the broad approbation of their citizenry. (Run off a cliff? Yes, we can!) The author contends that rational irrationality among the electorate is an argument for restricting the scope of government and devolving responsibilities it presently undertakes to market mechanisms. In doing so, the citizen becomes a consumer in a competitive market and now has an individual incentive to make an informed choice because the consequences of that choice will be felt directly by the person making it. Naturally, as you'd expect with an irrational electorate, things seem to have been going in precisely the opposite direction for much of the last century.

This is an excellently argued and exhaustively documented book (the ratio of pages of source citations and end notes to main text may be as great as in anything I've read) which will make you look at democracy in a different way and begin to comprehend that in many cases where politicians do stupid things, they are simply carrying out the will of an irrational electorate. For a different perspective on the shortcomings of democracy, also with a primary focus on economics, see Hans-Hermann Hoppe's superb Democracy: The God that Failed (June 2002), which approaches the topic from a hard libertarian perspective.

 Permalink

Dickson, Paul. The Unwritten Rules of Baseball. New York: HarperCollins, 2009. ISBN 978-0-06-156105-4.
Baseball is as much a culture as a game, and a great deal of the way it is played, managed, umpired, reported, and supported by fans is not written down in the official rulebook but rather a body of unwritten rules, customs, traditions, and taboos which, when violated, can often bring down opprobrium upon the offender greater than that of a rulebook infraction. Some egregious offences against the unwritten rules are, as documented here, remembered many decades later and seen as the key event in a player's career. In this little book (just 256 pages) the author collects and codifies in a semi-formal style (complete with three level item numbers) the unwritten rules for players, managers, umpires, the official scorer, fans, and media. For example, under “players”, rule 1.12.1 is “As a pitcher, always walk off the field at the end of an inning; for all other players, the rule is run on, run off the field”. I've been watching baseball for half a century and I'll be darned to heck if I ever noticed that—nor ever recall seeing it violated. There is an extensive discussion of the etiquette of deliberately throwing at the batter: the art of the beanball seems as formalised as a Japanese tea ceremony.

The second half of the book is a collection of aphorisms, rules of thumb, and customs organised alphabetically. In both this section and the enumerated rules, discussions of notable occasions where the rule was violated and the consequences are included. Three appendices provide other compilations of unwritten rules, including one for Japanese major leaguers.

Many of these rules will be well known to fans, but others provide little-known insight into the game. For example, did you know that hitters on a team will rarely tell a pitcher on their own team that he has a “tell” which indicates which pitch he's about to throw? This book explains the logic behind that seemingly perverse practice. I also loved the observation that the quality of books about a sport is inversely related to the size of the ball. Baseball fans, including this one who hasn't seen a game either live or televised for more than a decade, will find this book both a delight and enlightening.

 Permalink

Clancy, Tom and Steve Pieczenik. Net Force. New York: Berkley, 1999. ISBN 978-0-425-16172-2.
One of the riskiest of marketing strategies is that of “brand expansion”: you have a hugely successful product whose brand name is near-universally known and conveys an image of quality, customer satisfaction, and market leadership. But there's a problem—the very success of the brand has led to its saturating the market, either by commanding a dominant market share or by an inability to produce additional volume. A vendor in such a position may opt to try to “expand” the brand, leveraging its name recognition by applying it to other products, for example a budget line aimed at less well-heeled customers, a line of products related to the original (Watermelon-Mango Coke), or a completely unrelated product (Volvo dog food). This sometimes works, and works well, but more often it fails at a great cost not only to the new product (but then a large majority of all new products fail, including those of the largest companies with the most extensive market research capabilities), but also to the value of the original brand. If a brand which has become almost synonymous with its product category (Coke, Xerox, Band-Aid) becomes seen as a marketing gimmick cynically applied to induce consumers to buy products which have not earned and are not worthy of the reputation of the original brand, both the value of that brand and the estimation of its owner fall in the eyes of potential customers.

Tom Clancy, who in the 1980s and 1990s was the undisputed master of the techno/political/military thriller, embarked upon his own program of brand expansion, lending his name to several series of books and video games written by others and marketed under his name, leading the naïve reader to believe they were Clancy's work or at least done under his supervision and comparable to the standard of his own fiction. For example, the present book, first in the “Net Force” series, bears the complete title Tom Clancy's Net Force, an above-the-title blurb, “From the #1 New York Times Bestselling Author”, and the byline, “Created by Tom Clancy and Steve Pieczenik”. “Created”, eh…but who actually, you know, wrote the book? Well, that would be a gentleman named Steve Perry, whose name appears in the Acknowledgments in the sentence, “We'd like to thank Steve Perry for his creative ideas and his invaluable contributions to the preparation of the manuscript.” Well yes, I suppose writing it is, indeed, an invaluable contribution to the preparation of a manuscript!

Regardless of how a novel is branded, marketed, or produced, however, the measure of its merit is what's between the covers. So how does this book measure up to the standard of Clancy's own work? I bought this book when it first came out in 1999 as an “airplane book”, but never got around to reading it. I was aware of the nature of this book at the time, having read one of the similarly-produced “Op-Center” novels, so my expectations were not high, but then neither is the level of cognition I expect to devote to a book read on an airplane, even in the pre-2001 era when air travel was not the Hell of torture, extortion, and humiliation it has become today. Anyway, I read something else on that long-forgotten trip, and the present book sat on my shelf slowly yellowing around the edges until I was about to depart on a trip in June 2009. Whilst looking for an airplane book for this trip, I happened across it and, noting that it had been published almost exactly ten years before, was set in the year 2010, and focused upon the evolution of the Internet and human-computer interaction, I thought it would be amusing to compare the vision of Clancy et alii for the next decade to the actual world in which we're living.

Well, I read it—the whole thing, in fact, on the outbound leg of what was supposed to be a short trip—you know you're having a really bad airline experience when due to thunderstorms and fog you end up in a different country than the one on the ticket. My reaction? From the perspective of the present day, this is a very silly, stupid, and poorly written novel. But the greater problem is that from the perspective of 1999 this is a very silly, stupid, and poorly written novel. The technology of the 2010 in the story is not only grossly different from what we have at present, it doesn't make any sense at all to anybody with the most rudimentary knowledge of how computers, the Internet, or for that matter human beings behave. It's as if the author(s) had some kind of half-baked idea of “cyberspace” as conceived by William Gibson and mixed it up with a too-literal interpretation of the phrase “information superhighway”, ending up with car and motorcycle chases where virtual vehicles are careening down the fibrebahn dodging lumbering 18-wheeled packets of bulk data. I'm not making this up—the author(s) are (p. 247), and asking you to believe it!

The need for suspension of disbelief is not suspended from the first page to the last, and the price seems to ratchet up with every chapter. At the outset, we are asked to believe that by “gearing up” with a holographic VR (virtual reality) visor, an individual not only sees three dimensional real time imagery with the full fidelity of human vision, but also experiences touch, temperature, humidity, smell, and acceleration. Now how precisely does that work, particularly the last which appears to be at variance with some work by Professor Einstein? Oh, and this VR gear is available at an affordable price to all computer users, including high school kids in their bedrooms, and individuals can easily create their own virtual reality environments with some simple programming. There is techno-babble enough here for another dozen seasons of “24”. On p. 349, in the 38th of 40 chapters, and completely unrelated to the plot, we learn “The systems were also ugly-looking—lean-mean-GI-green—but when it came to this kind of hardware, pretty was as pretty did. These were state-of-the-art 900 MHz machines, with the new FireEye bioneuro chips, massive amounts of fiberlight memory, and fourteen hours of active battery power if the local plugs didn't work.” 900 MHz—imagine! (There are many even more egregious examples, but I'll leave it at this in the interest of brevity and so as not to induce nausea.)

But that's not all! Teenage super-hackers, naturally, speak in their own dialect, like (p. 140):

“Hey, Jimmy Joe. How's the flow?”
“Dee eff eff, Tyrone.” This stood for DFF—data flowin' fine.
“Listen, I talked to Jay Gee. He needs our help.”
“Nopraw,” Tyrone said. “Somebody is poppin' strands.”
“Tell me somethin' I don't compro, bro. Somebody is always poppin' strands.”
“Yeah, affirm, but this is different. There's a C-1 grammer [sic] looking to rass the whole web.”
“Nofeek?”
“Nofeek.”

If you want to warm up your suspension of disbelief to take on this twaddle, imagine Tom Clancy voluntarily lending his name and reputation to it. And, hey, if you like this kind of stuff, there are nine more books in the series to read!

 Permalink

Verne, Jules. Le Château des Carpathes. Paris: Poche, [1892] 1976. ISBN 978-2-253-01329-7.
This is one of Jules Verne's later novels, originally published in 1892, and is considered “minor Verne”, which is to say it's superior to about 95% of science and adventure fiction by other authors. Five years before Bram Stoker penned Dracula, Verne takes us to a looming, gloomy, and abandoned (or is it?) castle on a Carpathian peak in Transylvania, to which the superstitious residents of nearby villages attribute all kinds of supernatural goings on. Verne is clearly having fun with the reader in this book, which reads like a mystery, but what is mysterious is not whodunit, but rather what genre of book you're reading: is it a ghost story, tale of the supernatural, love triangle, mad scientist yarn, or something else? Verne manages to keep all of these balls in the air until the last thirty pages or so, when all is revealed and resolved. It's plenty of fun getting there, as the narrative is rich with the lush descriptive prose and expansive vocabulary for which Verne is renowned. It wouldn't be a Jules Verne novel without at least one stunning throwaway prediction of future technology; here it's the video telephone, to which he gives the delightful name “téléphote”.

A public domain electronic text edition is available from Project Gutenberg in a variety of formats. A (pricey) English translation is available. I have not read it and cannot vouch for its faithfulness to Verne's text.

 Permalink

Leeson, Peter T. The Invisible Hook. Princeton: Princeton University Press, 2009. ISBN 978-0-691-13747-6.
(Guest review by Iron Jack Rackham)
Avast, ye scurvy sea-dogs! Here we gentlemen of profit have crafted our swashbuckling customs to terrify those we prey upon, and now along comes a doubly-damned economist, and a landlubber at that, to explain how our curious ways arise from our own self-interest and lust for booty. Why do we who sail under the skull and crossbones democratically elect our captains and quartermasters: one pirate, one vote? Why do all pirates on the crew share equally in the plunder? Why do so many sailors voluntarily join pirate crews? Why do we pay “workman's compensation” to pirates wounded in battle? Why did the pirate constitutions that govern our ships embody separation of powers long before landlubber governments twigged to the idea? Why do we hoist the Jolly Roger and identify ourselves as pirates when closing with our prey? Why do we torture and/or slay those who resist, yet rarely harm crews which surrender without a fight? Why do our ships welcome buccaneers of all races as free men on an equal basis, even when “legitimate” vessels traded in and used black slaves and their governments tolerated chattel slavery?

This economist would have you believe it isn't our outlaw culture that makes us behave that way, but rather that our own rational choice, driven by our righteous thirst for treasure chests bulging with jewels, gold, and pieces of eight leads us, as if by an invisible hook, to cooperate toward our common goals. And because we're hostis humani generis, we need no foul, coercive governments to impose this governance upon us: it's our own voluntary association which imposes the order we need to achieve our highly profitable plunder—the author calls it “an-arrgh-chy”, and it works for us. What's that? A sail on the horizon? To yer' posts, me hearties, and hoist the Jolly Roger, we're off a-piratin'!

Thank you, Iron Jack—a few more remarks, if I may…there's a lot more in this slim volume (211 pages of main text): the Jolly Roger as one of the greatest brands of all time, lessons from pirates for contemporary corporate managers, debunking of several postmodern myths such as pirates having been predominately homosexual (“swishbucklers”), an examination of how pirates established the defence in case of capture that they had been compelled to join the pirate crew, and an analysis of how changes in Admiralty law shifted the incentives and brought the golden age of piracy to an end in the 1720s.

Exists there a person whose inner child is not fascinated by pirates? This book demonstrates why pirates also appeal to one's inner anarcho-libertarian, while giving pause to those who believe that market forces, unconstrained by a code of morality, always produce good outcomes.

A podcast interview with the author is available.

 Permalink

July 2009

Swanson, Gerald. The Hyperinflation Survival Guide. Lake Oswego, OR: Eric Englund, 1989. ISBN 978-0-9741180-1-7.
In the 1980s, Harry E. Figgie, founder of Figgie International, became concerned that the then-unprecedented deficits, national debt, and trade imbalance might lead to a recurrence of inflation, eventually spiralling into catastrophic hyperinflation (defined in 1956 by economist Phillip Cagan as a 50% or more average rise in prices per month, equivalent to an annual inflation rate of 12,875% or above). While there are a number of books on how individuals and investors can best protect themselves during an inflationary episode, Figgie found almost no guidance for business owners and managers on strategies to enable their enterprises to survive and make the best of the chaotic situation which hyperinflation creates.
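Cagan's monthly threshold compounds to the annual figure quoted above; a quick check (the calculation is mine, a straightforward compounding of twelve monthly rises):

```python
# Cagan's 1956 hyperinflation threshold: prices rising 50% per month.
monthly_rise = 0.50
annual_rise = (1 + monthly_rise) ** 12 - 1  # twelve compounded monthly rises
print(f"{annual_rise * 100:,.0f}%")  # 12,875%, the figure cited above
```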

To remedy this lacuna, Figgie assembled a three person team headed by the author, an economist at the University of Arizona, and dispatched them to South America, where on four visits over two years, they interviewed eighty business leaders and managers, bankers, and accounting professionals in Argentina, Bolivia, and Brazil, all of which were in the grip of calamitous inflation at the time, to discover how they managed to survive and cope with the challenge of prices which changed on a daily or even more frequent basis. This short book (or long pamphlet—it's less than 100 pages all-up) is the result.

The inflation which Figgie feared for the 1990s did not come to pass, but the wisdom Swanson and his colleagues collect here is applicable to any epoch of runaway inflation, wherever in the world and whenever it may eventuate. With money creation and debt today surpassing anything in the human experience, and the world's reserve currency being supported only by the willingness of other nations to lend to the United States, one certainly cannot rule out hyperinflation as a possible consequence when all of this paper money works its way through the economy and starts to bid up prices. Consequently, any business owner would be well advised to invest the modest time it takes to read this book and ponder how the advice herein, not based upon academic theorising but rather the actual experience of managers in countries suffering hyperinflation and whose enterprises managed to survive it, could be applied to the circumstances of their own business.

If you didn't live through, or have forgotten, the relatively mild (by these standards) inflation of the 1970s, this book drives home how fundamentally corrupting inflation is. Inflation is, after all, nothing other than the corruption by a national government of the currency it issues, and this corruption sullies everybody who transacts in that currency. Long term business planning goes out the window: “long term” comes to mean a week or two and “short term” today. Sound business practices such as minimising inventory and just-in-time manufacturing become suicidal when inventory appreciates more rapidly than money placed at interest. Management controls and the chain of command evaporate as purchasing managers must be delegated the authority to make verbal deals on the spot, paid in cash, to obtain the supplies the company needs at prices that won't bankrupt it. If wage and price controls are imposed by the government (as they always are, despite forty centuries of evidence they never work), more and more management resources must be diverted to gaming the system to retain the workforce and sell products at a profitable price. Previously mundane areas of the business (purchasing and treasury) become central to the firm's survival, and speculation in raw materials and financial assets may become more profitable than the actual operations of the company. Finally (and the book dances around this a bit without ever saying it quite so baldly as I shall here), there's the flat-out corruption when the only option a business has to keep its doors open and its workers employed may be to buy or sell on the black market, evade wage and price controls by off-the-books transactions, and grease the skids of government agencies with bulging envelopes of rapidly depreciating currency passed under the table to their functionaries.
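The point about inventory becoming preferable to cash is straightforward arithmetic; a sketch with hypothetical rates (both numbers are assumptions of mine for illustration, not drawn from the book):

```python
monthly_inflation = 0.50  # hypothetical hyperinflation: prices up 50%/month
monthly_interest = 0.20   # hypothetical bank rate lagging far behind prices

# Real (inflation-adjusted) value of one unit of cash after a month at interest:
cash_real = (1 + monthly_interest) / (1 + monthly_inflation)
# Inventory roughly tracks prices, so its real value stays near 1.0,
# while cash at interest is steadily eroded.
print(f"cash at interest keeps {cash_real:.0%} of its purchasing power")
```

Under these assumed rates cash retains only 80% of its purchasing power each month, which is why hoarding inventory, ordinarily a cardinal sin of management, becomes rational.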

Any senior manager, from the owner of a small business to the CEO of a multinational, who deems hyperinflation a possible outcome of the current financial turbulence, would be well advised to read this book. Although published twenty years ago, the pathology of inflation is perennial, and none of the advice is dated in any way. Indeed, as businesses have downsized, outsourced, and become more dependent upon suppliers around the globe, they are increasingly vulnerable to inflation of their home country currency. I'll wager almost every CEO who spends the time to read this book will spend the money to buy copies for all of his direct reports.

When this book was originally published by Figgie International, permission to republish any part or the entire book was granted to anybody as long as the original attribution was retained. If you look around on the Web, you'll find several copies of this book in various formats, none of which I'd consider ideal, but which at least permit you to sample the contents before ordering a print edition.

 Permalink

Keegan, John. The Face of Battle. New York: Penguin, 1976. ISBN 978-0-14-004897-1.
As the author, a distinguished military historian, observes in the extended introduction, the topic of much of military history is battles, but only rarely do historians delve into the experience of battle itself—instead they treat the chaotic and sanguinary events on the battlefield as a kind of choreography or chess game, with commanders moving pieces on a board. But what do those pieces, living human beings in the killing zone, actually endure in battle? What motivates them to advance in the face of the enemy or, on the other hand, turn and run away? What do they see and hear? What wounds do they suffer, what are their most common causes, and how are the wounded treated during and after the battle? How do the various military specialities (infantry, cavalry, artillery, and armour) combat one another, and how can they be used together to achieve victory?

To answer these questions, the author examines three epic battles of their respective ages: Agincourt, Waterloo, and the first day of the Somme Offensive. Each battle is described in painstaking detail, not from the viewpoint of the commanders, but of the combatants on the field. Modern analysis of the weapons employed and the injuries they inflict is used to reconstruct the casualties suffered and their consequences for the victims. Although spanning almost five centuries, all of these battles took place in northwest Europe between European armies, which allows holding cultural influences constant (although, of course, they evolved over time) as the expansion of state authority and technology increased the size and lethality of the battlefield by orders of magnitude. (Henry's entire army at Agincourt numbered fewer than 6,000 and suffered 112 deaths during the battle, while on the first day of the Somme, British forces alone lost 57,470 men, with 19,240 killed.)

The experiences of some combatants in these set piece battles are so alien to normal human life that it is difficult to imagine how they were endured. Consider the Inniskilling Regiment, which arrived at Waterloo after the battle was already underway. Ordered by Wellington to occupy a position in the line, they stood there in static formation for four hours, while receiving cannon fire from French artillery several hundred yards away. During those hours, 450 of the regiment's 750 officers and men were killed or wounded, including 17 of the 18 officers. The same regiment, a century later, suffered devastating losses in a futile assault on the first day of the Somme.

Battles are decided when the intolerable becomes truly unendurable, and armies dissolve into the crowds from which they were formed. The author examines this threshold in various circumstances, and what happens when it is crossed and cohesion is lost. In a concluding chapter he explores how modern mechanised warfare (recall that when this book was published the threat of a Soviet thrust into Western Europe with tanks and tactical nuclear weapons was taken with deadly seriousness by NATO strategists) may have so isolated the combatants from one another and subjected them to such a level of lethality that armies might disintegrate within days of the outbreak of hostilities. Fortunately, we never got to see whether this was correct, and hopefully we never will.

I read the Kindle edition using the iPhone Kindle application. It appears to have been created by OCR scanning a printed copy of the book and passing it through a spelling checker, but with no further editing. Unsurprisingly, the errors one is accustomed to in scanned documents abound. The word “modern”, for example, appears more than a dozen times as “modem”. Now I suppose cybercommand does engage in “modem warfare”, but this is not what the author means to say. The Kindle edition costs only a dollar less than the paperback print edition, and such slapdash production values are unworthy of a publisher with the reputation of Penguin.

 Permalink

O'Rourke, P. J. Driving Like Crazy. New York: Atlantic Monthly Press, 2009. ISBN 978-0-8021-1883-7.
Sex, drugs, fast cars, crazed drivers, vehicular mayhem spanning the globe from Manhattan to Kyrgyzstan, and vehicles to die for (or in) ranging from Fangio's 1939 Chevrolet racer to a six-wheel-drive Soviet Zil truck—what's not to like! Humorist and eternally young speed demon P. J. O'Rourke recounts the adventures of his reckless youth and (mostly) wreckless present from the perspective of someone who once owned a 1960 MGA (disclaimer: I once owned a 1966 MGB I named “Crunderthush”—Keith Laumer fans will understand why) and, decades later, actually, seriously contemplated buying a minivan (got better).

This collection of O'Rourke's automotive journalism has been extensively edited to remove irrelevant details and place each piece in context. His retrospective on the classic National Lampoon piece (included here) whose title is a bit too edgy for our family audience is worth the price of purchase all by itself. Ever wanted to drive across the Indian subcontinent flat-out? The account here will help you avoid that particular resolution of your mid-life crisis. (Hint: think “end of life crisis”—Whoa!)

You don't need to be a gearhead to enjoy this book. O'Rourke isn't remotely a gearhead himself: he just likes to drive fast on insane roads in marvellous machinery, and even if your own preference is to experience such joys vicariously, there are plenty of white knuckle road trips and great flatbeds full of laughs in this delightful read.

A podcast interview with the author is available.

 Permalink

Maymin, Zak. Publicani. Scotts Valley, CA: CreateSpace, 2008. ISBN 978-1-4382-2123-6.
I bought this book based on its being mentioned on a weblog as a mix of Atlas Shrugged and “Harrison Bergeron”, and on the mostly positive reviews on Amazon. Since both of those very different stories contributed powerfully to my present worldview, I was intrigued at what a synthesis of them might be like, so I decided to give this very short (just 218 pages in the print edition) novel a read.

Jerry Pournelle has written that aspiring novelists need to write at least a million words and throw them away before truly mastering their craft. I know nothing of the present author, but I suspect he hasn't yet reached that megaword milestone. There is promise here, and some compelling scenes and dialogue, but there is also the tendency to try to do too much in too few pages, and a chaotic sense of timing where you're never sure how much time has elapsed between events and how so much could occur on one timeline while another seems barely to have advanced. This is a story which could have been much better with the attention of an experienced editor, but in our outsourced, just-in-time, disintermediated economy, evidently didn't receive it, and hence the result is ultimately disappointing.

The potential of this story is great: a metaphorical exploration of the modern redistributive coercive state through a dystopia in which the “excess intelligence” of those favoured by birth is redistributed to the government elites most in need of it for “the good of society”. (Because, as has always been the case, politicians tend to be underendowed when it comes to intelligence.) Those subjected to the “redistribution” of their intelligence rebel, claiming “I own myself”—the single most liberating statement a free human can hurl against the enslaving state. And the acute reader comes to see how any redistribution is ultimately a forced taking of the mind, body, or labour of one person for the benefit of another who did not earn it: compassion at the point of a gun—the signature of the modern state.

Unfortunately, this crystal clear message is largely lost among all of the other stuff the author tries to cram in. There's Jewish mysticism, the Kabbalah, an Essene secret society, the Russian Mafia, parapsychology, miraculous intervention, and guns with something called a “safety clip”, which I've never encountered on any of the myriad guns I've discharged downrange.

The basic premise of intelligence being some kind of neural energy fluid one can suck from one brain and transfer to another is kind of silly, but I'd have been willing to accept it as a metaphor for sucking out the life of the mind from the creators to benefit not the consumers (it's never that way), but rather the rulers and looters. And if this book had done that, I'd have considered it a worthy addition to the literature of liberty. But, puh–leez, don't drop in a paragraph like:

Suddenly, a fiery chariot drawn by fiery horses descended from the sky. Sarah was driving. Urim and Thummim were shining on her breastplate of judgment.

Look, I've been backed into corners in stories myself on many occasions, and every time the fiery chariot option appears the best way out, I've found it best to get a good night's sleep and have another go at it on the morrow. Perhaps you have to write and discard a million words before achieving that perspective.

 Permalink

MacKenzie, Andrew. Adventures in Time. London: Athlone Press, 1997. ISBN 978-0-485-82001-0.
You are taking a pleasant walk when suddenly, without apparent reason, an oppressive feeling of depression grips you. Everything seems unnaturally silent, and even the vegetation seems to have taken on different colours. You observe a house you've never noticed before when walking in the area and, a few minutes later, as you proceed, the depression lifts and everything seems as before. Later you mention what you've seen to a friend, who says she is absolutely certain nothing like the building you saw exists in the vicinity. You retrace your path and, although you're sure you came the same way as before, can find no trace of the house you so vividly remember having seen. At first you put it down as “just one of those things” and, not wishing to be deemed one of those people who “see things”, make no mention of it. But still, it itches in the back of your mind, and one day, at the library, you look up historical records (which you've never consulted before) and discover that two hundred years ago a house matching the one you saw stood on the site, of which no trace remains today.

What's going on here? Well, nobody really has any idea, but experiences like that just described (loosely based upon the case described on pp. 35–38), although among the rarest of those phenomena we throw into the grab-bag called “paranormal”, have been reported sufficiently frequently to have been given a name: “retrocognition”. This small (143 page) book collects a number of accounts of apparent retrocognition from the obscure to the celebrated “adventure” of Misses Moberly and Jourdain at Versailles in 1901 (to which all of chapter 4 is devoted), and reports on detailed investigations of several cases, some of which were found to be simple misperception. All of these cases are based solely upon the reports of those who experienced them (in some cases with multiple observers confirming one another's perceptions) so, as with much of anecdotal psychical research, there is no way to rule out fraud, malice, mental illness, or false memories (the latter a concern because many of these reports concern events which occurred many years earlier). Still, the credentials, reputation, and social position of the people making these reports, and the straightforward and articulate way they describe what they experienced inclines one to take them seriously, at least as to what those making the reports perceived.

The author, at the time a Vice President of the Society for Psychical Research, considers several possible explanations, normal and paranormal, for these extraordinary experiences. He quotes a number of physicists on the enigma of time and causation in physics, but never really crosses the threshold from the usual domain of ESP, hauntings, and “psychic ether” (p. 126) to consider the even weirder possibility that these observers were accurately describing (within the well-known limits of eyewitness testimony) what they actually saw. My incompletely baked general theory of paranormal phenomena (GTPP) provides (once you accept its outlandish [to some] premises) a perfectly straightforward mechanism for retrocognition. Recall that in GTPP consciousness is thought of as a “browser” which perceives spacetime as unfolding through one path in the multiverse which embodies all possibilities. GTPP posits that consciousness has a very small (probably linked to Planck's constant in some way) ability to navigate along its path in spacetime: we call people who are good at this “lucky”. But let's look at the past half-space of what I call the “life cone”. The quantum potentialities of the future, branching in all their myriad ways, are frozen in the crystalline classical block universe as they are squeezed through the throat of the light cone—as Dyson said, the future is quantum mechanical; the past is classical. But this isn't “eternalism” in the sense that the future is forever fixed and that we have an illusion of free will; it's that the future contains all possibilities, and that we have a small ability to navigate into the future branch we wish to explore. Our past is, however, fixed once it's moved into our past light and life cones.

But who's to say that consciousness, this magnificent instrument of perception we use to browse spacetime events in our immediate vicinity and at the moment of the present of our life cone, cannot also, on rare occasions, triggered by who knows what, also browse events in our past, or even on other branches of the multiverse which our own individual past did not traverse? (The latter, perhaps, explaining vivid reports of observations which subsequent investigation conclusively determined never existed in the past—on our timeline. Friar Ockham would probably put this down to hallucination or, in the argot, “seein' things”, and I don't disagree with this interpretation; it's the historically confirmed cases that make you wonder.)

This book sat on my shelf for more than a decade before I got around to reading it from cover to cover. It is now out of print, and used copies are absurdly expensive; if you're interested in such matters, the present volume is interesting, but I cannot recommend it at the price at which it's currently selling unless you've experienced such a singular event yourself and seek validation that you're not the only one who “sees things” where your consciousness seems to browse the crystalline past or paths not taken by you in the multiverse.

 Permalink

August 2009

Weber, Bruce. As They See 'Em. New York: Scribner, 2009. ISBN 978-0-7432-9411-9.
In what other game is a critical dimension of the playing field determined on the fly, based upon the judgement of a single person, not subject to contestation or review, and depending upon the physical characteristics of a player, not to mention (although none dare discuss it) the preferences of the arbiter? Well, that would be baseball, where the plate umpire is required to call balls and strikes (about 160 called in an average major league game, with an additional 127 in which the batter swung at the pitch). A fastball from a major league pitcher, if right down the centre, takes about 11 milliseconds to traverse the strike zone, so that's the interval the umpire has, in the best case, to call the pitch. But big league pitchers almost never throw pitches over the fat part of the plate for the excellent reason that almost all hitters who have made it to the Show will knock such a pitch out of the park. So umpires have to call an endless series of pitches that graze the corners of the invisible strike zone, curving, sinking, sliding, whilst making their way to the catcher's glove, which wily catchers will quickly shift to make outside and inside pitches appear to be over the plate.
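The “11 milliseconds” figure is simple arithmetic; here is a sketch of the calculation in Python. The 95 mph pitch speed and 17-inch front-to-back depth of home plate are my own assumed numbers, not figures taken from the book:

```python
# Rough arithmetic behind the "about 11 milliseconds" figure: the time a
# fastball takes to cross the front-to-back depth of home plate.
# The speed and plate depth below are assumptions, not the book's numbers.

MPH_TO_IN_PER_S = 5280 * 12 / 3600   # 1 mph = 17.6 inches per second

def crossing_time_ms(speed_mph: float, plate_depth_in: float = 17.0) -> float:
    """Milliseconds for a pitch at speed_mph to traverse the plate depth."""
    return plate_depth_in / (speed_mph * MPH_TO_IN_PER_S) * 1000.0

print(round(crossing_time_ms(95), 1))   # a 95 mph fastball: roughly 10 ms
```

Slower pitches buy the umpire a millisecond or two more, which is why the figure in the text is quoted as “about” 11 milliseconds.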

Major league umpiring is one of the most élite occupations in existence. At present, only sixty-eight people are full-time major league umpires and typically only one or two replacements are hired per year. Including minor leagues, there are fewer than 300 professional umpires working today, and since the inception of major league baseball, fewer than five hundred people have worked games as full-time umpires.

What's it like to pursue a career where if you do your job perfectly you're at best invisible, but if you make an error or, even worse, make a correct call that inflames the passion of the fans of the team it was made against, you're the subject of vilification and sometimes worse (what other sport has the equivalent of the cry from the stands, “Kill the umpire!”)? In this book, the author, a New York Times journalist, delves into the world of baseball umpiring, attending one of the two schools for professional umpires, following nascent umpires in their careers in the rather sordid circumstances of Single A ball (umpires have to drive from game to game on their own wheels—they get a mileage allowance, but that's all; often their accommodations qualify for my Sleazy Motel Roach Hammer Awards).

The author follows would-be umpires through school, the low minors, AA and AAA ball, and the bigs, all the way to veterans and the special pressures of the playoffs and the World Series. There are baseball anecdotes in abundance here: bad calls, high profile games where the umpire had to decide an impossible call, and the author's own experience behind the plate at an intersquad game in spring training where he first experienced the difference between play at the major league level and everything else—the clock runs faster. Relativity, dude—get used to it!

You think you know the rulebook? Fine—a runner is on third with no outs and the batter has a count of one ball and two strikes. The runner on third tries to steal home, and whilst sliding across the plate, is hit by the pitch, which is within the batter's strike zone. You make the call—50,000 fans and two irritable managers are waiting for you. What'll it be, ump? You have 150 milliseconds to make your call before the crowd starts to boo. (The answer is at the end of these remarks.) Bear in mind before you answer that any major league umpire gets this right 100% of the time—it's right there in the rulebook in section 6.05 (n).

Believers in “axiomatic baseball” may be dismayed at some of the discretion documented here by umpires who adjust the strike zone to “keep the game moving along” (encouraged by a “pace of game” metric used by their employer to rate them for advancement). I found the author's deliberately wrong call in a Little League blowout game (p. 113) reprehensible, but reasonable people may disagree.

As of January 2009, 289 people had been elected to the Baseball Hall of Fame. How many umpires? Exactly eight—can you name a single one? Umpires agree that they do their job best when they are not noticed, but there will be those close calls where their human judgement and perception make the difference, some of which may be, even in this age of instant replay, disputed for decades afterward. As one umpire said of a celebrated contentious call, “I saw what I saw, and I called what I saw”. The author concludes:

Baseball, I know, needs people who can not only make snap decisions but live with them, something most people will do only when there's no other choice. Come to think of it, the world in general needs people who accept responsibility so easily and so readily. We should be thankful for them.

Batter up!

Answer: The run scores, the batter is called out on strikes, and the ball is dead. Had there been two outs, the third strike would have ended the inning and the run would not have scored (p. 91).
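For fun, the outcome described in the answer can be sketched as a toy Python function. This is my own encoding of the single two-strike situation posed above, not the full text of rule 6.05 (n), which covers many more cases:

```python
def rule_605n(outs: int) -> dict:
    """Toy model of the quiz situation: a runner stealing home is struck by
    a pitch in the strike zone with two strikes already on the batter.
    Encodes only the outcome described in the book's answer."""
    if not 0 <= outs <= 2:
        raise ValueError("outs must be 0, 1, or 2")
    # The pitch is strike three and the ball is dead in every case; whether
    # the run counts depends on whether the strikeout makes the third out.
    return {
        "batter": "out on strikes",
        "ball": "dead",
        "run_scores": outs < 2,   # with two out, strike three ends the inning
    }
```

With no outs, `rule_605n(0)` reports the run scoring; with two outs, the strikeout retires the side and the run is erased.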

 Permalink

Flynn, Vince. Separation of Power. New York: Pocket Books, [2001] 2009. ISBN 978-1-4391-3573-0.
Golly, these books go down smoothly, and swiftly too! This is the third novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. It continues the “story arc” begun in the second novel, The Third Option (June 2009), and picks up just two weeks after the conclusion of that story. While not leaving the reader with a cliffhanger, that book left many things to be resolved, and this novel sorts them out, administering summary justice to the malefactors behind the scenes.

The subject matter seems drawn from current and recent headlines: North Korean nukes, “shock and awe” air strikes in Iraq, special forces missions to search for Saddam's weapons of mass destruction, and intrigue in the Middle East. What makes this exceptional is that this book was originally published in 2001—before! It holds up very well when read eight years later although, of course, subsequent events sadly didn't go the way the story envisaged.

There are a few goofs: copy editors relying on the spelling checker instead of close proofing allowed a couple of places where “sight” appeared where “site” was intended, and a few other homonym flubs. I'm also extremely dubious that weapons with the properties described would have been considered operational without having been tested. And the premise of the final raid seems a little more like a video game than the circumstances of the first two novels. As one who learnt a foreign language in adulthood, I can testify that it is extraordinarily difficult to speak without an obvious accent. Is it plausible that Mitch can impersonate a figure that the top-tier security troops guarding the bunker have seen on television many times?

Still, the story works, and it's a page turner. The character of Mitch Rapp continues to darken in this novel. He becomes ever more explicitly an assassin, and however much his targets “need killin'”, it's unsettling to think of agents of coercive government sent to eliminate people those in power deem inconvenient, and difficult to consider the eliminator a hero. But that isn't going to keep me from reading the next in the series in a month or two.

See my comments on the first installment for additional details about the series and a link to an interview with the author.

 Permalink

Drury, Bob and Tom Clavin. Halsey's Typhoon. New York: Grove Press, 2007. ISBN 978-0-8021-4337-2.
As Douglas MacArthur's forces struggled to expand the beachhead of their landing on the Philippine island of Mindoro on December 15, 1944, Admiral William “Bull” Halsey's Third Fleet was charged with providing round the clock air cover over Japanese airfields throughout the Philippines, both to protect against strikes being launched against MacArthur's troops and kamikaze attacks on his own fleet, which had been so devastating in the battle for Leyte Gulf three weeks earlier. After supporting the initial landings and providing cover thereafter, Halsey's ships, especially the destroyers, were low on fuel, and the admiral requested and received permission to withdraw for a rendezvous with an oiler task group to refuel.

Unbeknownst to anybody in the chain of command, this decision set the Third Fleet on a direct intercept course with the most violent part of an emerging Pacific (not so pacific, in this case) typhoon which was appropriately named, in retrospect, Typhoon Cobra. Typhoons in the Pacific are as violent as Atlantic hurricanes, but due to the circumstances of the ocean and atmosphere where they form and grow, are much more compact, which means that in an age prior to weather satellites, there was little warning of the onset of a storm before one found oneself overrun by it.

Halsey's orders sent the Third Fleet directly into the bull's eye of the disaster: one ship measured sustained winds of 124 knots (143 miles per hour) and seas in excess of 90 feet. Some ships' logs recorded the barometric pressure as “U”—the barometer had gone off-scale low and the needle was above the “U” in “U. S. Navy”.

There are some conditions at sea which ships simply cannot withstand. This was especially the case for Farragut class destroyers, which had been retrofitted with radar and communication antennæ on their masts and a panoply of antisubmarine and gun directing equipment on deck, all of which made them top-heavy, vulnerable to heeling in high winds, and prone to capsize.

As the typhoon overtook the fleet, even the “heavies” approached their limits of endurance. On the aircraft carrier USS Monterey, Lt. (j.g.) Jerry Ford was saved from being washed off the deck to a certain death only by luck and his athletic ability. He survived, later to become President of the United States. On the destroyers, the situation was indescribably more dire. The watch on the bridge saw the inclinometer veer back and forth on each roll between 60 and 70 degrees, knowing that a roll beyond 71° might not be recoverable. The ships surfed up the giant waves and plunged down, screws turning in mid-air as they crested the combers. Shipping water, many lost electrical power due to shorted-out panels, and most lost their radar and communications antennæ, rendering them deaf, dumb, and blind to the rest of the fleet and vulnerable to collisions.

The sea took its toll: in all, three destroyers were sunk, a dozen other ships were hors de combat pending repairs, and 146 aircraft were destroyed, all due to weather and sea conditions. A total of 793 U.S. sailors lost their lives, more than twice those killed in the Battle of Midway.

This book tells, based largely upon interviews with people who were there, the story of what happens when an invincible fleet encounters impossible seas. There are tales of heroism every few pages, which are especially poignant since so many of the heroes had not yet celebrated their twentieth birthdays, hailed from landlocked states, and had first seen the ocean only months before at the start of this, their first sea duty. After the disaster, the heroism continued: the crew of the destroyer escort Tabberer, under its reservist commander Henry L. Plage, disregarded orders and, though their ship was dismasted and severely damaged, persisted in the search and rescue of survivors from the foundered ships, eventually saving 55 from the ocean. Plage expected to face a court martial, but instead was awarded the Legion of Merit by Halsey, whose orders he ignored.

This is an epic story of seamanship, heroism, endurance, and the nigh impossible decisions commanders in wartime have to make based upon the incomplete information they have at the time. You gain an appreciation for how the master of a ship has to balance doing things by the book and improvising in exigent circumstances. One finding of the Court of Inquiry convened to investigate the disaster was that the commanders of the destroyers which were lost may have given too much priority to following pre-existing orders to hold their stations as opposed to the overriding imperative to save the ship. Given how little experience these officers had at sea, this is not surprising. CEOs should always keep in mind this utmost priority: save the ship.

Here we have a thoroughly documented historical narrative which is every bit as much a page-turner as the latest ginned-up thriller. As it happens, one of my high school teachers was a survivor of this storm (on one of the ships which did not go down), and I remember to this day how harrowing it was when he spoke of destroyers “turning turtle”. If accounts like this make you lose sleep, this is not the book for you, but if you want to experience how ordinary people did extraordinary things in impossible circumstances, it's an inspiring narrative.

 Permalink

Jenkins, Dennis R. and Jorge R. Frank. The Apollo 11 Moon Landing. North Branch, MN: Specialty Press, 2009. ISBN 978-1-58007-148-2.
This book, issued to commemorate the 40th anniversary of the Apollo 11 Moon landing, is a gorgeous collection of photographs, including a number of panoramas digitally assembled from photos taken during the mission which appear here for the first time. The images cover all aspects of the mission: the evolution of the Apollo project, crew training, stacking the launcher and spacecraft, voyage to the Moon, surface operations, and return to Earth. The photos have accurate and informative captions, and each chapter includes a concise but comprehensive description of its topic.

This is largely a picture book, and almost entirely focused upon the Apollo 11 mission, not the Apollo program as a whole. Unless you are an absolute space nut (guilty as charged), you will almost certainly see pictures here you've never seen before, including Neil Armstrong's brush with death when the Lunar Landing Research Vehicle went all pear shaped and he had to punch out (p. 35). Look at how the ejection seat motor vectored to buy him altitude for the chute to open!

Did you know that the iconic image of Buzz Aldrin on the Moon was retouched (or, as we'd say today, PhotoShopped)? No, I'm not talking about a Moon hoax, but just that Neil Armstrong, with his Hasselblad camera and no viewfinder, did what so many photographers do—he cut off Aldrin's head in the picture. NASA public affairs folks “reconstructed” the photo that Armstrong meant to take, but whilst airbrushing the top of the helmet, they forgot to include the OPS VHF antenna which extends from Aldrin's backpack in many other photos taken on the lunar surface.

This is a great book, and a worthy commemoration of the achievement of Apollo 11. It, of course, only scratches the surface of the history of the Apollo program, or even of the details of the Apollo 11 mission, but I don't know another source which brings together so many images which evoke that singular exploit. The Introduction includes a list of sources for further reading, all of which, I was amazed (or maybe not) to discover, I had already read.

 Permalink

Gray, Theodore. Theo Gray's Mad Science. New York: Black Dog & Leventhal Publishers, 2009. ISBN 978-1-57912-791-6.
Regular visitors here will recall that from time to time I enjoy mocking the fanatically risk-averse “safetyland” culture which has gripped the Western world over the last several decades. Pendulums do, however, have a way of swinging back, and there are a few signs that sanity (or, more accurately, entertaining insanity) may be starting to make a comeback. We've seen The Dangerous Book for Boys and the book I dubbed The Dangerous Book for Adults, but—Jeez Louise—look at what we have here! This is really The Shudderingly Hazardous Book for Crazy People.

A total of fifty-four experiments (all summarised on the book's Web site) range from heating a hot tub with quicklime and water, through exploding bubbles filled with a mixture of hydrogen and oxygen, making your own iron from magnetite sand with thermite, and turning a Snickers bar into rocket fuel, to salting popcorn by bubbling chlorine gas through a pot of molten sodium (it ends badly).

The book is subtitled “Experiments You Can Do at Home—But Probably Shouldn't”, and for many of them that's excellent advice, but they're still a great deal of fun to experience vicariously. I definitely want to try the ice cream recipe which makes a complete batch in thirty seconds flat with the aid of liquid nitrogen. The book is beautifully illustrated and gives the properties of the substances involved in the experiments. Readers should be aware that as the author prominently notes at the outset, the descriptions of many of the riskier experiments do not provide all the information you'd need to perform them safely—you shouldn't even consider trying them yourself unless you're familiar with the materials involved and experienced in the precautions required when working with them.

 Permalink

September 2009

Spotts, Frederic. The Shameful Peace. New Haven, CT: Yale University Press, 2008. ISBN 978-0-300-13290-8.
Paris between the World Wars was an international capital of the arts such as the world had never seen. Artists from around the globe flocked to this cosmopolitan environment which was organised more around artistic movements than nationalities. Artists drawn to this cultural magnet included the Americans Ezra Pound, Ernest Hemingway, F. Scott Fitzgerald, Gertrude Stein, Henry Miller, e.e. cummings, Virgil Thomson, and John Dos Passos; Belgians René Magritte and Georges Simenon; the Irish James Joyce and Samuel Beckett; Russians Igor Stravinsky, Sergei Prokofiev, Vladimir Nabokov, and Marc Chagall; and Spaniards Pablo Picasso, Joan Miró, and Salvador Dali, only to mention some of the nationalities and luminaries.

The collapse of the French army and British Expeditionary Force following the German invasion in the spring of 1940, leading to the armistice between Germany and France on June 22nd, turned this world upside down. Paris found itself inside the Occupied Zone, administered directly by the Germans. Artists in the “Zone libre” found themselves subject to the Vichy government's cultural decrees, intended to purge the “decadence” of the interwar years.

The defeat and occupation changed the circumstances of Paris as an artistic capital overnight. Most of the foreign expatriates left (but not all: Picasso, among others, opted to stay), so the scene became much more exclusively French. But remarkably, or maybe not, within a month of the armistice, the cultural scene was back up and running pretty much as before. The theatres, cinemas, concert and music halls were open, the usual hostesses continued their regular soirées with the customary attendees, and the cafés continued to be filled with artists debating the same esoterica. There were changes, to be sure: the performing arts played to audiences with a large fraction of Wehrmacht officers, known Jews were excluded everywhere, and anti-German works were withdrawn by publishers and self-censored thereafter by both authors and publishers in the interest of getting their other work into print.

The artistic milieu, which had been overwhelmingly disdainful of the Third Republic, transferred their scorn to Vichy, but for the most part got along surprisingly well with the occupier. Many attended glittering affairs at the German Institute and Embassy, and fell right in with the plans of Nazi ambassador Otto Abetz to co-opt the cultural élite and render them, if not pro-German, at least neutral to the prospects of France being integrated into a unified Nazi Europe.

The writer and journalist Alfred Fabre-Luce was not alone in waxing optimistic over the promise of the new era, “This will not sanctify our defeat, but on the contrary overcome it. Rivalries between countries, that were such a feature of nineteenth-century Europe, have become passé. The future Europe will be a great economic zone where people, weary of incessant quarrels, will live in security”. Drop the “National” and keep the “Socialist”, and that's pretty much the same sentiment you hear today from similarly-placed intellectuals about the odious, anti-democratic European Union.

The reaction of intellectuals to the occupation varied from enthusiastic collaboration to apathetic self-censorship and an apolitical stance, but rarely did it cross the line into active resistance. There were some underground cultural publications, and some well-known figures did contribute to them (anonymously or under a pseudonym, bien sûr), but for the most part artists of all kinds got along, and adjusted their work to the changed circumstances so that they could continue to be published, shown, or performed. A number of prominent figures emigrated, mostly to the United States, and formed an expatriate French avant garde colony which would play a major part in the shift of the centre of the arts world toward New York after the war, but they were largely politically disengaged while the war was underway.

After the Liberation, the purge (épuration) of collaborators in the arts was haphazard and inconsistent. Artists found themselves defending their work and actions during the occupation before tribunals presided over by judges who had, after the armistice, sworn allegiance to Pétain. Some writers received heavy sentences, up to and including death, while their publishers, who had voluntarily drawn up lists of books to be banned, confiscated, and destroyed got off scot-free and kept right on running. A few years later, as the Trente Glorieuses began to pick up steam, most of those who had not been executed found their sentences commuted and went back to work, although the most egregious collaborators saw their reputations sullied for the rest of their lives. What could not be restored was the position of Paris as the world's artistic capital: the spotlight had moved on to the New World, and New York in particular.

This excellent book stirs much deeper thoughts than just those of how a number of artists came to terms with the occupation of their country. It raises fundamental questions as to how creative people behave, and should behave, when the institutions of the society in which they live are grossly at odds with the beliefs that inform their work. It's easy to say that one should rebel, resist, and throw one's body onto the gears to bring the evil machine to a halt, but it's entirely another thing to act in such a manner when you're living in a city where the Gestapo is monitoring every action of prominent people and you never know who may be an informer. Lovers of individual liberty who live in the ever-expanding welfare/warfare/nanny states which rule most “developed” countries today will find much to ponder in observing the actions of those in this narrative, and may think twice the next time they're advised to “be reasonable; go along: it can't get that bad”.

 Permalink

Charpak, Georges and Richard L. Garwin. Feux follets et champignons nucléaires. Paris: Odile Jacob, [1997] 2000. ISBN 978-2-7381-0857-9.
Georges Charpak won the Nobel Prize in Physics in 1992, and was the last person, as of this writing, to have won an unshared Physics Nobel. Richard Garwin is a quintessential “defence intellectual”: he studied under Fermi, did the detailed design of Ivy Mike, the first thermonuclear bomb, has been a member of Jason and adviser on issues of nuclear arms control and disarmament for decades, and has been a passionate advocate against ballistic missile defence and for reducing the number of nuclear warheads and the state of alert of strategic nuclear forces.

In this book the authors, who do not agree on everything and take the liberty to break out from the main text on several occasions to present their individual viewpoints, assess the state of nuclear energy—civil and military—at the turn of the century and try to chart a reasonable path into the future which is consistent with the aspirations of people in developing countries, the needs of a burgeoning population, and the necessity of protecting the environment both from potential risks from nuclear technology but also the consequences of not employing it as a source of energy. (Even taking Chernobyl into account, the total radiation emitted by coal-fired power plants is far greater than that of all nuclear stations combined: coal contains thorium, and when it is burned, it escapes in flue gases or is captured and disposed of in landfills. And that's not even mentioning the carbon dioxide emitted by burning fossil fuels.)

The reader of this book will learn a great deal about the details of nuclear energy: perhaps more than some will have the patience to endure. I made it through, and now I really understand, for the first time, why light water reactors have a negative temperature coefficient: as the core gets hotter, the U-238 atoms are increasingly agitated by the heat and, thanks to Doppler broadening, neutrons are more likely to fall into one of the resonances where absorption by U-238 is dramatically enhanced.
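This broadening effect can be put in rough numbers with a standard reactor-physics approximation (not from the book): the Doppler width of a resonance grows as the square root of the fuel temperature. A minimal Python sketch, assuming the well-known 6.67 eV capture resonance of U-238:

```python
import math

def doppler_width_ev(e0_ev, temp_k, mass_number):
    """Approximate Doppler width (eV) of a neutron absorption resonance
    at energy e0_ev for a nucleus of the given mass number at temperature
    temp_k, using the standard estimate Gamma_D = sqrt(4 * E0 * kT / A)."""
    k_boltzmann_ev = 8.617e-5  # Boltzmann constant in eV/K
    return math.sqrt(4.0 * e0_ev * k_boltzmann_ev * temp_k / mass_number)

# The prominent 6.67 eV capture resonance of U-238:
cold = doppler_width_ev(6.67, 300.0, 238)   # room-temperature fuel
hot = doppler_width_ev(6.67, 1000.0, 238)   # hot operating fuel

# A hotter core broadens the resonance, so more neutrons slowing down
# through this energy range are captured by U-238, reducing reactivity:
# a negative temperature coefficient.
print(f"width at  300 K: {cold * 1000:.1f} meV")
print(f"width at 1000 K: {hot * 1000:.1f} meV")
```

The absolute widths are small, but the trend is the point: capture by U-238 rises with fuel temperature, pushing reactivity down exactly as the book describes.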

Charpak and Garwin are in complete agreement that civil nuclear power should be the primary source of new electrical generation capacity until and unless something better (such as fusion) comes along. They differ strongly on the issue of fuel cycle and waste management: Charpak argues for the French approach of reprocessing spent fuel, extracting the bred plutonium, and burning it in power reactors in the form of mixed oxide (MOX) fuel. Garwin argues for the U.S. approach of a once-through fuel cycle, with used fuel buried, its plutonium energy content discarded in the interest of “economy”. Charpak points out that the French approach drastically reduces the volume of nuclear waste to be buried, and observes that France does not have a Nevada in which to bury it.

Both authors concur that breeder reactors will eventually have a rôle to play in nuclear power generation. Not only do breeders multiply the energy which can be recovered from natural uranium by a factor of fifty, they can be used to “burn up” many of the radioactive waste products of conventional light water reactors. Several next-generation reactor concepts are discussed, including Carlo Rubbia's energy amplifier, in which the core is inherently subcritical, and designs for more conventional reactors which are inherently safe in the event of loss of control feedback or cooling. They conclude, however, that further technology maturation is required before breeders enter into full production use and that, in retrospect, Superphénix was premature.

The last third of the book is devoted to nuclear weapons and the prospects for reducing the inventory of declared nuclear powers, increasing stability, and preventing proliferation. There is, as you would expect from Garwin, a great deal of bashing the concept of ballistic missile defence (“It can't possibly work, and if it did it would be bad”). This is quite dated, as many of the arguments and the lengthy reprinted article date from the mid 1980s when the threat was a massive “war-gasm” salvo launch of thousands of ICBMs from the Soviet Union, not one or two missiles from a rogue despot who's feeling “ronery”. The authors quite reasonably argue that current nuclear force levels are absurd, and that an arsenal about the size of France's (on the order of 500 warheads) should suffice for any conceivable deterrent purpose. They dance around the option of eliminating nuclear arms entirely, and conclude that such a goal is probably unachievable in a world in which such a posture would create an incentive for a rogue state to acquire even one or two weapons. They suggest a small deterrent force operated by an international authority—good luck with that!

This is a thoughtful book which encourages rational people to think for themselves about the energy choices facing humanity in the coming decades. It counters emotional appeals and scare trigger words with the best antidote: numbers. Numbers which demonstrate, for example, that the inherent radiation of atoms in the human body (mostly C-14 and K-40) and the variation in natural background radiation from one place to another on Earth is vastly greater than the dose received from all kinds of nuclear technology. The Chernobyl and Three Mile Island accidents are examined in detail, and the lessons learnt for safely operating nuclear power stations are explored. I found the sections on nuclear weapons weaker and substantially more dated. Although the book was originally published well after the collapse of the Soviet Union, the perspective is still very much that of superpower confrontation, not the risk of proliferation to rogue states and terrorist groups. Certainly, responsibly disposing of the excess fissile material produced by the superpowers in their grotesquely hypertrophied arsenals (ideally by burning it up in civil power reactors, as opposed to insanely dumping it into a hole in the ground to remain a risk for hundreds of thousands of years, as some “green” advocates urge) is an important way to reduce the risks of proliferation, but events subsequent to the publication of this book have shown that states are capable of mounting their own indigenous nuclear weapons programs under the eyes of international inspectors. Will an “international community” which is incapable of stopping such clandestine weapons programs have any deterrent credibility even if armed with its own nuclear-tipped missiles?

An English translation of this book, entitled Megawatts and Megatons, is available.

 Permalink

Flynn, Vince. Executive Power. New York: Pocket Books, 2003. ISBN 978-0-7434-5396-7.
This is the fourth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. At the end of the third novel, Separation of Power (August 2009), Rapp's identity was outed by a self-righteous and opportunistic congressman (who gets what's coming to him), and soon-to-be-married Rapp prepares to settle down at a desk job in the CIA's counterterrorism centre. But it's hard to keep a man of action down, and when political perfidy blows the cover on a hostage rescue operation in the Philippines, resulting in the death of two Navy SEALs, Rapp gets involved in the follow-up reprisal operation in a much more direct manner than anybody expected, and winds up with a non-life-threatening but extremely embarrassing and difficult-to-explain injury (hint: he spends most of the balance of the book standing up). Rapp has no hesitation in taking on terror masters single-handed, but he finds himself utterly unprepared for the withering scorn unleashed against him by the two women in his life: bride and boss.

Rapp soon finds himself on the trail of a person much like himself: an assassin who works in the shadows and leaves almost no traces of evidence. This malefactor, motivated by the desire for a political outcome just as sincere as Rapp's wish to protect his nation, is manipulating (or being manipulated by?) a rogue Saudi billionaire bent on provoking mayhem in the Middle East. The ever meddlesome chief of the Mossad is swept into the scheme, and events spiral toward the brink as Rapp tries to figure out what is really going on. The conclusion gets Rapp involved up close and personal, the way he likes it, and comes to a satisfying end.

This is the first of the Mitch Rapp novels to be written after the terrorist attacks of September 2001. Other than a few oblique references to those events, little in the worldview of the series has changed. Flynn's attention to detail continues to shine in this story. About the only unrealistic thing is imagining the U.S. government as actually being serious and competent in taking the battle to terrorists. See my comments on the first installment for additional details about the series and a link to an interview with the author.

 Permalink

McDonald, Allan J. and James R. Hansen. Truth, Lies, and O-Rings. Gainesville, FL: University Press of Florida, 2009. ISBN 978-0-8130-3326-6.
More than two decades have elapsed since Space Shuttle Challenger met its tragic end on that cold Florida morning in January 1986, and a shelf's worth of books has been written about the accident and its aftermath, ranging from the five volume official report of the Presidential commission convened to investigate the disaster to conspiracy theories and accounts of religious experiences. Is it possible, at this remove, to say anything new about Challenger? The answer is unequivocally yes, as this book conclusively demonstrates.

The night before Challenger was launched on its last mission, Allan McDonald attended the final pre-launch flight readiness review at the Kennedy Space Center, representing Morton Thiokol, manufacturer of the solid rocket motors, where he was Director of the Space Shuttle Solid Rocket Motor Project. McDonald initially presented Thiokol's judgement that the launch should be postponed because the temperatures forecast for launch day were far below the experience base of the shuttle program and an earlier flight at the lowest temperature to date had shown evidence of blow-by past the O-ring seals in the solid rocket field joints. Thiokol engineers were concerned that low temperatures would reduce the resiliency of the elastomeric rings, causing them to fail to seal during the critical ignition transient. McDonald was astonished when NASA personnel, in a reversal of their usual rôle of challenging contractors to prove why their hardware was safe to fly, demanded that Thiokol prove the solid motor was unsafe in order to scrub the launch. Thiokol management requested a five-minute offline caucus back at the plant in Utah (in which McDonald did not participate) which stretched to thirty minutes and ended up with a recommendation to launch. NASA took the unprecedented step of requiring a written approval to launch from Thiokol, which McDonald refused to provide, but which was supplied by his boss in Utah.

After the loss of the shuttle and its crew, and the discovery shortly thereafter that the proximate cause was almost certainly a leak in the aft field joint of the right solid rocket booster, NASA and Thiokol appeared to circle the wagons, trying to deflect responsibility from themselves and obscure the information available to decision makers in a position to stop the launch. It was not until McDonald's testimony to the Presidential Commission chaired by former Secretary of State William P. Rogers that the truth began to come out. This thrust McDonald, up to then an obscure engineering manager, into the media spotlight and the political arena, which he quickly discovered was not at all about his priorities as an engineer: finding out what went wrong and fixing it so it could never happen again.

This memoir, composed by McDonald from contemporary notes and documents with the aid of space historian James R. Hansen (author of the bestselling authorised biography of Neil Armstrong) takes the reader through the catastrophe and its aftermath, as seen by an insider who was there at the decision to launch, on a console in the firing room when disaster struck, before the closed and public sessions of the Presidential commission, pursued by sensation-hungry media, testifying before congressional committees, and consumed by the redesign and certification effort and the push to return the shuttle to flight. It is a personal story, but told in terms, as engineers are wont to do, based in the facts of the hardware, the experimental evidence, and the recollection of meetings which made the key decisions before and after the tragedy.

Anybody whose career may eventually land them, intentionally or not (the latter almost always the case), in the public arena can profit from reading this book. Even if you know nothing about and have no interest in solid rocket motors, O-rings, space exploration, or NASA, the spectacle of a sincere, dedicated engineer bent on doing the right thing colliding with the ravenous media and preening politicians is a cautionary tale for anybody who finds themselves in a similar position. I wish I'd had the opportunity to read this book before my own Dark Night of the Soul encounter with a reporter from the legacy media. I do not mean to equate my own mild experience with the Hell that McDonald experienced—just to say that his narrative would have been a bracing preparation for what was to come.

The chapters on the Rogers Commission investigation provided, for me, a perspective I'd not previously encountered. Many people think of William P. Rogers primarily as Nixon's first Secretary of State who was upstaged and eventually replaced by Henry Kissinger. But before that Rogers was a federal prosecutor going after organised crime in New York City and then was Attorney General in the Eisenhower administration from 1957 to 1961. Rogers may have aged, but his skills as an interrogator and cross-examiner never weakened. In the sworn testimony quoted here, NASA managers, who come across like the kids who were the smartest in their high school class and then find themselves on the left side of the bell curve when they show up as freshmen at MIT, are pinned like specimen bugs to their own viewgraphs when they try to spin Rogers and his tag team of technical takedown artists including Richard Feynman, Neil Armstrong, and Sally Ride.

One thing which is never discussed here, but should be, is just how totally insane it is to use large solid rockets, in any form, in a human spaceflight program. Understand: solid rockets are best thought of as “directed bombs” which, if ignited at an inopportune time, or when not in launch configuration, can cause catastrophe. A simple spark of static electricity can suffice to ignite the propellant in a solid rocket, and once ignited there is no way to extinguish it until it is entirely consumed. Consider: in the Shuttle era, there are usually one or more Shuttle stacks in the Vehicle Assembly Building (VAB), and if NASA's Constellation Program continues, this building will continue to stack solid rocket motors in decades to come. Sooner or later, the inevitable is going to happen: a static spark, a crane dropping a segment, or an interference fit of two segments sending a hot fragment into the propellant below. The consequence: destruction of the VAB, all hardware inside, and the death of all people working therein. The expected stand-down of the U.S. human spaceflight program after such an event is on the order of a decade. Am I exaggerating the risks here? Well, maybe; you decide. But in 1985–1986 alone, three separate disasters struck the production of large solid motors. I shall predict: if NASA continue to use large solid motors in their human spaceflight program, there will be a decade-long gap in U.S. human spaceflight sometime in the next twenty years.

If you're sufficiently interested in these arcane matters to have read this far, you should read this book. Based upon notes, it's a bit repetitive, as many of the same matters were discussed in the various venues in which McDonald testified. But if you want to read a single book to prepare you for being unexpectedly thrust into the maw of ravenous media and politicians, I know of none better.

 Permalink

October 2009

Ferrigno, Robert. Heart of the Assassin. New York: Scribner, 2009. ISBN 978-1-4165-3767-0.
This novel completes the author's Assassin Trilogy, which began with Prayers for the Assassin (March 2006) and continued with Sins of the Assassin (March 2008). This is one of those trilogies in which you really want to read the books in order. While there is some effort to provide context for readers who start in the middle, you'll miss so much of the background of the scenario and of the development and previous interactions of the characters that much of what's going on will be lost on you. If you're unfamiliar with the world in which these stories are set, please see my comments on the earlier books in the series.

As this novel opens, a crisis is brewing as a heavily armed and increasingly expansionist Aztlán is ready to exploit the disunity of the Islamic Republic and the Bible Belt, most of whose military forces are arrayed against one another, to continue to nibble away at both. Visionaries on both sides imagine a reunification of the two monotheistic parts of what were once the United States, while the Old One and his mega-Machiavellian daughter Baby work their dark plots in the background. Former fedayeen shadow warrior Rakkim Epps finds himself on missions to the darkest part of the Republic, New Fallujah (the former San Francisco), and to the radioactive remains of Washington D.C., seeking a relic which might have the power to unite the nation once again.

Having read and tremendously enjoyed the first two books of the trilogy, I was very much looking forward to this novel, but having now read it, I consider it a disappointment. As the trilogy has progressed, the author seems to have become ever more willing to invent whatever technology he needs at the moment to advance the plot, whether or not it is plausible or consistent with the rest of the world he has created, and to admit the supernatural into a story which started out set in a world of gritty reality. I spent the first 270 pages making increasingly strenuous efforts to suspend disbelief, but then when one of the characters uses a medical oxygen tank as a flamethrower, I “lost it” and started laughing out loud at each of the absurdities in the pages that followed: “DNA knives” that melt into a person's forearm, holodeck hotel rooms with faithful all-senses stimulation and simulated lifeforms, a ghost, miraculous religious relics, etc., etc. The first two books made the reader think about what it would be like if a post-apocalyptic Great Awakening reorganised the U.S. around Islamic and Christian fundamentalism. In this book, all of that is swept into the background, and it's all about the characters (about whom one ceases to care much, as they become increasingly like comic book figures) and a political plot so preposterous it makes Dan Brown's novels seem like nonfiction.

If you've read the first two novels and want to discover how it all comes out, you will find all of the threads resolved in this book. For me, there were just too many “Oh come on, now!” moments for the result to be truly satisfying.

A podcast interview with the author is available. You can read the first chapter of this book online at the author's Web site.

 Permalink

Vallee, Jacques. Forbidden Science. Vol. 2. San Francisco: Documatica Research, 2008. ISBN 978-0-615-24974-2.
This, the second volume of Jacques Vallee's journals, chronicles the years from 1970 through 1979. (I read the first volume, covering 1957–1969, before I began this list.) Early in the narrative (p. 153), Vallee becomes a U.S. citizen, but although surrendering his French passport, he never gives up his Gallic rationalism and scepticism, both of which serve him well in the increasingly weird Northern California scene in the Seventies. It was in those locust years that the seeds for the personal computing and Internet revolutions matured, and Vallee was at the nexus of this technological ferment, working on databases, Doug Engelbart's Augmentation project, and later systems for conferencing and collaborative work across networks. By the end of the decade he, like many in Silicon Valley of the epoch, has become an entrepreneur, running a company based upon the conferencing technology he developed. (One amusing anecdote which indicates how far we've come since the 70s in mindset is when he pitches his conferencing system to General Electric who, at the time, had the largest commercial data network to support their timesharing service. They said they were afraid to implement anything which looked too much like a messaging system for fear of running afoul of the Post Office.)

If this were purely a personal narrative of the formative years of the Internet and personal computing, it would be a valuable book—I was there, then, and Vallee gets it absolutely right. A journal is, in many ways, better than a history because you experience the groping for solutions amidst confusion and ignorance which is the stuff of real life, not the narrative of an historian who knows how it all came out. But in addition to being a computer scientist, entrepreneur, and (later) venture capitalist, Vallee is also one of the preeminent researchers into the UFO and related paranormal phenomena (the character Claude Lacombe, played by François Truffaut in Steven Spielberg's 1977 movie Close Encounters of the Third Kind was based upon Vallee). As the 1970s progress, the author becomes increasingly convinced that the UFO phenomenon cannot be explained by extraterrestrials and spaceships, and that it is rooted in the same stratum of the human mind and the universe we inhabit which has given rise to folklore about little people and various occult and esoteric traditions. Later in the decade, he begins to suspect that at least some UFO activity is the work of deliberate manipulators bent on creating an irrational, anti-science worldview in the general populace, a hypothesis expounded in his 1979 book, Messengers of Deception, which remains controversial three decades after its publication.

The Bay Area in the Seventies was a kind of cosmic vortex of the weird, and along with Vallee we encounter many of the prominent figures of the time, including Uri Geller (who Vallee immediately dismisses as a charlatan), Doug Engelbart, J. Allen Hynek, Anton LaVey, Russell Targ, Hal Puthoff, Ingo Swann, Ira Einhorn, Tim Leary, Tom Bearden, Jack Sarfatti, Melvin Belli, and many more. Always on a relentlessly rational even keel, he observes with dismay as many of his colleagues disappear into drugs, cults, gullibility, pseudoscience, and fads as that dark decade takes its toll. In May 1979 he feels himself to be at “the end of an age that defied all conventions but failed miserably to set new standards” (p. 463). While this is certainly spot on in the social and cultural context in which he meant it, it is ironic that so many of the standards upon which the subsequent explosion of computer and networking technology are based were created in those years by engineers patiently toiling away in Silicon Valley amidst all the madness.

An introduction and retrospective at the end puts the work into perspective from the present day, and 25 pages of end notes expand upon items in the journals which may be obscure at this remove and provide source citations for events and works mentioned. You might wonder what possesses somebody to read more than five hundred pages of journal entries by somebody else which date from thirty to forty years ago. Well, I took the time, and I'm glad I did: it perfectly recreated the sense of the times and of the intellectual and technological challenges of the age. Trust me: if you're too young to remember the Seventies, it's far better to experience those years here than to have actually lived through them.

 Permalink

Woodbury, David O. The Glass Giant of Palomar. New York: Dodd, Mead, [1939, 1948] 1953. LCCN 53000393.
I originally read this book when I was in junior high school—it was one of the few astronomy titles in the school's library. It's one of the grains of sand dropping on the pile which eventually provoked the avalanche that persuaded me I was living in the golden age of engineering and that I'd best spend my life making the most of it.

Seventy years after it was originally published (the 1948 and 1953 updates added only minor information on the final commissioning of the telescope and a collection of photos taken through it), this book still inspires respect for those who created the 200 inch Hale Telescope on Mount Palomar, and the engineering challenges they faced and overcame in achieving that milestone in astronomical instrumentation. The book is as much a biography of George Ellery Hale as it is a story of the giant telescope he brought into being. Hale was a world class scientist: he invented the spectroheliograph, discovered the magnetic fields of sunspots, founded the Astrophysical Journal and to a large extent the field of astrophysics itself, but he also excelled as a promoter and fund-raiser for grand-scale scientific instrumentation. The Yerkes, Mount Wilson, and Palomar observatories would, in all likelihood, not have existed were it not for Hale's indefatigable salesmanship. And this was an age when persuasiveness was all. With the exception of the road to the top of Palomar, all of the observatories and their equipment promoted by Hale were funded without a single penny of taxpayer money. For the Palomar 200 inch, he raised US$6 million in gold-backed 1930 dollars, which in present-day paper funny-money amounts to US$78 million.

It was a very different America which built the Palomar telescope. Not only was it never even thought of that money coercively taken from taxpayers would be diverted to pure science, anybody who wanted to contribute to the project, regardless of their academic credentials, was judged solely on their merits and given a position based upon their achievements. The chief optician who ground, polished, and figured the main mirror of the Palomar telescope (so perfectly that its potential would not be realised until recently thanks to adaptive optics) had a sixth grade education and was first employed at Mount Wilson as a truck driver. You can make of yourself what you have within yourself in America, so they say—so it was for Marcus Brown (p. 279). Milton Humason who, with Edwin Hubble, discovered the expansion of the universe, dropped out of school at the age of 14 and began his astronomical career driving supplies up Mount Wilson on mule trains. You can make of yourself what you have within yourself in America, or at least you could then. Now we go elsewhere.

Is there anything Russell W. Porter didn't do? Arctic explorer, founder of the hobby of amateur telescope making, engineer, architect…his footprints and brushstrokes are all over technological creativity in the first half of the twentieth century. And he is much in evidence here: recruited in 1927, he did the conceptual design for most of the buildings of the observatory, and his cutaway drawings of the mechanisms of the telescope demonstrate to those endowed with contemporary computer graphics tools that the eye of the artist is far more important than the technology of the moment.

This book has been out of print for decades, but used copies (often, sadly, de-accessioned by public libraries) are generally available at prices (unless you're worried about cosmetics and collectability) comparable to present-day hardbacks. It's as good a read today as it was in 1962.

 Permalink

Dewar, James with Robert Bussard. The Nuclear Rocket. Burlington, Canada: Apogee Books, 2009. ISBN 978-1-894959-99-5.
Let me begin with a few comments about the author attribution of this book. I have cited it as given on the copyright page, but as James Dewar notes in his preface, the main text of the book is entirely his creation. He says of Robert Bussard, “I am deeply indebted to Bob's contributions and consequently list his name in the credit to this book”. Bussard himself contributes a five-page introduction in which he uses, inter alia, the adjectives “amazing”, “strange”, “remarkable”, “wonderful”, “visionary”, and “most odd” to describe the work, which he makes clear is entirely Dewar's. Consequently, I shall subsequently use “the author” to denote Dewar alone. Bussard died in 2007, two years before the publication of this book, so his introduction must have been based upon a manuscript. I leave it to the reader to judge the propriety of posthumously naming as co-author a prominent individual who did not write a single word of the main text.

Unlike the author's earlier To the End of the Solar System (June 2008), which was a nuts and bolts history of the U.S. nuclear rocket program, this book, titled The Nuclear Rocket, quoting from Bussard's introduction, “…is not really about nuclear rocket propulsion or its applications to space flight…”. Indeed, although some of the nitty-gritty of nuclear rocket engines is discussed, the bulk of the book is an argument for a highly specific long term plan to transform human access to space from an elitist government run program to a market-driven expansive program with the ultimate goal of providing access to space to all and opening the solar system to human expansion and eventual dominion. This is indeed ambitious and visionary, but of all of Bussard's adjectives, the one that sticks with me is “most odd”.

Dewar argues that the NERVA B-4 nuclear thermal rocket core, developed between 1960 and 1972, and successfully tested on several occasions, has the capability, once the “taboo” against using nuclear engines in the boost to low Earth orbit (LEO) is discarded, of revolutionising space transportation and so drastically reducing the cost per unit mass to orbit that it would effectively democratise access to space. In particular, he proposes a “Re-core” engine which, integrated with a liquid hydrogen tank and solid rocket boosters, would be air-launched from a large cargo aircraft such as a C-5, with the solid rockets boosting the nuclear engine to around 30 km where they would separate for recovery and the nuclear engine engaged. The nuclear rocket would continue to boost the payload to orbital insertion. Since the nuclear stage would not go critical until having reached the upper atmosphere, there would be no radioactivity risk to those handling the stage on the ground prior to launch or to the crew of the plane which deployed the rocket.

After reaching orbit, the payload and hydrogen tank would be separated, and the nuclear engine enclosed in a cocoon (much like an ICBM reentry vehicle) which would de-orbit and eventually land at sea in a region far from inhabited land. The cocoon, which would float after landing, would be recovered by a ship, placed in a radiation-proof cask, and returned to a reprocessing centre where the highly radioactive nuclear fuel core would be removed for reprocessing (the entire launch to orbit would consume only about 1% of the highly enriched uranium in the core, so recovering the remaining uranium and reusing it is essential to the economic viability of the scheme). Meanwhile, another never-critical core would be inserted in the engine which, after inspection of the non-nuclear components, would be ready for another flight. If each engine were reused 100 times, and efficient fuel reprocessing were able to produce new cores economically, the cost for each 17,000 pound payload to LEO would be around US$108 per pound.
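The implied per-flight economics can be sanity-checked with simple arithmetic. Only the payload mass, the per-pound figure, and the reuse count come from the book; the derived quantities below are just illustrative multiplication:

```python
# Back-of-the-envelope check of the Re-core launch economics cited above.
payload_lb = 17_000          # payload per flight (from the book)
cost_per_lb_usd = 108        # claimed cost to LEO per pound (from the book)
flights_per_engine = 100     # assumed reuses per engine (from the book)

# Implied gross cost of a single flight, and the total payload one
# engine would deliver over its service life.
cost_per_flight = payload_lb * cost_per_lb_usd
lifetime_payload_lb = payload_lb * flights_per_engine

print(f"implied cost per flight:  ${cost_per_flight:,}")        # $1,836,000
print(f"payload over engine life: {lifetime_payload_lb:,} lb")  # 1,700,000 lb
```

So the US$108/lb claim amounts to flying a roughly US$1.8 million mission 100 times per engine, which makes plain why core recovery and reprocessing are load-bearing assumptions of the scheme.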

Payloads which reached LEO and needed to go beyond (for example, to geostationary orbit, the Moon, or the planets) would rendezvous with a different variant of the NERVA-derived engine, dubbed the “Re-use” stage, which is much like Von Braun's nuclear shuttle concept. This engine, like the original NERVA, would be designed for multiple missions, needing only inspection and refuelling with liquid hydrogen. A single Re-use stage might complete 30 round-trip missions before being disposed of in deep space (offering “free launches” for planetary science missions on its final trip into the darkness).

There is little doubt that something like this is technically feasible. After all, the nuclear rocket engine was extensively tested in the years prior to its cancellation in 1972, and NASA's massive resources of the epoch examined mission profiles (under the constraint that nuclear engines could be used only for departure from LEO, however, and without return to Earth) and found no show stoppers. Indeed, there is evidence that the nuclear engine was cancelled, in part, because it was performing so well that policy makers feared it would enable additional costly NASA missions post-Apollo. There are some technological issues: for example, the author implies that the recovered Re-core, once its hot core is extracted and a new pure uranium core installed, will not be radioactive and hence safe to handle without special precautions. But what about neutron activation of other components of the engine? An operating nuclear rocket creates one of the most extreme neutronic environments outside the detonation of a nuclear weapon. Would it be possible to choose materials for the non-core components of the engine which would be immune to this and, if not, how serious would the induced radioactivity be, especially if the engine were reused up to a hundred times? The book is silent on this and a number of other questions.

The initial breakthrough in space propulsion from the first generation nuclear engines is projected to lead to rapid progress in optimising them, with four generations of successively improved engines within a decade or so. This would eventually lead to the development of a heavy lifter able to orbit around 150,000 pounds of payload per flight at a cost (after development costs are amortised or expensed) of about US$87 per pound. This lifter would allow the construction of large space stations and the transport of people to them in “buses” with up to thirty passengers per mission. Beyond that, a nuclear single stage to orbit vehicle is examined, but there are a multitude of technological and policy questions to be resolved before that could be contemplated.

All of this, however, is not what the book is about. The author is a passionate believer in the proposition that opening the space frontier to all the people of Earth, not just a few elite civil servants, is essential to preserving peace, restoring the optimism of our species, and protecting the thin biosphere of this big rock we inhabit. And so he proposes a detailed structure for accomplishing these goals, beginning with a “Democratization of Space Act” to be adopted by the U.S. Congress, and the creation of a “Nuclear Rocket Development and Operations Corporation” (NucRocCorp), which would be a kind of private/public partnership in which individuals could invest. This company could create divisions (in some cases competing with one another) and charter development projects. It would entirely control space nuclear propulsion, with oversight by U.S. government regulatory agencies, which would retain strict control over the fissile reactor cores.

As the initial program migrated to the heavy lifter, this structure would morph into a multinational (admitting only “good” nations, however) structure of bewildering (to this engineer) bureaucratic complexity which makes the United Nations look like the student council of Weemawee High. The lines of responsibility and power here are diffuse in the extreme. Let me simply cite “The Stockholder's Declaration” from p. 161:

Whoever invests in the NucRocCorp and subsequent Space Charter Authority should be required to sign a declaration that commits him or her to respect the purpose of the new regime, and conduct their personal lives in a manner that recognizes the rights of their fellow man (What about woman?—JW). They must be made aware that failure to do so could result in forfeiture of their investment.

Property rights, anybody? Thought police? Apart from the manifest baroque complexity of the proposed scheme, it entirely ignores Jerry Pournelle's Iron Law of Bureaucracy: regardless of its original mission, any bureaucracy will eventually be predominantly populated by those seeking to advance the interests of the bureaucracy itself, not the purpose for which it was created. The structure proposed here, even if enacted (implausible in the extreme) and even if it worked as intended (vanishingly improbable), would inevitably be captured by the Iron Law and become something like, well, NASA.

On pp. 36–37, the author likens attempts to stretch chemical rocket technology to its limits to gold-plating a nail when what is needed is a bigger hammer (nuclear rockets). But this book brings to my mind another epigram: “When all you have is a hammer, everything looks like a nail.” Dewar passionately supports nuclear rocket technology and believes that it is the way to open the solar system to human settlement. I entirely concur. But consider what he assumes boosting people up to a space station might accomplish (p. 111):

And looking down on the bright Earth and into the black heavens might create a new perspective among Protestant, Roman Catholic, and Orthodox theologians, and perhaps lead to the end of the schism plaguing Christianity. The same might be said of the division between the Sunnis and Shiites in Islam, and the religions of the Near and Far East might benefit from a new perspective.

Call me cynical, but I'll wager this particular swing of the hammer is more likely to land on a thumb than the intended nail. Those who cherish individual freedom have often dreamt of a future in which the opening of access to space would, in the words of L. Neil Smith, extend the human prospect to “freedom, immortality, and the stars”—works for me. What is proposed here, if adopted, looks more like, after more than a third of a century of dithering, the space frontier being finally opened to the brave pioneers ready to homestead there, and when they arrive, the tax man and the all-pervasive regulatory state are already there, up and running. The nuclear rocket can expand the human presence throughout the solar system. Let's just hope that when humanity (or some risk-taking subset of it) takes that long-deferred step, it does not propagate the soft tyranny of present day terrestrial governance to worlds beyond.

 Permalink

Derbyshire, John. We Are Doomed. New York: Crown Forum, 2009. ISBN 978-0-307-40958-4.
In this book, genial curmudgeon John Derbyshire, whose previous two books were popular treatments of the Riemann hypothesis and the history of algebra, argues that an authentically conservative outlook on life requires a relentlessly realistic pessimism about human nature, human institutions, and the human prospect. Such a pessimistic viewpoint immunises one from the kind of happy face optimism which breeds enthusiasm for breathtaking ideas and grand, ambitious schemes, which all of history testifies are doomed to failure and tragedy.

Adopting a pessimistic attitude is, Derbyshire says, not an effort to turn into a sourpuss (although see the photograph of the author on the dust jacket), but simply the consequence of removing the rose coloured glasses and looking at the world as it really is. To grind down the reader's optimism into a finely-figured speculum of gloom, a sequence of chapters surveys the Hellbound landscape of what passes for the modern world: “diversity”, politics, popular culture, education, economics, and third-rail topics such as achievement gaps between races and the assimilation of immigrants. The discussion is mostly centred on the United States, but in chapter 11, we take a tour d'horizon and find that things are, on the whole, as bad or worse everywhere else.

In the conclusion the author, who is just a few years my senior, voices a thought which has been rattling around my own brain for some time: that those of our generation living in the West may be seen, in retrospect, as having had the good fortune to live in a golden age. We just missed the convulsive mass warfare of the 20th century (although not, of course, frequent brushfire conflicts in which you can be killed just as dead, terrorism, or the threat of nuclear annihilation during the Cold War), lived through the greatest and most broadly-based expansion of economic prosperity in human history, accompanied by more progress in science, technology, and medicine than in all of the human experience prior to our generation. Further, we're probably going to hand in our dinner pails before the economic apocalypse made inevitable by the pyramid of paper money and bogus debt we created, mass human migrations, demographic collapse, and the ultimate eclipse of the tattered remnants of human liberty by the malignant state. Will people decades and centuries hence look back at the Boomer generation as the one that reaped all the benefits for themselves and passed on the bills and the adverse consequences to their descendants? That's the way to bet.

So what is to be done? How do we turn the ship around before we hit the iceberg? Don't look for any such chirpy suggestions here: it's all in the title—we are doomed! My own view is that we're in a race between a technological singularity and a new dark age of poverty, ignorance, subjugation to the state, and pervasive violence. Sharing the author's proclivity for pessimism, you can probably guess which I judge more probable. If you concur, you might want to read this book, which will appear in this chronicle in due time.

The book includes neither bibliography nor index. The lack of the former is particularly regrettable as a multitude of sources are cited in the text, many available online. It would be wonderful if the author posted a bibliography of clickable links (to online articles or purchase links for books cited) on his Web site, where there is a Web log of comments from readers and the author's responses.

 Permalink

Paul, Ron. End the Fed. New York: Grand Central, 2009. ISBN 978-0-446-54919-6.
Imagine a company whose performance, measured over almost a century by the primary metric given in its charter, looked like this:

[Chart: USD purchasing power, 1913–2009]

Now, would you be likely, were your own personal prosperity and that of all of those around you on the line, to entrust your financial future to their wisdom and demonstrated track record? Well, if you live in the United States, or your finances are engaged in any way in that economy (whether as an investor, creditor, or trade partner), you are, because this is the chart of the purchasing power of the United States Dollar since it began to be managed by the Federal Reserve System in 1913. Helluva record, don't you think?

Now, if you know anything about basic economics (which puts you several rungs up the ladder from most present-day politicians and members of the chattering classes), you'll recall that inflation is not defined as rising prices but rather as an increase in the supply of money. It's just as if you were at an auction and you gave all of the bidders 10% more money: the selling price of the item would be 10% greater, not because it had appreciated in value but simply because the bidders had more to spend on acquiring it. And what is, fundamentally, the function of the Federal Reserve System? Well, that would be to implement an “elastic currency”, decoupled from real-world measures of value, with the goal of smoothing out the business cycle. Looking at this shorn of all the bafflegab, the mission statement is to create paper money out of thin air in order to fund government programs which the legislature lacks the spine to fund from taxation or debt, and to permit banks to profit by extending credit well beyond the limits of prudence, knowing they're backed up by the “lender of last resort” when things go South. The Federal Reserve System is nothing other than an engine of inflation (money creation), and it's hardly a surprise that the dollars it issues have lost more than 95% of their value in the years since its foundation.
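
To put that 95% loss in annual terms, here is a sketch of the arithmetic; the 95% figure is from the text above, and the constant-rate model is my own simplifying assumption.

```python
# What constant annual inflation rate produces a 95% cumulative loss of
# purchasing power over 1913-2009? (Constant-rate model is an assumption.)
loss = 0.95
years = 2009 - 1913          # 96 years of Federal Reserve management
remaining = 1.0 - loss       # a 1913 dollar buys about 5 cents' worth

annual_factor = remaining ** (1 / years)   # yearly purchasing-power multiplier
annual_inflation = 1 / annual_factor - 1   # equivalent yearly price rise
print(f"Average annual inflation: {annual_inflation:.2%}")  # about 3.17%
```

A seemingly modest 3% a year, compounded for a century, is all it takes to confiscate 95 cents of every saved dollar.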

Acute observers of the economic scene have been warning about the risks of such a system for decades—it came onto my personal radar well before there was a human bootprint on the Moon. But somehow, despite dollar crises, oil shocks, gold and silver bubble markets, the savings and loan collapse, dot.bomb, housing bubble, and all the rest, the wise money guys somehow kept all of the balls in the air—until they didn't. We are now in the early days of an extended period in which almost a century of bogus prosperity founded on paper (not to mention, new and improved pure zap electronic) money and debt which cannot ever be repaid will have to be unwound. This will be painful in the extreme, and the profligate borrowers who have been riding high whilst running up their credit cards will end up marked down, not only in the economic realm but in geopolitical power.

Nobody imagines today that it would be possible, as Alan Greenspan envisioned in the days he was a member of Ayn Rand's inner circle, to abolish the paper money machine and return to honest money (or, even better, as Hayek recommended, competing moneys, freely interchangeable in an open market). But then, nobody imagines that the present system could collapse, which it is in the process of doing. The US$ will continue its slide toward zero, perhaps with an inflection point in the second derivative as the consequences of “bailouts” and “stimuli” kick in. The Euro will first see risk premiums increase across sovereign debt issued by Eurozone nations, and then the weaker members drop out to avoid the collapse of their own economies. No currency union without political union has ever survived in the long term, and the Euro is no exception.

Will we finally come to our senses and abandon this statist paper in favour of the mellow glow of gold? This is devoutly to be wished, but I fear unlikely in my lifetime or even in those of the koi in my pond. As long as politicians can fiddle with the money in order to loot savers and investors to fund their patronage schemes and line their own pockets they will: it's been going on since Babylon, and it will probably go to the stars as we expand our dominion throughout the universe. One doesn't want to hope for total economic and societal collapse, but that appears to be the best bet for a return to honest and moral money. If that's your wish, I suppose you can be heartened that the present administration in the United States appears bent upon that outcome. Our other option is opting out with technology. We have the ability today to electronically implement Hayek's multiple currency system online. This has already been done by ventures such as e-gold, but The Man has, to date, effectively stomped upon them. It will probably take a prickly sovereign state player to make this work. Hello, Dubai!

Let me get back to this book. It is superb: read it and encourage all of your similarly-inclined friends to do the same. If they're coming in cold to these concepts, it may be a bit of a shock (“You mean, the government doesn't create money?”), but there's a bibliography at the end with three levels of reading lists to bring people up to speed. Long-term supporters of hard money will find this mostly a reinforcement of their views, but for those experiencing for the first time the consequences of rapidly depreciating dollars, this will be an eye-opening revelation of the ultimate cause, and the malignant institution which must be abolished to put an end to this most pernicious tax upon the most prudent of citizens.

 Permalink

Lyle, [Albert] Sparky and David Fisher. The Year I Owned the Yankees. New York: Bantam Books, [1990] 1991. ISBN 978-0-553-28692-2.
“Sparky” Lyle was one of the preeminent baseball relief pitchers of the 1970s. In 1977, he became the first American League reliever to win the Cy Young Award. In this book, due to one of those bizarre tax-swap transactions of the 1980s and '90s, George Steinbrenner, “The Boss”, was forced to divest the New York Yankees to an unrelated owner. Well, who could be more unrelated than Sparky Lyle? So when the telephone rings while he and his wife are watching “Jeopardy”, the last thing he imagines is that he's about to be offered a no-cash leveraged buy-out of the Yankees. Based upon his extensive business experience, 238 career saves, and his pioneering role in sitting naked on teammates' birthday cakes, he says, “Why not?” and the game, and season, are afoot.

None of this ever happened: the subtitle is “A Baseball Fantasy”, but wouldn't it have been delightful if it had? There's the pitcher with a bionic arm, cellular phone gloves so coaches can call fielders to position them for batters (if they don't get the answering machine), the clubhouse at Yankee Stadium enhanced with a Mood Room for those who wish to mellow out and a Frustration Room for those inclined to smash and break things after bruising losses, and the pitching coach who performs an exorcism and conducts a seance manifesting the spirit of Cy Young who counsels the Yankee pitching staff “Never hang a curve to Babe Ruth”. Thank you, Cy! Then there's the Japanese pitcher who can read minds and the reliever who reinvents himself as “Mr. Cool” and rides in from the bullpen on a Harley with the stadium PA system playing “Leader of the Pack”.

This is a romp which, while the very quintessence of fantasy baseball, also embodies a great deal of inside baseball wisdom. It's also eerily prophetic, as sabermetrics, as practised by Billy Beane's Oakland A's years after this book was remaindered, plays a major part in the plot. And never neglect the ultimate loyalty of a fan to their team!

Sparky becomes the owner with a vow to be the anti-Boss, but discovers as the season progresses that the realities of corporate baseball in the 1990s mandate many of the policies which caused Steinbrenner to be so detested. In the end, he comes to appreciate that any boss, to do his or her job, must be, in part, The Boss. I wish I'd read that before I discovered it for myself.

This is a great book to treat yourself to while the current World Series involving the Yankees is contested. The book is out of print, but used paperback copies in readable condition are abundant and reasonably priced. Special thanks to the reader of this chronicle who recommended this book!

 Permalink

November 2009

Malkin, Michelle. Culture of Corruption. Washington: Regnery Publishing, 2009. ISBN 978-1-59698-109-6.
This excellent book is essential to understanding what is presently going on in the United States. The author digs into the backgrounds and interconnections of the Obamas, the Clintons, their associates, the members of the Obama administration, and the web of shady organisations which surround them such as the Service Employees International Union (SEIU) and ACORN, and demonstrates, beyond a shadow of a doubt, that the United States is now ruled by a New Class of political operatives entirely distinct from the productive class which supports them and the ordinary citizens they purport to serve. Let me expand a bit on that term of art. In 1957, Milovan Đilas, Yugoslavian Communist revolutionary turned dissident, published a book titled The New Class, in which he described how, far from the egalitarian ideals of Marx and Engels, modern Communism had become captive to an entrenched political and bureaucratic class which used the power of the state to exploit its citizens. The New Class moved in different social and economic circles than the citizenry, and was moving in the direction of a hereditary aristocracy, grooming their children to take over from them.

In this book, we see a portrait of America's New Class, as exemplified by the Obama administration. (Although the focus is on Obama's people and the constituencies of the Democratic party, a similar investigation of a McCain administration probably wouldn't look much different: the special interests would differ, but not the character of the players. It's the political class as a whole and the system in which they operate which is corrupt, which is how mighty empires fall.) Reading through the biographies of the players, what is striking is that very few of them have ever worked a single day in the productive sector of the economy. They went from law school to government agency or taxpayer funded organisation to political office or to well-paid positions in a political organisation. They are members of a distinct political class which is parasitic upon the society, and whose interests do not align with the well-being of its citizens, who are coerced to support them.

And this, it seems to me, completes the picture of the most probable future trajectory of the United States. To some people Obama is the Messiah, and to others he is an American Lenin, but I think both of those views miss the essential point. He is, I concluded while reading this book, an American Juan Perón, a charismatic figure (with a powerful and ambitious wife) who champions the cause of the “little people” while amassing power and wealth to reward the cronies who keep the game going, looting the country (Argentina was the 10th wealthiest nation per capita in 1913) for the benefit of the ruling class, and setting the stage for economic devastation, political instability, and hyperinflation. It's pretty much the same game as Chicago under mayors Daley père and fils, but played out on a national scale. Adam Smith wrote, “There is a great deal of ruin in a nation”, but as demonstrated here, there is a great deal of ruination in the New Class Obama has installed in the Executive branch in Washington.

As the experience of Argentina during the Perón era and afterward demonstrates, it is possible to inflict structural damage on a society which cannot be reversed by an election, or even a coup or revolution. Once the productive class is pauperised or driven into exile and the citizenry made dependent upon the state, a new equilibrium is reached which, while stable, drastically reduces national prosperity and the standard of living of the populace. But, if the game is played correctly, as despots around the world have figured out over millennia, it can enrich the ruling class, the New Class, beyond their dreams of avarice (well, not really, because those folks are really good when it comes to dreaming of avarice), all the time they're deploring the “greed” of those who oppose them and champion the cause of the “downtrodden” ground beneath their own boots.

To quote a politician who figures prominently in this book, “let me be clear”: the present book is a straightforward investigation of individuals staffing the Obama administration and the organisations associated with them, documented in extensive end notes, many of which cite sources accessible online. All of the interpretation of this in terms of a New Class is entirely my own and should not be attributed to this book or its author.

 Permalink

Flynn, Vince. Term Limits. New York: Pocket Books, 1997. ISBN 978-0-671-02318-8.
This was the author's first novel, which he initially self-published and marketed through bookshops in his native Minnesota after failing to place it with any of the major New York publishers. There have to be a lot of editors (What's the collective noun for a bunch of editors? A rejection slip of editors? A red pencil of editors?) who wrote the dozens of rejection letters he received, as Flynn's books now routinely make the New York Times bestseller list and have sold more than ten million copies worldwide. Unlike many writers who take a number of books, published or unpublished, to master their craft (Jerry Pournelle counsels aspiring writers to expect to throw away their first million words), Flynn showed himself to be a grandmaster at the art of the thriller in his very first outing. In fact, I found this book to be even more of a compulsive page-turner than the subsequent Mitch Rapp novels (but that's to be expected, since as the series progresses there's more character development and scene-setting)—the trade paperback edition is 612 pages long and I finished it in four days.

The story takes place in the same world as the Mitch Rapp (warning—the article at this link contains minor spoilers) series, and introduces many of the characters of those books such as Thomas Stansfield, Irene Kennedy, Jack Warch, Scott Coleman, and Congressman Michael O'Rourke, but Rapp makes no appearance in it. The premise is simple: a group of retired Special Forces operatives who have spent their careers making foreign enemies of their country pay for their misdeeds concludes that the most pernicious enemies of the republic are the venal politicians spending the country into bankruptcy and ignoring the threats to its existence and decides to take, shall we say, direct action, much along the lines of Unintended Consequences (December 2003), but as a pure thriller without the political baggage of that novel.

Flynn's attention to detail is evident in this first novel, although there are a few lapses. This is to be expected, as his “brain trust” of fan/insiders had yet to discover his work and lend their expertise to vetting the gnarly details. For example, on p. 552, a KH-11 satellite is said to be “on station” and remains so for an extended period. KH-11s are in low Earth orbit, and cannot be on station anywhere. And they're operated by the National Reconnaissance Office, not the National Security Agency. Flynn seems to be very fond of the word “transponder”, and uses it in contexts where it's clear a receiver is intended. These and other minor goofs detract in no way from the story, which grips you and doesn't let go until the last page. Although this book is not at all a prerequisite to enjoying the Mitch Rapp series, in retrospect I wish I'd read it before Transfer of Power (April 2009) to better appreciate the history which formed the relationships among the secondary characters.

 Permalink

Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. ISBN 978-0-06-147278-7.
At last we have a book which squarely takes on the central puzzle of the supposedly blind, purposeless universe to which so many scientists presently ascribe the origin of life on Earth. There's hardly any point debating evolution: it can be demonstrated in the laboratory. (Some may argue that Spiegelman's monster is an example of devolution, but recall that evolutionists must obligately eschew teleology, so selection in the direction of simplicity and rapid replication is perfectly valid, and evidenced by any number of examples in bacteria.)

No, the puzzle—indeed, the enigma—is the origin of the first replicator. Once you have a self-replicating organism and a means of variation (of which many are known to exist), natural selection can kick in and, driven by the environment and eventually competition with other organisms, select for more complexity when it confers an adaptive advantage. But how did the first replicator come to be?

In the time of Darwin, the great puzzle of biology was the origin of the apparently designed structures in organisms and the diversity of life, not the origin of the first cell. For much of Darwin's life, spontaneous generation was a respectable scientific theory, and the cell was thought to be an amorphous globule of a substance dubbed “protoplasm”, which one could imagine as originating at random through chemical reactions among naturally occurring precursor molecules.

The molecular biology revolution in the latter half of the twentieth century put the focus squarely upon the origin of life. In particular, the discovery of the extraordinarily complex digital code of the genome in DNA, the supremely complex nanomachinery of gene expression (more than a hundred proteins are involved in the translation of DNA to proteins, even in the simplest of bacteria), and the seemingly intractable chicken and egg problem posed by the fact that DNA cannot replicate its information without the proteins of the transcription mechanism, while those proteins cannot be assembled without the precise sequence information provided in the DNA, decisively excluded all scenarios for the origin of life through random chemical reactions in a “warm pond”.

As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.

So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
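
The “500 bits” figure can be reconstructed from a Dembski-style universal probability bound; the particle count and time figures below are rough conventional values, my assumptions rather than numbers taken from the book under review.

```python
import math

# Rough reconstruction of the ~500-bit bound on blind search (conventional
# cosmological estimates, used here as illustrative assumptions).
particles = 10 ** 80          # elementary particles in the observable universe
events_per_sec = 10 ** 45     # roughly one event per Planck time
seconds = 10 ** 25            # a very generous upper bound on available time

total_trials = particles * events_per_sec * seconds   # about 10^150
bits = math.log2(total_trials)
print(f"Maximum blind search: about {bits:.0f} bits")  # about 498 bits

# Compare with the minimal genome cited above (2 bits per nucleotide):
genome_bp = 582_970
genome_bits = genome_bp * 2
print(f"Minimal genome: {genome_bits:,} bits")         # 1,165,940 bits
```

Even granting every particle a trial every Planck time for the life of the universe, the search tops out around 2^500 possibilities, while the minimal genome specifies over a million bits.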

What do I mean by “functional information”? Just information which has a meaning expressed in a separate domain than its raw components. For example, the information theoretic entropy of a typical mountainside is as great (and, in fact, probably greater) than that of Mount Rushmore, but the latter encodes functional (or specified) information from a separate domain: that of representations of U.S. presidents known from other sources. Similarly, a DNA sequence which encodes a protein which folds into a form which performs a specific enzymatic function is vanishingly improbable to have originated by chance, and this has been demonstrated by experiment. Without the enzymes in the cell, in fact, even if you had a primordial soup containing all of the ingredients of functional proteins, they would just cross-link into non-functional goo, as nothing would prevent their side chains from bonding to one another. Biochemists know this, which is why they're so sceptical of the glib theories of physicists and computer scientists who expound upon the origin of life.
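
The distinction between raw Shannon information and specified function can be made concrete with a small sketch (my example, not the book's): a random string can measure as “rich” in entropy as a meaningful one, yet only the latter carries meaning in a separate domain.

```python
from collections import Counter
import math
import random

def entropy_per_char(s):
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

meaningful = "four score and seven years ago our fathers brought forth"
random.seed(1)
gibberish = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                    for _ in meaningful)

print(f"meaningful: {entropy_per_char(meaningful):.2f} bits/char")
print(f"gibberish:  {entropy_per_char(gibberish):.2f} bits/char")
```

The gibberish typically scores at least as high on this measure, which is why design detection requires specification in an independent domain, not mere improbability.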

Ever since Lyell, most scientists have embraced the principle of uniformitarianism, which holds that any phenomenon we observe in nature today must have been produced by causes we observe in action at the present time. Well, at the present time, we observe many instances of complex, structured, functional encoded data with information content in excess of 500 bits: books, music, sculpture, paintings, integrated circuits, machines, and even this book review. And to what cause would the doctrinaire uniformitarian attribute all of this complex, structured information? Well, obviously, the action of an intelligent agent: intelligent design.

Once you learn to recognise it, the signatures are relatively easy to distinguish. When you have a large amount of Shannon information, but no function (for example, the contour of a natural mountainside, or a random bit string generated by radioactive decay), then chance is the probable cause. When you have great regularity (the orbits of planets, or the behaviour of elementary particles), then natural law is likely to govern. As Jacques Monod observed, most processes in nature can be attributed to Chance and Necessity, but there remain those which do not, with which archæologists, anthropologists, and forensic scientists, among others, deal every day.

Beyond the dichotomy of chance and necessity (or a linear combination of the two), there's the trichotomy which admits intelligent design as a cause. An Egyptologist who argued that plate tectonics was responsible for the Great Sphinx of Giza would be laughed out of the profession. And yet, when those who observe information content in the minimal self-replicating organism hundreds of orders of magnitude less likely than the Sphinx having been extruded from a volcanic vent infer evidence of intelligent design of that first replicator, they are derided and excluded from scientific discourse.

What is going on here? I would suggest there is a dogma being enforced with the same kind of rigour as the Darwinists impute to their fundamentalist opponents. In every single instance in the known universe, with the sole exception of the genome of the minimal self-replicating cell and the protein machinery which allows it to replicate, when we see 500 bits or more of functional complexity, we attribute it to the action of an intelligent agent. You aren't likely to see a CSI episode where one of the taxpayer-funded sleuths attributes the murder to a gun spontaneously assembling due to quantum fluctuations and shooting “the vic” through the heart. And yet such a Boltzmann gun is thousands of orders of magnitude more probable than a minimal genetic code and transcription apparatus assembling by chance in proximity to one another in order to reproduce.

The hearts of opponents of intelligent design go all pitty-pat because they consider it (gasp) religion. Nothing could be more absurd. Francis Crick (co-discoverer of the structure of DNA) concluded that the origin of life on Earth was sufficiently improbable that the best hypothesis was that it had been seeded here deliberately by intelligent alien lifeforms. These creatures, whatever their own origins, would have engineered their life spores to best take root in promising environments, and hence we shouldn't be surprised to discover our ancestors to have been optimised for our own environment. One possibility (of which I am fond) is that our form of life is the present one in a “chain of life” which began much closer to the Big Bang. One can imagine lifeforms, originating in the quark-gluon plasma phase or in the radiation-dominated universe, who, seeing the end of their dominion approaching, planted the seeds of the next form of life among their embers. Dyson, Tipler, and others have envisioned the distant descendants of humanity passing on the baton of life to other lifeforms adapted to the universe of the far future. Apply the Copernican principle: what about our predecessors?

Or consider my own favourite hypothesis of origin, that we're living in a simulation. I like to think of our Creator as a 13-year-old superbeing who designed our universe as a science fair project. I have written before about the clear signs accessible to experiment which might falsify this hypothesis but which, so far, are entirely consistent with it. In addition, I've written about how the multiverse model is less parsimonious than the design hypothesis.

In addition to the arguments in that paper, I would suggest that evidence we're living in a simulation is that we find, living within it, complex structured information which we cannot explain as having originated by the physical processes we discover within the simulation. In other words, we find there has been input of information by the intelligent designer of the simulation, either explicitly as genetic information, or implicitly in terms of fine-tuning of free parameters of the simulated universe so as to favour the evolution of complexity. If you were creating such a simulation (or designing a video game), wouldn't you fine tune such parameters and pre-specify such information in order to make it “interesting”?

Look at it this way. Imagine you were a sentient character in a video game. You would observe that the “game physics” of your universe was finely tuned both in the interest of computability and to maximise the complexity of the interactions of the simulated objects. You would discover that your own complexity and that of the agents with which you interact could not be explained by the regularities of the simulation and the laws you'd deduced from them, and hence appeared to have been put in from the outside by an intelligent designer bent on winning the science fair by making the most interesting simulation. Being intensely rationalistic, you'd dismiss the anecdotal evidence for the occasional miracle as the pimple-faced Creator tweaked this or that detail to make things more interesting and thus justify an A in Miss O'Neill's Creative Cosmology class. And you'd be wrong.

Once we have discovered we're living in a simulation and inferred, from design arguments, that we're far from the top level, all of this will be obvious, but hey, if you're reading it here for the first time, welcome to the revelation of what's going on. Opponents of intelligent design claim it's “not science” or “not testable”. Poppycock—here's a science fiction story about how conclusive evidence for design might be discovered. Heck, you can go looking for it yourself!

This is an essential book for anybody interested in the origin of life on Earth. The author is a supporter of the hypothesis of intelligent design (as am I, although I doubt we would agree on any of the details). Regardless of what you think about the issue of origins, if you're interested in the question, you really need to know the biochemical details discussed here, and the combinatorial impossibility of chance assembly of even a single functionally folded protein in our universe in the time since the Big Bang.

I challenge you to read this and reject the hypothesis of intelligent design. If you reject it, then show how your alternative is more probable. I fully accept the hypothesis of intelligent design and have since I concluded more than a decade ago it's more probable than not that we're living in a simulation. We owe our existence to the Intelligent Designer who made us to be amusing. Let's hope she wins the Science Fair and doesn't turn it off!

 Permalink

December 2009

Bracken, Matthew. Enemies Foreign and Domestic. Orange Park, FL: Steelcutter Publishing, [2003] 2008. ISBN 978-0-9728310-1-7.
This is one of those books, like John Ross's Unintended Consequences and Vince Flynn's Term Limits in which a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism committed by the Federal Government of the United States finally pushes liberty-loving citizens to exercise “their right, … their duty, to throw off such Government” even if doing so requires the tree of liberty to be refreshed “with the blood of patriots and tyrants”.

In this novel a massacre at a football stadium which occurs under highly dubious circumstances serves as the pretext for a draconian ban on semiautomatic weapons, with immediate confiscation and harsh penalties for non-compliance. This is a step too far for a diverse collection of individuals who believe the Second Amendment to be the ultimate bastion against tyranny, and a government which abridges it to be illegitimate by that very act. Individually, they begin to take action, and what amounts to a low grade civil war begins to break out in the Tidewater region of Virginia, with government provocateurs from a rogue federal agency of jackbooted thugs (as opposed to the jackbooted thugs of other agencies which are “just following orders”) perpetrating their own atrocities, which are then used to justify even more restrictions on the individual right to bear arms, including a ban on telescopic sights (dubbed “sniper rifles”), transportation of weapons in automobiles, and random vehicle stop checkpoints searching for and confiscating firearms.

As the situation spirals increasingly out of control, entrepreneurial jackbooted thugs exploit it to gain power and funding for themselves, and the individuals resisting them come into contact with one another and begin to put the pieces together and understand who is responsible and why a federal law enforcement agency is committing domestic terrorism. Then it's payback time.

This novel is just superbly written. It contains a wealth of detail, all of it carefully researched and accurate. I only noted a couple of typos and factual goofs. The characters are complex, realistically flawed, and develop as the story unfolds. This is a thriller, not a political tract, and it will keep you turning the pages until the very end, while thinking about what you would do when liberty is on the line.

Excerpts from the book are available online at the author's Web site.

 Permalink

Magueijo, João. A Brilliant Darkness. New York: Basic Books, 2009. ISBN 978-0-465-00903-9.
Ettore Majorana is one of the most enigmatic figures in twentieth century physics. The son of a wealthy Sicilian family and a domineering mother, he was a mathematical prodigy who, while studying for a doctorate in engineering, was recruited to join Enrico Fermi's laboratory: the “Via Panisperna boys”. (Can't read that without seeing “panspermia”? Me neither.) Majorana switched to physics, and received his doctorate at the age of 22.

At Fermi's lab, he almost immediately became known as the person who could quickly solve intractable mathematical problems others struggled with for weeks. He also acquired a reputation for working on whatever interested him, declining to collaborate with others. Further, he would often investigate a topic to his own satisfaction, speak of his conclusions to his colleagues, but never get around to writing a formal article for publication—he seemed almost totally motivated by satisfying his own intellectual curiosity and not at all by receiving credit for his work. This infuriated his fiercely competitive boss Fermi, who saw his institute scooped on multiple occasions by others who independently discovered and published work Majorana had done and left to languish in his desk drawer or discarded as being “too obvious to publish”. Still, Fermi regarded Majorana as one of those wild talents who appear upon rare occasions in the history of science. He said,

There are many categories of scientists, people of second and third rank, who do their best, but do not go very far. There are also people of first class, who make great discoveries, which are of capital importance for the development of science. But then there are the geniuses, like Galileo and Newton. Well, Ettore was one of these.

In 1933, Majorana visited Werner Heisenberg in Leipzig and quickly became a close friend of this physicist who was, in most personal traits, his polar opposite. Afterward, he returned to Rome and flip-flopped from his extroversion in the company of Heisenberg to the life of a recluse, rarely leaving his bedroom in the family mansion for almost four years. Then something happened, and he jumped into the competition for the position of full professor at the University of Naples, bypassing the requirement for an examination due to his “exceptional merit”. He emerged from his reclusion, accepted the position, and launched into his teaching career, albeit giving lectures at a level which his students often found bewildering.

Then, on March 26th, 1938, he boarded a ship in Palermo, Sicily, bound for Naples and was never seen again. Before his departure he had posted enigmatic letters to his employer and family, sent a telegram, and left a further letter in his hotel room which some interpreted as suicide notes, but which forensic scientists who have read thousands of suicide notes say resemble none they've ever seen (but then, would a note by a Galileo or Newton read like that of the run of the mill suicide?). This event set in motion investigation and speculation which continues to this very day. Majorana was said to have withdrawn a large sum of money from his bank a few days before: is this plausible for one bent on self-annihilation (we'll get back to that infra)? Based on his recent interest in religion and reports of his having approached religious communities to join them, members of his family spent a year following up reports that he'd joined a monastery; despite “sightings”, none of these leads panned out. Years later, multiple credible sources with nothing apparently to gain reported that Majorana had been seen on numerous occasions in Argentina, and, abandoning physics (which he had said “was on the wrong path” before his disappearance), pursued a career as an engineer.

This only scratches the surface of the legends which have grown up around Majorana. His disappearance, occurring after nuclear fission had already been produced in Fermi's laboratory, but none of the “boys” had yet realised what they'd seen, spawns speculation that Majorana, as he often did, figured it out, worked out the implications, spoke of it to someone, and was kidnapped by the Germans (maybe he mentioned it to his friend Heisenberg), the Americans, or the Soviets. There is an Italian comic book in which Majorana is abducted by Americans, spirited off to Los Alamos to work on the Manhattan Project, only to be abducted again (to his great relief) by aliens in a flying saucer. Nobody knows—this is just one of the many mysteries bearing the name Majorana.

Today, Majorana is best known for his work on the neutrino. He responded to Paul Dirac's theory of the neutrino (which he believed unnecessarily complicated and unphysical) with his own, in which, as opposed to there being neutrinos and antineutrinos, the neutrino is its own antiparticle and hence neutrinos of the same flavour can annihilate one another. At the time these theories were proposed the neutrino had not been detected, nor would it be for twenty years. When the existence of the neutrino was confirmed (although few doubted its existence by the time Reines and Cowan detected it in 1956), few believed it would ever be possible to distinguish the Dirac and Majorana theories of the neutrino, because that particle was almost universally believed to be massless. But then the “scientific consensus” isn't always the way to bet.

Starting with solar neutrino experiments in the 1960s, and continuing to the present day, it became clear that neutrinos did have mass, albeit very little compared to the electron. This meant that the distinction between the Dirac and Majorana theories of the neutrino was accessible to experiment, and could, at least in principle, be resolved. “At least in principle”: what a clarion call to the bleeding edge experimentalist! If the neutrino is a Majorana particle, as opposed to a Dirac particle, then neutrinoless double beta decay should occur, and we'll know whether Majorana's model, proposed more than seven decades ago, was correct. I wish there'd been more discussion of the open controversy over experiments which claim a 6σ signal for neutrinoless double beta decay in 76Ge, but then one doesn't want to date one's book with matters actively disputed.

To the book: this may be the first exemplar of a new genre I'll dub “gonzo scientific biography”. Like the “new journalism” of the 1960s and '70s, this is as much about the author as the subject; the author figures as a central character in the narrative, whether transcribing his queries in pidgin Italian to the Majorana family:

“Signora wifed a brother of Ettore, Luciano?”
“What age did signora owned at that time”
“But he was olded fifty years!”
“But in end he husbanded you.”

Besides humourously trampling on the language of Dante, the author employs profanity as a superlative as do so many “new journalists”. I find this unseemly in a scientific biography of an ascetic, deeply-conflicted individual who spent most of his short life in a search for the truth and, if he erred, erred always on the side of propriety, self-denial, and commitment to dignity of all people.

Should you read this? Well, if you've come this far, of course you should! This is an excellent, albeit flawed, biography of a singular, albeit flawed, genius whose intellectual legacy motivates massive experiments conducted deep underground and in the seas today. Suppose a neutrinoless double beta decay experiment should confirm the Majorana theory? Should he receive the Nobel prize for it? On the merits, absolutely: many physics Nobels have been awarded for far less, and let's not talk about the “soft Nobels”. But under the rules a Nobel prize can't be awarded posthumously. Which then compels one to ask, “Is Ettore dead?” Well, sure, that's the way to bet: he was born in 1906 and while many people have lived longer, most don't. But how can you be certain? I'd say, should an experiment for neutrinoless double beta decay prove conclusive, award him the prize and see if he shows up to accept it. Then we'll all know for sure.

Heck, if he did, it'd probably make Drudge.

 Permalink

Flynn, Vince. Memorial Day. New York: Pocket Books, 2004. ISBN 978-0-7434-5398-1.
In this, the fifth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series the author returns from the more introspective view of the conflicting loyalties and priorities of the CIA's most effective loose cannon in previous novels to pen a rip-roaring edge-of-the-seat thriller which will keep you turning pages until the very last. I packed this as an “airplane book” and devoured the whole 574 page brick in less than 48 hours after I opened it on the train to the airport. Flynn is a grand master of the “just one more chapter before I go to sleep” thriller, and this is the most compelling of his novels I've read to date.

Without giving away any more than the back cover blurb, the premise is a nuclear terrorist attack on Washington, and the details of the detection of such a threat and the response to it are so precise that a U.S. government inquiry was launched into how Flynn got his information (answer—he has lots of fans in the alphabet soup agencies within a megaton or so of the Reflecting Pool). While the earlier novels in the Mitch Rapp chronicle are best read in order, you can pick this one up and enjoy it stand-alone: sure, you'll miss some of the nuances of the backgrounds and interactions among the characters, but the focus here is on crisis, mystery, taking expedient action to prevent a catastrophic outcome, and the tension between those committed to defending their nation and those committed to protecting the liberties which make that nation worthy of being defended.

As with most novels in which nuclear terrorism figures, I have some quibbles with the details, but I'm not going to natter upon them within a spoiler warning block because they made absolutely no difference to my enjoyment of this yarn. This is a thriller by a master of the genre at the height of his powers, which has not been dated in any way by the passing of years since its publication. Enjoy!

 Permalink

Codevilla, Angelo. The Character of Nations. New York: Basic Books, [1997] 2009. ISBN 978-0-465-02800-9.
As George Will famously observed, “statecraft is soulcraft”. This book, drawing on examples from antiquity to the present day, and from cultures all around the world, explores how the character, culture, and morals of a people shape the political institutions they create and how, in turn, those institutions cause the character of those living under them to evolve over time. This feedback loop provides important insights into the rise and fall of nations and empires, and is acutely important in an age where the all-encompassing administrative state appears triumphant in developed nations at the very time it reduces its citizens to subservient, ovine subjects who seek advancement not through productive work but by seeking favours from those in power, which in turn imperils the wealth creation upon which the state preys.

This has, of course, been the state of affairs in the vast majority of human societies over the long span of human history but, as the author notes, for most of that history the intrusiveness of authority upon the autonomy of the individual was limited by constraints on transportation, communication, and organisation, so the scope of effective control of even the most despotic ruler rarely extended far beyond the seat of power. The framers of the U.S. Constitution were deeply concerned whether self-government of any form could function on a scale beyond that of a city-state: there were no historical precedents for such a polity enduring beyond a generation or two. Thomas Jefferson and others who believed such a government could be established and survive in America based their optimism on the character of the American people: their independence, self-reliance, morality grounded in deep religious convictions, strong families, and willingness to take up arms to defend their liberty would guide them in building a government which would reflect and promote those foundations.

Indeed, for a century and a half, despite a disastrous Civil War and innumerable challenges and crises, the character of the U.S. continued to embody that present at the founding, and millions of immigrants from cultures fundamentally different from those of the founders were readily assimilated into an ever-evolving culture which nonetheless preserved its essential character. For much of American history, people in the U.S. were citizens in the classic sense of the word: participants in self-government, mostly at a local level, and in turn accepting the governance of their fellow citizens; living lives centred around family, faith, and work, with public affairs rarely intruding directly into their lives, yet willing to come to the defence of the nation with their very lives when it was threatened.

How quaint that all seems today. Statecraft is soulcraft, and the author illustrates with numerous examples spanning millennia how even the best-intentioned changes in the relationship of the individual to the state can, over a generation or two, fundamentally and often irreversibly alter the relationship between government and the governed, transforming the character of the nation—the nature of its population, into something very different which will, in turn, summon forth a different kind of government. To be specific, and to cite the case most common in the last century, there is a pernicious positive feedback loop which is set into motion by the enactment of even the most apparently benign social welfare programs. Each program creates a dependent client class, whose political goals naturally become to increase their benefits at the expense of the productive classes taxed to fund them. The dependent classes become reliable voting blocs for politicians who support the programs that benefit them, which motivates those politicians to expand benefits and thus grow the dependent classes. Eventually, indeed almost inevitably, the society moves toward a tipping point where net taxpayers are outvoted by tax eaters, after which the business of the society is no longer creation of wealth but rather a zero sum competition for the proceeds of redistribution by the state.

Note that the client classes in a mature redistributive state go far beyond the “poor, weak, and infirm” the politicians who promote such programs purport to champion. They include defence contractors, financial institutions dependent upon government loan guarantees and bailouts, nationalised companies, subsidised industries and commodity prices, public employee unions, well-connected lobbying and law firms, and the swarm of parasites that darken the sky above any legislature which expends the public patrimony at its sole discretion, and of course the relatives and supporters of the politicians and bureaucrats dispensing favours from the public purse.

The author distinguishes “the nation” (the people who live in a country), “the regime” (its governing institutions), and “the establishment” (the ruling class, including politicians but also media, academia, and opinion makers). When these three bodies are largely aligned, the character of the nation will be reflected in its institutions and those institutions will reinforce that character. In many circumstances, for example despotic societies, there has never been an alignment and this has often been considered the natural order of things: rulers and ruled. It is the rarest of exceptions when this triple alignment occurs, and the sad lesson of history is that even when it does, it is likely to be a transient phenomenon: we are doomed!

This is, indeed, a deeply pessimistic view of the political landscape, perhaps better read on the beach in mid-summer than by the abbreviated and wan daylight of a northern hemisphere winter solstice. The author examines in detail how seventy years of communist rule transformed the character of the Soviet population in such a manner that the emergence of the authoritarian Russian gangster state was a near-inevitable consequence. Perhaps had double-domed “defence intellectuals” read this book when it was originally published in 1997 (the present edition is revised and updated based upon subsequent events), ill-conceived attempts at “nation building” might have been avoided and many lives and vast treasure not squandered in such futile endeavours.

 Permalink

  2010  

January 2010

Taheri, Amir. The Persian Night. New York: Encounter Books, 2009. ISBN 978-1-59403-240-0.
With Iran continuing its march toward nuclear weapons and long range missiles unimpeded by an increasingly feckless West, while simultaneously domestic discontent over the tyranny of the mullahs, economic stagnation, and stolen elections are erupting into bloody violence on the streets of major cities, this book provides a timely look at the history, institutions, personalities, and strategy of what the author dubs the “triple oxymoron”: the Islamic Republic of Iran which, he argues, espouses a bizarre flavour of Islam which is not only a heretical anathema to the Sunni majority, but also at variance with the mainstream Shiite beliefs which predominated in Iran prior to Khomeini's takeover; anything but a republic in any usual sense of the word; and motivated by a global messianic vision decoupled from the traditional interests of Iran as a nation state.

Khomeini's success in wresting control away from the ailing Shah without a protracted revolutionary struggle was made possible by support from “useful idiots” mostly on the political left, who saw Khomeini's appeal to the rural population as essential to gaining power and planned to shove him aside afterward. Khomeini, however, once in power, proved far more ruthless than his coalition partners, summarily putting to death all who opposed him, including many mullahs who dissented from his eccentric version of Islam.

Iran is often described as a theocracy, but apart from the fact that the all-powerful Supreme Guide is nominally a religious figure, the organisation of the government and distribution of power are very much along the lines of a fascist state. In fact, there is almost a perfect parallel between the institutions of Nazi Germany and those of Iran. In Germany, Hitler created duplicate party and state centres of power throughout the government and economy and arranged them in such a way as to ensure that decisions could not be made without his personal adjudication of turf battles between the two. In Iran, there are the revolutionary institutions and those of the state, operating side by side, often with conflicting agendas, with only the Supreme Guide empowered to resolve disputes. Just as Hitler set up the SS as an armed counterpoise to the Wehrmacht, Khomeini created the Islamic Revolutionary Guard Corps as the revolution's independent armed branch to parallel the state's armed forces.

Thus, the author stresses, in dealing with Iran, it is essential to be sure whether you're engaging the revolution or the nation state: over the history of the Islamic Republic, power has shifted back and forth between the two sets of institutions, and with it Iran's interaction with other players on the world stage. Iran as a nation state generally strives to become a regional superpower: in effect, re-establishing the Persian Empire from the Mediterranean to the Caspian Sea through vassal regimes. To that end it seeks weapons, allies, and economic influence in a fairly conventional manner. Iran the Islamic revolutionary movement, on the other hand, works to establish global Islamic rule and the return of the Twelfth Imam: an Islamic Second Coming which Khomeini's acolytes fervently believe is imminent. Because they brook no deviation from their creed, they consider Sunni Moslems, even the strict Wahabi sect of Saudi Arabia, as enemies which must be compelled to submit to Khomeini's brand of Islam.

Iran's troubled relationship with the United States cannot be understood without grasping the distinction between state and revolution. To the revolution, the U.S. is the Great Satan spewing foul corruption around the world, which good Muslims should curse, chanting “death to America” before every sura of the Koran. Iran the nation state, on the other hand, only wants Washington to stay out of its way as it becomes a regional power which, after all, was pretty much the state of affairs under the Shah, with the U.S. his predominant arms supplier. But the U.S. could never adopt such a strategy as long as the revolution has a hand in policy, nor will Iran's neighbours, terrified of its regional ambitions, encourage the U.S. to keep its hands off.

There is a great deal of conventional wisdom about Iran which is dead wrong, and this book dispels much of it. The supposed “CIA coup” against Mosaddegh in 1953, for which two U.S. presidents have since apologised, proves to have been nothing of the sort (although the CIA did, on occasion, claim credit for it as an example of a rare success amidst decades of blundering), with the U.S. largely supporting the nationalisation of the Iranian oil fields against fierce opposition from Britain. But cluelessness about Iran has never been in short supply among U.S. politicians. Speaking at the World Economic Forum, Bill Clinton said:

Iran today is, in a sense, the only country where progressive ideas enjoy a vast constituency. It is there that the ideas I subscribe to are defended by a majority.

Lest this be deemed a slip of the tongue due to intoxication by the heady Alpine air of Davos, a few days later on U.S. television he doubled down with:

[Iran is] the only one with elections, including the United States, including Israel, including you name it, where the liberals, or the progressives, have won two-thirds to 70 percent of the vote in six elections…. In every single election, the guys I identify with got two-thirds to 70 percent of the vote. There is no other country in the world I can say that about, certainly not my own.

I suppose if the U.S. had such an overwhelming “progressive” majority, it too would adopt “liberal” policies such as hanging homosexuals from cranes until they suffocate and stoning rape victims to death. But perhaps Clinton was thinking of Iran's customs of polygamy and “temporary marriage”.

Iran is a great nation which has been a major force on the world stage since antiquity, with a deep cultural heritage and vigorous population who, in exile from poor governance in the homeland, have risen to the top of demanding professions all around the world. Today (as well as much of the last century) Iran is saddled with a regime which squanders its patrimony on a messianic dream which runs the very real risk of igniting a catastrophic conflict in the Middle East. The author argues that the only viable option is regime change, and that all actions taken by other powers should have this as the ultimate goal. Does that mean going to war with Iran? Of course not—the very fact that the people of Iran are already pushing back against the mullahs is evidence they perceive how illegitimate and destructive the present regime is. It may even make sense to engage with institutions of the Iranian state, which will be the enduring foundation of the nation after the mullahs are sent packing, but it is essential that the Iranian people be sent the message that the forces of civilisation are on their side against those who oppress them, and to use the communication tools of this new century (Which country has the most bloggers? The U.S. Number two? Iran.) to bypass the repressive regime and directly address the people who are its victims.

Hey, I spent two weeks in Iran a decade ago and didn't pick up more than a tiny fraction of the insight available here. Events in Iran are soon to become a focus of world attention to an extent they haven't been for the last three decades. Read this book to understand how Iran figures in the contemporary Great Game, and how revolutionary change may soon confront the Islamic Republic.

 Permalink

Bryson, Bill. The Life and Times of the Thunderbolt Kid. London: Black Swan, 2007. ISBN 978-0-552-77254-9.
What could be better than growing up in the United States in the 1950s? Well, perhaps being a kid with super powers as the American dream reached its apogee and before the madness started! In this book, humorist, travel writer, and science populariser extraordinaire Bill Bryson provides a memoir of his childhood (and, to a lesser extent, coming of age) in Des Moines, Iowa in the 1950s and '60s. It is a thoroughly engaging and charming narrative which, if you were a kid there and then, will bring back a flood of fond memories (as well as some acutely painful ones) and, if you weren't, will help you appreciate, as the author closes the book, “What a wonderful world it was. We won't see its like again, I'm afraid.”

The 1950s were the golden age of comic books, and whilst shopping at the local supermarket, Bryson's mother would drop him in the (unsupervised) Kiddie Corral where he and other offspring could indulge for free to their heart's content. It's only natural a red-blooded Iowan boy would discover himself to be a superhero, The Thunderbolt Kid, endowed with ThunderVision, which enabled his withering gaze to vapourise morons. Regrettably, the power seemed to lack permanence, and the morons so dispersed into particles of the luminiferous æther had a tedious way of reassembling themselves and further vexing our hero and his long-suffering schoolmates. But still, more work for The Thunderbolt Kid!

This was a magic time in the United States—when prosperity not only returned after depression and war, but exploded to such an extent that mean family income more than doubled in the 1950s while most women still remained at home raising their families. What had been considered luxuries just a few years before: refrigerators and freezers, cars and even second cars, single family homes, air conditioning, television, all became commonplace (although kids would still gather in the yard of the neighbourhood plutocrat to squint through his window at the wonder of colour TV and chuckle at why he paid so much for it).

Although the transformation of the U.S. from an agrarian society to a predominantly urban and industrial nation was well underway, most families were no more than one generation removed from the land, and Bryson recounts his visits to his grandparents' farm which recall what was lost and gained as that pillar of American society went into eclipse.

There are relatively few factual errors, but from time to time Bryson's narrative swallows counterfactual left-wing conventional wisdom about the Fifties. For example, writing about atomic bomb testing:

Altogether between 1946 and 1962, the United States detonated just over a thousand nuclear warheads, including some three hundred in the open air, hurling numberless tons of radioactive dust into the atmosphere. The USSR, China, Britain, and France detonated scores more.

Sigh…where do we start? Well, the obvious subtext is that the U.S. started the arms race and that other nuclear powers responded in a feeble manner. In fact, the U.S. conducted a total of 1030 nuclear tests through the suspension of testing in 1992, of which 215 were detonated in the atmosphere; the balance were conducted underground with no release of radioactivity. The Soviet Union (USSR) did, indeed, conduct “scores” of tests, to be precise 35.75 score, for a total of 715 tests, with 219 in the atmosphere—more than the U.S.—including Tsar Bomba, with a yield of 50 megatons. “Scores” indeed—surely the arms race was entirely at the instigation of the U.S.

If you grew up in the U.S. in the 1950s, or wish you did, you'll want to read this book. I had totally forgotten the radioactive toilets you had to pay to use but kids could wiggle under the door to bask in their actinic glare, and the glories of automobiles you could understand piece by piece, which were your ticket to exploring a broad continent where every town and every city was completely different: not just another configuration of the same franchises and strip malls (and yet recall how exciting it was when they first arrived: “We're finally part of the great national adventure!”).

The 1950s, when privation gave way to prosperity yet Leviathan had not yet supplanted family, community, and civil society, were a utopia in which to be a kid (although, having been there, then, I'd have deemed it boring; but if I'd been confined inside as present-day embryonic taxpayers in safetyland are, I'd probably have blown things up. Oh wait—Willoughby already did that, twelve hours too early!). If you grew up in the '50s, enjoy spending a few pleasant hours back there; if you're a parent of the baby boomers, exult in the childhood and opportunities you entrusted to them. And if you're a parent of a child in this constrained century? Seek to give your child the unbounded opportunities and unsupervised freedom to explore the world which Bryson and this humble scribbler experienced as we grew up.

Vapourising morons with ThunderVision—we need you more than ever, Thunderbolt Kid!

A U.S. edition is available.

 Permalink

February 2010

Churchill, Winston S. The World Crisis. London: Penguin, [1923–1931, 2005] 2007. ISBN 978-0-14-144205-1.
Churchill's history of the Great War (what we now call World War I) was published in five volumes between 1923 and 1931. The present volume is an abridgement of the first four volumes, which appeared simultaneously with the fifth volume of the complete work. This abridged edition was prepared by Churchill himself; it is not a cut and paste job by an editor. Volume Four and this abridgement end with the collapse of Germany and the armistice—the aftermath of the war and the peace negotiations, covered in Volume Five of the full history, are not included here.

When this work began to appear in 1923, the smart set in London quipped, “Winston's written a book about himself and called it The World Crisis”. There's a lot of truth in that: this is something somewhere between a history and a memoir of a politician in wartime. Description of the disastrous attempts to break the stalemate of trench warfare in 1915 barely occupies a chapter, while the Dardanelles Campaign, of which Churchill was seen as the most vehement advocate, and for which he was blamed after its tragic failure, makes up almost a quarter of the 850-page book.

If you're looking for a dispassionate history of World War I, this is not the book to read: it was written too close to the events of the war, before the dire consequences of the peace came to pass, and by a figure motivated as much to defend his own actions as to provide a historical narrative. That said, it does provide an insight into how Churchill's experiences in the war forged the character which would cause Britain to turn to him when war came again. It also goes a long way to explaining precisely why Churchill's warnings were ignored in the 1930s. This book is, in large part, a recital of disaster after disaster in which Churchill played a part, coupled with an explanation of why, in each successive case, it wasn't his fault. Whether or not you accept his excuses and justifications for his actions, it's pretty easy to understand how politicians and the public in the interwar period could look upon Churchill as somebody who, when given authority, produced calamity. It was not just that others were blind to the threat, but rather that Churchill's record made him a seriously flawed messenger on an occasion where his message was absolutely correct.

At this epoch, Churchill was already an excellent writer and delivers some soaring prose on occasion, but he has not yet become the past master of the English language on display in The Second World War (which won the Nobel Prize for Literature when it really meant something). There are numerous tables, charts, and maps which illustrate the circumstances of the war.

Americans who hold to the common view that “The Yanks came to France and won the war for the Allies” may be offended by Churchill's speaking of them only in passing. He considers their effect on the actual campaigns of 1918 as mostly psychological: reinforcing French and British morale and confronting Germany with an adversary with unlimited resources.

Perhaps the greatest lesson to be drawn from this work is that of the initial part, which covers the darkening situation between 1911 and the outbreak of war in 1914. What is stunning, as sketched by a person involved in the events of that period, is just how trivial the proximate causes of the war were compared to the apocalyptic bloodbath which ensued. It is as if the crowned heads, diplomats, and politicians had no idea of the stakes involved, and indeed they did not—all expected the war to be short and decisive, none anticipating the consequences of the superiority conferred on the defence by the machine gun, entrenchments, and barbed wire. After the outbreak of war and its freezing into a trench war stalemate in the winter of 1914, for three years the Allies believed that their “offensives”, which squandered millions of lives for transitory and insignificant gains of territory, were waging a war of attrition against Germany. In fact, due to the supremacy of the defender, Allied losses always exceeded those of the Germans, often by a factor of two to one (and even more for officers). Further, German losses were never greater than the number of new conscripts in each year of the war up to 1918, so in fact this “war of attrition” weakened the Allies every year it continued. You'd expect intelligence services to figure out such a fundamental point, but it appears the “by the book” military mentality dismissed such evidence and continued to hurl a generation of their countrymen into the storm of steel.

This is a period piece: read it not as a history of the war but rather to experience the events of the time as Churchill saw them, and to appreciate how they made him the wartime leader he was to be when, once again, the lights went out all over Europe.

A U.S. edition is available.

 Permalink

Carroll, Sean. From Eternity to Here. New York: Dutton, 2010. ISBN 978-0-525-95133-9.
The nature of time has perplexed philosophers and scientists from the ancient Greeks (and probably before) to the present day. Despite two and a half millennia of reflexion upon the problem and spectacular success in understanding many other aspects of the universe we inhabit, not only has little progress been made on the question of time, but to a large extent we are still puzzling over the same problems which vexed thinkers in the time of Socrates: Why does there seem to be an inexorable arrow of time which can be perceived in physical processes (you can scramble an egg, but just try to unscramble one)? Why do we remember the past, but not the future? Does time flow by us, living in an eternal present, or do we move through time? Do we have free will, or is that an illusion and is the future actually predestined? Can we travel to the past or to the future? If we are typical observers in an eternal or very long-persisting universe, why do we find ourselves so near its beginning (the big bang)?

Indeed, what we have learnt about time makes these puzzles even more enigmatic. For it appears, based both on theory and all experimental evidence to date, that the microscopic laws of physics are completely reversible in time: any physical process can (and does) go in both the forward and reverse time directions equally well. (Actually, it's a little more complicated than that: just reversing the direction of time does not yield identical results, but simultaneously reversing the direction of time [T], interchanging left and right [parity: P], and swapping particles for antiparticles [charge: C] yields identical results under the so-called “CPT” symmetry which, as far as is known, is absolute. The tiny violation of time reversal symmetry by itself in weak interactions seems, to most physicists, inadequate to explain the perceived unidirectional arrow of time, although some disagree.)

In this book, the author argues that the way in which we perceive time here and now (whatever “now” means) is a direct consequence of the initial conditions which obtained at the big bang—the beginning of time—and the future state into which the universe is evolving—eternity. Whether or not you agree with the author's conclusions, this book is a tour de force popular exposition of thermodynamics and statistical mechanics, which provides the best intuitive grasp of these concepts of any non-technical book I have yet encountered. The science and ideas which influenced thermodynamics and its practical and philosophical consequences are presented in a historical context, showing how in many cases phenomenological models were successful in grasping the essentials of a physical process well before the actual underlying mechanisms were understood (which is heartening to those trying to model the very early universe absent a theory of quantum gravity).

Carroll argues that the Second Law of Thermodynamics entirely defines the arrow of time. Closed systems (and for the purpose of the argument here we can consider the observable universe as such a system, although it is not precisely closed: particles enter and leave our horizon as the universe expands and that expansion accelerates) always evolve from a state of lower probability to one of higher probability: the “entropy” of a system is (sloppily stated) a measure of the probability of finding the system in a given macroscopically observable state, and over time the entropy always stays the same or increases; except for minor fluctuations, the entropy increases until the system reaches equilibrium, after which it simply fluctuates around the equilibrium state with essentially no change in its coarse-grained observable state. What we perceive as the arrow of time is simply systems evolving from less probable to more probable states, and since they (in isolation) never go the other way, we naturally observe the arrow of time to be universal.

Look at it this way—there are vastly fewer configurations of the atoms which make up an egg as produced by a chicken: shell outside, yolk in the middle, and white in between, than there are for the same egg scrambled in the pan with the fragments of shell discarded in the poubelle. There are an almost inconceivable number of ways in which the atoms of the yolk and white can mix to make the scrambled egg, but far fewer ways they can end up neatly separated inside the shell. Consequently, if we see a movie of somebody unscrambling an egg, the white and yolk popping up from the pan to be surrounded by fragments which fuse into an unbroken shell, we know some trickster is running the film backward: it illustrates a process where the entropy dramatically decreases, and that never happens in the real world. (Or, more precisely, its probability of happening anywhere in the universe in the time since the big bang is “beyond vanishingly small”.)
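The counting behind this argument can be made concrete with a toy model (my own sketch, not from the book): treat the egg as 100 distinguishable particles, each of which may sit in either half of the pan, and compare the lone “unscrambled” arrangement against the scrambled ones using Boltzmann's formula S = ln W (with Boltzmann's constant set to 1):

```python
# Toy model (not from the book): an "egg" of N distinguishable particles,
# each of which may occupy the left or right half of the pan.
from math import log

N = 100
W_scrambled = 2 ** N   # scrambled: every arrangement of particles is allowed
W_ordered = 1          # unscrambled: yolk all on one side, white on the other

# Boltzmann entropy S = ln W (taking Boltzmann's constant k = 1)
S_scrambled = log(W_scrambled)
S_ordered = log(W_ordered)

print(f"entropy increase on scrambling: {S_scrambled - S_ordered:.2f}")
print(f"chance of hitting the ordered state by luck: 1 in {W_scrambled:.2e}")
```

Even this cartoon egg makes the asymmetry stark—the scrambled macrostate outnumbers the ordered one by a factor of about 10³⁰—and a real egg contains on the order of 10²⁴ molecules, so the true ratio is incomprehensibly larger still.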

Now, once you understand these matters, as you will after reading the pellucid elucidation here, it all seems pretty straightforward: our universe is evolving, like all systems, from lower entropy to higher entropy, and consequently it's only natural that we perceive that evolution as the passage of time. We remember the past because the process of storing those memories increases the entropy of the universe; we cannot remember the future because we cannot predict the precise state of the coarse-grained future from that of the present, simply because there are far more possible states in the future than at the present. Seems reasonable, right?

Well, up to a point, Lord Copper. The real mystery, to which Roger Penrose and others have been calling attention for some years, is not that entropy is increasing in our universe, but rather why it is presently so low compared to what it might be expected to be in a universe in a randomly chosen configuration, and further, why it was so absurdly low in the aftermath of the big bang. Given the initial conditions after the big bang, it is perfectly reasonable to expect the universe to have evolved to something like its present state. But this says nothing at all about why the big bang should have produced such an incomprehensibly improbable set of initial conditions.

If you think about entropy in the usual thermodynamic sense of gas in a box, the evolution of the universe seems distinctly odd. After the big bang, the region which represents today's observable universe appears to have been a thermalised system of particles and radiation very near equilibrium, and yet today we see nothing of the sort. Instead, we see complex structure at scales from molecules to superclusters of galaxies, with vast voids in between, and stars profligately radiating energy into space with a temperature less than three degrees above absolute zero. That sure doesn't look like entropy going up: it's more like your leaving a pot of tepid water on the counter top overnight and, the next morning, finding a village of igloos surrounding a hot spring. I mean, it could happen, but how probable is that?

It's gravity that makes the difference. Unlike all of the other forces of nature, gravity always attracts. This means that when gravity is significant (which it isn't in a steam engine or pan of water), a gas at thermal equilibrium is actually in a state of very low entropy. Any small compression or rarefaction in a region will cause particles to be gravitationally attracted to volumes with greater density, which will in turn reinforce the inhomogeneity, which will amplify the gravitational attraction. The gas at thermal equilibrium will, then, unless it is perfectly homogeneous (which quantum and thermal fluctuations render impossible) collapse into compact structures separated by voids, with the entropy increasing all the time. Voilà galaxies, stars, and planets.

As sources of energy are exhausted, gravity wins in the end, and as structures compact ever more, entropy increasing apace, eventually the universe is filled only with black holes (with vastly more entropy than the matter and energy that fell into them) and cold dark objects. But wait, there's more! The expansion of the universe is accelerating, so any structures which are not gravitationally bound will eventually disappear over the horizon and the remnants (which may ultimately decay into a gas of unbound particles, although the physics of this remains speculative) will occupy a nearly empty expanding universe (absurd as this may sound, this de Sitter space is an exact solution to Einstein's equations of General Relativity). This, the author argues, is the highest entropy state of matter and energy in the presence of gravitation, and it appears from current observational evidence that that's indeed where we're headed.

So, it's plausible the entire evolution of the universe from the big bang into the distant future increases entropy all the way, and hence there's no mystery why we perceive an arrow of time pointing from the hot dense past to cold dark eternity. But doggone it, we still don't have a clue why the big bang produced such low entropy! The author surveys a number of proposed explanations: some invoke fine-tuning with no apparent physical explanation; some summon an enormous (or infinite) “multiverse” of all possibilities and argue that among such an ensemble we find ourselves in one of the vanishingly small fraction of universes like our own because observers like ourselves couldn't exist in any of the others (the anthropic argument); others propose that the big bang was not actually the beginning, and that some dynamical process which preceded it (which might then be considered a “big bounce”) forced the initial conditions into a low entropy state. There are many excellent arguments against these proposals, which are clearly presented. The author's own favourite, which he concedes is as speculative as all the others, is that de Sitter space is unstable against a quantum fluctuation which nucleates a disconnected bubble universe in which entropy is initially low. The process of nucleation increases entropy in the multiverse, and hence there is no upper bound at all on entropy, with the multiverse eternal in past and future, and entropy increasing forever without bound in the future and decreasing without bound in the past.

(If you're a regular visitor here, you know what's coming, don't you?) Paging friar Ockham! We start out having discovered yet another piece of evidence for what appears to be a fantastically improbable fine-tuning of the initial conditions of our universe. The deeper we investigate this, the more mysterious it appears, as we discover no reason in the dynamical laws of physics for the initial conditions to have been so unlikely among the ensemble of possible initial conditions. We are then faced with the “trichotomy” I discussed regarding the origin of life on Earth: chance (it just happened to be that way, or it was every possible way, and we, tautologically, live in one of the universes in which we can exist), necessity (some dynamical law which we haven't yet figured out caused the initial conditions to be the way we observe them to have been), or (and here's where all the scientists turn their backs upon me, snuff the candles, and walk away) design. Yes, design. Suppose (and yes, I know, I've used this analogy before and will certainly do so again) you were a character in a video game who somehow became sentient and began to investigate the universe you inhabited. As you did, you'd discover there were distinct regularities which governed the behaviour of objects and their interactions. As you probed deeper, you might be able to access the machine code of the underlying simulation (or at least get a glimpse into its operation by running precision experiments). You would discover that compared to a random collection of bits of the same length, it was in a fantastically improbable configuration, and you could find no plausible way that a random initial configuration could evolve into what you observe today, especially since you'd found evidence that your universe was not eternally old but rather came into being at some time in the past (when, say, the game cartridge was inserted).

What would you conclude? Well, if you exclude the design hypothesis, you're stuck with supposing that there may be an infinity of universes like yours in all random configurations, and you observe the one you do because you couldn't exist in all but a very few improbable configurations of that ensemble. Or you might argue that some process you haven't yet figured out caused the underlying substrate of your universe to assemble itself, complete with the copyright statement and the Microsoft security holes, from a generic configuration beyond your ability to observe in the past. And being clever, you'd come up with persuasive arguments as to how these most implausible circumstances might have happened, even at the expense of invoking an infinity of other universes, unobservable in principle, and an eternity of time, past and present, in which events could play out.

Or, you might conclude from the quantity of initial information you observed (which is identical to low initial entropy) and the improbability of that configuration having been arrived at by random processes on any imaginable time scale, that it was put in from the outside by an intelligent designer: you might call Him or Her the Programmer, and some might even come to worship this being, outside the observable universe, which is nonetheless responsible for its creation and the wildly improbable initial conditions which permit its inhabitants to exist and puzzle out their origins.

Suppose you were running a simulation of a universe, and to win the science fair you knew you'd have to show the evolution of complexity all the way from the get-go to the point where creatures within the simulation started to do precision experiments, discover curious fine-tunings and discrepancies, and begin to wonder…? Would you start your simulation at a near-equilibrium condition? Only if you were a complete idiot—nothing would ever happen—and whatever you might say about post-singularity super-kids, they aren't idiots (well, let's not talk about the music they listen to, if you can call that music). No, you'd start the simulation with extremely low entropy, with just enough inhomogeneity that gravity would get into the act and drive the emergence of hierarchical structure. (Actually, if you set up quantum mechanics the way we observe it, you wouldn't have to put in the inhomogeneity; it will emerge from quantum fluctuations all by itself.) And of course you'd fine tune the parameters of the standard model of particle physics so your universe wouldn't immediately turn entirely into neutrons, diprotons, or some other dead end. Then you'd sit back, turn up the volume on the MultIversePod, and watch it run. Sure 'nuff, after a while there'd be critters trying to figure it all out, scratching their balding heads, and wondering how it came to be that way. You would be most amused as they excluded your existence as a hypothesis, publishing theories ever more baroque to exclude the possibility of design. You might be tempted to….

Fortunately, this chronicle does not publish comments. If you're sending them from the future, please use the antitelephone.

(The author discusses this “simulation argument” in endnote 191. He leaves it to the reader to judge its plausibility, as do I. I remain on the record as saying, “more likely than not”.)

Whatever you may think about the Big Issues raised here, if you've never experienced the beauty of thermodynamics and statistical mechanics at a visceral level, this is the book to read. I'll bet many engineers who have been completely comfortable with computations in “thermogoddamics” for decades finally discover they “get it” after reading this equation-free treatment aimed at a popular audience.

 Permalink

D'Souza, Dinesh. Life After Death: The Evidence. Washington: Regnery Publishing, 2009. ISBN 978-1-59698-099-0.
Ever since the Enlightenment, and to an increasing extent today, there is a curious disconnect between the intellectual élite and the population at large. The overwhelming majority of human beings who have ever lived believed in their survival, in one form or another, after death, while materialists, reductionists, and atheists argue that this is nothing but wishful thinking; that there is no physical mechanism by which consciousness could survive the dissolution of the neural substrate in which it is instantiated, and point to the lack of any evidence for survival after death. And yet a large majority of people alive today beg to differ. As atheist H. G. Wells put it in a very different context, they sense that “Worlds may freeze and suns may perish, but there stirs something within us now that can never die again.” Who is right?

In this slim (256 page) volume, the author examines the scientific, philosophical, historical, and moral evidence for and implications of survival after death. He explicitly excludes religious revelation (except in the final chapter, where some evidence he cites as historical may be deemed by others to be argument from scriptural authority). Having largely excluded religion from the argument, he explores the near-universality of belief in life after death across religious traditions and notes the common threads uniting them.

But traditions and beliefs do not in any way address the actual question: does our individual consciousness, in some manner, survive the death of our bodies? While materialists discard such a notion as absurd, the author argues that there is nothing in our present-day understanding of physics, evolutionary biology, or neuroscience which excludes this possibility. In fact, the complete failure so far to understand the physical basis of consciousness can be taken as evidence that it may be a phenomenon independent of its physical instantiation: structured information which could conceivably transcend the hardware on which it currently operates.

Computer users think nothing these days of backing up their old computer, loading the backups onto a new machine (which may use a different processor and operating system), and with a little upward compatibility magic, having everything work pretty much as before. Do your applications and documents from the old computer die when you turn it off for the last time? Are they reincarnated when you load them into the replacement machine? Will they live forever as long as you continue to transfer them to successive machines, or on backup tapes? This may seem a silly analogy, but consider that materialists consider your consciousness and self to be nothing other than a pattern of information evolving in a certain way according to the rules of neural computation. Do the thought experiment: suppose nanotechnological robots replaced your meat neurons one by one with mechanical analogues with the same external electrochemical interface. Eventually your brain would be entirely different physically, but would your consciousness change at all? Why? If it's just a bunch of components, then replacing protein components with silicon (or whatever) components which work in the same way should make no difference at all, shouldn't it?

A large part of what living organisms do is sense their external environment and interact with it. Unicellular organisms swim along the gradient of increasing nutrient concentration. Other than autonomic internal functions of which we are aware only when they misbehave, humans largely experience the world through our sensory organs, and through the internal sense of self which is our consciousness. Is it not possible that the latter is much like the former—something external to the meatware of our body which is picked up by a sensory organ, in this case the neural networks of the brain?

If this be the case, in the same sense that the external world does not cease to exist when our eyes, ears, olfactory, and tactile sensations fail at the time of death or due to injury, is it not plausible that dissolution of the brain, which receives and interacts with our external consciousness, need not mean the end of that incorporeal being?

Now, this is pretty out-there stuff, which might cause the author to run from the room in horror should he hear me expound it. Fine: this humble book reviewer spent a substantial amount of time contributing to a project seeking evidence for existence of global, distributed consciousness, and has concluded that such has been demonstrated to exist by the standards accepted by most of the “hard” sciences. But let's get back to the book itself.

One thing you won't find here is evidence based upon hauntings, spiritualism, or other supposed contact with the dead (although I must admit, Chicago election returns are awfully persuasive as to the ability of the dead to intervene in affairs of the living). The author does explore near death experiences, noting their universality across very different cultures and religious traditions, and evidence for reincarnation, which he concludes is unpersuasive (but see the research of Ian Stevenson and decide for yourself). The exploration of a physical basis for the existence of other worlds (for example, Heaven and Hell) cites the “multiverse” paradigm, and invites sceptics of that “theory of anything” to denounce it as “just as plausible as life after death”—works for me.

Excuse me for taking off on a tangent here, but it is, in a formal sense. If you believe in an infinite chaotically inflating universe with random initial conditions, or in Many Worlds in One (October 2006), then Heaven and Hell explicitly exist, not only once in the multiverse, but an infinity of times. For every moment in your life at which you may cease to exist, there is a universe somewhere out there, either elsewhere in the multiverse or in some distant region far from our cosmic horizon in this universe, where there's an observable universe identical to our own up to that instant which diverges thence into one which grants you eternal reward or torment for your actions. In an infinite universe with random initial conditions, every possibility occurs an infinite number of times. Think about it, or better yet, don't.

The chapter on morality is particularly challenging and enlightening. Every human society has had a code of morality (different in the details, but very much the same at the core), and most of these societies have based their moral code upon a belief in cosmic justice in an afterlife. It's self-evident that bad guys sometimes win at the expense of good guys in this life, but belief that the score will be settled in the long run has provided a powerful incentive for mortals to conform to the norms which their societies prescribe as good. (I've deliberately written the last sentence in the post-modern idiom; I consider many moral norms absolutely good or bad based on gigayears of evolutionary history, but I needn't introduce that into evidence to prove my case, so I won't.) From an evolutionary standpoint, morality is a survival trait of the family or band: the hunter who shares the kill with his family and tribe will have more descendants than the gluttonous loner. A tribe which produces males who sacrifice themselves to defend their women and children will produce more offspring than the tribe whose males value only their own individual survival.

Morality, then, is, at the group level, a selective trait, and consequently it's no surprise that it's universal among human societies. But if, as serious atheists such as Bertrand Russell (as opposed to the lower-grade atheists we get today) worried, morality has been linked to religion and belief in an afterlife in every single human society to date, then how is morality (a survival characteristic) to be maintained in the absence of these beliefs? And if evolution has selected us to believe in the afterlife for the behavioural advantages that belief confers in the here and now, then how successful will the atheists be in extinguishing a belief which has conferred a behavioural selective advantage upon thousands of generations of our ancestors? And how will societies which jettison such belief fare in competition with those which keep it alive?

I could write much more about this book, but then you'd have to read a review even longer than the book, so I'll spare you. If you're interested in this topic (as you'll probably eventually be as you get closer to the checkered flag), this is an excellent introduction, and the end notes provide a wealth of suggestions for additional reading. I doubt this book will shake the convictions of either the confirmed believers or the stalwart sceptics, but it will provide much for both to think about, and perhaps motivate some folks whose approach is “I'll deal with that when the time comes” (which has been pretty much my own) to consider the consequences of what may come next.

 Permalink

Benioff, David. City of Thieves. New York: Viking, 2008. ISBN 978-0-670-01870-3.
This is a coming-of-age novel, buddy story, and quest saga set in the most implausible of circumstances: the 872-day Siege of Leningrad and the surrounding territory. I don't know whether the author's grandfather actually lived these events and recounted them to him or whether it's just a literary device, but I'm certain the images you experience here will stay with you for many years after you put this book down, and that you'll probably return to it after reading it the first time.

Kolya is one of the most intriguing characters I've encountered in modern fiction, with Vika a close second. You wouldn't expect a narrative set in the German invasion of the Soviet Union to be funny, but there are quite a number of laughs here, which will acquaint you with the Russian genius for black humour when everything looks the bleakest. You will learn to be very wary around well-fed people in the middle of a siege!

Much of the description of life in Leningrad during the siege is, of course, grim, although arguably less so than the factual account in Harrison Salisbury's The 900 Days (however, note that the story is set early in the siege; conditions deteriorated as it progressed). It isn't often you read a historical novel in which Olbers' paradox figures!

 Permalink

March 2010

Sowell, Thomas. The Housing Boom and Bust. 2nd ed. New York: Basic Books, [2009] 2010. ISBN 978-0-465-01986-1.
If you rely upon the statist legacy media for information regarding the ongoing financial crisis triggered by the collapse of the real estate bubble in certain urban markets in the United States, everything you know is wrong. This book is a crystal-clear antidote to the fog of disinformation emanating from the politicians and their enablers in media and academia.

If, as five or six people still do, you pay attention to the legacy media in the United States, you'll hear that there was a nationwide crisis in the availability of affordable housing, that government moved to enable more people to become homeowners, that a lack of regulation caused lenders to make risky loans and resell them as “toxic assets” which nobody could actually value, and that these flimsy pieces of paper were sold around the world as if they were really worth something.

Everything you know is wrong.

In fact, there never was a nationwide affordable housing crisis. The percentage of family income spent on housing nationwide fell in the nineties and oughties. The bubble market in real estate was largely confined to a small number of communities which had enacted severe restrictions upon development that reduced the supply of housing—in fact, of 26 urban areas rated as “severely unaffordable”, 23 had adopted “smart growth” policies. (Rule of thumb: whenever government calls something “smart”, it's a safe bet that it's dumb.)

But the bubble was concentrated in the collectivist enclaves where the chattering class swarm and multiply: New York, San Francisco, Los Angeles, Washington, Boston, and hence featured in the media, ignoring markets such as Dallas and Houston where, in the absence of limits on development, housing prices were stable.

As Eric Sevareid observed, “The chief cause of problems is solutions”, and this has never been better demonstrated than in the sorry sequence of interventions in the market documented here. Let's briefly sketch the “problems” and “solutions” which, over decades, were the proximate cause of the present calamity.

First of all, back in the New Deal, politicians decided the problem of low rates of home ownership and the moribund construction industry of the Depression could be addressed by the solution of government (or government sponsored) institutions to provide an aftermarket in mortgages by banks, which could then sell the mortgages on their books and free up the capital to make new loans. When the economy started to grow rapidly after the end of World War II, this solution caused a boom in residential construction, enabling working class families to buy new houses in the rapidly expanding suburbs. This was seen as a problem, “suburban sprawl”, to which local politicians, particularly in well-heeled communities on the East and West coasts, responded with the solution of enacting land use restrictions (open space, minimum lot sizes, etc.) to keep the “essential character” of their communities from being changed by an invasion of hoi polloi and their houses made of ticky-tacky, all the same.

This restriction of the supply of housing predictably led to a rapid rise in the price of housing in these markets (while growth-oriented markets without such restrictions experienced little or no housing price increases, even at the height of the bubble). The increase in the price of housing priced more and more people out of the market, particularly younger first-time home buyers and minorities, which politicians proclaimed as an “affordable housing crisis”, and supposed, contrary to readily-available evidence, was a national phenomenon. They enacted solutions, such as the Community Reinvestment Act, which effectively required lenders to meet quotas of low-income and minority mortgage lending, compelling them to make loans their usual standards of risk evaluation would have caused them to decline.
Expanding the pool of potential home buyers increased the demand for housing, and with the supply fixed due to political restrictions on development, the increase in housing prices inevitably accelerated, pricing more people out of the market. Politicians responded to this problem by encouraging lenders to make loans which would have been considered unthinkably risky just a few years before: no down payment loans, loans with a low-ball “teaser” rate for the first few years which reset to the prevailing rate thereafter, and even “liar loans” where the borrower was not required to provide documentation of income or net worth. These forms of “creative financing” were, in fact, highly-leveraged bets upon the housing bubble continuing—all would lead to massive defaults in the case of declining or even stable valuations of houses.

Because any rational evaluation of the risk of securities based upon the aggregation of these risky loans would cause investors to price them accordingly, securities of Byzantine complexity were created which allowed financial derivatives based upon them, with what amounted to insurance provided by counterparty institutions, which could receive high credit ratings by the government-endorsed rating agencies (whose revenue stream depended upon granting favourable ratings to these securities). These “mortgage-backed securities” were then sold all around the world, and ended up in the portfolios of banks, pension funds, and individual investors, including this scrivener (who saw it coming and sold while the selling was good).

Then, as always happens in financial bubbles, the music stopped. Back in the days of ticker tape machines, you could hear the popping of a bubble. The spasmodic buying by the greatest fools of all would suddenly cease its clatter and an ominous silence would ensue. Then, like the first raindrops which presage a great deluge, you'd hear the tick-tick-tick of sell orders being filled below the peak price. And then the machine would start to chatter in earnest as sell orders flooded into the market, stops were hit and taken out, and volume exploded to the downside. So it has always been, and so it will always be. And so it was in this case, although in the less liquid world of real estate it took a little longer to play out.

As you'll note in these comments, and also in Sowell's book, the words “politicians” and “government” appear disproportionately as the subject of sentences which describe each step in how a supposed problem became a solution which became a problem. The legacy media would have you believe that “predatory lenders”, “greedy Wall Street firms”, “speculators”, and other nefarious private actors are the causes of the present financial crisis. These players certainly exist, and they've been evident as events have played out, but the essence of the situation is that all of them are creations and inevitable consequences of the financial environment created by politicians who are now blaming others for the mess they created and calling for more “regulation” by politicians (as if, in the long and sorry history of regulation, it has ever made anything more “regular” than the collective judgement of millions of people freely trading with one another in an open market).

There are few people as talented as Thomas Sowell when it comes to taking a complex situation spanning decades and crossing the boundary of economics and politics, dissecting it into its essentials like an anatomy teacher, and explaining in prose as clear as light the causes and effects, and the unintended and yet entirely predictable consequences (for those acquainted with basic economics) which led to the present mess. This is a masterpiece of such work, and anybody who's interested in the facts and details behind the obfuscatory foam emerging from the legacy media will find this book an essential resource.

Dr. Sowell's books tend to be heavily footnoted, with not only source citations but also expansions upon the discussion in the main text. The present volume uses a different style, with a lengthy “Sources” section, a full 19% of the book, listing citations for items in the text in narrative form, chapter by chapter. Expressing these items in text, without the abbreviations normally used in foot- or end-notes, balloons the length of this section and introduces much redundancy. Perhaps it's due to the publisher feeling a plethora of footnotes puts off the casual reader, but for me, footnotes just work a lot better than these wordy source notes.

 Permalink

Smith, Lee. The Strong Horse. New York: Doubleday, 2010. ISBN 978-0-385-51611-2.
After the attacks upon the U.S. in September 2001, the author, who had been working as an editor in New York City, decided to find out for himself what in the Arab world could provoke such indiscriminate atrocities. Rather than turn to the works of establishment Middle East hands or radical apologists for Islamist terror, he pulled up stakes and moved to Cairo and later Beirut, spending years there living in the community, meeting people from all walks of life: doormen, cab drivers, students, intellectuals, clerics, politicians, artists, celebrities, and more. This book presents his conclusions in a somewhat unusual form: it is hard to categorise—part travelogue; part collection of interviews; part survey of history; part exploration of Arab culture, art, and literature; and part geopolitical analysis. What is clear is that this book is a direct assault upon the consensus view of the Middle East among Western policymakers which, if correct (and the author is very persuasive indeed), condemns many of the projects of “democratisation”, “peace processes”, and integration of the nations of the region into a globalised economy to failure; it calls for an entirely different approach to the Arab world, one at which many Western feel-good diplomats and politically correct politicians will wilt in horror.

In short, Smith concludes that the fundamental assumption of the program whose roots can be traced from Woodrow Wilson to George W. Bush—that all people, and Arabs in particular, strive for individual liberty, self-determination, and a civil society with democratically elected leaders—is simply false: those are conditions which have been purchased by Western societies over centuries at the cost of great bloodshed and suffering by the actions of heroes. This experience has never occurred in the Arab world, and consequently its culture is entirely different. One can attempt to graft the trappings of Western institutions onto an Arab state, but without a fundamental change in the culture, the graft will not take and before long things will be just as before.

Let me make clear a point the author stresses. There is not the slightest intimation in this book that there is some kind of racial or genetic difference (which are the same thing) between Arabs and Westerners. Indeed, such a claim can be immediately falsified by the large community of Arabs who have settled in the West, assimilated themselves to Western culture, and become successful in all fields of endeavour. But those are Arabs, often educated in the West, who have rejected the culture in which they were born, choosing consciously to migrate to a very different culture they find more congenial to the way they choose to live their lives. What about those who stay (whether by preference, or due to lack of opportunity to emigrate)?

No, Arabs are not genetically different in behaviour, but culture is just as heritable as any physical trait, and it is here the author says we must look to understand the region. The essential dynamic of Arab political culture and history, as described by the 14th century Islamic polymath Ibn Khaldun, is that of a strong leader establishing a dynasty or power structure to which subjects submit, but which becomes effete and feckless over time, only to eventually be overthrown violently by a stronger force (often issuing from desert nomads in the Arab experience), which begins the cycle again. The author (paraphrasing Osama bin Laden) calls this the “strong horse” theory: Arab populations express allegiance to the strongest perceived power, and expect changes in governance to come through violent displacement of a weaker existing order.

When you look at things this way, many puzzles regarding the Middle East begin to make more sense. First of all, the great success which imperial powers over the millennia, including the Persian, Ottoman, French, and British empires, have had in subduing and ruling Arabs without substantial internal resistance is explained: the empire was seen as the strong horse and Arab groups accepted subordination to it. Similarly, the ability of sectarian minorities to rule on a long-term basis in modern states such as Lebanon, Syria, and Iraq is explained, as is the great stability of authoritarian regimes in the region—they usually fall only when deposed by an external force or by a military coup, not due to popular uprisings.

Rather than presenting a lengthy recapitulation of the arguments in the book filtered through my own comprehension and prejudices, this time I invite you to read a comprehensive exposition of the author's arguments in his own words, in a transcript of a three-hour interview by Hugh Hewitt. If you're interested in the topics raised so far, please read the interview and return here for some closing comments.

Is the author's analysis correct? I don't know—certainly it is at variance with that of a mass of heavy-hitting intellectuals who have studied the region for their entire careers and, if correct, means that much of Western policy toward the Middle East since the fall of the Ottoman Empire has been at best ill-informed and at worst tragically destructive. All of the debate about Islam, fundamentalist Islam, militant Islam, Islamism, Islamofascism, etc., in Smith's view, misses the entire point. He contends that Islam has nothing, or next to nothing, to do with the present conflict. Islam, born in the Arabian desert, simply canonised, with a few minor changes, a political and social regime already extant in Arabia for millennia before the Prophet, based squarely on rule by the strong horse. Islam, then, is not the source of Arab culture, but a consequence of it, and its global significance is as a vector which inoculates Arab governance by the strong horse into other cultures where Islam takes root. The extent to which the Arab culture is adopted depends upon the strength and nature of the preexisting local culture into which Islam is introduced: certainly the culture and politics of Islamic Turkey, Iran, and Indonesia are something very different from that of Arab nations, and from each other.

The author describes democracy as “a flower, not a root”. An external strong horse can displace an Arab autocracy and impose elections, a legislature, and other trappings of democracy, but without the foundations of the doctrine of natural rights, the rule of law, civil society, free speech and the tolerance of dissent, freedom of conscience, and the separation of the domain of the state from the life of the individual, the result is likely to be “one person, one vote, one time” and a return to strong horse government as has been seen so many times in the post-colonial era. Democracy in the West was the flowering of institutions and traditions a thousand years in the making, none of which have ever existed in the Arab world. Those who expect democracy to create those institutions, the author would argue, suffer from an acute case of inverting causes and effects.

It's tempting to dismiss Arab culture as described here as “dysfunctional”, but (if the analysis be correct) I don't think that's a fair characterisation. Arab governance looks dysfunctional through the eyes of Westerners who judge it based on the values their own cultures cherish, but then turnabout's fair play, and Arabs have many criticisms of the West which are equally well founded based upon their own values. I'm not going all multicultural here—there's no question that by almost any objective measure (per capita income; industrial and agricultural output; literacy and education; treatment of women and minorities; public health and welfare; achievements in science, technology, and the arts) the West has drastically outperformed Arab nations, which would be entirely insignificant in the world economy absent their geological good fortune to be sitting on top of an ocean of petroleum. But again, that's applying Western metrics to Arab societies.

When Nasser seized power in Egypt, he burned with a desire to do the will of the Egyptian people. And like so many people over the millennia who tried to get something done in Egypt, he quickly discovered that the will of the people was to be left alone, and the will of the bureaucracy was to go on shuffling paper as before, counting down to their retirement as they'd done for centuries. In other words, by their lights, the system was working and they valued stability over the risks of change. There is also what might be described as a cultural natural selection effect in action here. In a largely static authoritarian society, the ambitious, the risk-takers, and the innovators are disproportionately prone to emigrate to places which value those attributes, namely the West. This deprives those who remain of the élite which might improve the general welfare, resulting in a population even more content with the status quo.

The deeply pessimistic message of this book is that neither wishful thinking, soaring rhetoric, global connectivity, precision guided munitions, nor armies of occupation can do very much to change a culture whose general way of doing things hasn't changed fundamentally in more than two millennia. While change may be possible, it certainly isn't going to happen on anything less than the scale of several generations, and then only if the cultural transmission belt from generation to generation can be interrupted. Is this depressing? Absolutely, but if this is the case, better to come to terms with it and act accordingly than live in a fantasy world where one's actions may lead to catastrophe for both the West and the Arab world.

 Permalink

Thor, Brad. The Last Patriot. London: Pocket Books, 2008. ISBN 978-1-84739-195-7.
This is a page-turning thriller which requires somewhat more suspension of disbelief than the typical book of the genre. The story involves, inter alia, radical Islam, the assassination of Mohammed, the Barbary pirates, Thomas Jefferson, a lost first edition of Don Quixote, puzzle boxes, cryptography, car bombs, the French DST, the U.S. president, and a plan to undermine the foundations of one of the world's great religions.

If this seems to cross over into the territory of a Dan Brown novel or the National Treasure movies, it does, and like those entertainments, you'll enjoy the ride more if you don't look too closely at the details or ask questions like, “Why is the President of the United States, with the resources of the NSA at his disposal, unable to break a simple cylinder substitution cipher devised more than two centuries ago?”. Still, if you accept this book for what it is, it's a fun read; this would make an excellent “airplane book”, at least as long as you aren't flying to Saudi Arabia—the book is banned in that country.
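
For the curious, here is a minimal Python sketch of how a Jefferson-style wheel cipher (the sort of “cylinder substitution cipher” the plot turns on) works. The disk alphabets, disk count, and row offset below are invented purely for illustration; they have no connection to the cipher in the novel or to Jefferson's actual wheel order.

```python
# Sketch of a Jefferson-style wheel cipher: a stack of disks, each
# bearing a scrambled alphabet around its rim.  The sender lines up
# the plaintext along one row and reads the ciphertext from another
# row a fixed number of positions away; the receiver reverses this.
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def make_disks(n, seed=42):
    """Each disk is a (here, randomly chosen) permutation of the alphabet."""
    rng = random.Random(seed)
    return [''.join(rng.sample(ALPHABET, 26)) for _ in range(n)]

def encrypt(plaintext, disks, offset=6):
    """For each letter, find it on its disk and take the letter
    `offset` positions further around the rim."""
    out = []
    for ch, disk in zip(plaintext, disks):
        i = disk.index(ch)
        out.append(disk[(i + offset) % 26])
    return ''.join(out)

def decrypt(ciphertext, disks, offset=6):
    """Undo the encryption by stepping back `offset` positions."""
    out = []
    for ch, disk in zip(ciphertext, disks):
        i = disk.index(ch)
        out.append(disk[(i - offset) % 26])
    return ''.join(out)

disks = make_disks(10)
ct = encrypt("ATTACKDAWN", disks)
# Round trip: decrypt(ct, disks) recovers "ATTACKDAWN"
```

Even this toy version hints at why the device was respectable in 1800: each letter position gets its own substitution alphabet, so simple frequency analysis of a short message yields little. Against modern computing power, of course, the security evaporates, which is what makes the novel's premise wobbly.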

A U.S. edition is available.

 Permalink

Emison, John Avery. Lincoln über Alles. Gretna, LA: Pelican Publishing, 2009. ISBN 978-1-58980-692-4.
Recent books, such as Liberal Fascism (January 2008), have explored the roots and deep interconnections between the Progressive movement in the United States and the philosophy and policies of its leaders such as Theodore Roosevelt and Woodrow Wilson, and collectivist movements in twentieth century Europe, including Soviet communism, Italian fascism, and Nazism in Germany. The resurgence of collectivism in the United States, often now once again calling itself “progressive”, has made this examination not just a historical footnote but rather an important clue in understanding the intellectual foundations of the current governing philosophy in Washington.

Among those willing to set aside accounts of history written by collectivists, whether they style themselves progressives or “liberals”, and look instead at contemporary sources and analyses by genuine classical liberals, a candid look at progressivism and its consequences for liberty and prosperity has led to a dramatic reassessment of the place in history of Wilson and the two Roosevelts. While, in an academy and educational establishment still overwhelmingly dominated by collectivists, this is still a minority view, at least serious research into this dissenting view of history is available to anybody interested in searching it out.

Far more difficult to find is a critical examination of the U.S. president who was, according to this account, the first and most consequential of all American progressives, Abraham Lincoln. Some years ago, L. Neil Smith, in his essay “The American Lenin”, said that if you wanted to distinguish a libertarian from a conservative, just ask them about Abraham Lincoln. This observation has been amply demonstrated by the recent critics of progressivism, almost all conservatives of one stripe or another, who have either remained silent on the topic of Lincoln or jumped on the bandwagon and praised him.

This book is a frontal assault on the hagiography of Sainted Abe. Present day accounts of Lincoln's career and the Civil War contain so many omissions and gross misrepresentations of what actually happened that it takes a book of 300 pages like this one, based in large part on contemporary sources, to provide the context for a contrary argument. Topics many readers well-versed in the conventional wisdom view of American history may encounter for the first time here include:

  • No constitutional provision prohibited states from seceding, and the common law doctrine prohibiting legislative entrenchment (one legislature binding the freedom of a successor to act) granted sovereignty conventions the same authority to secede as to join the union in the first place.
  • None of the five living former presidents at the time Lincoln took office (only one a Southerner) supported military action against the South.
  • Lincoln's Emancipation Proclamation freed only slaves in states of the Confederacy; slaves in slave states which did not secede, including Delaware, Maryland, Kentucky, and Missouri, remained in bondage. In fact, in 1861, Lincoln had written to the governors of all the states urging them to ratify the Corwin Amendment, already passed by the House and Senate, which would have written protection for slavery and indentured servitude into the Constitution. Further, Lincoln supported the secession of West Virginia from Virginia, and its admission to the Union as a slave state. Slavery was not abolished throughout the United States until the adoption of the Thirteenth Amendment in December 1865, after Lincoln's death.
  • Despite subsequent arguments that secession was illegal, Lincoln mounted no legal challenge to the declarations of secession prior to calling for troops and initiating hostilities. Congress voted no declaration of war authorising Lincoln to employ federal troops.
  • The prosecution of total war against noncombatants in the South by Sherman and others, with the approval of Grant and Lincoln, not only constituted war crimes by modern standards, but were prohibited by the Lieber Code governing the conduct of the Union armies, signed by President Lincoln in April 1863.
  • Like the progressives of the early 20th century who looked to Bismarck's Germany as the model, and present-day U.S. progressives who want to remodel their country along the lines of the European social democracies, the philosophical underpinnings of Lincoln's Republicans and a number of its political and military figures, as well as the voters who put it over the top in the states of the “old northwest”, were Made in Germany. The “Forty-Eighters”, supporters of the failed 1848 revolutions in Europe, emigrated in subsequent years to the U.S. and, as members of the European élite, established themselves as leaders in their new communities. They were supporters of a strong national government, progressive income taxation, direct election of Senators, nationalisation of railroads and other national infrastructure, an imperialistic foreign policy, and secularisation of the society—all part of the subsequent progressive agenda, and all achieved or almost so today. An estimation of the impact of Forty-Eighters on the 1860 election (at the time, in many states immigrants who were not yet citizens could vote if they simply declared their intention to become naturalised) shows that they provided Lincoln's margin of victory in Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, and Wisconsin (although some of these were close and may have gone the other way).

Many of these points will be fiercely disputed by Lincoln scholars and defenders; see the arguments here, follow up their source citations, and make up your own mind. What is not in dispute is that the Civil War and the policies advocated by Lincoln and implemented in his administration and its Republican successors, fundamentally changed the relationship between the Federal government and the states. While before the Federal government was the creation of the states, to which they voluntarily delegated limited and enumerated powers, which they retained the right to reclaim by leaving the union, afterward Washington became not a federal government but a national government in the 19th century European sense, with the states increasingly becoming administrative districts charged with carrying out its policies and with no recourse when their original sovereignty was violated. A “national greatness” policy was aggressively pursued by the central government, including subsidies and land grants for building infrastructure, expansion into the Western territories (with repeatedly broken treaties and genocidal wars against their native populations), and high tariffs to protect industrial supporters in the North. It was Lincoln who first brought European-style governance to America, and in so doing became the first progressive president.

Now, anybody who says anything against Lincoln will immediately be accused of being a racist who wishes to perpetuate slavery. Chapter 2, a full 40 pages of this book, is devoted to race in America, before, during, and after the Civil War. Once again, you will learn that the situation is far more complicated than you believed it to be. There is plenty of blame to go around on all sides; after reviewing the four page list of Jim Crow laws passed by Northern states between 1777 and 1868, it is hard to regard them as champions of racial tolerance on a crusade to liberate blacks in the South.

The greatest issue regarding the Civil War, discussed only rarely now, is why it happened at all. If the war was about slavery (as most people believe today), then why, among all the many countries and colonies around the world which abolished slavery in the nineteenth century, was it only in the United States that abolition required a war? If, however, the war is regarded not as a civil war (which it wasn't, since the southern states did not wish to conquer Washington and impose their will upon the union), nor as a “war between the states” (because it wasn't the states of the North fighting against the states of the South, but rather the federal government seeking to impose its will upon states which no longer wished to belong to the union), but rather an imperial conquest waged as a war of annihilation if necessary, by a central government over a recalcitrant territory which refused to cede its sovereignty, then the war makes perfect sense, and is entirely consistent with the subsequent wars waged by Republican administrations to assert sovereignty over Indian nations.

Powerful central government, elimination of state autonomy and limitation of individual autonomy, imposition of uniform policies at a national level, endowing the state with a monopoly on the use of force and the tools to impose its will, grandiose public works projects funded by taxation of the productive sector, and sanguinary conflicts embarked upon in the interest of moralistic purity or national glory: these are all hallmarks of progressives, and this book makes a persuasive case that Lincoln was the first of their kind to gain power in the United States. Should liberty blossom again there, and the consequences of progressivism be candidly reassessed, there will be two faces to come down from Mount Rushmore, not just one.

 Permalink

Flynn, Vince. Consent to Kill. New York: Pocket Books, 2005. ISBN 978-1-4165-0501-3.
This is the sixth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. In the aftermath of Memorial Day (December 2009), a Saudi billionaire takes out a contract on Mitch Rapp, whom he blames for the death of his son. Working through a cut-out, an assassin (one of the most interesting and frightening villains in the Vince Flynn yarns I've read so far—kind of an evil James Bond) is recruited to eliminate Rapp, ideally making it look like an accident to avoid further retribution. The assassin is conflicted, on the one hand respecting Rapp, but on the other excited by the challenge of going after the hardest target of all and ending his career with not just a crowning victory but a financial reward large enough to get out of the game.

Things do not go as planned, and the result is a relentless grudge match as Rapp pursues his attackers like Nemesis. This is a close-up, personal story rather than a high-concept thriller like Memorial Day, and is more morality play than edge-of-the-seat page-turner. Once again, Flynn takes the opportunity to skewer politicians who'd rather excuse murderers than risk bad press. Although events and characters from earlier novels figure in this story, you can enjoy this one without having read any of the others.

Vince Flynn is acclaimed for the attention to detail in his novels, due not only to his own extensive research but a “brain trust” of Washington insider fans who “brief him in” on how things work there. That said, this book struck me as rather more sloppy than the others I've read, fumbling not super-geeky minutiæ but items I'd expect any editor with a sharp red pencil to finger. Below are some examples; while none are major plot spoilers, I've put them in a spoiler block just in case, but also for readers who'd like to see if they can spot them for themselves when they read the novel, then come back here and compare notes.

Spoiler warning: Plot and/or ending details follow.  
I'll cite these by chapter number, because I read the Kindle edition, which doesn't use conventional page numbers.

Chapter 53: “The sun was falling in the east, shooting golden streaks of light and shadows across the fields.” Even in CIA safe houses where weird drug-augmented interrogations are performed, the sun still sets in the west.

Chapter 63: “The presidential suite at the Hotel Baur Au Lac [sic] was secured for one night at a cost of 5,000 Swiss francs. … The suite consisted of three bedrooms, two separate living rooms, and a verandah that overlooked Lake Geneva.” Even the poshest of hotels in Zürich do not overlook Lake Geneva, seeing as it's on the other end of the country, more than 200 kilometres away! I presume he intended the Zürichsee. And you don't capitalise “au”.

Chapter 73: “Everyone on Mitch's team wore a transponder. Each agent's location was marked on the screen with a neon green dot and a number.” A neon dot would be red-orange, not green—how quickly people forget.

Chapter 78: “The 493 hp engine propelled the silver Mercedes down the Swiss autobahn at speeds sometimes approaching 150 mph. … The police were fine with fast driving, but not reckless.” There is no speed limit on German Autobahnen, but I can assure you that the Swiss police are anything but “fine” with people driving twice the speed limit of 120 km/h on their roads.

Spoilers end here.  
The conclusion is somewhat surprising. Whether we're beginning to see a flowering of compassion in Mitch Rapp or just a matter of professional courtesy is up to the reader to decide.

 Permalink

April 2010

Todd, Emmanuel. Après la démocratie. Paris: Gallimard, 2009. ISBN 978-2-07-078683-1.
This book is simultaneously enlightening, thought-provoking, and infuriating. The author is known for having forecast the collapse of the Soviet Union in 1976 and, in 2002, the end of U.S. hegemony in the political, military, and financial spheres, as we are currently witnessing. In the present work, he returns his focus to Europe, and France in particular, and examines how the economic consequences of globalisation, the emergence of low-wage economies such as China and India in direct competition with workers in the developed West, the expansion of college education from a small fraction to around a third of the population, changes in the structure of the family due to a longer lifespan and marital customs, the near eclipse of Christianity as a social and moral force in Western Europe, and the collapse of traditional political parties with which individuals would identify over long periods of time have led to a crisis in confidence among the voting public in the élites who (especially in France) have traditionally governed them, escalating to a point where serious thinkers question the continued viability of democratic governance.

Dubiety about democracy is neither limited to the author nor to France: right-like-a-stopped-clock pundit Thomas Friedman has written admiringly of China's autocracy compared to the United States, Gaia theorist James Lovelock argues that “climate change” may require the West to “put democracy on hold for a while” while other ManBearPig fabulists argue that the “failure of democracy” on this issue requires it to give way to “a form of authoritarian government by experts”.

The take in the present book is somewhat different, drawing on Todd's demographic and anthropological approach to history and policy. He argues that liberal democracy, as it emerged in Britain, France, and the United States, had as a necessary condition a level of literacy among the population of between one third and two thirds. With a lower level of literacy the general population is unable to obtain the information they need to form their own conclusions, and if a society reaches a very high level of literacy without having adopted democratic governance (for example Germany from Bismarck through World War II or the Soviet Union), then the governing structure is probably sufficiently entrenched so as to manage the flow of information to the populace and suppress democratic movements. (Actually, the author would like to believe that broad-based literacy is a necessary and sufficient condition for democracy in the long run, but to this reader he didn't make the sale.)

Once democratic governance is established, literacy tends to rise toward 100% both because governments promote it by funding education and because the citizenry has an incentive to learn to read and write in order to participate in the political process. A society with universal literacy and primary education, but only a very small class with advanced education, tends to be stable, because broad political movements can communicate with the population, and the élites which make up the political and administrative class must be responsive to the electorate in order to keep their jobs. With the broad population starting out with pretty much the same educational and economic level, the resulting society tends toward egalitarianism in wealth distribution and opportunity for advancement based upon merit and enterprise. Such a society will be an engine of innovation and production, and will produce wealth which elevates the standard of living of its population, yielding overall contentment which stabilises the society against radical change.

In the twentieth century, and particularly in the latter half, growing prosperity in developed nations led to a social experiment on a massive scale entirely unprecedented in human history. For the first time, universal secondary education was seen as a social good (and enforced by compulsory education and rising school-leaving ages), with higher (college/university) education for the largest possible fraction of the population becoming the ultimate goal. Indeed, political rhetoric in the United States presently advocates making college education available for all. In France, the number of students in “tertiary” education (the emerging term of art, to avoid calling it “superior”, which would imply that those without it are inferior) burgeoned from 200,000 in 1950 to 2,179,000 in 1995, an increase of 990%, while total population grew just 39% (p. 56). Since then, the rate of higher education has remained almost constant, with the number of students growing only 4% between 1995 and 2005, precisely the increase in population during that decade. The same plateau was achieved earlier in the U.S., while Britain, which began the large-scale expansion of higher education later, only attained a comparable level in recent years, so it's too early to tell whether that will prove a ceiling there as well.

The author calls this “stagnation” in education and blames it for a cultural pessimism afflicting all parts of the political spectrum. (He does not discuss the dumbing-down of college education which has accompanied its expansion and the attendant devaluing of the credential; this may be less the case on the Continent than in the Anglosphere.) At the same time, these societies now have a substantial portion of their population, around one third, equipped nominally with education previously reserved for a tiny élite, whose career prospects are limited simply because there aren't enough positions at the top to go around. Moreover, the educational stratification of the society into a tiny governing class, a substantial educated class inclined to feel entitled to economic rewards for all the years of their lives spent sitting in classrooms, and a majority with a secondary education strikes a blow at egalitarianism, especially in France where broad-based equality of results has been a central part of the national identity since the Revolution.

The pessimism created by this educational stagnation has, in the author's view, been multiplied to the point of crisis by what he considers to be a disastrous embrace of free trade. While he applauds the dismantling of customs barriers in Europe and supported the European “Constitution”, he blames the abundance of low-wage workers in China and India for what he sees as relentless pressure on salaries in Europe and the loss of jobs due to outsourcing of manufacturing and, increasingly, service and knowledge worker jobs. He sees this as benefiting a tiny class, maybe 1% of the population, to the detriment of all the rest. Popular dissatisfaction with this situation, and frustration in an environment where all major political parties across the ideological spectrum are staunch defenders of free trade, has led to the phenomenon of “wipeout” elections, where the dominant political party is ejected in disgust, only to be replaced by another which continues the same policies and in turn is rejected by the electorate.

Where will it all end? Well, as the author sees it, with Nicolas Sarkozy. He regards Sarkozy and everything he represents with such an actinic detestation that one expects the crackling of sparks and odour of ozone when opening the book. Indeed, he uses Sarkozy's personal shortcomings as a metaphor for what's wrong with France, and as the organising structure of the book as a whole. And yet he is forced to come to terms with the fact that Sarkozy was elected with the votes of 53% of French voters after, in the first round, effectively wiping out the National Front, Communists, and Greens. Yet, echoing voter discontent, in the municipal elections a year later, the left was seen as the overall winner.

How can a democratic society continue to function when the electorate repeatedly empowers people who are neither competent to govern nor aligned with the self-interest of the nation and its population? The author sees only three alternatives. The first (p. 232) is the redefinition of the state from a universal polity open to all races, creeds, and philosophies to a racially or ethnically defined state united in opposition to an “other”. The author sees Sarkozy's hostility to immigrants as evidence for such a redefinition in France, but does not believe that it will be successful in diverting the electorate's attention from a falling standard of living which is due to globalisation, not to the immigrant population. The second possibility he envisions (p. 239) is the elimination, either outright or effectively, of universal suffrage at the national level and its replacement by government by unelected bureaucratic experts with authoritarian powers, along the general lines of the China so admired by Thomas Friedman. Elections would be retained for local officials, preserving the appearance of democracy while decoupling it from governance at the national level. Lest this seem an absurd possibility, as the author notes on p. 246, this is precisely the model emerging for continental-scale government in the European Union. Voters in member states elect members to a European “parliament” which has little real power, while the sovereignty of national governments is inexorably ceded to the unelected European Commission. Note that only a few member states allowed their voters a referendum on the European “constitution” or its zombie reanimation, the Treaty of Lisbon.

The third alternative, presented in the conclusion to the work, is the only one the author sees as preserving democracy. This would be for the economic core of Europe, led by France and Germany, to adopt an explicit policy of protectionism, imposing tariffs on imports from low-wage producers with the goal of offsetting the wage differential and putting an end to the pressure on European workers, the outsourcing of jobs, and the consequent destruction of the middle class. This would end the social and economic pessimism in European societies, realign the policies of the governing class with the electorate, and restore the confidence among voters in those they elect which is essential for democracy to survive. (Due to its centuries-long commitment to free trade and alignment with the United States, Todd does not expect Great Britain to join such a protectionist regime, but believes that if France and Germany were to proclaim such a policy, their economic might and influence in the European Union would be sufficient to pull in the rest of the Continent and build a Wirtschaftsfestung Europa from the Atlantic to the Russian border.) In such a case, and only in that case, the author contends, will what comes after democracy be democracy.

As I noted at the start of these comments, I found this book, among other things, infuriating. If that's all it were, I would neither have finished it nor spent the time to write such a lengthy review, however. The work is worth reading, if for nothing else, to get a sense of the angst and malaise in present-day Europe, where it is beginning to dawn upon the architects and supporters of the social democratic welfare state that it is not only no longer competitive in the global economy but also unsustainable within its own borders in the face of a demographic collapse and failure to generate new enterprises and employment brought about by its own policies. Amidst foreboding that there are bad times just around the corner, and faced with an electorate which empowers candidates whom leftists despise for being “populist”, “crude”, and otherwise not the right kind of people, there is a tendency among the Left to claim that “democracy is broken”, and that only radical, transformative change (imposed from the top down, against the will of the majority, if necessary) can save democracy from itself. This book is, I believe, an exemplar of this genre. I would expect several such books authored by leftist intellectuals to appear in the United States in the first years of a Palin administration.

What is particularly aggravating about the book is its refusal to look at the causes of the problems it proposes to address through a protectionist policy. Free trade did not create the regime of high taxation, crushing social charges, inability to dismiss incompetent workers, short work weeks and long vacations, high minimum wages and other deterrents to entry level jobs, and regulatory sclerosis which have made European industry uncompetitive, and high tariffs alone will not solve any of these problems, but rather simply allow them to persist for a while within a European bubble increasingly decoupled from the world economy. That's pretty much what the Soviet Union did for seventy years, if you think about it, and how well did that work out for the Soviet people?

Todd is so focused on protectionism as panacea that he Panglosses over major structural problems in Europe which would be entirely unaffected by its adoption. He dismisses demographic collapse as a problem for France, noting that the total fertility rate has risen over the last several years back to around 2 children per woman, the replacement rate. What he doesn't mention is that this is largely due to a high fertility rate among Muslim immigrants from North Africa, whose failure to assimilate and enter the economy is a growing crisis in France along with other Western European countries. The author dismisses this with a wave of the hand, accusing Sarkozy of provoking the “youth” riots of 2005 to further his own career, and arguing that the episode pitted genuinely discouraged youth against the ruling class and had little to do with Islam or ethnic conflict. One wonders how much time Dr. Todd has spent in the “no go” Muslim banlieues of Paris and other large European cities.

Further, Todd supports immigration and denounces restrictionists as opportunists seeking to distract the electorate with a scapegoat. But how is protectionism (closing the border to products from low wage countries) going to work, precisely, if the borders remain open to people from the Third World, many lacking any skills equipping them to participate in a modern industrialised society, and bringing with them, in many cases, belief systems hostile to the pluralism, egalitarianism, secularism, and tolerance of European nations? If the descendants of immigrants do not assimilate, they pose a potentially disastrous social and political problem, while if they do, their entry into the job market will put pressure on wages just as surely as goods imported from China.

Given Todd's record in predicting events conventional wisdom deemed inconceivable, one should be cautious in dismissing his analysis here, especially as it is drawn from the same kind of reasoning based in demographics, anthropology, and economics which informs his other work. If nothing else, it provides an excellent view of how more than fifty years' journey down the social democratic road to serfdom brings into doubt how long the “democratic” part, as well as the society, can endure.

 Permalink

Rand, Ayn. Atlas Shrugged. New York: Dutton, [1957, 1992] 2005. ISBN 978-0-525-94892-6.
There is nothing I could possibly add by way of commentary on this novel, a classic of twentieth century popular fiction, one of the most discussed books of the epoch, and, more than fifty years after publication, still (at this writing) in the top two hundred books by sales rank at Amazon.com. Instead, I will confine my remarks to my own reactions upon reading this work for the third time and how it speaks to events of the present day.

I first read Atlas Shrugged in the summer of that most eventful year, 1968. I enjoyed it immensely, finding it not just a gripping story, but also, as Rand intended, a thorough (and in some ways, too thorough) exposition of her philosophy as glimpsed in The Fountainhead, which I'd read a few years earlier. I took it as an allegorical story about the pernicious effects and ultimate consequences of collectivism and the elevation of altruism over self-interest and need above earned rewards, but viewed the world in which it was set and the events which occurred there much as I did those of Orwell's 1984 and Heinlein's If This Goes On—: a cautionary tale showing the end point of trends visible in the contemporary world. But the world of Atlas Shrugged, like those of Orwell and Heinlein, seemed very remote from that of 1968—we were going to the Moon, and my expectations for the future were more along the lines of 2001 than Rand's dingy and decaying world. Also, it was 1968, for Heaven's sake, and I perceived the upheavals of the time (with a degree of naïveté and wrongheadedness I find breathtaking at this remove) as a sovereign antidote to the concentration of power and oppression of the individual, which would set things aright long before productive people began to heed Galt's call to shed the burden of supporting their sworn enemies.

My next traverse through Atlas Shrugged was a little before 1980. The seventies had taken a lot of the gloss off the bright and shiny 1968 vision of the future, and having run a small business for the latter part of that sorry decade, the encroachment of ever-rising taxes, regulation, and outright obstruction by governments at all levels was very much on my mind, which, along with the monetary and financial crises created by those policies plus a rising swamp of mysticism, pseudoscience, and the ascendant anti-human pagan cult of environmentalism, made it entirely plausible to me that the U.S. might tip over into the kind of accelerating decline described in the middle part of the novel. This second reading of the book left me with a very different impression than the first. This time I could see, from my own personal experience and in the daily news, precisely the kind of events foreseen in the story. It was no longer a cautionary tale but instead a kind of hitch-hiker's guide to the road to serfdom. Curiously, this reading of the book caused me to shrug off the funk of demoralisation and discouragement and throw myself back into the entrepreneurial fray. I believed that the failure of collectivism was so self-evident that a turning point was at hand, and the landslide election of Reagan shortly thereafter appeared to bear this out. The U.S. was committed to a policy of lower taxes, rolling back regulations, standing up to aggressive collectivist regimes around the world, and opening the High Frontier with economical, frequent, and routine access to space (remember that?). While it was hardly the men of the mind returning from Galt's Gulch, it was good enough for me, and I decided to make the best of it and contribute what I could to what I perceived as the turnaround.
As a footnote, it's entirely possible that if I hadn't reread Atlas Shrugged around this time, I would have given up on entrepreneurship and gone back to work for the Man—so in a way, this book was in the causal tree which led to Autodesk and AutoCAD. In any case, although working myself to exhaustion and observing the sapping of resources by looters and moochers after Autodesk's initial public stock offering in 1985, I still felt myself surfing on a wave of unbounded opportunity and remained unreceptive to Galt's pitch in 1987. In 1994? Well….

What with the eruption of the most recent financial crisis, the veer toward the hard left in the United States, and increasing talk of productive people opting to “go Galt”, I decided it was time for another pass through Atlas Shrugged, so I started reading it for the third time in early April 2010 and finished it in a little over two weeks, including some marathon sessions where I just didn't want to put it down, even though I knew the characters, principal events, and the ending perfectly well. What was different, and strikingly so, from the last read three decades ago, was how astonishingly prescient this book, published in 1957, was about events unfolding in the world today. As I noted above, in 1968 I viewed it as a dystopia set in an unspecified future. By 1980, many of the trends described in the book were clearly in place, but few of their ultimate dire consequences had become evident. In 2010, however, the novel is almost like reading a paraphrase of the history of the last quarter century. “Temporary crises”, “states of emergency”, “pragmatic responses”, calls to “sacrifice for the common good” and to “share the wealth” which seemed implausible then are the topics of speeches by present day politicians and news headlines. Further, the infiltration of academia and the news media by collectivists, their undermining the language and (in the guise of “postmodernism”) the foundations of rational thought and objective reality, which were entirely beneath the radar (at least to me) as late as 1980, are laid out here as clear as daylight, with the simultaneously pompous and vaporous prattling of soi-disant intellectuals which doubtless made the educated laugh when the book first appeared now having become commonplace in the classrooms of top tier universities and journals of what purport to be the humanities and social sciences. What once seemed a fantastic nightmare painted on a grand romantic canvas is in the process of becoming a shiveringly accurate prophecy.

So, where are we now? Well (if you'll allow me to use the word) objectively, I found the splice between our real-life past and present to be around the start of chapter 5 of part II, “Account Overdrawn”. This is about 500 pages into the hardback edition of 1168 pages, or around 40%. Obviously, this is the crudest of estimates—many things occur before that point which haven't yet in the real world and many afterward have already come to pass. Yet still, it's striking: who would have imagined piracy on the high seas to be a headline topic in the twenty-first century? On this reading I was also particularly struck by chapter 8 of part III, “The Egoist” (immediately following Galt's speech), which directly addresses a question I expect will soon intrude into the public consciousness: the legitimacy or lack thereof of nominally democratic governments. This is something I first wrote about in 1988, but never expected to actually see come onto the agenda. A recent Rasmussen poll, however, finds that just 21% of voters in the United States now believe that their federal government has the “consent of the governed”. At the same time, more than 40% of U.S. tax filers pay no federal income tax at all, and a majority receive more in federal benefits than they pay in taxes. The top 10% of taxpayers (by Adjusted Gross Income) pay more than 70% of all personal income taxes collected. This makes it increasingly evident that the government runs the risk of becoming, if it has not already become, a racket in which the non-taxpaying majority use the coercive power of the state to shake down a shrinking taxpaying minority. This is precisely the vicious cycle which reaches its endpoint in this chapter, where the government loses all legitimacy in the eyes of not only its victims, but even its beneficiaries and participants. I forecast that should this trend continue (and that's the way to bet), within two years we will see crowds of people in the U.S. holding signs demanding “By what right?”.

In summary, I very much enjoyed revisiting this classic; given that it was the third time through and I don't consider myself to have changed all that much in the many years since the first time, this didn't come as a surprise. What I wasn't expecting was how differently the story is perceived based on events in the real world up to the time it's read. From the current perspective, it is eerily prophetic. It would be amusing to go back and read reviews at the time of its publication to see how many anticipated that happening. The ultimate lesson of Atlas Shrugged is that the looters subsist only by the sanction of their victims and through the product of their minds, which cannot be coerced. This is an eternal truth, which is why this novel, which states it so clearly, endures.

The link above is to the hardbound “Centennial Edition”. There are trade paperback, mass market paperback, and Kindle editions available as well. I'd avoid the mass market paperback, as the type is small and the spines of books this thick tend to disintegrate as you read them. At current Amazon prices, the hardcover isn't all that much more than the trade paperback and will be more durable if you plan to keep it around or pass it on to others. I haven't seen the Kindle transfer; if it's well done, it would be marvellous, as any print edition of this book is more than a handful.

 Permalink

Hickam, Homer H., Jr. Back to the Moon. New York: Island Books, 1999. ISBN 978-0-440-23538-5.
Jerry Pournelle advises aspiring novelists to plan to throw away their first million words before mastering the craft and beginning to sell. (Not that writing a million words to the best of your ability and failing to sell them guarantees success, to be sure. It's just that most novelists who eventually become successful have a million words of unsold manuscripts in the trunk in the attic by the time they break into print and become well known.) When lightning strikes and an author comes from nowhere to bestseller celebrity overnight, there is a strong temptation, not only for the author but also for the publisher, to dig out those unsold manuscripts, perhaps polish them up a bit, and rush them to market to capitalise upon the author's newfound name recognition. Pournelle writes, “My standard advice to beginning writers is that if you do hit it big, the biggest favor you can do your readers is to burn your trunk; but in fact most writers don't, and some have made quite a bit of money off selling what couldn't be sold before they got famous.”

Here, I believe, we have an example of what happens when an author does not follow that sage advice. Homer Hickam's Rocket Boys (July 2005), a memoir of his childhood in West Virginia coal country at the dawn of the space age, burst onto the scene in 1998, rapidly climbed the New York Times bestseller list, and was made into the 1999 film October Sky. Unknown NASA engineer Hickam was suddenly a hot literary property, and pressure to “sell the trunk” was undoubtedly intense. Out of the trunk, onto the press, into the bookshops—and here we have it, still in print a decade later.

The author joined NASA's Marshall Space Flight Center in 1981 as an aerospace engineer and worked on a variety of projects involving the Space Shuttle, including training astronauts for a number of demanding EVA missions. In the Author's Note, he observes that, while initially excited to work on the first reusable manned spacecraft, he, like many NASA engineers, eventually became frustrated with going in circles around the Earth and wished that NASA could once again send crews to explore as they had in the days of Apollo. He says, “I often found myself lurking in the techno-thriller or science fiction area of bookstores looking unsuccessfully for a novel about a realistic spacecraft, maybe even the shuttle, going back to the moon. I never found it. One day it occurred to me that if I wanted to read such a book, I would have to write it myself.”

Well, here it is. And if you're looking for a thriller about a “realistic spacecraft, maybe even the shuttle, going back to the moon”, sadly, you still haven't found it. Now, the odd thing is that this book is actually quite well written—not up to the standard of Rocket Boys, but hardly the work of a beginner. It is tightly plotted, the characters are interesting and develop as the story progresses, and the author deftly balances multiple plot lines with frequent “how are they going to get out of this?” cliffhangers, pulling it all together at the end. These are things you'd expect an engineer to have difficulty mastering as a novelist. You'd figure, however, that somebody with almost two decades of experience going to work every day at NASA and with daily contacts with Shuttle engineers and astronauts would get the technical details right, or at least make them plausible. Instead, what we have is a collection of laugh-out-loud howlers for any reader even vaguely acquainted with space flight. Not far into the book (say, fifty or sixty pages, or about a hundred “oh come on”s), I realised I was reading the literary equivalent of the Die Hard 2 movie, which the Wall Street Journal's reviewer dubbed “aviation for airheads”. The present work, “spaceflight for space cases”, is much the same: it works quite well as a thriller as long as you know absolutely nothing about the technical aspects of what's going on. It's filled with NASA jargon and acronyms (mostly used correctly) which lend it a feeling of authenticity much like Tom Clancy's early books. However, Clancy (for the most part) gets the details right: he doesn't, for example, have a submarine suddenly jump out of the water, fly at Mach 5 through the stratosphere, land on a grass runway in a remote valley in the Himalayas, then debark an assault team composed of amateurs who had never before fired a gun.

Shall we go behind the spoiler curtain and take a peek at a selection of the most egregious and side splitting howlers in this yarn?

Spoiler warning: Plot and/or ending details follow.  
  • Apollo 17 landed in the Taurus-Littrow region, not “Frau [sic] Mauro”. Apollo 14 landed at Fra Mauro.
  • In the description of the launch control centre, it is stated that Houston will assume control “the moment Columbia lifted a millimeter off the Cape Canaveral pad”. In fact, Houston assumes control once the launch pad tower has been cleared.
  • During the description of the launch, the ingress team sees the crew access arm start to retract and exclaims “Automatic launch sequence! We've got to go!”. In fact, the ingress team leaves the pad before the T−9 minute hold, and the crew access arm retracts well before the automatic sequence starts at T−31 seconds.
  • There are cameras located all over the launch complex which feed into the launch control centre. Disabling the camera in the white room would still leave dozens of other cameras active which would pick up the hijinks underway at the pad.
  • NASA human spaceflight hardware is manufactured and prepared for flight under the scrutiny of an army of inspectors who verify every aspect of the production process. Just how could infiltrators manage to embed a payload in the base of the shuttle's external tank in the manufacturing plant at Michoud, and how could this extra cargo not be detected anywhere downstream? If the cargo were of any substantial size, the tank would fail fit tests on the launch platform, and certainly some pad rat would have said “that's not right” just looking at it.
  • Severing the data cable between the launch pad and the firing room would certainly cause the onboard automatic sequencer to halt the countdown. Even though the sequencer controls the launch process, it remains sensitive to a cutoff signal from the control centre, and loss of communications would cause it to abort the launch sequence. Further, the fact that the shuttle hatch was not closed would have caused the auto-sequencer to stop due to a cabin pressure alarm. And the hatch through which one boards the shuttle is not an “airlock”.
  • The description of the entire terminal countdown and launch process suffers from the time dilation common in bad movie thrillers, where several minutes of furious activity occur as the bomb counts down its last ten seconds.
  • The intended crew of the shuttle remains trapped in the pad elevator when the shuttle lifts off. They are described as having temporary hearing loss due to the noise. In fact, their innards would have been emulsified by the acoustic energy of the solid rocket boosters, then cremated and their ashes scattered by the booster plume.
  • The shuttle is said to have entered a 550 mile orbit with the external tank (ET) still attached. This is impossible; the highest orbit ever achieved by the shuttle was around 385 miles on the Hubble deployment and service missions, and this was a maximum-performance effort. Not only could the shuttle not reach 550 miles on the main engines, the orbital maneuvering system (OMS) would not have the velocity change capability (delta-V) required to circularise the orbit at this altitude with the ET still attached. And by the way, who modified the shuttle computer ascent software to change the launch trajectory and bypass ET jettison, and who loaded the modified software into the general purpose computers, and why was the modified software not detected by the launch control centre's pre-launch validation of the software load?
  • If you're planning a burn to get on a trans-lunar injection trajectory, you want to do it in as low an Earth orbit as possible in order to get the maximum assist to the burn. An orbit as low as used by the later Apollo missions probably wouldn't work due to the drag of having the ET attached, but there's no reason you'd want to go as high as 550 miles; that's just wasting energy.
  • The “Big Dog” and “Little Dog” engines are supposed to have been launched on an Indian rocket, with the mission camouflaged as a failed communications satellite launch. But, whatever the magical properties of Big Dog, a storable propellant rocket (which it must be, since it's been parked in orbit for months waiting for the shuttle to arrive) with sufficient delta-V to boost the entire shuttle onto a trans-lunar trajectory, enter lunar orbit, and then leave lunar orbit to return to Earth would require a massive amount of fuel, be physically very large, and hence require a heavy lift launcher which (in addition to the Indians not possessing one) would not be used for a communications satellite mission. The Saturn S-IVB stage which propelled Apollo to the Moon was 17.8 metres long, 6.6 metres in diameter, and massed 119,000 kg fully fueled—and it was boosting a stack less massive than a space shuttle, was used only for trans-lunar injection (not lunar orbit entry and exit), and burned higher-performance hydrogen and oxygen propellants. Big Dog would not be a bolt-in replacement engine for the shuttle, but rather a massive rocket stage which could hardly be disguised as a communications satellite.
  • On the proposed “rescue” mission by Endeavour, commander Grant proposes dropping the space station node in the cargo bay in a “parking orbit”, whence the next shuttle mission could capture it and move it to the Space Station. But in order to rendezvous with Columbia, Endeavour would have to launch into its 28.7 degree inclination orbit, leaving the space station node there. The shuttle OMS does not remotely have the delta-V for a plane change to the 51 degree orbit of the station, so there is no way the node could be delivered to the station.
  • A first-time astronaut is a “rookie”, not “rooky”. A rook is a kind of crow or a chess piece.
  • Removing a space shuttle main engine (SSME) is a complicated and lengthy procedure on the ground, requiring special tools and workstands. It is completely impossible that this could be done in orbit, especially by two people with no EVA experience, working in a part of the shuttle where there are no handgrips or restraints for EVA work, and where the shuttle's arm (remote manipulator system) cannot reach. The same goes for attaching Big Dog as a replacement.
  • As Endeavour closes in, her commander worries that “[t]oo much RCS propellant had been used to sneak up on Columbia”. But it's the orbital maneuvering system (OMS), not the reaction control system (RCS) which is used in rendezvous orbit-change maneuvers.
  • It's “Chernobyl” (Чорнобиль), not “Chernoble”.
  • Why, on a mission where all the margins are stretched razor-thin, would you bring along a spare lunar lander when you couldn't possibly know you'd need it?
  • Olivia Grant flies from Moscow to Alma-Ata on a “TU-144 transport”. The TU-144 supersonic transport was retired from service in 1978 after only 55 scheduled passenger flights. Even if somebody put a TU-144 back into service, it certainly wouldn't take six hours for the flight.
  • Vice President Vanderheld says, “France, for one, has spent trillions on thermonuclear energy. Fusion energy would destroy that investment overnight.” But fusion is thermonuclear energy!
  • When the tethered landing craft is dropped on the Moon from the shuttle, its forward velocity will be 3,700 miles per hour, the same as the shuttle's. The only way for it to “hit the lunar surface at under a hundred miles per hour” would be for the shuttle to cancel its entire orbital velocity before dropping the lander and then, in order to avoid crashing into the lunar surface, do a second burn as it was falling to restore its orbital velocity. Imparting such a delta-V to the entire shuttle would require a massive burn, for which there would be no reason to have provided the fuel in the mission plan. Also, at the moment the shuttle started the burn to cancel its orbital velocity, the tether would string out behind the shuttle, not remain at its altitude above the Moon.
  • The Apollo 17 lunar module Challenger's descent stage is said to have made a quick landing and hence have “at least half its propellant left”. Nonsense—while Cernan and Schmitt didn't land on fumes like Apollo 11 (and, to a lesser extent, Apollo 14), no Apollo mission landed with the tanks anywhere near half-full. In any case, unless I'm mistaken, residual descent engine propellant was dumped shortly after landing; this was certainly done on Apollo 11 (you can hear the confirmation on my re-mix of the Apollo 11 landing as heard in the Eagle's cabin), and I've never heard of its not being done on later missions.
  • Jack connects an improvised plug to the “electronic port used to command the descent engine” on Challenger. But there were no such “ports”—connections between the ascent and descent stages were hard-wired in a bundle which was cut in two places by a pyrotechnic “guillotine” when the ascent stage separated. The connections to the descent engine would be a mass of chopped cables which would take a medusa of space Barney clips and unavailable information to connect to.
  • Even if there were fuel and oxidiser left in the tanks of the descent stage, the helium used to pressure-feed the propellants to the engine would have been long gone. And the hypergolic combustion wouldn't make a “plume of orange and scarlet” (look at the Apollo 17 liftoff video), and without a guidance system for the descent engine, there would be no chance of entering lunar orbit.
  • The tether is supposed to be used to generate electrical power after the last fuel cell fails. But this is done far from the Earth, where the Earth's magnetic field is far too weak for the tether's motion through it to generate the required power.
  • Using the tether as an aerodynamic brake at reentry is absurd. The tether would have to dissipate the entire energy of a space shuttle decelerating from Mach 36 to Mach 25. Even if the tether did not immediately burn away (which it would), it would not have the drag to accomplish this in the time available before the shuttle hit the atmosphere (with the payload bay doors still open!). And the time between the tethered satellite entering the atmosphere and the shuttle hitting the stony blue would be a matter of seconds, far too little to close the payload bay doors.
  • “The space agency had gotten out of the operations business and moved into the forefront of research and development, handing over its scientific and engineering knowledge to American commercial space operators.” Now here we have an actually prophetic passage. Let's hope it comes to pass!
  • “[W]hen the sun goes down into the sea, just as it sinks out of sight, its rays flash up through the water. If you look fast, you'll see it—a green flash.” Well, no—actually the green flash is due to atmospheric refraction and has nothing to do with water.
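
A couple of the orbital figures above are easy to check from first principles. Here's a quick back-of-the-envelope sketch in Python, using the Moon's published gravitational parameter and mean radius; the 100 km orbital altitude is my assumption, since the novel doesn't specify one:

```python
import math

# Published values: gravitational parameter of the Moon (km^3/s^2)
# and its mean radius (km).
MU_MOON = 4902.8
R_MOON = 1737.4

def circular_velocity(mu, r):
    """Speed of a circular orbit of radius r: v = sqrt(mu / r)."""
    return math.sqrt(mu / r)

# Assume the shuttle circles the Moon 100 km up (my assumption).
v_llo = circular_velocity(MU_MOON, R_MOON + 100.0)
v_mph = v_llo * 3600.0 / 1.609344   # convert km/s to miles per hour

print(f"Low lunar orbit: {v_llo:.2f} km/s, about {v_mph:.0f} mph")
```

This works out to roughly 1.6 km/s, or in the neighbourhood of 3,700 miles per hour. Anything released from the orbiter shares that forward velocity, so bringing the tethered lander to “under a hundred miles per hour” at the surface would indeed require cancelling essentially the entire orbital velocity first.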

Apart from these particulars (and they are just a selection from a much larger assortment in the novel), the entire story suffers from what I'll call the “Tom Swift, let's go!” fallacy of science fiction predating the golden age of the 1930s. The assumption throughout this book is that people can design fantastically complicated hardware which interfaces with existing systems, put it into service by people with no training on the actual hardware and no experience in the demanding environment in which it will be used, cope with unexpected reverses on the fly, always having the requisite resources to surmount the difficulties, and succeed in the end. Actually, I'm being unfair to Tom Swift in identifying such fiction with that character. The original Tom Swift novels always had him testing his inventions extensively before putting them into service, and modifying them based upon the test results. Not here: everything is not only good to go on the first shot, it is able to overcome disasters because the necessary hardware has always providentially been brought along.

Spoilers end here.  
If you've trudged through the spoiler block at my side, you may be exasperated and wondering why I'd spend so much time flensing such a bad novel. Well, it's because I'd hoped for so much and was sorely disappointed. Had the author not said the goal was to be “realistic”, I'd have put it down after the first fifty pages or so and, under the rules of engagement of this chronicle, you'd have never seen it here. Had it been presented as a “spaceflight fantasy”, I might have finished it and remarked about how well the story was told; hey, I give my highest recommendation to a story about a trip to the Moon launched from a 900 foot long cannon!

I'll confess: I've been wanting to write a back to the Moon novel myself for at least thirty years. My scenario was very different (and I hereby place it into the public domain for scribblers more talented and sedulous than I to exploit): a signal with a complex encoding is detected, originating from a site on the Moon where no known probe has landed. The message is a number: “365”, “364”, “363”,… decrementing every day. Now what would it take to go there and find out what was sending it before the countdown reaches zero? The story was to be full of standing in line to file forms to get rocket stages and capsules out of museums, back channel discussions between Soviet and U.S. space officials, and eventual co-operation on a cobbled together mission which would end up discovering…but then you'd have to have read the story. (Yes, much of this has been done in movies, but they all postdate this treatment.)

Since I'll probably never write that story, I'd hoped this novel would fill the niche, and I'm disappointed it didn't. If you know nothing about spaceflight and don't care about the details, this is a well-crafted thriller, which accounts for its many five star reviews at Amazon. If you care about technical plausibility, you can take this as either one of those books to hurl into the fireplace to warm you up on a cold winter evening or else as a laugh riot to enjoy for what it is and pass on to others looking for a diversion from the uncompromising physics of the real world.

Successful novelists, burn the trunk!

 Permalink

Landis, Tony R. and Dennis R. Jenkins. Experimental and Prototype U.S. Air Force Jet Fighters. North Branch, MN: Specialty Press, 2008. ISBN 978-1-58007-111-6.
This beautifully produced book covers every prototype jet fighter developed by the U.S. Air Force from the beginning of the jet age in the 1940s through the present day. Only concepts which at least entered the stage of prototype fabrication are included: “paper airplane” conceptual studies are not discussed, except in conjunction with designs which were actually built. The book is lavishly illustrated, with many photographs in colour, and the text is well written and almost free of typographical errors. As the title states, only Air Force prototypes are discussed—Navy and CIA development projects are covered only if Air Force versions were subsequently manufactured.

The first decade of the jet age was a wild and woolly time in the history of aeronautical engineering; we'll probably never see its like again. Compared to today's multi-decade development projects, many of the early jet designs went from contract award to flying hardware in less than a year. Between May 1953 and December 1956, no fewer than six operational jet fighter prototypes (F-100, F-101, F-102, F-104, F-105, and F-106) made their first flights. Among prototypes which never entered into serial production were concepts which illustrate the “try anything” spirit of the age. Consider, for example, the XP-81 which had a turboprop engine in the nose and a turbojet in the tail; the XF-84H with a turbine driven propeller whose blade tips exceeded the speed of sound and induced nausea in pilots and ground crews, who nicknamed it “Thunderscreech”; or the tiny XP-85 which was intended to be carried in the bomb bay of a B-36 and launched to defend the bomber should enemy interceptors attack.

So slow has been the pace of fighter development since 1960 that the first 200 pages of the book cover events up to 1960 and everything since occupies only forty pages. Recent designs are covered in the same detail as those of the golden age—it's just that there haven't been all that many of them.

If you enjoy this book, you'll probably also want to read the companion, U.S. Air Force Prototype Jet Fighters Photo Scrapbook, which collects hundreds of photographs of the planes featured in the main work which, although often fascinating, didn't make the cut for inclusion in it. Many photos, particularly of newer planes, are in colour, although some older colour shots have noticeably faded.

 Permalink

May 2010

Invisible Committee, The. The Coming Insurrection. Los Angeles: Semiotext(e)/MIT Press, [2007] 2009. ISBN 978-1-58435-080-4.
I have not paid much attention to the “anti-globalisation” protesters who seem to pop up at gatherings of international political and economic leaders, for example at the WTO Ministerial Conference in Seattle in 1999 and the Genoa G8 Summit in 2001. In large part this is because I have more interesting things with which to occupy my time, but also because, despite saturation media coverage of such events, I was unable to understand the agenda of the protesters, apart from smashing windows and hurling epithets and improvised projectiles at the organs of state security. I understand what they're opposed to, but couldn't for the life of me intuit what policies would prevail if they had their way. Still, as they are often described as “anarchists”, I, as a flaming anarchist myself, could not help but be intrigued by those so identified in the legacy media as taking the struggle to the street.

This book, written by an anonymous group of authors, has been hailed as the manifesto of this movement, so I hoped that reading it would provide some insight into what it was all about. My hope was in vain. The writing is so incoherent and the prose so impenetrable that I closed it with no more knowledge of the philosophy and programme of its authors than when I opened it. My general perception of the “anti-globalisation” movement was one of intellectual nonentities spewing inchoate rage at the “system” which produces the wealth that allows them to live their slacker lives and flit from protest to protest around the globe. Well, if this is their manifesto, then indeed that's all there is to it. The text is nearly impossible to decipher, being written in a dialect of no known language. Many paragraphs begin with an unsubstantiated and often absurd assertion, then follow it with successive verb-free sentence fragments which seem to be intended to reinforce the assertion. I suppose that if you read it as a speech before a mass assembly of fanatics who cheer whenever they hear one of their trigger words it may work, but one would expect savvy intellectuals to discern the difference in media and adapt accordingly. Whenever the authors get backed into an irreconcilable logical corner, they just drop an F-bomb and start another paragraph.

These are people so clueless that I'll have to coin a new word for those I've been calling clueless all these many years. As far as I can figure out, they assume that they can trash the infrastructure of the “system”, and all of the necessities of their day to day urban life will continue to flow to them thanks to the magic responsible for that today. These “anarchists” reject the “exploitation” of work—after all, who needs to work? “Aside from welfare, there are various benefits, disability money, accumulated student aid, subsidies drawn off fictitious childbirths, all kinds of trafficking, and so many other means that arise with every mutation of control.” (p. 103) Go anarchism! Death to the state, as long as the checks keep coming! In fact, it is almost certain that the effete would-be philosophes who set crayon (and I don't mean the French word for “pencil”) to paper to produce this work will be among the first wave of those to fall in the great die-off starting between 72 and 96 hours after that event towards which they so sincerely strive: the grid going down. Want to know what I'm talking about? Turn off the water main where it enters your house and see what happens in the next three days if you assume you can't go anywhere else where the water is on. It's way too late to learn about “rooftop vegetable gardens” when the just-in-time underpinnings which sustain modern life come to a sudden halt. Urban intellectuals may excel at publishing blows against the empire, but when the system actually goes down, bet on rural rednecks to be the survivors. Of course, as far as I can figure out what these people want, it may be that Homo sapiens returns to his roots—namely digging for roots and grubs with a pointed stick. Perhaps rather than flying off to the next G-20 meeting to fight the future, they should spend a week in one of the third world paradises where people still live that way and try it out for themselves.

The full text of the book is available online in English and French. Lest you think the Massachusetts Institute of Technology is a beacon of rationality and intelligence in a world going dark, it is their university press which distributes this book.

 Permalink

Flynn, Vince. Act of Treason. New York: Pocket Books, 2006. ISBN 978-1-4165-4226-1.
This is the seventh novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. I packed this thriller as an “airplane book” on a recent trip. The novel was far more successful than the journey, which ended up as a 12 hour round trip from Switzerland to England and back when my onward flight was cancelled thanks to an unexpected belch from volcano Whatchamacallit. By the time I got home, I was already more than 350 pages into the 467 page paperback, and I finished it over the next two days. Like all Vince Flynn books, this is a page turner, although this time there's less action and more puzzling out of shadowy connections.

The book begins with a terrorist attack on the motorcade of a presidential candidate who, then trailing in the polls, is swept into office on a sympathy vote. Now, just before the inauguration of the new administration, Rapp captures the perpetrator of the attack, and as he and CIA director Irene Kennedy follow the trail of those who ordered the strike, they begin to suspect a plot which will shake the U.S. to its foundations and undermine the legitimacy of its government. Under a tight deadline as inauguration day approaches, Rapp and Kennedy have to find out the facts and take direct action to avert calamity.

Characters from earlier books in the series appear here, and references to events which occurred earlier in the timeline are made, but this book works perfectly fine as a stand-alone novel—you can pick up the Mitch Rapp saga here and miss little or nothing (although there will, inevitably, be spoilers for events in the earlier books).

 Permalink

Kennedy, Gregory P. The Rockets and Missiles of White Sands Proving Ground, 1945–1958. Atglen, PA: Schiffer Military History, 2009. ISBN 978-0-7643-3251-7.
Southern New Mexico has been a centre of American rocketry from its origin to the present day. After being chased out of Massachusetts due to his inventions' proclivity for making ear-shattering detonations and starting fires, Robert Goddard moved his liquid fuel rocket research to a site near Roswell, New Mexico in 1930 and continued to launch increasingly advanced rockets from that site until 1943, when he left to do war work for the Navy. Faced with the need for a range to test the missiles developed during World War II, in February 1945 the U.S. Army acquired a site stretching 100 miles north from the Texas-New Mexico border near El Paso and 41 miles east-west at the widest point, designated the “White Sands Proving Ground”, taking its name from the gypsum sands found in the region, which is also home to the White Sands National Monument.

Although established before the end of the war to test U.S. missiles, the first large rockets launched at the site were captured German V-2s (December 2002), with the first launched (unsuccessfully) in April 1946. Over the next six years, around seventy V-2s lifted off from White Sands, using the V-2's massive (for the time) one ton payload capacity to carry a wide variety of scientific instruments into the upper atmosphere and the edge of space. In the Bumper project, the V-2 was used as the booster for the world's first two stage liquid rocket, with its WAC Corporal second stage attaining an altitude of 248 miles: higher than some satellites orbit today (it did not, of course, attain anything near orbital velocity, and quickly fell back to Earth).

Simultaneously with launches of the V-2, U.S. rocketeers arrived at White Sands to test their designs—almost every U.S. missile of the 1940s and 1950s made its first flight there. These included research rockets such as Viking and Aerobee (first launched in 1948, the Aerobee remained in service until 1985, with a total of 1037 launched); the Corporal, Sergeant, and Redstone ballistic missiles; the Loki, Nike, and Hawk anti-aircraft missiles; and a variety of tactical missiles including the unguided (!) nuclear-tipped Honest John.

White Sands in the forties and fifties was truly the Wild West of rocketry. Even by the standards of fighter aircraft development in the epoch, this was by guess and by gosh engineering in its purest incarnation. Consider, for example, Viking 8, which broke loose from the launch pad during a static test when hold-down fittings failed, and was allowed to fly to 20,000 feet to see what would happen (p. 97). Or Viking 10, whose engine exploded on the launch pad and then threatened a massive explosion because the leaking fuel, as it drained away, left a vacuum which was causing the tankage to crumple. An intrepid rocketeer was sent out of the blockhouse with a carbine to shoot a hole in the top of the fuel tank and allow air to enter (p. 100)—problem solved! (The rocket was rebuilt and later flew successfully.) Then there was the time they ran out of 90% hydrogen peroxide and were told the first Viking launch would have to be delayed for two weeks until a new shipment could arrive by rail. Can't have that! So two engineers drove a drum of the highly volatile and corrosive substance in the back of a station wagon from Buffalo, New York to White Sands to meet the launch deadline (p. 79). In the Nike program, people worried about whether its aniline fuel would be sufficiently available under tactical conditions, so they tried using gasoline as fuel instead—BOOM! Nope, guess not (p. 132). With all this “innovation” going on, they needed a suitable place from which to observe it, so the pyramid-shaped blockhouse had reinforced concrete walls ten feet thick with a roof 27 feet thick at the peak. This was designed to withstand a direct impact from a V-2 falling from an altitude of 100 miles. “Once the rockets are up, who cares where they come down?”

And the pace of rockets going up was absolutely frenetic, almost inconceivable by the standards of today's hangar queens and launch pad prima donnas (some years ago, a booster which sat on the pad for more than a year was nicknamed the “civil servant”: it won't work and you can't fire it). By contrast, a single development program, the Loki anti-aircraft missile, conducted a total of 2282 launches at White Sands in 1953 and 1954 (p. 115)—that's an average of more than three a day, counting weekends and holidays!

The book concludes in 1958 when White Sands Proving Ground became White Sands Missile Range (scary pop-up at this link), which remains a centre of rocket development and testing to this day. With the advent of NASA and massively funded, long-term military procurement programs, much of the cut, try, and run like Hell days of rocketry came to a close; this book covers that period which, if not a golden age, was a heck of a lot of fun for engineers who enjoy making loud noises and punching holes in the sky.

The book is gorgeous, printed on glossy paper, with hundreds of illustrations. I noted no typographical or factual errors. A complete list of all U.S. V-2, WAC Corporal, and Viking launches is given in appendices at the end.

 Permalink

Austen, Jane and Seth Grahame-Smith. Pride and Prejudice and Zombies. Philadelphia: Quirk Books, 2009. ISBN 978-1-59474-334-4.
Jane Austen's Pride and Prejudice is the quintessential British Regency era novel of manners. Originally published in 1813, it has been endlessly adapted to the stage, film, and television, and has been a staple of English literature classes from the Victorian era through post-post-modern de-deconstructionist decadence. What generations of litterateurs missed, however, is its fundamental shortcoming: there aren't any zombies in it! That's where the present volume comes in.

This work preserves 85% of Jane Austen's original text and names her as the primary author (hey, if you can't have a dead author in a zombie novel, where can you?), but enhances the original story with “ultraviolent zombie mayhem” seamlessly woven into the narrative. Now, some may consider this a travesty and desecration of a literary masterwork, but look at it this way: if F-14s are cool and tyrannosaurs are cool, imagine how cool tyrannosaurs in F-14s would be! Adopting this Calvinist approach allows one to properly appreciate what has been done here.

The novel is set in an early 19th century England afflicted for five and fifty years with the “strange plague” that causes the dead to rise and stagger across the countryside alone or in packs, seeking to kill and devour the succulent brains of the living. Any scratch inflicted by one of these creatures (variously referred to as “unmentionables”, “sorry stricken”, “manky dreadfuls”, “Satan's armies”, “undead”, or simply “zombies”) can infect the living with the grievous affliction and transform them into another compulsive cranium cruncher. The five Bennet sisters have been sent by their father to be trained in the deadly arts by masters in China and have returned a formidable fighting force, sworn by blood oath to the Crown to defend Hertfordshire against the zombie peril until the time of their marriage. There is nothing their loquacious and rather ditzy mother wants more than to see her five daughters find suitable matches, and she fears their celebrated combat credentials and lack of fortune will deter the wealthy and refined suitors she imagines for them. The central story is the contentious relations and blossoming romance between Elizabeth Bennet and Fitzwilliam Darcy, a high-born zombie killer extraordinaire whose stand-offish manner is initially interpreted as arrogance and disdain for the humble Bennets. Can such fierce and proud killers find love and embark upon a life fighting alongside one another in monster murdering matrimony?

The following brief extracts give a sense of what you're getting into when you pick up this book. None are really plot spoilers, but I've put them into a spoiler block nonetheless because some folks might want to encounter these passages in context to fully enjoy the roller coaster ride between the refined and the riotous.

Spoiler warning: Plot and/or ending details follow.  
  • From a corner of the room, Mr. Darcy watched Elizabeth and her sisters work their way outward, beheading zombie after zombie as they went. He knew of only one other woman in Great Britain who wielded a dagger with such skill, such grace, and deadly accuracy.

    By the time the girls reached the walls of the assembly hall, the last of the unmentionables lay still.

    Apart from the attack, the evening altogether passed off pleasantly for the whole family. Mrs. Bennet had seen her eldest daughter much admired by the Netherfield party. … (Chapter 3)

  • Elizabeth, to whom Jane very soon communicated the chief of all this, heard it in silent indignation. Her heart was divided between concern for her sister, and thoughts of going immediately to town and dispensing the lot of them.

    “My dear Jane!” exclaimed Elizabeth, “you are too good. Your sweetness and disinterestedness are really angelic; you wish to think all the world respectable, and are hurt if I speak of killing anybody for any reason! …” (Chapter 24)

  • But why Mr. Darcy came so often to the Parsonage, it was more difficult to understand. It could not be for society, as he frequently sat there ten minutes together without opening his lips; and when he did speak, it seemed the effect of necessity rather than choice. He seldom appeared really animated, even at the sight of Mrs. Collins gnawing upon her own hand. What remained of Charlotte would have liked to believe this change the effect of love, and the object of that love her friend Eliza. She watched him whenever they were at Rosings, and whenever he came to Hunsford; but without much success, for her thoughts often wandered to other subjects, such as the warm, succulent sensation of biting into a fresh brain. …

    In her kind schemes for Elizabeth, she sometimes planned her marrying Colonel Fitzwilliam. He was beyond comparison the most pleasant man; he certainly admired her, and his situation in life was most eligible; but to counterbalance these advantages, Mr. Darcy had a considerably larger head, and thus, more brains to feast upon. (Chapter 32)

  • “When they all removed to Brighton, therefore, you had no reason, I suppose, to believe them fond of each other?”

    “Not the slightest. I can remember no symptom of affection on either side, other than her carving his name into her midriff with a dagger; but this was customary with Lydia. …” (Chapter 47)

  • He scarcely needed an invitation to stay for supper; and before he went away, an engagement was formed, chiefly through his own and Mrs. Bennet's means, for his coming next morning to shoot the first autumn zombies with her husband. (Chapter 55)
  • You may as well call it impertinence. It was very little else. The fact is, you were sick of civility, of deference, of officious attention. You were disgusted with the women who were always speaking, and looking, and thinking for your approbation alone. I roused, and interested you because I was so unlike them. I knew the joy of standing over a vanquished foe; of painting my face and arms with their blood, yet warm, and screaming to the heavens—begging, nay daring, God to send me more enemies to kill. The gentle ladies who so assiduously courted you knew nothing of this joy, and therefore, could never offer you true happiness. … (Chapter 60)
Spoilers end here.  

The novel concludes with zombies still stalking England; all attempts to find a serum, including Lady Catherine's, having failed, and without hope for a negotiated end to hostilities. Successful diplomacy requires not only good will but brains. Zombies do not have brains; they eat them. So life goes on, and those who find married bliss must undertake to instruct their progeny in the deadly arts which defend the best parts of life from the darkness.

The book includes a “Reader's Discussion Guide” ideal for classroom and book club exploration of themes raised in the novel. For example:

10. Some scholars believe that the zombies were a last-minute addition to the novel, requested by the publisher in a shameless attempt to boost sales. Others argue that the hordes of living dead are integral to Jane Austen's plot and social commentary. What do you think? Can you imagine what this novel might be without the violent zombie mayhem?
Beats me.

Of course this is going to be made into a movie—patience! A comic book edition, set of postcards, and a 2011 wall calendar ideal for holiday giving are already available—go merchandising! Here is a chart which will help you sort out the relationships among the many characters in both Jane Austen's original novel and this one.

While this is a parody, as I read it I couldn't help but recall Herman Kahn's parable of the lions in New York City. Humans are almost infinitely adaptable and can come to consider almost any situation normal once they've gotten used to it. In this novel zombies are something one lives with as one of the afflictions of mortal life, like tuberculosis and crabgrass, and it is perfectly normal for young ladies to become warriors because that's what circumstances require. It gives one pause to think how many things we've all come to consider unremarkable in our own lives might be deemed bizarre and/or repellent by those of another epoch or a different culture.

 Permalink

White, Rowland. Vulcan 607. London: Corgi Books, 2006. ISBN 978-0-552-15229-7.
The Avro Vulcan bomber was the backbone of Britain's nuclear deterrent from the 1950s until the end of the 1960s, when ballistic missile submarines assumed the primary deterrent mission. Vulcans remained in service thereafter as tactical nuclear weapons delivery platforms in support of NATO forces. In 1982, the aging Vulcan force was months from retirement when Argentina occupied the Falkland Islands, and Britain summoned all of its armed services to mount a response. The Royal Navy launched a strike force, but given the distance (about 8000 miles from Britain to the Falklands) it would take about two weeks to arrive. The Royal Air Force surveyed their assets and concluded that only the Vulcan, supported by the Handley Page Victor, a bomber converted to an aerial refueling tanker, would permit it to project power to such a distant theatre.

But there were difficulties—lots of them. First of all, the Vulcan had been dedicated to the nuclear mission for decades: none of the crews had experience dropping conventional bombs, and the bomb bay racks to dispense them had to be hunted down in scrap yards. No Vulcan had performed aerial refueling since 1971, since its missions were assumed to be short range tactical sorties, and the refueling hardware had been blanked off. Crews were sent out to find and remove refueling probes from museum specimens to install on the bombers chosen for the mission. Simply navigating to a tiny island in the southern hemisphere in this pre-GPS era was a challenge—Vulcan crews had been trained to navigate by radar returns from the terrain, and there was no terrain whatsoever between their launch point on Ascension Island and landfall in the Falklands, so boffins figured out how to adapt navigation gear from obsolete VC10 airliners to the Vulcan and make it work. The Vulcan had no modern electronic countermeasures (ECM), rendering it vulnerable to Argentinian anti-aircraft defences, so an ECM pod from another aircraft was grafted onto its wing, fastening to a hardpoint which had never been used by a Vulcan. Finding it, and thereby knowing where to drill the holes, required dismantling the wing of another Vulcan.

If the preparations were remarkable, especially since they were thrown together in just a few weeks, the mission plan was audacious—so much so that one expects it would have been rejected as absurd if proposed as the plot of a James Bond film. Executing the mission to bomb the airfield on the Falkland Islands would involve two Vulcan bombers, one Nimrod marine patrol aircraft, thirteen Victor tankers, nineteen refuelings (including Victor to Victor and Victor to Vulcan), 1.5 million pounds of fuel, and ninety aircrew. And all of these resources, assembled and deployed in a single mission, managed to put just one crater in the airstrip in the Falkland Islands, denying it to Argentine fast jets, but allowing C-130 transports to continue to operate from it.

From a training, armament, improvisation, and logistics standpoint this was a remarkable achievement, and the author argues that its consequences, direct and indirect, effectively took the Argentine fast air fighter force and navy out of the conflict, and hence paved the way for the British reconquista of the islands. Today it seems quaint; you'd just launch a few cruise missiles at the airfield, cratering it and spreading area denial munitions, and that would be that, without risking a single airman. But they didn't have that option then, and so they did their best with what was available, and this epic story recounts how they pulled it off with hardware on the edge of retirement, re-purposed for a mission its designers never imagined, flown to a plan with no margin for error, on a schedule nobody could have contemplated absent wartime exigency. This is a tale of the Vulcan mission; if you're looking for a comprehensive account of the Falklands War, you'll have to look elsewhere. The Vulcan raid on the Falklands was one of those extraordinary grand gestures, like the Doolittle Raid on Japan, which cast a longer shadow in history than their direct consequences implied. After the Vulcan raid, nobody doubted the resolve of Britain, and the resulting pulling back of the Argentine forces almost certainly reduced the cost of retaking the islands from the invader.

 Permalink

Flynn, Vince. Protect and Defend. New York: Pocket Books, 2007. ISBN 978-1-4165-0503-7.
This is the eighth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. I usually wait a month or two between reading installments in this thriller saga, but since I'd devoured the previous volume, Act of Treason, earlier this month on an airline trip which went seriously awry, I decided to bend the rules and read its successor on the second attempt to make the same trip. This time both the journey and the novel were entirely successful.

The story begins with Mitch Rapp cleaning up some unfinished business from Act of Treason, then transitions into a thriller whose premises may play out in the headlines in the near future. When Iran's covert nuclear weapons facility is destroyed under mysterious circumstances, all of the players in the game, both in Iran and around the world, try to figure out what happened, who was responsible, and how they can turn events to their own advantage. Fanatic factions within the Iranian power structure see an opportunity to launch a proxy terror offensive against Israel and the United States, while those aware of the vulnerability of their country to retaliation for any attack upon those nations try to damp down the flames. The new U.S. president decides to use a back channel to approach the Iranian pragmatists with a deal to put an end to the decades-long standoff and reestablish formal relations between the nations, and dispatches the CIA director to a covert meeting with her peer, the chief of the Iranian Ministry of Intelligence and Security. But word of the meeting makes its way to the radical factions in Iran, and things go horribly wrong. It is then up to Mitch Rapp and his small team, working against the clock, to puzzle out what happened, who is responsible, and how to respond.

If you haven't read the earlier Mitch Rapp novels, you'll miss some of the context, particularly in the events of the first few chapters, but this won't detract in any way from your enjoyment of the story. Personally, I'd read (and I'm reading) the novels in order, but they are sufficiently stand-alone (particularly after the first few) that there's no problem getting into the series at any point. Vince Flynn's novels are always about the action and the characters, not preachy policy polemics. Nonetheless, one gets a sense that the strategy presented here is how the author's brain trust would like to see a confident and unapologetic West address the Iranian conundrum.

 Permalink

June 2010

Lanier, Jaron. You Are Not a Gadget. New York: Alfred A. Knopf, 2010. ISBN 978-0-307-26964-5.
In The Fatal Conceit (March 2005) Friedrich A. Hayek observed that almost any noun in the English language is devalued by preceding it with “social”. In this book, virtual reality pioneer, musician, and visionary Jaron Lanier argues that the digital revolution, which began in the 1970s with the advent of the personal computer and became a new foundation for human communication and interaction with widespread access to the Internet and the Web in the 1990s, took a disastrous wrong turn in the early years of the 21st century with the advent of the so-called “Web 2.0” technologies and “social networking”—hey, Hayek could've told you!

Like many technologists, the author was optimistic that with the efflorescence of the ubiquitous Internet in the 1990s combined with readily-affordable computer power which permitted photorealistic graphics and high fidelity sound synthesis, a new burst of bottom-up creativity would be unleashed; creative individuals would be empowered to realise not just new art, but new forms of art, along with new ways to collaborate and distribute their work to a global audience. This Army of Davids (March 2006) world, however, seems to have been derailed or at least delayed, and instead we've come to inhabit an Internet and network culture which is darker and less innovative. Lanier argues that the phenomenon of technological “lock in” makes this particularly ominous, since regrettable design decisions whose drawbacks were not even perceived when they were made tend to become entrenched and almost impossible to remedy once they are widely adopted. (For example, just look at the difficulties in migrating the Internet to IPv6.) With application layer protocols, fundamentally changing them becomes almost impossible once a multitude of independently maintained applications rely upon them to intercommunicate.

Consider MIDI, which the author uses as an example of lock-in. Originally designed to allow music synthesisers and keyboards to interoperate, it embodies a keyboardist's view of the concept of a note, which is quite different from that, say, of a violinist or trombone player. Even with facilities such as pitch bend, there are musical articulations played on physical instruments which cannot be represented in MIDI sequences. But since MIDI has become locked in as the lingua franca of electronic music production, in effect the musical vocabulary has been limited to those concepts which can be represented in MIDI, resulting in a digital world which is impoverished in potential compared to the analogue instruments it aimed to replace.
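The keyboard-centric note model Lanier describes is visible in MIDI's wire format itself: a note is a discrete on/off event with a 7-bit pitch number, and the only continuous handle on pitch is a single 14-bit pitch-bend value per channel. Here is a minimal Python sketch of the two relevant channel messages; the byte layouts follow the MIDI 1.0 specification, while the helper function names are just my own illustration:

```python
def note_on(channel, note, velocity):
    """Note On: status byte 0x9n, then a 7-bit note number and velocity.
    Pitch is quantised to 128 discrete values; there is no way to say
    'start slightly flat and slide in', as a violinist might."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def pitch_bend(channel, value):
    """Pitch Bend: status byte 0xEn, then a 14-bit value (8192 = no bend)
    split into two 7-bit data bytes, least significant first. It applies
    to the whole channel, not to an individual note."""
    v = value & 0x3FFF
    return bytes([0xE0 | (channel & 0x0F), v & 0x7F, (v >> 7) & 0x7F])

# Middle C (note 60) struck at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
# The nearest MIDI comes to a continuous glissando is a stream of
# pitch-bend messages interpolating between the quantised note numbers:
bend = pitch_bend(0, 8192 + 1024)
```

The per-channel, keyboard-shaped structure of these messages is exactly the lock-in at issue: any articulation that can't be expressed as discrete note events plus a channel-wide bend simply has no representation.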

With the advent of “social networking”, we appear to be locking in a representation of human beings as database entries with fields chosen from a limited menu of choices, and hence, as with MIDI, flattening down the unbounded diversity and potential of human individuals to categories which, not coincidentally, resemble the demographic bins used by marketers to target groups of customers. Further, the Internet, through its embrace of anonymity and throwaway identities and consequent devaluing of reputation, encourages mob behaviour and “drive by” attacks on individuals which make many venues open to the public more like a slum than an affinity group of like-minded people. Lanier argues that many of the pathologies we observe in behaviour on the Internet are neither inherent nor inevitable, but rather the consequences of bad user interface design. But with applications built on social networking platforms proliferating as rapidly as me-too venture capital hoses money in their direction, we may be stuck with these regrettable decisions and their pernicious consequences for a long time to come.

Next, the focus turns to the cult of free and open source software, “cloud computing”, “crowd sourcing”, and the assumption that a “hive mind” assembled from a multitude of individuals collaborating by means of the Internet can create novel and valuable work and even assume some of the attributes of personhood. Now, this may seem absurd, but there are many people in the Silicon Valley culture to whom these are articles of faith, and since these people are engaged in designing the tools many of us will end up using, it's worth looking at the assumptions which inform their designs. Compared to what seemed the unbounded potential of the personal computer and Internet revolutions in their early days, what the open model of development has achieved to date seems depressingly modest: re-implementations of an operating system, text editor, and programming language all rooted in the 1970s, and creation of a new encyclopedia which is structured in the same manner as paper encyclopedias dating from a century ago—oh wow. Where are the immersive massively multi-user virtual reality worlds, or the innovative presentation of science and mathematics in an interactive exploratory learning environment, or new ways to build computer tools without writing code, or any one of the hundreds of breakthroughs we assumed would come along when individual creativity was unleashed by their hardware prerequisites becoming available to a mass market at an affordable price?

Not only have the achievements of the free and open movement been, shall we say, modest, the other side of the “information wants to be free” creed has devastated traditional content providers such as the music publishing, newspaper, and magazine businesses. Now among many people there's no love lost for the legacy players in these sectors, and a sentiment of “good riddance” is common, if not outright gloating over their demise. But what hasn't happened, at least so far, is the expected replacement of these physical delivery channels with electronic equivalents which generate sufficient revenue to allow artists, journalists, and other primary content creators to make a living as they did before. Now, certainly, these occupations are a meritocracy where only a few manage to support themselves, much less become wealthy, while far more never make it. But with the mass Internet now approaching its twentieth birthday, wouldn't you expect at least a few people to have figured out how to make it work for them and prospered as creators in this new environment? If so, where are they?

For that matter, what new musical styles, forms of artistic expression, or literary genres have emerged in the age of the Internet? The lack of a viable business model for such creations has led to a situation the author describes as, “It's as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump.” One need only visit YouTube to see what he's talking about. Don't read the comments there—that path leads to despair, which is a low state.

Lanier's interests are eclectic, and a great many matters are discussed here including artificial intelligence, machine language translation, the financial crisis, zombies, neoteny in humans and human cultures, and cephalopod envy. Much of this is fascinating, and some is irritating, such as the discussion of the recent financial meltdown where it becomes clear the author simply doesn't know what he's talking about and misdiagnoses the causes of the catastrophe, which are explained so clearly in Thomas Sowell's The Housing Boom and Bust (March 2010).

I believe this is the octopus video cited in chapter 14. The author was dubious, upon viewing this, that it wasn't a computer graphics trick. I have not, as he has, dived the briny deep to meet cephalopods on their own turf, and I remain sceptical that the video represents what it purports to. This is one of the problems of the digital media age: when anything you can imagine can be persuasively computer synthesised, how can you trust any reportage of a remarkable phenomenon to be genuine if you haven't observed it for yourself?

Occasional aggravations aside, this is a thoughtful exploration of the state of the technologies which are redefining how people work, play, create, and communicate. Readers frustrated by the limitations and lack of imagination which characterises present-day software and network resources will discover, in reading this book, that tremendously empowering phrase, “it doesn't have to be that way”, and perhaps demand better of those bringing products to the market or perhaps embark upon building better tools themselves.

 Permalink

Spira, S. F., Eaton S. Lothrop, Jr., and Jonathan B. Spira. The History of Photography As Seen Through the Spira Collection. Danville, NJ: Aperture, 2001. ISBN 978-0-89381-953-8.
If you perused the back pages of photographic magazines in the 1960s and 1970s, you'll almost certainly recall the pages of advertising from Spiratone, which offered a panoply of accessories and gadgets, many tremendously clever and useful, and some distinctly eccentric and bizarre, for popular cameras of the epoch. Spiratone was the creation of Fred Spira, a refugee from Austria after the Nazi Anschluss who arrived in New York almost penniless; his ingenuity, work ethic, and sense for the needs of the burgeoning market of amateur photographers built what started as a one-man shop into a flourishing enterprise, creating standards such as the “T mount” lenses which persist to the present day. His company was a pioneer in importing high quality photographic gear from Japan and instrumental in changing the reputation of Japan from a purveyor of junk to a top end manufacturer.

Like so many businessmen who succeed to such an extent they redefine the industries in which they participate, Spira was passionate about the endeavour pursued by his customers: in his case photography. As his fortune grew, he began to amass a collection of memorabilia from the early days of photography, and this Spira Collection finally grew to more than 20,000 items, covering the entire history of photography from its precursors to the present day.

This magnificent coffee table book draws upon items from the Spira collection to trace the history of photography from the camera obscura in the 16th century to the dawn of digital photography in the 21st. While the pictures of items from the collection dominate the pages, there is abundant well-researched text sketching the development of photography, including the many blind alleys along the way to a consensus of how images should be made. You can see the fascinating process by which a design, which initially varies all over the map as individual inventors try different approaches, converges upon a standard based on customer consensus and market forces. There is probably a lesson for biological evolution somewhere in this. With inventions which appear, in retrospect, as simple as photography, it's intriguing to wonder how much earlier they might have been discovered: could a Greek artificer have stumbled on the trick and left us, in some undiscovered cache, an image of Pericles making the declamation recorded by Thucydides? Well, probably not—the simplest photographic process, the daguerreotype, requires a plate of copper, silver, and mercury sensitised with iodine. While the metals were all known in antiquity (along with glass production sufficient to make a crude lens or, failing that, a pinhole), elemental iodine was not isolated until 1811, just 28 years before Daguerre applied it to photography. But still, you never know….

This book is out of print, but used copies are generally available for less than the cover price at its publication in 2001.

 Permalink

Okrent, Daniel. Last Call: The Rise and Fall of Prohibition. New York: Scribner, 2010. ISBN 978-0-7432-7702-0.
The ratification of the Eighteenth Amendment to the U.S. Constitution in 1919, prohibiting the “manufacture, sale, or transportation of intoxicating liquors” marked the transition of the U.S. Federal government into a nanny state, which occupied itself with the individual behaviour of its citizens. Now, certainly, attempts to legislate morality and regulate individual behaviour were commonplace in North America long before the United States came into being, but these were enacted at the state, county, or municipality level. When the U.S. Constitution was ratified, it exclusively constrained the actions of government, not of individual citizens, and with the sole exception of the Thirteenth Amendment, which abridged the “freedom” to hold people in slavery and involuntary servitude, this remained the case into the twentieth century. While bans on liquor were adopted in various jurisdictions as early as 1840, it simply never occurred to many champions of prohibition that a nationwide ban, written into the federal constitution, was either appropriate or feasible, especially since taxes on alcoholic beverages accounted for as much as forty percent of federal tax revenue in the years prior to the introduction of the income tax, and imposition of total prohibition would zero out the second largest source of federal income after the tariff.

As the Progressive movement gained power, with its ambitions of continental scale government and imposition of uniform standards by a strong, centralised regime, it found itself allied with an improbable coalition including the Woman's Christian Temperance Union; the Methodist, Baptist and Presbyterian churches; advocates of women's suffrage; the Anti-Saloon League; Henry Ford; and the Ku Klux Klan. Encouraged by the apparent success of “war socialism” during World War I and empowered by enactment of the Income Tax via the Sixteenth Amendment, providing another source of revenue to replace that of excise taxes on liquor, these players were motivated in the latter years of the 1910s to impose their agenda upon the entire country in as permanent a way as possible: by a constitutional amendment. Although the supermajorities required were daunting (two thirds in the House and Senate to submit, three quarters of state legislatures to ratify), if a prohibition amendment could be pushed over the bar (if you'll excuse the term), opponents would face what was considered an insuperable task to reverse it, as it would only take 13 dry states to block repeal.

Further motivating the push not just for a constitutional amendment, but enacting one as soon as possible, were the rapid demographic changes underway in the U.S. Support for prohibition was primarily rural, in southern and central states, Protestant, and Anglo-Saxon. During the 1910s, population was shifting from farms to urban areas, from the midland toward the coasts, and the immigrant population of Germans, Italians, and Irish who were famously fond of drink was burgeoning. This meant that the electoral landscape following reapportionment after the 1920 census would be far less receptive to the foes of Demon Rum.

One must never underestimate the power of an idea whose time has come, regardless of how stupid and counterproductive it might be. And so it came to pass that the Eighteenth Amendment was ratified by the 36th state, Nebraska, on January 16th, 1919, with nationwide Prohibition to come into effect a year hence. From the outset, it was pretty obvious to many astute observers what was about to happen. An Army artillery captain serving in France wrote to his fiancée in Missouri, “It looks to me like the moonshine business is going to be pretty good in the land of the Liberty Loans and Green Trading Stamps, and some of us want to get in on the ground floor. At least we want to get there in time to lay in a supply for future consumption.” Captain Harry S. Truman ended up pursuing a different (and probably less lucrative) career, but was certainly prescient about the growth industry of the coming decade.

From the very start, Prohibition was a theatre of the absurd. Since it was enforced by a federal statute, the Volstead Act, enforcement, especially in states which did not have their own state Prohibition laws, was the responsibility of federal agents within the Treasury Department, whose head, Andrew Mellon, was a staunch opponent of Prohibition. Enforcement was always absurdly underfunded compared to the magnitude of the bootlegging industry and their customers (the word “scofflaw” entered the English language to describe them). Federal Prohibition officer posts paid little, but were nonetheless highly prized patronage jobs, as their holders could often pocket ten times their salary in bribes to look the other way.

Prohibition unleashed the American talent for ingenuity, entrepreneurship, and the do-it-yourself spirit. While it was illegal to manufacture liquor for sale or to sell it, possession and consumption were perfectly legal, and families were allowed to make up to 200 gallons (which should suffice even for the larger, more thirsty households of the epoch) for their own use. This led to a thriving industry in California shipping grapes eastward for householders to mash into “grape juice” for their own use, being careful, of course, not to allow it to ferment or to sell some of their 200 gallon allowance to the neighbours. Later on, the “Vino Sano Grape Brick” was marketed nationally. Containing dried crushed grapes, complete with the natural yeast on the skins, you just added water, waited a while, and hoisted a glass to American innovation. Brewers, not to be outdone, introduced “malt syrup”, which with the addition of yeast and water, turned into beer in the home brewer's basement. Grocers stocked everything the thirsty householder needed to brew up case after case of Old Frothingslosh, and brewers remarked upon how profitable it was to outsource fermentation and bottling to the customers.

For those more talented in manipulating the law than fermenting fluids, there were a number of opportunities as well. Sacramental wine was exempted from Prohibition, and wineries which catered to Catholic and Jewish congregations distributing such wines prospered. Indeed, Prohibition enforcers noted they'd never seen so many rabbis before, including some named Patrick Houlihan and James Maguire. Physicians and dentists were entitled to prescribe liquor for medicinal purposes, and the lucrative fees for writing such prescriptions and for pharmacists to fill them rapidly caused hard liquor to enter the materia medica for numerous maladies, far beyond the traditional prescription as snakebite medicine. While many pre-Prohibition bars re-opened as speakeasies, others prospered by replacing “Bar” with “Drug Store” and filling medicinal whiskey prescriptions for the same clientele.

Apart from these dodges, the vast majority of Americans slaked their thirst with bootleg booze, either domestic (and sometimes lethal), or smuggled from Canada or across the ocean. The obscure island of St. Pierre, a French possession off the coast of Canada, became a prosperous entrepôt for reshipment of Canadian liquor legally exported to “France”, then re-embarked on ships headed for “Rum Row”, just outside the territorial limit of the U.S. East Coast. Rail traffic into Windsor, Ontario, just across the Detroit River from the eponymous city, exploded, as boxcar after boxcar unloaded cases of clinking glass bottles onto boats bound for…well, who knows? Naturally, with billions and billions of dollars of tax-free income to be had, it didn't take long for criminals to stake their claims to it. What was different, and deeply appalling to the moralistic champions of Prohibition, was that a substantial portion of the population who opposed Prohibition did not despise them, but rather respected them as making their “money by supplying a public demand”, in the words of one Alphonse Capone, whose public relations machine kept him in the public eye.

As the absurdity of the almost universal scorn and disobedience of Prohibition grew (at least among the urban chattering classes, which increasingly dominated journalism and politics at the time), opinion turned toward ways to undo its increasingly evident pernicious consequences. Many focussed upon amending the Volstead Act to exempt beer and light wines from the definition of “intoxicating liquors”—this would open a safety valve, and at least allow recovery of the devastated legal winemaking and brewing industries. The difficulty of actually repealing the Eighteenth Amendment deterred many of the most ardent supporters of that goal. As late as September 1930, Senator Morris Sheppard, who drafted the Eighteenth Amendment, said “There is as much chance of repealing the Eighteenth Amendment as there is for a hummingbird to fly to the planet Mars with the Washington Monument tied to its tail.”

But when people have had enough (I mean, of intrusive government, not illicit elixir), it's amazing what they can motivate a hummingbird to do! Less than two years later, the Twenty-first Amendment, repealing Prohibition, was passed by the Congress, and on December 5th, 1933, it was ratified by the 36th state (appropriately, but astonishingly, Utah), thus putting an end to what had not only become generally seen as a farce, but also a direct cause of sanguinary lawlessness and scorn for the rule of law. The cause of repeal was greatly aided not only by the thirst of the populace, but also by the thirst of their government for revenue, which had collapsed due to plunging income tax receipts as the Great Depression deepened, along with falling tariff income as international trade contracted. Reinstating liquor excise taxes and collecting corporate income tax from brewers, winemakers, and distillers could help ameliorate the deficits from New Deal spending programs.

In many ways, the adoption and repeal of Prohibition represented a phase transition in the relationship between the federal government and its citizens. In its adoption, the citizens voted, by the most difficult of constitutional standards, to enable direct enforcement of individual behaviour by the national government, complete with its own police force independent of state and local control. But at least they acknowledged that this breathtaking change could only be accomplished by a direct revision of the fundamental law of the republic, and that reversing it would require the same—a constitutional amendment, duly proposed and ratified. In the years that followed, the federal government used its power to tax (many partisans of Repeal expected the Sixteenth Amendment to also be repealed but, alas, this was not to be) to promote and deter all kinds of behaviour through tax incentives and charges, and before long the federal government was simply enacting legislation which directly criminalised individual behaviour without a moment's thought about its constitutionality, and those who challenged it were soon considered nutcases.

As the United States increasingly comes to resemble a continental scale theatre of the absurd, there may be a lesson to be learnt from the final days of Prohibition. When something is unsustainable, it won't be sustained. It's almost impossible to predict when the breaking point will come—recall the hummingbird with the Washington Monument in tow—but when things snap, it doesn't take long for the unimaginable new to supplant the supposedly secure status quo. Think about this when you contemplate issues such as immigration, the Euro, welfare state spending, bailouts of failed financial institutions and governments, and the multitude of big and little prohibitions and intrusions into personal liberty of the pervasive nanny state—and root for the hummingbird.

In the Kindle edition, all of the photographic illustrations are collected at the very end of the book, after the index—don't overlook them.

 Permalink

Beck, Glenn. The Overton Window. New York: Threshold Editions, 2010. ISBN 978-1-4391-8430-1.
I have no idea who is actually responsible for what in the authorship of this novel. Glenn Beck is listed as the principal author, but the title page says “with contributions from Kevin Balfe, Emily Bestler, and Jack Henderson”. I have cited the book as it appears on the cover and in most mentions of it, as a work by Glenn Beck. Certainly, regardless of who originated, edited, and assembled the words into the present work, it would not have been published nor have instantaneously vaulted to the top of the bestseller lists had it not been associated with the high profile radio and television commentator to whom it is attributed. Heck, he may have written the whole thing himself and generously given credit to his editors and fact checkers—it does, indeed, read like a first attempt by an aspiring thriller author.

It isn't at all bad. Beck (et al., or whatever) tend to be a bit preachy and the first half of the novel goes pretty slow. It's only after you cross the 50 yard line that you discover there's more to the story than you thought, that things and characters are not what they seemed to be, and that the choices facing the protagonist, Noah Gardner, are more complicated than you might have thought.

The novel has been given effusive cover blurbs by masters of the genre Brad Thor and Vince Flynn. Still, I'd expect those page-turner craftsmen to have better modulated the tension in a story than we find here. A perfectly crafted thriller is like a roller coaster, with fear-inducing rises and terrifying plunges, but this is more like a lecture on constitutional government whilst riding on a Disneyland ride where most of the characters are animatronic robots there to illustrate the author's message. The characters just don't feel right. How plausible is it that a life-long advocate of liberty and conspiracy theorist would become bestest buddy with an undercover FBI agent who blackmailed him into co-operating in a sting operation less than 24 hours before? Or that a son who was tortured almost to death at the behest of (and in the presence of) his father could plausibly be accepted as a minion in the father's nefarious undertaking? For the rest, we're going to have to go behind the spoiler curtain.

Spoiler warning: Plot and/or ending details follow.  
In chapter 30, Noah is said to have been kept unconscious for an entire weekend with a “fentanyl patch”. But fentanyl patches are used as an analgesic, not an anæsthetic. Although the drug was once used as a general anæsthetic, it was administered intravenously in this application, not via a transdermal patch.

The nuclear bomb “model” (which turns out to be the real thing) is supposed to have been purloined from a cruise missile which went missing during transport, and is said to weigh “eighty or one hundred pounds”. But the W-80 and W-84 cruise missile warheads weighed 290 and 388 pounds respectively. There is no way the weight of the physics package of these weapons could be reduced to such an extent while remaining functional.

The Mark 8 atomic bomb which comes on the scene in chapter 43 makes no sense at all. Where did it come from? Why was a bomb, of which only 40 were ever produced and removed from service in 1957, carefully maintained in secret and off the books for more than fifty years? And why would the terrorists want two bombs, when the second would simply be vaporised when they set off the first? Perhaps I've missed something, but it's kind of like you're reading a spy thriller and in the middle of a gunfight a unicorn wanders through and everybody stops shooting until it passes, whereupon they continue the battle as if nothing happened.

Spoilers end here.  

Apart from the implausibility of the characters and the technical quibbles, both of which I'm more than willing to excuse in a gripping thriller, the real disappointment here is that the novel ends about two hundred chapters before anything is actually resolved. This is a chronicle of the opening skirmish in a cataclysmic, protracted conflict between partisans of individual liberty and forces seeking to impose global governance by an élite. When you put the book down, you'll have met the players and understand their motives and resources, but it isn't even like the first volume of a trilogy where, regardless of how much remains to happen, there is usually at least the conclusion of a subplot. Now, you're not left with a cliffhanger, but neither is there any form of closure to the story. I suppose one has no option but to wait for the inevitable sequel, but I doubt I'll be reading it.

This is not an awful book; it's enjoyable on its own terms and its citations of real-world events may be enlightening to readers inattentive to the shrinking perimeter of liberty in this increasingly tyrannical world (the afterword provides resources for those inclined to explore further). But despite their praise for it, Vince Flynn and Brad Thor it's not.

 Permalink

Klein, Aaron with Brenda J. Elliott. The Manchurian President. New York: WND Books, 2010. ISBN 978-1-935071-87-7.
The provocative title of this book is a reference to Richard Condon's classic 1959 Cold War thriller, The Manchurian Candidate, in which a Korean War veteran, brainwashed by the Chinese while a prisoner of war in North Korea, returns as a sleeper agent, programmed to perform political assassinations on behalf of his Red controllers. The climax comes as a plot unfolds to elect a presidential candidate who will conduct a “palace coup”, turning the country over to the conspirators. The present book, on the other hand, notwithstanding its title, makes no claim that its subject, Barack Obama, has been brainwashed in any way, nor that there is any kind of covert plot to enact an agenda damaging to the United States, nor is any evidence presented which might support such assertions. Consequently, I believe the title is sensationalistic and in the end counterproductive. But what about the book?

Well, I'd argue that there is no reason to occupy oneself with conspiracy theories or murky evidence of possible radical connections in Obama's past, when you need only read the man's own words in his 1995 autobiography, Dreams from My Father, describing his time at Occidental College:

To avoid being mistaken for a sellout, I chose my friends carefully. The more politically active black students. The foreign students. The Chicanos. The Marxist professors and the structural feminists and punk-rock performance poets. We smoked cigarettes and wore leather jackets. At night, in the dorms, we discussed neocolonialism, Frantz Fanon, Eurocentrism, and patriarchy.

The sentence fragments. Now, certainly, many people have expressed radical thoughts in their college days, but most, writing an autobiography fifteen years later, having graduated from Harvard Law School and practiced law, might be inclined to note that they'd “got better”; to my knowledge, Obama makes no such assertion. Further, describing his first job in the private sector, also in Dreams, he writes:

Eventually, a consulting house to multinational corporations agreed to hire me as a research assistant. Like a spy behind enemy lines, I arrived every day at my mid-Manhattan office and sat at my computer terminal, checking the Reuters machine that blinked bright emerald messages from across the globe.

Now bear in mind that this is Obama on Obama, in a book published the same year he decided to enter Illinois politics, running for a state senate seat. Why would a politician feigning moderation in order to gain power, thence to push a radical agenda, explicitly brag of his radical credentials and background?

Well, he doesn't because he's been an overt hard left radical with a multitude of connections to leftist, socialist, communist, and militant figures all of his life, from the first Sunday school he attended in Hawaii to the circle of advisers he brought into government following his election as president. The evidence of this has been in plain sight ever since Obama came onto the public scene, and he has never made an effort to cover it up or deny it. The only reason it is not widely known is that the legacy media did not choose to pursue it. This book documents Obama's radical leftist history and connections, but it does so in such a clumsy and tedious manner that you may find it difficult to slog through. The hard left in the decades of Obama's rise to prominence is very much like that of the 1930s through 1950s: a multitude of groups with platitudinous names concealing their agenda, staffed by a cast of characters whose names pop up again and again as you tease out the details, and with sources of funding which disappear into a cloud of smoke as you try to pin them down. In fact, the “new new left” (or “contemporary progressive movement”, as they'd doubtless prefer) looks and works almost precisely like what we used to call “communist front organisations” back in the day. The only difference is that they aren't funded by the KGB, don't seek Soviet domination, and don't report to masters in Moscow—at least as far as we know….

Obama's entire career has been embedded in such a tangled web of radical causes, individuals, and groups that following any one of them is like pulling up a weed whose roots extend in all directions, tangling with other weeds, which in turn are connected every which way. What we have is not a list of associations, but rather a network, and a network is a difficult thing to describe in the linear narrative of a book. In the present case, the authors get all tangled up in the mess, and the result is a book which is repetitive, tedious, and on occasions so infuriating that it was mostly a desire not to clean up the mess and pay the repair cost which kept me from hurling it through a window. If they'd mentioned just one more time that Bill Ayers was a former Weatherman terrorist, I think I might have lost that window.

Each chapter starts out with a theme, but as the web of connections spreads, we get into material and individuals covered elsewhere, and there is little discipline in simply cross-referencing them or trusting the reader to recall their earlier mention. And when there are cross-references, they are heavy-handed. For example, at the start of chapter 12, they write: “Two of the architects of that campaign, and veterans of Obama's U.S. senatorial campaign—David Axelrod and Valerie Jarrett—were discussed by the authors in detail in Chapter 10 of this book.” Hello, is there an editor in the house? Who other than “the authors” would have discussed them, and where else than in “this book”? And shouldn't an attentive reader be likely to recall two prominent public figures discussed “in detail” just two chapters before?

The publisher's description promises much, including “Obama's mysterious college years unearthed”, but very little new information is delivered, and most of the book is based on secondary sources, including blog postings the credibility of which the reader is left to judge. Now, I did not find much to quibble about, but neither did I encounter much material I did not already know, and I've not obsessively followed Obama. I suppose that people who exclusively get their information from the legacy media might be shocked by what they read here, but most of it has been widely mentioned since Obama came onto the radar screen in 2007. The enigmatic lacunæ in Obama's paper trail (SAT and LSAT scores, college and law school transcripts, etc.) are mentioned here, but remain mysterious.

If you're interested in this topic, I'd recommend giving this book a miss and instead starting with the Barack Obama page on David Horowitz's Discover the Networks site, following the links outward from there. Horowitz literally knows the radical left from inside and out: the son of two members of the Communist Party of the United States, he was a founder of the New Left and editor of Ramparts magazine. Later, repelled by the murderous thuggery of the Black Panthers, he began to re-think his convictions and has since become a vocal opponent of the Left. His book, Radical Son (March 2007), is an excellent introduction to the Old and New Left, and provides insight into the structure and operation of the leftists behind and within the Obama administration.

 Permalink

Gingrich, Newt with Joe DeSantis et al. To Save America. Washington: Regnery Publishing, 2010. ISBN 978-1-59698-596-4.
In the epilogue of Glenn Beck's The Overton Window (June 2010), he introduces the concept of a “topical storm”, defined as “a state in which so many conflicting thoughts are doing battle in your brain that you lose your ability to discern and act on any of them.” He goes on to observe that:

This state was regularly induced by PR experts to cloud and control issues in the public discourse, to keep thinking people depressed and apathetic on election days, and to discourage those who might be tempted to actually take a stand on a complex issue.

It is easy to imagine responsible citizens in the United States, faced with a topical storm of radical leftist “transformation” unleashed by the Obama administration and its Congressional minions, combined with a deep recession, high unemployment, impending financial collapse, and empowered adversaries around the world, falling into a lethargic state where each day's dismaying news simply deepens the depression and sense of powerlessness and hopelessness. Whether deliberately intended or not, this is precisely what the statists want, and it leads to a citizenry reduced to a despairing passivity as the chains of dependency are fastened about them.

This book is a superb antidote for those in topical depression, and provides common-sense and straightforward policy recommendations which can gain the support of the majorities needed to put them into place. Gingrich begins by surveying the present dire situation in the U.S. and what is at stake in the elections of 2010 and 2012, which he deems the most consequential elections in living memory. Unless stopped by voters at these opportunities, what he describes as a “secular-socialist machine” will be able to put policies in place which will restructure society in such a way as to create a dependent class of voters who will reliably return their statist masters to power for the foreseeable future, or at least until the entire enterprise collapses (which may be sooner, rather than later, but should not be wished for by champions of individual liberty as it will entail human suffering comparable to a military conquest and may result in replacement of soft tyranny by that of the jackbooted variety).

After describing the hole the U.S. have dug themselves into, the balance of the book contains prescriptions for getting out. The situation is sufficiently far gone, it is argued, that reforming the present corrupt bureaucratic system will not suffice—a regime pernicious in its very essence cannot be fixed by changes around the margin. What is needed, then, is not reform but replacement: repealing or sunsetting the bad policies of the present and replacing them with ones which make sense. In certain domains, this may require steps which seem breathtaking to present day sensibilities, but when something reaches its breaking point, drastic things will happen, for better or for worse. For example, what to do about activist left-wing Federal judges with lifetime tenure, who negate the people's will expressed through their elected legislators and executive branch? Abolish their courts! Hey, it worked for Thomas Jefferson, why not now?

Newt Gingrich seeks a “radical transformation” of U.S. society no less than does Barack Obama. His prescriptions, however, unlike his objectives, are mostly relatively subtle changes on the margin which will shift incentives in such a way that the ultimate goal will become inevitable in the fullness of time. One of the key formative events in Gingrich's life was the fall of the French Fourth Republic in 1958, which he experienced first hand while his career military stepfather was stationed in France. This both acquainted him with the possibility of unanticipated discontinuous change when the unsustainable can no longer be sustained, and the risk of a society with a long tradition of republican government and recent experience with fascist tyranny welcoming with popular acclaim what amounted to a military dictator as an alternative to chaos. Far better to reset the dials so that the society will start heading in the right direction, even if it takes a generation or two to set things aright (after all, depending on how you count, it's taken between three and five generations to dig the present hole) than to roll the dice and hope for the best after the inevitable (should present policies continue) collapse. That, after all, didn't work out too well for Russia, Germany, and China in the last century.

I have cited the authors in the manner above because a number of the chapters on specific policy areas are co-authored with specialists in those topics from Gingrich's own American Solutions and other organisations.

 Permalink

July 2010

Lewis, Michael. The Big Short. New York: W. W. Norton, 2010. ISBN 978-0-393-07223-5.
After concluding his brief career on Wall Street in the 1980s, the author wrote Liar's Poker, a memoir of a period of financial euphoria and insanity which he assumed would come crashing down shortly after his timely escape. Who could have imagined that the game would keep on going for two decades more, in the process raising the stakes from mere billions to trillions of dollars, extending its tendrils into financial institutions around the globe, and fuelling real estate and consumption bubbles in which individuals were motivated to lie to obtain money they couldn't pay back to lenders who were defrauded as to the risk they were taking?

Most descriptions of the financial crisis which erupted in 2007 and continues to play out at this writing gloss over the details, referring to “arcanely complex transactions that nobody could understand” or some such. But, in the hands of a master explainer like the author, what happened isn't at all difficult to comprehend. Irresponsible lenders (in some cases motivated by government policy) made mortgage loans to individuals which they could not afford, with an initial “teaser” rate of interest. The only way the borrower could avoid default when the interest rate “reset” to market rates was to refinance the property, paying off the original loan. But since housing prices were rising rapidly, and everybody knew that real estate prices never fall, by that time the house would have appreciated in value, giving the “homeowner” equity in the house which would justify a higher grade mortgage the borrower could afford to pay. Naturally, this flood of money into the housing market accelerated the bubble in housing prices, and encouraged lenders to create ever more innovative loans in the interest of “affordable housing for all”, including interest-only loans, those with variable payments where the borrower could actually increase the principal amount by underpaying, no-money-down loans, and “liar loans” which simply accepted the borrower's claims of income and net worth without verification.

But what financial institution would be crazy enough to undertake the risk of carrying these junk loans on its books? Well, that's where the genius of Wall Street comes in. The originators of these loans, immediately after collecting the loan fee, bundled them up into “mortgage-backed securities” and sold them to other investors. The idea was that by aggregating a large number of loans into a pool, the risk of default, estimated from historical rates of foreclosure, would be spread just as insurance spreads the risk of fire and other damages. Further, the mortgage-backed securities were divided into “tranches”: slices which bore the risk of default in serial order. If you assumed, say, a 5% rate of default on the loans making up the security, the top-level tranche would have little or no risk of default, and the rating agencies concurred, giving it the same AAA rating as U.S. Treasury Bonds. Buyers of the lower-rated tranches, all the way down to the lowest investment grade of BBB, were compensated for the risk they were assuming by higher interest rates on the bonds. In a typical deal, if 15% of the mortgages defaulted, the BBB tranche would be completely wiped out.
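The tranche mechanics described above amount to a simple loss “waterfall”, easy to sketch in a few lines of Python. The three-slice structure below is hypothetical and purely illustrative (the thicknesses are chosen so that, as in the example in the text, 15% pool-wide defaults exactly wipe out the BBB slice); real deals had many more tranches and far messier rules:

```python
def tranche_losses(pool_loss_rate, tranches):
    """Allocate a pool-wide loss rate to tranches, riskiest first.

    `tranches` is a list of (name, thickness) pairs ordered from the
    first-loss slice to the most senior; thicknesses are fractions of
    the pool and should sum to 1.0.  Returns the fraction of each
    tranche that is wiped out.
    """
    remaining = pool_loss_rate
    losses = {}
    for name, thickness in tranches:
        hit = min(remaining, thickness)   # this slice absorbs losses up to its thickness
        losses[name] = hit / thickness
        remaining -= hit                  # anything left flows up to the next slice
    return losses

# Illustrative structure: BBB absorbs the first 15% of pool losses,
# a mezzanine A slice the next 15%, AAA the senior 70%.
structure = [("BBB", 0.15), ("A", 0.15), ("AAA", 0.70)]

print(tranche_losses(0.05, structure))  # 5% defaults: BBB loses a third, seniors untouched
print(tranche_losses(0.15, structure))  # 15% defaults: BBB completely wiped out
```

Under these illustrative numbers the AAA slice looks riskless only so long as pool-wide defaults stay below 30%; when whole vintages of similar loans default together, the waterfall offers the senior holders far less protection than the historical default statistics suggested.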

Now, you may ask, who would be crazy enough to buy the BBB bottom-tier tranches? This indeed posed a problem to Wall Street bond salesmen (who are universally regarded as the sharpest-toothed sharks in the tank). So, they had the back-office “quants” invent a new kind of financial derivative, the “collateralised debt obligation” (CDO), which bundled up a whole bunch of these BBB tranche bonds into a pool, divided it into tranches, et voilà, the rating agencies would rate the lowest risk tranches of the pool of junk as triple A. How to get rid of the riskiest tranches of the CDO? Lather; rinse; repeat.

Investors worried about the risk of default in these securities could insure against them by purchasing a “credit default swap”, which is simply an insurance contract which pays off if the bond it insures is not repaid in full at maturity. Insurance giant AIG sold tens of billions of these swaps, with premiums ranging from a fraction of a percent on the AAA tranches to on the order of two percent on BBB tranches. As long as the bonds did not default, these premiums were a pure revenue stream for AIG, which went right to the bottom line.
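From the buyer's side, the economics of such a swap are a small running premium against a large contingent payoff. A minimal sketch, assuming annual premiums, no discounting, and full payment at default (the dollar figures are illustrative, not from the book):

```python
def cds_pnl(notional, annual_premium_rate, years_paid, recovery_rate, defaulted):
    """Buyer's net profit or loss on a simplified credit default swap.

    The buyer pays `annual_premium_rate` on `notional` each year; if the
    insured bond defaults, the seller pays notional * (1 - recovery_rate).
    Ignores discounting, accrued premiums, and counterparty risk.
    """
    premiums_paid = notional * annual_premium_rate * years_paid
    payoff = notional * (1.0 - recovery_rate) if defaulted else 0.0
    return payoff - premiums_paid

# Insuring $100M of a BBB tranche at 2%/year for three years:
print(cds_pnl(100e6, 0.02, 3, 0.0, True))    # default, zero recovery: roughly +$94M
print(cds_pnl(100e6, 0.02, 3, 0.0, False))   # no default: premiums lost, roughly -$6M
```

The asymmetry is the whole trade: a couple of percent a year of certain cost against a near-total payoff if the tranche fails, which is why the investors profiled in the book could afford to be early and wait.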

As long as the housing bubble continued to inflate, this created an unlimited supply of AAA rated securities, rated as essentially without risk (historical rates of default on AAA bonds are about one in 100,000), ginned up on Wall Street from the flakiest and shakiest of mortgages. Naturally, this caused a huge flow of funds into the housing market, which kept the bubble expanding ever faster.

Until it popped.

Testifying before a hearing by the U.S. House of Representatives on October 22nd, 2008, Deven Sharma, president of Standard & Poor's, said, “Virtually no one—be they homeowners, financial institutions, rating agencies, regulators, or investors—anticipated what is occurring.” Notwithstanding the claim of culpable clueless clown Sharma, there was a small cadre of insightful investors who saw it all coming, had the audacity to take a position against the consensus of the entire financial establishment—in truth a bet against the Western world's financial system—and the courage to hang in there, against gnawing self-doubt (“Can I really be right and everybody else wrong?”) and skittish investors, to finally cash out on the trade of the century. This book is their story. Now, lots of people knew well in advance that the derivatives-fuelled housing bubble was not going to end well: I have been making jokes about “highly-leveraged financial derivatives” since at least 1996. But it's one thing to see an inevitable train wreck coming and entirely another to figure out approximately when it's going to happen, discover (or invent) the financial instruments with which to speculate upon it, put your own capital and reputation on the line making the bet, persist in the face of an overwhelming consensus that you're not only wrong but crazy, and finally cash out in a chaotic environment where there's a risk your bets won't be paid off due to bankruptcy on the other side (counterparty risk) or government intervention.

As the insightful investors profiled here dug into the details of the fairy castle of mortgage-backed securities, they discovered that it wouldn't even take a decline in housing prices to cause defaults sufficient to wipe out the AAA rated derivatives: a mere stagnation in real estate prices would suffice to render them worthless. And yet even after prices in the markets most affected by the bubble had already levelled off, the rating agencies continued to deem the securities based on their mortgages riskless, and insurance against their default could be bought at nominal cost. And those who bought it made vast fortunes as every other market around the world plummeted.

People who make bets like that tend to be way out on the tail of the human bell curve, and their stories, recounted here, are correspondingly fascinating. This book reads like one of Paul Erdman's financial thrillers, with the difference that the events described are simultaneously much less probable and absolutely factual. If this were a novel and not reportage, I doubt many readers would find the characters plausible.

There are many lessons to be learnt here. The first is that human beings, and therefore the financial markets in which they interact, frequently mis-estimate and incorrectly price the risk of outcomes with low probability: Black Swan (January 2009) events, and that investors who foresee them and can structure highly leveraged, long-term bets on them can do very well indeed. Second, Wall Street is just as predatory and ruthless as you've heard it to be: Goldman Sachs was simultaneously peddling mortgage-backed securities to its customers while its own proprietary traders were betting on them becoming worthless, and this is just one of a multitude of examples. Third, never assume that “experts”, however intelligent, highly credentialed, or richly compensated, actually have any idea what they're doing: the rating agencies grading these swampgas securities AAA had never even looked at the bonds from which they were composed, much less estimated the probability that an entire collection of mortgages made at the same time, to borrowers in similar circumstances, in the same bubble markets might all default at the same time.

We're still in the early phases of the Great Deleveraging, in which towers of debt which cannot possibly be repaid are liquidated through default, restructuring, and/or inflation of the currencies in which they are denominated. This book is a masterful and exquisitely entertaining exposition of the first chapter of this drama, and reading it is an excellent preparation for those wishing to ride out, and perhaps even profit from, the ongoing tragedy. I have just two words to say to you: sovereign debt.

 Permalink

Flynn, Vince. Extreme Measures. New York: Pocket Books, 2008. ISBN 978-1-4165-0504-4.
This is the ninth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series and is perhaps the most politically charged of the saga so far. When a high-ranking Taliban commander and liaison to al-Qaeda is captured in Afghanistan, CIA agent Mike Nash begins an interrogation with the aim of uncovering a sleeper cell planning terrorist attacks in the United States, but is constrained in his methods by a grandstanding senator who insists that the protections of the Geneva Convention be applied to this non-state murderer. Frustrated, Nash calls in Mitch Rapp for a covert and intense debrief of the prisoner, but things go horribly wrong and Rapp ends up in the lock-up of Bagram Air Base charged with violence not only against the prisoner but also a U.S. Air Force colonel (who is one of the great twits of all time—one wonders even with a service academy ring how such a jackass could attain that rank).

Rapp finds himself summoned before the Senate Judiciary Committee to answer the charges and endure the venting of pompous gasbags which constitutes the bulk of such proceedings. This time, however, Rapp isn't having any. He challenges the senators directly, starkly forcing them to choose between legalistic niceties and defeating rogue killers who do not play by the rules. Meanwhile, the sleeper cell is activated and puts into motion its plot to wreak terror on the political class in Washington. With the authorities deprived of information from the Taliban captive, the attack takes place, forcing politicians to realise that verbal virtuosity and grandstanding in front of cameras is no way to fight a war. Or, at least, for a moment until they forget once again, and as long as it is they who are personally threatened, not their constituents.

As Mitch Rapp becomes a senior figure and something of a Washington celebrity, Mike Nash is emerging as the conflicted CIA cowboy that Rapp was in the early books of the series. I suspect we'll see more and more of Nash in the future as Rapp recedes into the background.

 Permalink

Sowell, Thomas. Intellectuals and Society. New York: Basic Books, 2009. ISBN 978-0-465-01948-9.
What does it mean to be an intellectual in today's society? Well, certainly one expects intellectuals to engage in work which is mentally demanding, which many do, particularly within their own narrow specialities. But many other people perform work which is just as cognitively demanding: chess grandmasters, musical prodigies, physicists, engineers, and entrepreneurs, yet we rarely consider them “intellectuals” (unless they become “public intellectuals”, discussed below), and indeed “real” intellectuals often disdain their concern with the grubby details of reality.

In this book, the author identifies intellectuals as the class of people whose output consists exclusively of ideas, and whose work is evaluated solely upon the esteem in which it is held by other intellectuals. A chess player who loses consistently, a composer whose works summon vegetables from the audience, an engineer whose aircraft designs fall out of the sky are distinguished from intellectuals in that they produce objective results which succeed or fail on their own merits, and it is this reality check which determines the reputation of their creators.

Intellectuals, on the other hand, are evaluated and, in many cases, hired, funded, and promoted solely upon the basis of peer review, whether formal as in selection for publication, grant applications, or awarding of tenure, or informal: the estimation of colleagues and their citing of an individual's work. To anybody with the slightest sense of incentives, this seems a prescription for groupthink, and it is no surprise that the results confirm that supposition. If intellectuals were simply high-performance independent thinkers, you'd expect their opinions to vary all over the landscape (as is often the case among members of other mentally demanding professions). But in the case of intellectuals, as defined here, there is an overwhelming acceptance of the nostrums of the political left which appears to be unshakable regardless of how many times and how definitively they have been falsified and discredited by real world experience. But why should it be otherwise? Intellectuals themselves are not evaluated by the real world outcomes of their ideas, so it's only natural they're inclined to ignore the demonstrated pernicious consequences of the policies they advocate and bask instead in the admiration of their like-thinking peers. You don't find chemists still working with the phlogiston theory or astronomers fine-tuning geocentric models of the solar system, yet intellectuals elaborating Marxist theories are everywhere in the humanities and social sciences.

With the emergence of mass media in the 20th century, the “public intellectual” came into increasing prominence. These are people with distinguished credentials in a specialised field who proceed to pronounce upon a broad variety of topics in which their professional expertise provides them no competence or authority whatsoever. The accomplishments of Bertrand Russell in mathematics and philosophy, of Noam Chomsky in linguistics, or of Paul Ehrlich in entomology are beyond dispute. But when they walk onto the public stage and begin to expound upon disarmament, colonialism, and human population and resources, almost nobody in the media or political communities stops to ask just why their opinion should be weighed more highly than that of anybody else without specific expertise in the topic under discussion. And further, few go back and verify their past predictions against what actually happened. As long as the message is congenial to the audience, public intellectuals seem to get a career-long pass on checking their predictions against outcomes, even when the discrepancies are so great they would have caused a physical scientist to be laughed out of the field or an investor to go bankrupt. As biographer Roy Harrod wrote of eminent economist and public intellectual John Maynard Keynes:

He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.
As was, of course, the attention paid by his audience.

Intellectuals, even when pronouncing within their area of specialisation, encounter the same “knowledge problem” Hayek identified in conjunction with central planning of economies. While the expert, or the central planning bureau, may know more about the problem domain than 99% of individual participants in the area, in many cases that expertise constitutes less than 1% of the total information distributed among all participants and expressed in their individual preferences and choices. A free market economy can be thought of as a massively parallel cloud computer for setting prices and allocating scarce resources. Its information is in the totality of the system, not in any particular place or transaction, and any attempt to extract that information by aggregating data and working on bulk measurements is doomed to failure both because of the inherent loss of information in making the aggregations and also because any such measure will be out of date long before it is computed and delivered to the would-be planner. Intellectuals have the same conceit: because they believe they know far more about a topic than the average person involved with it (and in this they may be right), they conclude that they know much more about the topic than everybody put together, and that if people would only heed their sage counsel much better policies would be put in place. In this, as with central planning, they are almost always wrong, and the sorry history of expert-guided policy should be adequate testament to its folly.
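
The cloud-computer analogy can be made concrete with a toy simulation (my sketch, not from the book): thousands of traders each hold a private valuation nobody else ever sees, yet a simple decentralised price-adjustment rule homes in on the market-clearing price without any central aggregation of that information. All names and parameters here are invented for illustration.

```python
# Toy illustration of decentralised price discovery (Walrasian tatonnement).
# No participant, and no planner, ever sees the full list of valuations;
# the price responds only to aggregate excess demand at each step.
import random

random.seed(42)

N = 10_000
# Private information: each buyer's reservation price, each seller's cost.
buyer_values = [random.uniform(0, 100) for _ in range(N)]
seller_costs = [random.uniform(0, 100) for _ in range(N)]

def excess_demand(price):
    buyers = sum(1 for v in buyer_values if v >= price)   # willing to buy
    sellers = sum(1 for c in seller_costs if c <= price)  # willing to sell
    return buyers - sellers

# Nudge the price in the direction of excess demand until it settles.
price = 10.0
for _ in range(200):
    price += 0.001 * excess_demand(price)

# With symmetric uniform valuations the clearing price lands near 50,
# found without anyone ever collecting the individual valuations centrally.
print(f"clearing price ~ {price:.1f}")
```

The point of the sketch is the one Sowell (via Hayek) makes: the answer emerges from the totality of the system, and any planner who tried to compute it from aggregated reports would be working with stale, lossy data.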

But it never is, of course. The modern administrative state and the intelligentsia are joined at the hip. Both seek to concentrate power, sucking it out from individuals acting at their own discretion in their own perceived interest, and centralising it in order to implement the enlightened policies of the “experts”. That this always ends badly doesn't deter them, because it's power they're ultimately interested in, not good outcomes. In a section titled “The Propagation of the Vision”, Sowell presents a bill of particulars as damning as that against King George III in the Declaration of Independence, and argues that modern-day intellectuals, burrowed within the institutions of academia, government, and media, are a corrosive force etching away the underpinnings of a free society. He concludes:

Just as a physical body can continue to live, despite containing a certain amount of microorganisms whose prevalence would destroy it, so a society can survive a certain amount of forces of disintegration within it. But that is very different from saying that there is no limit to the amount, audacity and ferocity of those disintegrative forces which a society can survive, without at least the will to resist.
In the past century, it has mostly been authoritarian tyrannies which have “cleaned out the universities” and sent their effete intellectual classes off to seek gainful employment in the productive sector, for example doing some of those “jobs Americans won't do”. Will free societies, whose citizens fund the intellectual class through their taxes, muster the backbone to do the same before intellectuals deliver them to poverty and tyranny? Until that day, you might want to install my “Monkeying with the Mainstream Media”, whose Red Meat edition translates “expert” to “idiot”, “analyst” to “moron”, and “specialist” to “nitwit” in Web pages you read.

An extended video interview with the author about the issues discussed in this book is available, along with a complete transcript.

 Permalink

Thor, Brad. Foreign Influence. New York: Atria Books, 2010. ISBN 978-1-4165-8659-3.
Thanks to the inexorable working of Jerry Pournelle's Iron Law of Bureaucracy, government agencies, even those most central to the legitimate functions of government and essential to its survival and the safety of the citizenry, will inevitably become sclerotic and ineffective, serving their employees at the expense of the taxpayers. The only way to get things done is for government to outsource traditionally governmental functions to private sector contractors, and recent years have seen even military operations farmed out to private security companies.

With the intelligence community having become so dysfunctional and hamstrung by feel-good constraints upon their actions and fear of political retribution against operatives, it is only natural that intelligence work—both collection and covert operations—will move to the private sector, and in this novel, Scot Harvath has left government service to join the shadowy Carlton Group, providing innovative services to the Department of Defense. Freed of bureaucratic constraints, Harvath's inner klootzak (read the book) is fully unleashed. Less than halfway into the novel, here's Harvath reporting to his boss, Reed Carlton:

“So let me get this straight,” said the Old Man. “You trunked two Basque separatists, Tasered a madam and a bodyguard—after she kicked your tail—then bagged and dragged her to some French farmhouse where you threatened to disfigure her, then iceboarded a concierge, shot three hotel security guards, kidnapped the wife of one of Russia's wealthiest mobsters, are now sitting in a hotel in Marseille waiting for a callback from the man I sent you over there to apprehend. Is that about right?”
Never a dull moment with the Carlton Group on the job!

Aggressive action is called for, because Harvath finds himself on the trail of a time-sensitive plot to unleash terror attacks in Europe and the U.S., launched by an opaque conspiracy where nothing is as it appears to be. Is this a jihadist plot, or the first volley in an asymmetric warfare conflict launched by an adversary, or a terror network hijacked by another mysterious non-state actor with its own obscure agenda? As Harvath follows the threads, two wisecracking Chicago cops moonlighting to investigate a hit-and-run accident stumble upon a domestic sleeper cell about to be activated by the terror network. And as the action becomes intense, we make the acquaintance of an Athena Team, an all-babe special forces outfit which is expected to figure prominently in the next novel in the saga and will doubtless improve the prospects of these books being picked up by Hollywood. With the clock ticking, these diverse forces (and at least one you'll never see coming) unite to avert a disastrous attack on American soil. The story is nicely wrapped up at the end, but the larger mystery remains to be pursued in subsequent books.

I find Brad Thor's novels substantially more “edgy” than those of Vince Flynn or Tom Clancy—like Ian Fleming, he's willing to entertain the reader with eccentric characters and situations even if they strain the sense of authenticity. If you enjoy this kind of thing—and I do, very much—you'll find this an entertaining thriller, perfect “airplane book”, and look forward to the next in the series. A podcast interview with the author is available.

 Permalink

August 2010

Lansing, Alfred. Endurance. New York: Carroll & Graf [1959, 1986] 1999. ISBN 978-0-7867-0621-1.
Novels and dramatisations of interplanetary missions, whether (reasonably) scrupulously realistic, highly speculative, or utterly absurd, often focus on the privation of their hardy crews and the psychological and interpersonal stresses they must endure when venturing so distant from the embrace of the planetary nanny state.

Balderdash! Unless a century of socialism succeeds in infantilising its subjects into pathetic, dependent, perpetual adolescents (see the last item cited above as an example), such voyages of discovery will be crewed by explorers, those pinnacles of the human species who volunteer to pay any price, bear any burden, and accept any risk to be among the first to see what's over the horizon.

This chronicle of Ernest Shackleton's Imperial Trans-Antarctic Expedition will acquaint you with real explorers, and leave you in awe of what those outliers on the bell curve of our species can and will endure in circumstances which almost defy description on the printed page.

At the very outbreak of World War I, Shackleton's ship, the Endurance, named after his family motto, Fortitudine vincimus: “By endurance we conquer”, sailed for Antarctica. The mission was breathtaking in its ambition: to land a party in the Vahsel Bay area of the Weddell Sea which would cross the entire continent of Antarctica, proceeding to the South Pole with the resources landed from their ship, and then continuing to the Ross Sea with the aid of caches of supplies emplaced by a second party landing at McMurdo Sound. So difficult was this goal that it was not achieved until 1957–1958, when the Commonwealth Trans-Antarctic Expedition made the crossing with the aid of motorised vehicles and aerial reconnaissance.

Shackleton's expedition didn't even manage to land on the Antarctic shore; the Endurance was trapped in the pack ice of the Weddell Sea in January 1915, and the crew were forced to endure the Antarctic winter on the ship, frozen in place. Throughout the long polar night, conditions were tolerable and morale was high, but much worse was to come. As the southern summer approached, the pack ice began to melt, break up, and grind floe against floe, and on 27th October 1915 the pressure of the ice against the ship became unsustainable and Shackleton gave the order to abandon ship and establish a camp on an ice floe floating on the Weddell Sea. The original plan was to use the sled dogs and the men to drag supplies and the ship's three lifeboats across the ice toward a cache of supplies known to have been left at Paulet Island by an earlier expedition, but pressure ridges in the sea ice soon made it evident that such an ambitious traverse would be impossible, and the crew resigned themselves to camping on the ice pack, whose drift was taking them north, until its breakup would allow them to use the boats to make for the nearest land. And so they waited, until 8th April 1916, when the floe on which they were camped began to break up and they were forced into the three lifeboats to head for Elephant Island, a forbidding and uninhabited speck of land in the Southern Ocean. After a harrowing six-day voyage, the three lifeboats arrived at the island, and for the first time in 497 days the crew of the Endurance were able to sleep on terra firma.

Nobody, not even the sealers and whalers operating off Antarctica, had ever visited Elephant Island: Shackleton's crew were the first to land there. So the only hope of rescue was for a party to set out from there for the nearest reachable inhabited location, South Georgia Island, 1,300 kilometres across the Drake Passage, the stormiest and most treacherous sea on Earth. (There were closer destinations, but due to the winds and currents of the Southern Ocean, none of them were achievable in a vessel with the limited capabilities of their lifeboat.) Well, it had to be done, and so they did it. In one of the most remarkable achievements of seamanship of all time, Frank Worsley sailed his small open boat through these forbidding seas, surviving hurricane-force winds, rogue waves, and unimaginable conditions at the helm, making an almost pinpoint landfall on a tiny island in a vast sea with only his sextant and a pocket chronometer, the last remaining of the 24 the Endurance carried when it sailed from the Thames, worn around his neck to keep it from freezing.
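
The principle behind Worsley's chronometer navigation is simple arithmetic, even if executing it in a pitching open boat was anything but: the Earth turns 360° in 24 hours, so each hour by which local apparent noon (observed with the sextant) differs from Greenwich time (kept by the chronometer) corresponds to 15° of longitude. A toy sketch of the calculation (mine, not from the book; the example time is invented):

```python
# Longitude from a noon sight: the gap between local apparent noon and
# Greenwich time gives east-west position at 15 degrees per hour
# (equivalently, 1 degree per 4 minutes of time).

def longitude_from_noon(gmt_at_local_noon_hours):
    """Degrees of longitude; negative means west of Greenwich."""
    return (12.0 - gmt_at_local_noon_hours) * 15.0

# Hypothetical example: the sun crosses the local meridian when the
# chronometer reads 14:27 GMT. Local noon lags Greenwich by 2h27m,
# so the observer is west of Greenwich:
gmt = 14 + 27 / 60
lon = longitude_from_noon(gmt)
print(f"longitude = {lon:.2f} degrees")  # -36.75, i.e. 36.75 W
```

An error of even a minute in the chronometer translates to a quarter of a degree of longitude, which is why keeping that last surviving timepiece warm and running mattered so much.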

But even then it wasn't over. Shackleton's small party had landed on the side of South Georgia Island opposite the whaling station, and the state of their boat and the prevailing currents and winds made it impossible to sail around the coast to it. So there was no alternative but to go cross-country, across terrain completely uncharted (all maps showed only the coast, as nobody had ventured inland). And, with no other option, they did it. Since Shackleton's party, there has been only one crossing of South Georgia Island, done in 1955 by a party of expert climbers with modern equipment and a complete aerial survey of their route. They found it difficult to imagine how Shackleton's party, in their condition and with their resources, managed to make the crossing, but of course it was because they had to.

Then it was a matter of rescuing the party left at the original landing site on South Georgia, and then mounting an expedition to relieve those waiting at Elephant Island. The latter was difficult and frustrating—it was not until 30th August 1916 that Shackleton was able to take those he left on Elephant Island back to civilisation. And every single person who departed from South Georgia on the Endurance survived the expedition and returned to civilisation. All suffered from the voyage, but only stowaway Perce Blackborow lost a foot to frostbite; all the rest returned without lasting consequences from their ordeal.

Bottom line—there were men on this expedition, and if similarly demanding expeditions in the future are crewed by men and women equal to their mettle, they will come through just fine without any of the problems the touchy-feely inkblot drones worry about. People with the “born as victim” self-image instilled by the nanny state are unlikely to qualify for such a mission, and should the all-smothering state manage to reduce its subjects to such larvæ, it is unlikely in the extreme that it would mount such a mission, choosing instead to huddle in its green enclaves powered by sewage and the unpredictable winds until the giant rock from the sky calls down the curtain on their fruitless existence.

I read the Kindle edition; unless you're concerned with mass and volume taking this book on a long trip (for which it couldn't be more appropriate!), I'd recommend the print edition, which is not only less expensive (neglecting shipping charges), but also reproduces with much higher quality the many photographs taken by expedition photographer Frank Hurley and preserved through the entire ordeal.

 Permalink

Suarez, Daniel. Daemon. New York: Signet, 2009. ISBN 978-0-451-22873-4.
Ever since “giant electronic brains” came into the public consciousness in the 1940s and '50s, “the computers taking over” has been a staple of science fiction, thrillers, and dystopian novels. To anybody who knows anything about computers, most of these have fallen in the spectrum from implausible to laughably bad, primarily because their authors didn't understand computers, and attributed to them anthropomorphic powers they don't possess, or assumed they had ways to influence events in the real world which they don't.

Here we have a novel that gets it right, that is not just a thoughtful exploration of the interaction of computers, networks, and society, but a rip-roaring thriller as well, and that, remarkably, is a first novel. In it, Matthew Sobol, a computer game designer who parlayed his genius for crafting virtual worlds in which large numbers of individuals and computer-generated characters interact (massively multiplayer online role-playing games) into a global enterprise, CyberStorm Entertainment, and a personal fortune in the hundreds of millions of dollars, tragically dies of brain cancer at the age of 34.

Shortly after Sobol's death, two CyberStorm employees die in bizarre circumstances which, when police detective Pete Sebeck begins to investigate them with the aid of itinerant computer consultant and dedicated gamer Jon Ross, lead them to suspect that they are murders orchestrated, for no immediately apparent motive, from beyond the grave by Sobol, and carried out by processes, daemons, running on Internet-connected computers without the knowledge of the systems' owners. When the FBI, called in due to their computer forensics resources, attempts to raid Sobol's mansion, things go beyond catastrophically wrong, and it appears they're up against an adversary which has resources and capabilities which are difficult to even quantify and potential consequences for society which cannot be bounded.

Spoiler warning: Plot and/or ending details follow.  
Or maybe not. Before long, evidence emerges that Sobol was the victim of a scam orchestrated by Sebeck and his mistress, who conned Sobol, whose cognitive faculties were failing as his disease progressed, and set up the Daemon as a hoax to make a fortune in the stock market as CyberStorm's stock collapsed. This neatly wraps up the narrative, which is just what the police, FBI, and NSA want, and Sebeck is quickly convicted and finds himself on death row for the murders he was accused of having orchestrated. Some involved in the investigation doubt that this ties up all the loose ends, but their superiors put the kibosh on going public with their fears for the time-tested reason of “avoiding public panic”.

Meanwhile, curious things are happening in the worlds of online gaming, offshore Internet gambling and pornography businesses, pillars of the finance sector, media outlets, prisons, and online contract manufacturing. The plague of spam comes to an end in a cataclysmic event which many people on the receiving end may find entirely justified. As analysts at NSA and elsewhere put the pieces together, they begin to comprehend what they're up against and put together an above top secret task force to infiltrate and subvert the Daemon's activities. But in this wired world, it is difficult to keep anything off the record, especially when confronted by an adversary which, distributed on computers around the world, reading all Web sites and RSS feeds, and with its own stream of revenue and human agents which it rewards handsomely, is able to exert its power anywhere. It's a bit like God, when you think about it, or maybe what Google would like to become.

What makes the Daemon, and this book, so devilishly clever is that, in the words of the NSA analyst on its trail, “The Daemon is not an Internet worm or a network exploit. It doesn't hack systems. It hacks society.” Indeed, the Daemon is essentially a role playing game engine connected to the real world, with the ability to reward those humans who do its bidding with real world money, power, and prestige, not virtual credits in a game. Consider how much time and money highly intelligent people with limited social skills currently spend on online multiplayer games. Now imagine if the very best of them were recruited to deploy their talents in the world outside their parents' basements, and be compensated with wealth, independence, and power over others. Do you think there would be a shortage of people to do the Daemon's bidding, even without the many forms of coercion it could bring to bear on those who were unwilling?

Ultimately this book is about a phase change in the structure of human society brought about by the emergence of universal high bandwidth connectivity and distributed autonomous agents interacting with humans on an individual basis. From a pure Darwinian standpoint, might such a system be able to act, react, and mobilise resources so quickly and efficiently that it would run rings around the strongly hierarchical, coercive, and low bandwidth forms of organisation which have characterised human society for thousands of years? And if so, what could the legacy society do to stop it, particularly once it has become completely dependent upon the technologies which now are subverting and supplanting it?

Spoilers end here.  
When I say the author gets it right, I'm not claiming the plot is actually plausible or that something like this could happen in the present or near future—there are numerous circumstances where a reader with business or engineering experience will be extremely sceptical that so many intricate things which have never before been tested on a full scale (or at all) could be expected to work the first time. After all, multi-player online games are not opened to the public before extensive play testing and revision based upon the results. But lighten up: this is a thriller, not a technological forecast, and the price of admission in suspension of disbelief is much the same as other more conventional thrillers. Where the book gets it right is that when discussing technical details, terminology is used correctly, descriptions are accurate, and speculative technologies at least have prototypes already demonstrated. Many books of this genre simply fall into the trap of Star Trek-like technobabble or endow their technological gadgets with capabilities nobody would have any idea how to implement today. In many stories in which technology figures prominently, technologically knowledgeable readers find themselves constantly put off by blunders which aren't germane to the plot but are simply indicative of ignorance or sloppiness on the part of the author; that doesn't happen here. One of the few goofs I noticed was in chapter 37 where one of the Daemon's minions receives “[a] new 3-D plan file … then opened it in AutoCAD. It took several seconds, even on his powerful Unix workstation.” In fact, AutoCAD has run only on Microsoft platforms for more than a decade, and that isn't likely to change. But he knows about AutoCAD, not to mention the Haas Mini Mill.

The novel concludes with a rock 'em, sock 'em action scene which is going to be awe inspiring when this book is made into a movie. Rumour is that Paramount Pictures has already optioned the story, and they'll be fools if they don't proceed with production for the big screen. At the end of the book the saga is far from over, but it ends at a logical point and doesn't leave you with a cliffhanger. Fortunately, the sequel, Freedom™, is already out in hardcover and is available in a Kindle edition.

 Permalink

Reich, Eugenie Samuel. Plastic Fantastic. New York: St. Martin's Press, 2009. ISBN 978-0-230-62384-2.
Boosters of Big Science, and the politicians who rely upon its pronouncements to justify their policy prescriptions, often cite the self-correcting nature of the scientific process: peer review subjects the work of researchers to independent and dispassionate scrutiny before results are published, and should an incorrect result make it into print, the failure of independent researchers to replicate it will inevitably call it into question and eventually cause it to be refuted.

Well, that's how it works in theory. Theory is very big in contemporary Big Science. This book is about how things work in fact, in the real world, and it's quite a bit different. At the turn of the century, there was no hotter property in condensed matter physics than Hendrik Schön, a junior researcher at Bell Labs who, in rapid succession, reported breakthroughs in electronic devices fabricated from organic molecules, including:

  • Organic field effect transistors
  • Field-induced superconductivity in organic crystals
  • Fractional quantum Hall effect in organic materials
  • Organic crystal laser
  • Light emitting organic transistor
  • Organic Josephson junction
  • High temperature superconductivity in C60
  • Single electron organic transistors

In the year 2001, Schön published papers in peer-reviewed journals at a rate of one every eight days, with many reaching the empyrean heights of Nature, Science, and Physical Review. Other labs were in awe of his results, and puzzled because every attempt they made to replicate his experiments failed, often in ways which suggested that the descriptions of the experiments he published were insufficient for others to reproduce them. Theorists also raised their eyebrows at Schön's results, because he claimed breakdown properties of sputtered aluminium oxide insulating layers far beyond measured experimental results, and behaviour of charge transport in his organic substrates which didn't make any sense according to the known properties of such materials.

The experimenters were in a tizzy, trying to figure out why they couldn't replicate Schön's results, while the theorists were filling blackboards trying to understand how his incongruous results could possibly make sense. Meanwhile, his superiors were basking in the glory of his ascendance into the élite of experimental physicists and the lustre it reflected upon their laboratory.

In April 2002, while waiting in the patent attorney's office at Bell Labs, researchers Julia Hsu and Lynn Loo were thumbing through copies of Schön's papers they'd printed out as background documentation for the patent application they were preparing, when Loo noticed that two graphs of inverter outputs, one in a Nature paper describing a device made of a layer of thousands of organic molecules, and another in a Science paper describing an inverter made of just one or two active molecules were identical, right down to the instrumental noise. When this was brought to the attention of Schön's manager and word of possible irregularities in Schön's publications began to make its way through the condensed matter physics grapevine, his work was subjected to intense scrutiny both within Bell Labs and by outside researchers, and additional instances of identical graphs re-labelled for entirely different experiments came to hand. Bell Labs launched a formal investigation in May 2002, which concluded, in a report issued the following September, that Schön had committed at least 16 instances of scientific misconduct, fabricating the experimental data he reported from mathematical functions, with no evidence whatsoever that he had ever built the devices he claimed to have, or performed the experiments described in his papers. A total of twenty-one papers authored by Schön in Science, Nature, and Physical Review were withdrawn, as well as a number in less prestigious venues.

What is fascinating in this saga of flat-out fraud and ultimate exposure and disgrace is how completely the much-vaunted system of checks and balances of industrial-scale Big Science and peer review in the most prestigious journals fell on its face at the hands of a fraudster in a junior position with little or no scientific track record, who was willing to make up data to confirm the published expectations of the theorists and who figured out how to game the peer review system, using criticisms of his papers as a guide to fabricating additional data to satisfy the objections of the referees. As a former manager of a group of ambitious and rambunctious technologists, what strikes me is how utterly Schön's colleagues and managers at Bell Labs failed in overseeing his work and vetting his results. “Extraordinary claims require extraordinary evidence”, and Schön was making and publishing extraordinary claims at the rate of almost one a week in 2001, and yet not once did anybody at Bell Labs insist on observing him perform one of the experiments he claimed to be performing, even after other meticulous experimenters in laboratories around the world reported that they were unable to replicate his results. Think about it—if a junior software developer in your company claimed to have developed a miraculous application, wouldn't you want to see a demo before issuing a press release about it and filing a patent application? And yet nobody at Bell Labs thought to do so with Schön's work.

The lessons from this episode are profound, and I see little evidence that they have been internalised by the science establishment. A great deal of experimental science is now guided by the expectations of theorists; it is difficult to obtain funding for an experimental program which looks for effects not anticipated by theory. In such an environment, an unscrupulous scientist willing to make up data that conforms to the prejudices of the theorists may be able to publish in prestigious journals and be considered a rising star of science based on an entirely fraudulent corpus of work. Because scientists, especially in the Anglo-Saxon culture, are loath to make accusations of fraud (as the author notes, in the golden age of British science such an allegation might well result in a duel being fought), failure to replicate experimental results is often assumed to be a failure by the replicator to precisely reproduce the circumstances of the original investigator, not to call into question the veracity of the reported work. Schön's work consisted of desktop experiments involving straightforward measurements of electrical properties of materials, which were about as simple as anything in contemporary science to evaluate and independently replicate. Now think of how vulnerable research on far less clear cut topics such as global climate, effects of diet on public health, and other topics would be to fraudulent, agenda-driven “research”. Also, Schön got caught only because he became sloppy in his frenzy of publication, duplicating graphs and data sets from one paper to another. How long could a more careful charlatan get away with it?

Quite aside from the fascinating story and its implications for the integrity of the contemporary scientific enterprise, this is a superbly written narrative which reads more like a thriller than an account of a regrettable episode in science. But it is entirely factual, and documented with extensive end notes citing original sources.

 Permalink

September 2010

Miller, Richard L. Under The Cloud. The Woodlands, TX: Two Sixty Press, [1986] 1991. ISBN 978-1-881043-05-8.
Folks born after the era of atmospheric nuclear testing, and acquainted with it only through accounts written decades later, are prone to react with bafflement—“What were they thinking?” This comprehensive, meticulously researched, and thoroughly documented account of the epoch not only describes what happened and what the consequences were for those in the path of fallout, but also places events in the social, political, military, and even popular culture context of that very different age. A common perception about the period is “nobody really understood the risks”. Well, it's quite a bit more complicated than that, as you'll understand after reading this exposition. As early as 1953, when ranchers near Cedar City, Utah lost more than 4000 sheep and lambs after they grazed on grass contaminated by fallout, investigators discovered the consequences of ingestion of Iodine-131, which is concentrated by the body in the thyroid gland, where it can lead not only to thyroid cancer but also to faster-developing metabolic diseases. The AEC reacted immediately to this discovery. Commissioner Eugene Zuckert observed that “In the present frame of mind of the public, it would only take a single illogical and unforeseeable incident to preclude holding any future tests in the United States”, and hence the author of the report on the incident was ordered to revise the document, “eliminating any reference to radiation damage or effects”. In subsequent meetings with the farmers, the AEC denied any connection between fallout and the death of the sheep and refused compensation, claiming that the sheep, including grotesquely malformed lambs born to irradiated ewes, had died of “malnutrition”.

It was obvious to others that something serious was happening. Shortly after bomb tests began in Nevada, the Eastman Kodak plant in Rochester, New York which manufactured X-ray film discovered that when a fallout cloud was passing overhead their film batches would be ruined by pinhole fogging due to fallout radiation, and that they could not even package the film in cardboard supplied by a mill whose air and water supplies were contaminated by fallout. Since it was already known that radiologists with occupational exposure to X-rays had mean lifespans several years shorter than the general public, it was pretty obvious that exposing much of the population of a continent (and to a lesser extent the entire world) to a radiation dose which could ruin X-ray film had to be problematic at best and recklessly negligent at worst. And yet the tests continued, both in Nevada and the Pacific, until the Limited Test Ban Treaty between the U.S., USSR, and Great Britain was adopted in 1963. France and China, not signatories to the treaty, continued atmospheric tests until 1974 and 1980 respectively.

What were they thinking? Well, this was a world in which the memory of a cataclysmic war which had killed tens of millions of people was fresh, which appeared to be on the brink of an even more catastrophic conflict, which might be triggered if the adversary developed a weapon believed to permit a decisive preemptive attack or victory through intimidation. In such an environment, where everything might be lost through weakness and dilatory progress in weapons research, the prospect of an elevated rate of disease among the general population was weighed against the possibility of tens of millions of deaths in a general conflict, and the decision was made to pursue the testing. This may very well have been the correct decision—since you can't test a counterfactual, we'll never know—and indeed there wasn't a general war between East and West, and to this date no nuclear weapon has been used in war since 1945. But what is shocking and reprehensible is that the élites who made this difficult judgement call did not have the courage to share the facts with the constituents and taxpayers who paid their salaries and bought the bombs that irradiated their children's thyroids with Iodine-131 and bones with Strontium-90. (I'm a boomer. If you want to know just how many big boom clouds a boomer lived through as a kid, hold a sensitive radiation meter up to one of the long bones of the leg; you'll see the elevated beta radiation from the Strontium-90 ingested in milk and immured in the bones [Strontium is a chemical analogue of Calcium].) Instead, they denied the obvious effects, suppressed research which showed the potential risks, intimidated investigators exploring the effects of low level radiation, and covered up assessments of fallout intensity and effects upon those exposed. Thank goodness such travesties of science and public policy could not happen in our enlightened age!
An excellent example of mid-fifties AEC propaganda is the Atomic Test Effects in the Nevada Test Site Region pamphlet, available on this site: “Your best action is not to be worried about fall-out. … We can expect many reports that ‘Geiger counters were going crazy here today.’ Reports like this may worry people unnecessarily. Don't let them bother you.”

This book describes U.S. nuclear testing in Nevada in detail, even giving the precise path the fallout cloud from most detonations took over the country. Pacific detonations are covered in less detail, concentrating on major events and fallout disasters such as Castle Bravo. Soviet tests and the Chelyabinsk-40 disaster are covered more sketchily (fair enough—most details remained secret when the book was written), and British, French, and Chinese atmospheric tests are mentioned only in passing.

The paperback edition of this book has the hefty cover price of US$39.95, which is a lot for a book of 548 pages with just a few black and white illustrations. I read the Kindle edition, priced at US$11.99 at this writing, which is, on its merits, even more overpriced. It is a sad, sorry, and shoddy piece of work, which appears to be the result of scanning a printed edition of the book with an optical character recognition program and transferring it to Kindle format without any proofreading whatsoever. Numbers and punctuation are uniformly garbled, words are mis-recognised, random words are jammed into the text as huge raster images, page numbers and chapter headings are interleaved into the text, and hyphenated words are not joined while pairs of unrelated words are run together. The abundant end note citations are randomly garbled and not linked to the notes at the end of the book. The index is just a scan of that in the printed book, garbled, unlinked to the text, and utterly useless. Most public domain Kindle books sold for a dollar have much better production values than this full price edition. It is a shame that such an excellent work, on which the author invested so much effort doing the research and telling the story, has been betrayed by this slapdash Kindle edition which will leave unwary purchasers feeling their pockets have been picked. I applaud Amazon's providing a way for niche publishers and independent authors to bring their works to market on the Kindle, but I wonder if their lack of quality control on the works published (especially at what passes for full price on the Kindle) might, in the end, injure the reputation of Kindle books among the customer base. After this experience, I know for sure that I will never again purchase a Kindle book from a minor publisher before checking the comments to see if the transfer merits the asking price.
Amazon might also consider providing a feedback mechanism for Kindle purchasers to rate the quality of the transfer to the Kindle, which would appear along with the content-based rating of the work.

 Permalink

Walsh, Michael. Hostile Intent. New York: Pinnacle Books, 2009. ISBN 978-0-7860-2042-3.
Michael Walsh is a versatile and successful writer who has been a Moscow correspondent and music critic for Time magazine, written a novel which is a sequel to Casablanca, four books about classical music, and a screenplay for the Disney Channel which was the highest rated original movie on the channel at the time. Two of his books have been New York Times bestsellers, and his gangster novel And All the Saints won an American Book Award in 2004. This novel is the first of a projected series of five. The second, Early Warning, was released in September 2010.

In the present novel, the author turns to the genre of the contemporary thriller, adopting the template created by Tom Clancy, and used with such success by authors such as Vince Flynn and Brad Thor: a lone, conflicted agent working for a shadowy organisation, sent to do the dirty work on behalf of the highest levels of the government of the United States. In this case, the protagonist is known only as “Devlin” (although he assumes a new alias and persona every few chapters), whose parents were killed in a terrorist attack at the Rome airport in 1985 and who was raised as a covert instrument of national policy by a military man who has risen to become the head of the National Security Agency (NSA). Devlin works for the Central Security Service, a branch of the NSA which, in the novel, retains its original intent of being “Branch 4” of the armed forces, able to exploit information resources and execute covert operations outside the scope of conventional military actions.

The book begins with a gripping description of a Beslan-like school hostage attack in the United States in which Devlin is activated to take down the perpetrators. After achieving a mostly successful resolution, he begins to suspect that the entire event was simply a ruse to draw him into the open so that he could be taken down by his enemies. This supposition is confirmed, at least in his own justifiably paranoid mind, by further terrorist strikes in Los Angeles and London, which raise the stakes and further expose his identity and connections.

This is a story which starts strong but then sputters out as it unfolds. The original taut narrative of the school hostage crisis turns into a mush with a shadowy supervillain who is kind of an evil George Soros (well, I mean an even more evil George Soros), a feckless and inexperienced U.S. president (well, at least that could never happen!), and Devlin, the über paranoid loner suddenly betting everything on a chick he last met in a shoot-out in Paris.

Thrillers are supposed to thrill, but if set in the contemporary world or the near future (as is this book—the fall of Mugabe in Zimbabwe is mentioned, but everything is pretty much the same as the present), they're expected to be plausible as regards the technology used and the behaviour of the characters. It just doesn't do to have the hero, in a moment of crisis, when attacked by ten thousand AK-47 wielding fanatics from all directions, pull out his ATOMIC SPACE GUN and mow them down with a single burst.

But that's pretty much what happens here. I'll have to go behind the spoiler curtain to get into the details, so I'll either see you there or on the other side if you've decided to approach this novel freshly without my nattering over details.

Spoiler warning: Plot and/or ending details follow.  
  • We are asked to believe that a sitting U.S. president would order two members of his Secret Service detail to commit a cold blooded murder in order to frame a senator and manipulate his reelection campaign, and that the agents would carry out the murder. This is simply absurd.
  • As the story develops we learn that the shadowy “Branch 4” for which Devlin believes he is working does not, in fact, exist, and that Devlin is its sole agent, run by the director of NSA. Now Devlin has back-door access to all U.S. intelligence assets and databases and uses them throughout. How plausible is it that he wouldn't have figured this out himself?
  • Some people have cell phones: Devlin has a Hell phone. In chapter 7 we're treated to a description of Devlin's Black Telephone, which is equipped with “advanced voice-recognition software”, a fingerprint scanner in the receiver, and a retinal scanner in the handset. “If any of these elements were not sequenced within five seconds, the phone would self-destruct in a fireball of shrapnel, killing any unauthorized person unlucky enough to have picked it up.” Would you trust a government-supplied telephone bomb to work with 100% reliability? What if your stack of dossiers topples over and knocks off the receiver?
  • In several places “logarithm” is used where “algorithm” is intended. Gadgetry is rife with urban legends such as the computer virus which causes a hard drive to melt.
  • In chapter 12 the phone rings and Devlin “spoke into a Blu-Ray mouthpiece as he answered”. Blu-ray is an optical disc storage format; Bluetooth is the wireless peripheral technology. Besides, would an operative obsessed with security to the level of paranoia use a wireless headset with dubious anti-eavesdropping measures?
  • The coup de grâce of the series of terrorist attacks is supposed to be an electromagnetic pulse (EMP) attack against the United States, planned to knock out all electronics, communications, and electrical power in the eastern part of the country. The attack consists of detonating an ex-Soviet nuclear weapon raised to the upper atmosphere by a weather balloon launched from a ship off the East Coast. Where to begin? Well, first of all, at the maximum altitude reachable by a weather balloon, the mean free path of the gamma rays from the detonation through the atmosphere would be limited, as opposed to the unlimited propagation distance from an explosion in space well above the atmosphere. This would mean that any ionisation of atoms in the atmosphere would be a local phenomenon, which would reduce the intensity and scope of the generated pulse. Further, the electromagnetic pulse cannot propagate past the horizon, so even if a powerful pulse were generated at the altitude of a balloon, it wouldn't propagate far enough to cause a disaster all along the East Coast.
  • In the assault on Clairvaux Prison, is it conceivable that an experienced special forces operator would take the mother of a hostage and her young son along aboard the helicopter gunship leading the strike?
  • After the fight in the prison, archvillain Skorenzy drops through a trap door and escapes to a bolt-hole, and at the end of the novel is still at large and presumed to be continuing his evil schemes. But his lair is inside a French maximum security prison! How does he get away? Say what you like about the French military, when it comes to terrorists they're deadly serious, right up there with the Mossad. Would a prison that housed Carlos the Jackal have a tunnel which would allow Skorenzy to saunter out? Would French officials allow the man who blew up a part of Los Angeles and brought down the London Eye with a cruise missile free passage?
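The horizon argument in the EMP point above can be made rough-and-ready quantitative. Using my own illustrative numbers (not figures from the novel or any source): a weather balloon tops out around 40 km, while a true exoatmospheric EMP shot such as Starfish Prime detonated at about 400 km. The line-of-sight radius from a burst at altitude h above a sphere of radius R (Earth, R ≈ 6371 km) is:

\[
  d = \sqrt{(R+h)^2 - R^2} = \sqrt{2Rh + h^2}
\]

\[
  h \approx 40\ \mathrm{km}: \quad d \approx \sqrt{2 \cdot 6371 \cdot 40 + 40^2} \approx 715\ \mathrm{km}
\]

\[
  h \approx 400\ \mathrm{km}: \quad d \approx \sqrt{2 \cdot 6371 \cdot 400 + 400^2} \approx 2293\ \mathrm{km}
\]

And even this purely geometric bound flatters the balloon scenario, since, as noted above, at 40 km there is still enough atmosphere overhead to absorb the gamma rays locally rather than letting them generate a strong, wide-area pulse.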
Spoilers end here.  
It's a tangled, muddled mess. It has its moments, but there isn't the building toward a climax and then the resolution one expects from a thriller. None of the characters are really admirable, and the author's policy preferences (with which I largely agree) are exhibited far too blatantly, as opposed to being woven into the plot. The author, accomplished in other genres, may eventually master the thriller, but I doubt I'll read any of the sequels to find out for myself.

 Permalink

October 2010

Sowell, Thomas. Dismantling America. New York: Basic Books, 2010. ISBN 978-0-465-02251-9.
Thomas Sowell has been, over his career, an optimist about individual liberty and economic freedom in the United States and around the world. Having been born in the segregated South, raised by a single mother in Harlem in the 1940s, he said that the progress he had observed in his own lifetime, rising from a high school dropout to the top of his profession, convinced him that America ultimately gets it right, and that opportunity for those who wish to advance through their own merit and hard work is perennial. In recent years, however, particularly since the rise and election of Barack Obama, his outlook has darkened considerably, almost approaching that of John Derbyshire. Do you think I exaggerate? Consider this passage from the preface:

No one issue and no one administration in Washington has been enough to create a perfect storm for a great nation that has weathered many storms in its more than two centuries of existence. But the Roman Empire lasted many times longer, and weathered many storms in its turbulent times—and yet it ultimately collapsed completely.

It has been estimated that a thousand years passed before the standard of living in Europe rose again to the level it had achieved in Roman times. The collapse of civilization is not just the replacement of rulers or institutions with new rulers and new institutions. It is the destruction of a whole way of life and the painful, and sometimes pathetic, attempts to begin rebuilding amid the ruins.

Is that where America is headed? I believe it is. Our only saving grace is that we are not there yet—and that nothing is inevitable until it happens.

Strong stuff! The present volume is a collection of the author's syndicated columns dating from before the U.S. election of 2008 into the first two years of the Obama administration. In them he traces how the degeneration and systematic dismantling of the underpinnings of American society which began in the 1960s culminated in the election of Obama, opening the doors to power to radicals hostile to what the U.S. has stood for since its founding and bent on its “fundamental transformation” into something very different. Unless checked by the elections of 2010 and 2012, Sowell fears the U.S. will pass a “point of no return” where a majority of the electorate will be dependent upon government largesse funded by a minority who pay taxes. I agree: I deemed it the tipping point almost two years ago.

A common theme in Sowell's writings of the last two decades has been how public intellectuals and leftists (but I repeat myself) attach an almost talismanic power to words and assume that good intentions, expressed in phrases that make those speaking them feel good about themselves, must automatically result in the intended outcomes. Hence the belief that a “stimulus bill” will stimulate the economy, a “jobs bill” will create jobs, that “gun control” will control the use of firearms by criminals, or that a rise in the minimum wage will increase the income of entry-level workers rather than price them out of the market and send their jobs to other countries. Many of the essays here illustrate how “progressives” believe, with the conviction of cargo cultists, that their policies will turn the U.S. from a social Darwinist cowboy capitalist society to a nurturing nanny state like Sweden or the Netherlands. Now, notwithstanding that the prospects of those two countries and many other European welfare states due to demographic collapse and Islamisation are dire indeed, the present “transformation” in the U.S. is more likely, in my opinion, to render it more like Perón's Argentina than France or Germany.

Another part of the “perfect storm” envisioned by Sowell is the acquisition of nuclear weapons by Iran, the imperative that will create for other states in the region to go nuclear, and the consequent possibility that terrorist groups will gain access to these weapons. He observes that Japan in 1945 was a much tougher nation than the U.S. today, yet only two nuclear bombs caused them to capitulate in a matter of days. How many cities would the U.S. have to lose? My guess is at least two but no more than five. People talk about there being no prospect of a battleship Missouri surrender in the War on Terror (or whatever they're calling it this week), but the prospect of a U.S. surrender on the carrier Khomeini in the Potomac is not as far fetched as you might think.

Sowell dashes off epigrams like others write grocery lists. Here are a few I noted:

  • One of the painful consequences of studying history is that it makes you realize how long people have been doing the same foolish things with the same disastrous results.
  • There is usually only a limited amount of damage that can be done by dull or stupid people. For creating a truly monumental disaster, you need people with high IQs.
  • Do not expect sound judgments in a society where being “non-judgmental” is an exalted value. As someone has said, if you don't stand for something, you will fall for anything.
  • Progress in general seems to hold little interest for people who call themselves “progressives”. What arouses them are denunciations of social failures and accusations of wrong-doing.
      One wonders what they would do in heaven.
  • In a high-tech age that has seen the creation of artificial intelligence by computers, we are also seeing the creation of artificial stupidity by people who call themselves educators.
  • Most people on the left are not opposed to freedom. They are just in favor of all sorts of things that are incompatible with freedom.
  • Will those who are dismantling this society from within or those who seek to destroy us from without be the first to achieve their goal? It is too close to call.

As a collection of columns, you can read this book in any order you like (there are a few “arcs” of columns, but most are standalone), and pick it up and put it down whenever you like without missing anything. There is some duplication among the columns, but they never become tedious. Being newspaper columns, there are no source citations or notes, and there is no index. What are present in abundance are Sowell's acute observations of the contemporary scene, historical perspective, rigorous logic, economic common sense, and crystal clear exposition. I had read probably 80% of these columns when they originally appeared, but gleaned many new insights revisiting them in this collection.

The author discusses the book, topics raised in it, and the present scene in an extended video interview, for which a transcript exists. A shorter podcast interview with the author is also available.

 Permalink

Flynn, Vince. Pursuit of Honor. New York: Pocket Books, 2009. ISBN 978-1-4165-9517-5.
This is the tenth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) saga, and the conclusion of the story which began in the previous volume, Extreme Measures (July 2010). In that book, a group of terrorists staged an attack in Washington D.C., with the ringleaders managing to disappear in the aftermath. In the present novel, it's time for payback, and Mitch Rapp and his team go on the trail not only of the terrorists but also of their enablers within the U.S. government.

The author says that you should be able to pick up and enjoy any of his novels without any previous context, but in my estimation you'll miss a great deal if you begin here without having read Extreme Measures. While an attempt is made (rather clumsily, it seemed to me) to fill the reader in on the events of the previous novel, those who start here will miss much of the character development of the terrorists Karim and Hakim, and the tension between Mitch Rapp and Mike Nash, whose curious parallels underlie the plot.

This is more a story of character development and conflict between personalities and visions than action, although it's far from devoid of the latter. There is some edgy political content in which I believe the author shows his contempt for certain factions and figures on the Washington scene, including “Senator ma'am”. The conclusion is satisfying although deliberately ambiguous in some regards. I appear to have been wrong in my review of Extreme Measures about where the author was taking Mike Nash, but then you never know.

This book may, in terms of the timeline, be the end of the Mitch Rapp series. Vince Flynn's forthcoming novel, American Assassin, is a “prequel”, chronicling Rapp's recruitment into the CIA, training, and deployment on his first missions. Still, it's difficult in the extreme to cork a loose cannon, so I suspect in the coming years we'll see further exploits by Mitch Rapp on the contemporary scene.

 Permalink

Mahoney, Bob. Damned to Heaven. Austin, TX: 1st World Publishing, 2003. ISBN 978-0-9718562-8-8.
This may be the geekiest space thriller ever written. The author has worked as a spaceflight instructor at NASA's Johnson Space Center in Houston for more than a decade, training astronauts and flight controllers in the details of orbital operations. He was Lead Instructor for the first Shuttle-Mir mission. He knows his stuff, and this book, which bristles with as many acronyms and NASA jargon as a Shuttle flight plan, gets the details right and only takes liberty with the facts where necessary to advance the plot. Indeed, it seems the author is on an “expanded mission” of his NASA career as an instructor to ensure that not only those he's paid to teach, but all readers of the novel know their stuff as well—he even distinguishes acronyms pronounced letter-by-letter (such as E.V.A.) and those spoken as words (like OMS), and provides pronunciation guides for the latter.

For a first time novelist, the author writes quite well, and there are only a few typographical and factual errors. Since the dialogue is largely air to ground transmissions or proceedings of NASA mission management meetings, it comes across as stilted, but is entirely authentic—that's how they talk. Character description is rudimentary, and character development as the story progresses almost nonexistent, but then most of the characters are career civil servants who have made it to the higher echelons of an intensely politically correct and meritocratic bureaucracy where mavericks or those even remotely interesting are ground down or else cut off and jettisoned. Again, not the usual dramatis personæ of a thriller, but pretty accurate.

So what about the story? A space shuttle bound for the International Space Station suffers damage to its thermal protection system which makes it impossible to reenter safely, and the crew takes refuge on the still incomplete Station, stretching its life support resources to the limit. A series of mishaps, which may seem implausible all taken together, but every one of which has actually occurred in U.S. and Soviet space operations over the last two decades, eliminates all of the rescue alternatives but one last, desperate Hail Mary option, which a flight director embraces, not out of boldness, but because there is no other way to save the crew. Trying to thwart the rescue is a malevolent force high in the NASA management hierarchy, bent on destroying the existing human spaceflight program in order that a better replacement may be born. (The latter might have seemed preposterous when the novel was published in 2003, but looking just at the results of NASA senior management decisions in the ensuing years, it's hard to distinguish the outcomes from those of having deliberate wreckers at the helm.)

The author had just about finished the novel when the Columbia accident occurred in February 2003. Had Columbia been on a mission to the Space Station, and had the damage to its thermal protection system been detected (which is probable, as it would have been visible as the shuttle approached the station), then the scenario here, or at least the first part, would have likely occurred. The author made a few changes to the novel post-Columbia; they are detailed in notes at the end.

As a thriller, this worked for me—I read the whole thing in three days and enjoyed the author's painting his characters into corner after corner and then letting them struggle to avert disaster due to the laws of nature, ambitious bureaucratic adversaries, and cluelessness and incompetence, in ascending order of peril to mission success and crew survival. I suspect many readers will consider this a bit much; recall that I used the word “geekiest” in the first sentence of these remarks. But unlike another thriller by a NASA engineer, I was never once tempted to hurl this one into the flame trench immediately before ignition.

If the events in this book had actually happened, and an official NASA historian had written an account of them some years later, it would probably read much like this book. That is quite an achievement, and the author has accomplished that rare feat of crafting a page-turner (at least for readers who consider “geeky” a compliment) which also gets the details right and crafts scenarios which are both surprising and plausible. My quibbles with the plot are not with the technical details but rather scepticism that the NASA of today could act as quickly as in the novel, even when faced with an existential threat to its human spaceflight program.

 Permalink

[Audiobook] Wolfe, Tom. I Am Charlotte Simmons. (Audiobook, Unabridged). New York: Macmillan Audio, 2004. ISBN 978-0-312-42444-2.
Thomas Sowell has written, “Each new generation born is in effect an invasion of civilization by little barbarians, who must be civilized before it is too late”. Tom Wolfe's extensively researched and pitch-perfect account of undergraduate life at an élite U.S. college in the first decade of the twenty-first century is a testament to what happens when the barbarians sneak into the gates of the cloistered cities of academe, gain tenure, and then turn the next generation of “little barbarians” loose into a state of nature, to do what their hormones and whims tell them to.

Our viewpoint into this alien world (which the children and grandchildren of those likely to be reading this chronicle inhabit, if they're lucky [?] enough to go to one of those élite institutions which groom them for entry into the New [or, as it is coming to be called, Ruling] Class at the cost of between a tenth and a quarter of a million dollars, often front-end loaded as debt onto the lucky students just emerging into those years otherwise best spent in accumulating capital to buy a house, start a family, and make the key early year investments in retirement and inheritance for their progeny) is Charlotte Simmons of Sparta, North Carolina, a Presidential Scholar from the hill country who, by sheer academic excellence, has won a full scholarship to Dupont University, known not only for its academic prestige, but also its formidable basketball team.

Before arriving at Dupont, Charlotte knew precisely who she was, what she wanted, and where she was going. Within days after arriving, she found herself in a bizarre mirror universe where everything she valued (and which the university purported to embody) was mocked by the behaviour of the students, professors, and administrators. Her discoveries are our discoveries of this alien culture which is producing those who will decide our fate in our old age. Worry!

Nobody remotely competes with Tom Wolfe when it comes to imbibing an alien culture, mastering its jargon and patois, and fleshing out the characters who inhabit it. Wolfe's talents are in full ascendance here, and this is a masterpiece of contemporary pedagogic anthropathology. We are doomed!

The audio programme is distributed in four files, running 31 hours and 16 minutes and includes a brief interview with the author at the end. An Audio CD edition is available, as is a paperback print edition.

 Permalink

Shirer, William L. The Rise and Fall of the Third Reich. New York: Touchstone Books, [1959, 1960] 1990. ISBN 978-0-671-72868-7.
According to an apocryphal story, a struggling author asks his agent why his books aren't selling better, despite getting good reviews. The agent replies, “Look, the only books guaranteed to sell well are books about golf, books about cats, and books about Nazis.” Some authors have taken this too much to heart. When this massive cinder block of a book (1250 pages in the trade paperback edition) was published in 1960, its publisher did not believe a book about Nazis (or at least such a long one) would find a wide audience, and ordered an initial print run of just 12,500 copies. Well, it immediately went on to sell more than a million copies in hardback, and then another million in paperback (it was, at the time, the thickest paperback ever published). It has remained in print continuously for more than half a century, has been translated into a number of languages, and at this writing is in the top ten thousand books by sales rank on Amazon.com.

The author did not just do extensive research on Nazi Germany, he lived there from 1934 through 1940, working as a foreign correspondent based in Berlin and Vienna. He interviewed many of the principals of the Nazi regime and attended Nazi rallies and Hitler's Reichstag speeches. He was the only non-Nazi reporter present at the signing of the armistice between France and Germany in June 1940, and broke the news on CBS radio six hours before it was announced in Germany. Living in Germany, he was able to observe the relationship between ordinary Germans and the regime, but with access to news from the outside which was denied to the general populace by the rigid Nazi control of information. He left Germany in December 1940 when increasingly rigid censorship made it almost impossible to get accurate reporting out of Germany, and he feared the Gestapo were preparing an espionage case against him.

Shirer remarks in the foreword to the book that never before, and possibly never again, will historians have access to the kind of detailed information on the day-to-day decision making and intrigues of a totalitarian state that we have for Nazi Germany. Germans are, of course, famously meticulous record-keepers, and the rapid collapse and complete capitulation of the regime meant that those voluminous archives fell into the hands of the Allies almost intact. That, and the survival of diaries by a number of key figures in the senior leadership of Germany and Italy, provides a window into what those regimes were thinking as they drew plans which would lead to calamity for Europe and their ultimate downfall. The book is extensively footnoted with citations of primary sources, and footnotes expand upon items in the main text.

This book is precisely what its subtitle, “A History of Nazi Germany”, identifies it to be. It is not, and does not purport to be, an analysis of the philosophical origins of Nazism, an investigation of Hitler's personality, or a history of Germany's participation in World War II. The war years occupy about half of the book, but the focus is not on the actual conduct of the war but rather the decisions which ultimately determined its outcome, and the way (often bizarre) those decisions were made. I first read this book in 1970. Rereading it four decades later, I got a great deal more out of it than I did the first time, largely because in the intervening years I'd read many other books which cover aspects of the period that Shirer's purely Germany-focused reportage does not explore in detail.

The book has stood up well to the passage of time. The only striking lacuna is that, when the book was written, the British had not yet declassified the fact that they had broken the German naval Enigma cryptosystem and were thus able to read traffic between the German admiralty and the U-boats. Shirer's coverage of the Battle of the Atlantic (which is cursory) thus attributes the success in countering the U-boat threat to radar, antisubmarine air patrols, and convoys, which were certainly important, but far from the whole story.

Shirer is clearly a man of the Left (he manages to work in a snarky comment about the Coolidge administration in a book about Nazi Germany), although no fan of Stalin, whom he rightly identifies as a monster. But I find that the author tangles himself up intellectually in trying to identify Hitler and Mussolini as “right wing”. Again and again he describes the leftist intellectual and political background of key figures in the Nazi and Fascist movements, and then tries to persuade us that they somehow became “right wing” because they changed the colour of their shirts, even though the official platform and policies of the Nazi and Fascist regimes differed only in the details from those of Stalin, and even Stalin believed, by his own testimony, that he could work with Nazi Germany to the mutual benefit of both countries. It's worth revisiting Liberal Fascism (January 2008) for a deeper look at how collectivism, whatever the colour of the shirts or the emblem on the flags, stems from the same intellectual roots and proceeds to the same disastrous end point.

But these are quibbles about a monument of twentieth century reportage which has the authenticity of having been written by an eyewitness to many of the events described therein, the scholarship of extensive citations and quotations of original sources, and accessibility to the general reader. It is a classic which has withstood the test of time, and if I'm still around forty years hence, I'm sure I'll enjoy reading it a third time.

 Permalink

Codevilla, Angelo. The Ruling Class. New York: Beaufort Books, 2010. ISBN 978-0-8253-0558-0.
This slim volume (just 160 pages) is a somewhat expanded version of the author's much discussed essay with the same title which appeared in the July/August 2010 issue of The American Spectator. One of the key aspects of “American exceptionalism” over most of the nation's history has been something it didn't have but which most European and Asian nations did: a ruling class distinct from the general citizenry. Whether the ruling class was defined by heredity (as in Britain), or by meritocratic selection (as in France since the Revolution and Germany after Bismarck), most countries had a class of rulers who associated mostly with themselves, and considered themselves to uniquely embody the expertise and wisdom to instruct the masses (a word of which they tended to be fond) in how to live their lives.

In the U.S., this was much less the case. Before the vast centralisation and growth of the federal government in the New Deal and afterward, the country was mostly run by about fifty thousand people who got involved in grass roots public service: school boards, county commissions, and local political party organisations, from whom candidates for higher office were chosen based upon merit, service, and demonstrated track record. People who have come up by such a path will tend to be pretty well anchored to the concerns of ordinary citizens because they are ordinary citizens who have volunteered their time to get involved in res publica.

But with the grand centralisation of governance in Imperial Washington over the last century, a new kind of person was attracted to what used to be, and is still called, with exquisite irony, “public service”. These are people who have graduated from a handful of élite universities and law schools, and with the exception of perhaps a brief stint at a large law firm dealing mainly with the government, spent their entire careers in the public sector and its cloud of symbiotic institutions: regulatory agencies, appointed offices, elected positions, lobbying firms, and “non-governmental organisations” which derive their entire income from the government. These individuals make up what I have been calling, after Milovan Đilas, the New Class, but which Codevilla designates the Ruling Class in the present work.

In the U.S., entry to the ruling class is not, as it is in France, a meritocracy based on competitive examinations and performance in demanding academic institutions. Instead, it is largely a matter of who you, or your family, knows, what university you attended, and how well you conform to the set of beliefs indoctrinated there. At the centre of this belief system is that a modern nation is far too complicated to be governed by citizen-legislators chosen by ignorant rubes who didn't attend Harvard, Yale, Stanford, or one of the other ruling class feeder belts, but rather must be guided by enlightened experts like, well, themselves, and that all the ills of society can be solved by giving the likes of, well, themselves, more power over the population. They justify this by their reliance on “science” (the details of which they are largely ignorant), and hence they fund a horde of “scientists” who produce “studies” which support the policies they advocate.

Codevilla estimates that about a third of the U.S. population are either members of the ruling class (a small fraction), or aligned with its policies, largely due to engineered dependency on government programs. This third finds its political vehicle in the Democratic party, which represents their interests well. What about the other two thirds, which he dubs the “Country Class” (which I think is a pretty lame term, but no better comes immediately to mind)? Well, they don't have a political party at all, really. The Republican party is largely made up of ruling class people (think son of a president George W. Bush, or son of an admiral John McCain), and quickly co-opts outsiders who make it to Washington into the Imperial ruling class mindset.

A situation where one third of the population is dictating its will to the rest, and taxing a minority to distribute the proceeds to its electoral majority, in which only about a fifth of the population believes the federal government has the consent of the governed, and two thirds of the population have no effective political vehicle to achieve their agenda is, as Jimmy Carter's pollster Pat Caddell put it, pre-revolutionary. Since the ruling class has put the country on an unsustainable course, it is axiomatic that it will not be sustained. How it will end, however, is very much up in the air. Perhaps the best outcome would be a take-over of the Republican party by those genuinely representative of the “country party”, but that will be extremely difficult without a multitude of people (encouraged by their rulers toward passivity and resignation to the status quo) jumping into the fray. If the Republicans win a resounding victory in the elections of November 2010 (largely due to voters holding their noses and saying “they can't be worse than the current bums in office”) and then revert to ruling class business as usual, it's almost certain there will be a serious third party in play in 2012, not just at the presidential level (as the author notes, for a while in 1992, Ross Perot out-polled both the first Bush and Clinton before people concluded he was a flake with funny ears), but also in congressional races. If the Republicans are largely running in 2010 on a platform of, “Hey, at least we aren't the Democrats!”, then the cry in 2012 may be “We aren't either of those foul, discredited parties.”

As fiscally responsible people, let's talk about value for money. This book just doesn't cut it. You can read the original essay for free online. Although the arguments and examples therein are somewhat fleshed out in this edition, there's nothing essential you'll miss in reading the magazine essay instead of this book. Further, the 160 page book is padded—I can summon no kinder word—by inclusion of the full text of the Declaration of Independence and U.S. Constitution. Now, these are certainly important documents, but it's not like they aren't readily available online, nor that those inclined to read the present volume are unfamiliar with them. I think their presence is mostly due to the fact that were they elided, the book would be a mere hundred pages and deemed a pamphlet at best.

This is an enlightening and important argument, and I think spot-on in diagnosing the central problem which is transforming the U.S. from an engine of innovation and productivity into a class warfare redistributive nanny state. But save your money and read the magazine article, not the book.

 Permalink

McGovern, Patrick E. Uncorking the Past. Berkeley: University of California Press, 2009. ISBN 978-0-520-25379-7.
While a variety of animals are attracted to and consume the alcohol in naturally fermented fruit, only humans have figured out how to promote the process, producing wine from fruit and beer from cereal crops. And they've been doing it since at least the Neolithic period: the author discovered convincing evidence of a fermented beverage in residues on pottery found at the Jiahu site in China, inhabited between 7000 and 5800 B.C.

Indeed, almost every human culture which had access to fruits or grains which could be turned into an alcoholic beverage did so, and made the production and consumption of spirits an important part of its economic and spiritual life. (One puzzle is why the North American Indians, who lived among an abundance of fermentable crops, never did—there are theories that tobacco and hallucinogenic mushrooms supplanted alcohol for shamanistic purposes, but basically nobody really knows.)

The author is a pioneer in the field of biomolecular archæology and head of the eponymous laboratory at the University of Pennsylvania Museum of Archæology and Anthropology; in this book he takes us on a tour around the world and across the centuries, exploring, largely through his own research and that of associates, the history of fermented beverages in a variety of cultures and what we can learn from this evidence about how they lived, were organised, and interacted with other societies. Only in recent decades has biochemical and genetic analysis progressed to the point that it is possible to determine from some gunk found at the bottom of an ancient pot not only that it was some kind of beer or wine, but also from what species of fruit and grain it was produced, how it was prepared and fermented, and what additives it may have contained and whence they originated. Calling on experts in related disciplines such as palynology (the study of pollen and spores, not of the Alaskan politician), the author is able to reconstruct the economics of the bustling wine trade across the Mediterranean (already inferred from shipwrecks carrying large numbers of casks of wine) and the diffusion of the ancestral cultivated grape around the world, displacing indigenous grapes which were less productive for winemaking.

While the classical period around the Mediterranean is pretty much soaked in wine, and it'd be difficult to imagine the Vikings and other North Europeans without their beer and grogs, much less was known about alcoholic beverages in China, South America, and Africa. Once again, the author is on their trail, and not only reports upon his original research, but also attempts, in conjunction with micro-brewers and winemakers, to reconstruct the ancestral beverages of yore.

The biochemical anthropology of booze is not exactly a crowded field, and in this account written by one of its leaders, you get the sense of having met just about all of the people pursuing it. A great deal remains to be learnt—parts of the book read almost like a list of potential Ph.D. projects for those wishing to follow in the author's footsteps. But that's the charm of opening a new window into the past: just as DNA and other biochemical analyses revolutionised the understanding of human remains in archæology, the arsenal of modern analytical tools allows reconstructing humanity's almost universal companion through the ages, fermented beverages, and, through them, uncorking the way in which those cultures developed and interacted.

A paperback edition will be published in December 2010.

 Permalink

Haisch, Bernard. The Purpose-Guided Universe. Franklin Lakes, NJ: Career Press, 2010. ISBN 978-1-60163-122-0.
The author, an astrophysicist who was an editor of the Astrophysical Journal for a decade, subtitles this book “Believing In Einstein, Darwin, and God”. He argues that the militant atheists who have recently argued that science is incompatible with belief in a Creator are mistaken and that, to the contrary, recent scientific results are not only compatible with, but evidence for, the intelligent design of the laws of physics and the initial conditions of the universe.

Central to his argument are the variety of “fine tunings” of the physical constants of nature. He lists ten of these in the book's summary, but these are chosen from a longer list. These are quantities, such as the relative masses of the neutron and proton, the ratio of the strength of the electromagnetic and gravitational forces, and the curvature of spacetime immediately after the Big Bang which, if they differed only slightly from their actual values, would have resulted in a universe in which the complexity required to evolve any imaginable form of life would not exist. But, self evidently, we're here, so we have a mystery to explain. There are really only three possibilities:

  1. The values of the fine-tuned parameters are those we measure because they can't be anything else. One day we'll discover a master equation which allows us to predict their values from first principles, and we'll discover that any change to that equation produces inconsistent results. The universe is fine tuned because that's the only way it could be.
  2. The various parameters were deliberately fine tuned by an intelligent, conscious designer bent on creating a universe in which sufficient complexity could evolve so as to populate it with autonomous, conscious beings. The universe is fine tuned by a creator because that's necessary to achieve the goal of its creation.
  3. The parameters are random, and vary from universe to universe among an ensemble in a “multiverse” encompassing a huge, and possibly infinite number of universes with no causal connection to one another. We necessarily find the parameters of the universe we inhabit to be fine tuned to permit ourselves to exist because if they weren't, we wouldn't be here to make the observations and puzzle over the results. The universe is fine tuned because it's just one of a multitude with different settings, and we can only observe one which happens to be tuned for us.

For most of the history of science, it was assumed that possibility (1)—inevitability by physical necessity—was what we'd ultimately discover once we'd teased out the fundamental laws at the deepest level of nature. Unfortunately, despite vast investment in physics, both experimental and theoretical, astronomy, and cosmology, which has matured in the last two decades from woolly speculation to a precision science, we have made essentially zero progress toward this goal. String theory, which many believed in the heady days of the mid-1980s to be the path to that set of equations you could wear on a T-shirt and which would crank out all the dial settings of our universe, now seems to indicate to some (but not all) of those pursuing it that possibility (3) is the best explanation: a vast “landscape” of universes, all unobservable even in principle, one of which, with wildly improbable properties, we find ourselves in because we couldn't exist in most of the others.

Maybe, the author argues, we should take another look at possibility (2). Orthodox secular scientists are aghast at the idea, arguing that to do so is to “abandon science” and reject rational inference from experimental results in favour of revelation based only on faith. Well, let's compare alternatives (2) and (3) in that respect. Number three asks us to believe in a vast or infinite number of universes, all existing in their own disconnected bubbles of spacetime and unable to communicate with one another, which cannot be detected by any imaginable experiment, without any evidence for the method by which they were created nor idea how it all got started. And all of this to explain the laws and initial conditions of the single universe we inhabit. How's that for taking things on faith?

The author's concept of God in this volume is not that of the personal God of the Abrahamic religions, but rather something akin to the universal God of some Eastern religions, as summed up in Aldous Huxley's The Perennial Philosophy. This God is a consciousness encompassing the entire universe which causes the creation of its contents, deliberately setting things up to maximise the creation of complexity, with the eventual goal of creating more and more consciousness through which the Creator can experience the universe. This is actually not unlike the scenario sketched in Scott Adams's God's Debris, which people might take with the seriousness it deserves had it been written by somebody other than the creator of Dilbert.

If you're a regular reader of this chronicle, you'll know that my own personal view is in almost 100% agreement with Dr. Haisch on the big picture, but entirely different on the nature of the Creator. I'll spare you the detailed exposition, as you can read it in my comments on Sean Carroll's From Eternity to Here (February 2010). In short, I think it's more probable than not we're living in a simulation, perhaps created by a thirteen year old post-singularity superkid as a science fair project. Unlike an all-pervading but imperceptible Brahman or an infinitude of unobservable universes in an inaccessible multiverse, the simulation hypothesis makes predictions which render it falsifiable, and hence a scientific theory. Eventually, precision measurements will discover, then quantify, discrepancies due to round-off errors in the simulation (for example, an integration step which is too large), and—what do you know—we already have in hand a collection of nagging little discrepancies which look doggone suspicious to me.

This is not one of those mushy “science and religion can coexist” books. It is an exploration, by a serious scientist who has thought deeply about these matters, of why evidence derived entirely from science points those with minds sufficiently open to entertain it toward the idea that our universe having been deliberately created by a conscious intelligence, who endowed it with the properties that permit it to produce its own expanding consciousness, is no more absurd than the hypotheses favoured by those who reject that explanation, and is entirely compatible with recent experimental results, which are difficult in the extreme to explain in any other manner. Once the universe is created (or, as I'd put it, the simulation is started), there's no reason for the Creator to intervene: if all the dials and knobs are set correctly, the laws discovered by Einstein, Darwin, Maxwell, and others will take care of the rest. Hence there's no conflict between science and evidence-based belief in a God which is the first cause for all which has happened since.

 Permalink

Roach, Mary. Packing for Mars. New York: W. W. Norton, 2010. ISBN 978-0-393-06847-4.
At the dawn of the space age, nobody had any idea what effects travel into space might have on living beings, foremost among them the intrepid pilots of the first ships to explore the void. No organism from the ancestral cell of all terrestrial life up to the pointiest-headed professor speculating about its consequences had ever experienced more than an instant of weightlessness, and that usually ended badly with a sudden stop against an unyielding surface. (Fish and human divers are supported by their buoyancy in the water, but they are not weightless: the force of Earth's gravity continues to act upon their internal organs, and might prove to be essential for their correct functioning.) The eye, for example, freed of the pull of gravity, might change shape so that it couldn't focus; it might prove impossible to swallow; digestion of food in the stomach might not work without gravity to hold the contents together at the bottom; urination might fail without gravity working on the contents of the bladder, etc., etc. The only way to be sure was to go and find out, and this delightful and witty book covers the quest to discover how to live in space, from the earliest animal experiments of the 1940s (most of which ended poorly for the animals, not due to travelling in space, but rather the reliability of the rockets and recovery systems to which they were entrusted) to present day long duration space station missions and research into the human factors of expeditions to Mars and the asteroids.

Travelling to space centres across the U.S., Russia, Europe, and Japan, the author delves into the physiological and psychological, not to mention the humorous and embarrassing aspects of venturing into the vacuum. She boards the vomit comet to experience weightlessness for herself, tries the television camera equipped “aiming practice toilet” on which space shuttle astronauts train before their missions, visits subjects in multi-month bed rest experiments studying loss of muscle and bone mass on simulated interplanetary missions, watches cadavers being used in crash tests of space capsules, tastes a wide variety of overwhelmingly ghastly space food (memo to astronaut corps worldwide: when they hire veterinarians to formulate your chow, don't expect gourmet grub on orbit), and, speaking of grubby, digs into experiments on the outer limits of lack of hygiene, including the odorifically heroic Gemini VII mission in which Frank Borman and James Lovell spent two weeks in a space smaller than the front seat of a Volkswagen Beetle with no way to bathe or open the window, nor bathroom facilities other than plastic bags. Some of the air to ground communications from that mission which weren't broadcast to the public at the time are reproduced here, and are both revealing and amusing in a grody kind of way.

We also meet the animals who preceded the first humans into space, and discover that their personalities were more diverse than those of the Right Stuff humans who followed. You may know of Ham (who was as gung-ho and outgoing as John Glenn) and Enos (who could be as cold and contemptuous as Alan Shepard, and as formidable hurling his feces at those within range as Nolan Ryan was with a baseball), but just imagine those who didn't fly, including Double Ugly, Miss Priss, and Big Mean.

There are a huge number of factoids here, all well-documented, that even the most obsessive space buff may not have come across. For example: why does motion sickness make you vomit? It makes sense to vomit if you've swallowed something truly noxious such as a glass of turpentine or a spoonful of lima beans, but it doesn't make any sense when your visual and vestibular systems are sending conflicting signals since emptying your stomach does nothing to solve the problem. Well, it turns out that functional brain imaging reveals that the “emetic brain” which handles the crucial time-sequencing of the vomit reflex just happens to be located next door in the meat computer to the area which integrates signals from the inner ear and visual system. When the latter is receiving crossed signals, it starts firing neurons wildly trying to make sense of it, and electro-chemical crosstalk gets into vomit central next door and it's a-hurling we will go. It turns out that, despite worries, most human organs work just fine in weightlessness, but some of them behave differently in ways to which space travellers must become accustomed. Consider the bladder—with gravity, the stretching of the wall of the bladder due to the weight of its contents is what triggers the urge to relieve oneself. But in weightlessness, the contents of the bladder, like other fluids, tend to cling to the walls due to surface tension, and the bladder fills up with no signal at all until it's completely full, at which point you have to go right now regardless of whatever you're doing or whether another crewmember is using the space toilet. Reusable manned spacecraft have a certain odour….

There may be nothing that better stimulates the human mind to think out of the box than pondering flight out of this world, and we come across a multitude of examples of innovative boffinology, both from the pages of history and contemporary research. There's the scientist, one of the world's preeminent authorities on chicken brains, who suggested fattening astronauts up to be 20 kilograms obese before launch, which would allow them to fly 90 day missions without the need to launch any food at all. Just imagine the morale among that crew! Not to be outdone, another genius proposed, given the rarity of laundromats in space, that astronauts' clothes be made of digestible fibres, so that they could eat their dirty laundry instead of packaged food. This seems to risk taking “Eat my shorts!” even beyond the tolerance threshold of Bart Simpson. Then consider the people who formulate simulated astronaut poop for testing space toilets, and those who study farts in space. Or, better yet, don't.

If you're remotely interested in space travel, you'll find this a thoroughly enjoyable book, and your only regret when closing it will be that it has come to an end. Speaking of which, if you don't read them as you traverse the main text, be sure to read the extensive end notes—there are additional goodies there for your delectation.

A paperback edition will be published in April 2011.

 Permalink

Thor, Brad. The Lions of Lucerne. New York: Pocket Books, 2002. ISBN 978-0-7434-3674-8.
This was the author's first published novel, which introduced Scot Harvath, the ex-Navy SEAL around whose exploits his subsequent thrillers have centred. In the present book, Harvath has been recruited into the Secret Service and is in charge of the U.S. president's advance team and security detail for a ski trip to Utah which goes disastrously wrong when an avalanche wipes out the entire Secret Service field team except for Harvath, leaving the president missing and his daughter grievously injured. This shock is compounded manyfold when evidence indicates that the president has been kidnapped in an elaborate plot, which is soon confirmed by an incontrovertible communication from the kidnappers.

If things weren't bad enough for the seriously battered Harvath, still suffering from a concussion and “sprained body”, he finds himself framed as the person who leaked the security arrangements to the kidnappers and for the murder of two people trying to bring evidence regarding the plot to the attention of the authorities.

Harvath decides the only way he can clear his name is to get to the bottom of the conspiracy and rescue the president himself and so, grasping at the only thread of evidence he has, travels incognito to Switzerland, where he begins to unravel the details of the plot, identify the conspirators, discover where the president is being held, and devise a plan to rescue him. You don't often come across a Swiss super-villain, but there's one here, complete with an Alpine redoubt worthy of a Bond blackguard.

This is a first novel, and it shows. Thor's mastery of the craft of the thriller, both in storytelling and technical detail, has improved over the years. If I hadn't read two of the more recent books, I might have been inclined to give it up after this one, but knowing what's coming, I'll continue to enjoy books from this series. In the present story, we have a vast disparity between the means (an intricate and extremely risky plot to kidnap the U.S. president) and the ends (derailing the passage of an alternative energy bill like “cap and trade”), carried out by an international conspiracy so vast that its security would almost be certain to be quickly compromised, but which is, instead, revealed through a series of fantastically improbable coincidences. Scot Harvath is pursued by two independent teams of assassins who may be the worst shots in the entire corpus of bestselling thrillers. And the Swiss authorities simply letting somebody go who smuggled a gun into Switzerland, sprayed gunfire around a Swiss city (damaging a historical landmark in the process), and then broke into a secret Swiss military base doesn't sound like the Switzerland with which I'm acquainted.

Still, this is well deserving of the designation “thriller”, and it will keep you turning the pages. It only improves from here, but I'd start with one of the more recent novels.

 Permalink

November 2010

Ryan, Craig. Magnificent Failure. Washington: Smithsonian Books, 2003. ISBN 978-1-58834-141-9.
In his 1995 book, The Pre-Astronauts (which I read before I began keeping this list), the author masterfully explores the pioneering U.S. balloon flights into the upper atmosphere between the end of World War II and the first manned space flights, which brought both Air Force and Navy manned balloon programs to an abrupt halt. These flights are little remembered today (except for folks lucky enough to have an attic [or DVD] full of National Geographics from the epoch, which covered them in detail). Still less known is the story recounted here: one man's quest, fuelled only by ambition, determination, willingness to do whatever it took, persuasiveness, and sheer guts, to fly higher and free-fall farther than any man had ever done before. Without the backing of any military service, government agency, wealthy patron, or corporate sponsor, he achieved his first goal, setting an altitude record for lighter than air flight which remains unbroken more than four decades later, and tragically died from injuries sustained in his attempt to accomplish the second, after an in-flight accident which remains enigmatic and controversial to this day.

The term “American original” is over-used in describing exceptional characters that nation has produced, but if anybody deserves that designation, Nick Piantanida does. The son of immigrant parents from the Adriatic island of Korčula (now part of Croatia), Nick was born in 1932 and grew up on the gritty Depression-era streets of Union City, New Jersey in the very cauldron of the American melting pot, amid communities of Germans, Italians, Irish, Jews, Poles, Syrians, and Greeks. Although universally acknowledged to be extremely bright, his interests in school were mostly brawling and basketball. He excelled in the latter, sharing the 1953 YMCA All-America honours with some guy named Wilt Chamberlain. After belatedly finishing high school (bored, he had dropped out to start a scrap iron business, but was persuaded to return by his parents), he joined the Army where he was All-Army in basketball for both years of his hitch and undefeated as a heavyweight boxer. After mustering out, he received a full basketball scholarship to Fairleigh Dickinson University, then abruptly quit a few months into his freshman year, finding the regimentation of college life as distasteful as that of the Army.

In search of fame, fortune, and adventure, Nick next set his sights on Venezuela, where he vowed to be the first to climb Devil's Mountain, from which Angel Falls plummets 807 metres. Penniless, he recruited one of his Army buddies as a climbing partner and lined up sponsors to fund the expedition. At the outset, he knew nothing about mountaineering, so he taught himself on the Hudson River Palisades with the aid of books from the library. Upon arrival in Venezuela, the climbers learnt to their dismay that another expedition had just completed the first ascent of the mountain, so Nick vowed to make the first ascent of the north face, just beside the falls, which was thought unclimbable. After an arduous trip through the jungle, during which their guide quit and left the climbers alone, Nick and his partner made the ascent by themselves and returned to the acclaim of all. Such was the determination of this man.

Nick was always looking for adventure, celebrity, and the big score. He worked for a while as a steelworker on the high iron of the Verrazano-Narrows Bridge, but most often supported himself and, after his marriage, his growing family, by contract truck driving and, occasionally, unemployment checks. Still, he never ceased to look for ways, always unconventional, to make his fortune, nor failed to recruit associates and find funding for his schemes. Many of his acquaintances use the word “hustler” to describe him in those days, and one doubts that Nick would be offended by the honorific. He opened an exotic animal import business, and ordered cobras, mongooses, goanna lizards, and other critters mail-order from around the world for resale to wealthy clients. When buyers failed to materialise, he staged gladiatorial contests of both animal versus animal and animal versus himself. Eventually he imported a Bengal tiger cub which he kept in his apartment until it had grown so large it could put its paws on his shoulders, whence he traded the tiger for a decrepit airplane (he had earned a pilot's license while still in his teens). Offered a spot on the New York Knicks professional basketball team, he turned it down because he thought he could make more money barnstorming in his airplane.

Nick finally found his life's vocation when, on a lark, he made a parachute jump. Soon, he had progressed from static line beginner jumps to free fall and increasingly advanced skydiving, making as many jumps as he could afford and find the time for. And then he had the Big Idea. In 1960, Joseph Kittinger had ridden a helium balloon to an altitude of 31,333 metres and bailed out, using a small drogue parachute to stabilise his fall until he opened his main parachute at an altitude of 5,330 metres. Although this was, at the time (and remains to this day) the highest altitude parachute jump ever made, skydiving purists do not consider it a true free fall jump due to the use of the stabilising chute. In 1962, Eugene Andreev jumped from a Soviet balloon at an altitude of 25,460 metres and did a pure free fall descent, stabilising himself purely by skydiving techniques, setting an official free-fall altitude record which also remains unbroken. Nick vowed to claim both the record for highest altitude ascent and longest free-fall jump for himself, and set about it with his usual energy and single-minded determination.

Piantanida faced a daunting set of challenges in achieving his goal: at the outset he had neither balloon, gondola, spacesuit, life support system, suitable parachute, nor any knowledge of or experience with the multitude of specialities whose mastery is required to survive in the stratosphere, above 99% of the Earth's atmosphere. Kittinger and Andreev were supported by all the resources, knowledge, and funding of their respective superpowers' military establishments, while Nick had—well…Nick. But he was not to be deterred, and immediately set out educating himself and lining up people, sponsors, and gear necessary for the attempt.

The story of what became known as Project Strato-Jump reads like an early Heinlein novel, with an indomitable spirit pursuing a goal other, more “reasonable”, people considered absurd or futile. By will, guile, charm, pull, intimidation, or simply wearing down adversaries until they gave in just to make him go away, he managed to line up everything he needed, including having the company which supplied NASA with its Project Gemini spacesuits custom tailor one (Nick was built like an NBA star, not an astronaut) and loan it to him for the project.

Finally, on October 22, 1965, all was ready, and Nick took to the sky above Minnesota, bound for the edge of space. But just a few minutes after launch, at only 7,000 metres, the balloon burst, probably due to a faulty seam in the polyethylene envelope, triggered by a wind shear at that altitude. Nick rode down in the gondola under its recovery parachute, then bailed out at 3,200 metres, unglamorously landing in the Pig's Eye Dump in St. Paul.

Undeterred by the failure, Nick recruited a new balloon manufacturer and raised money for a second attempt, setting off for the stratosphere again on February 2, 1966. This time the ascent went flawlessly, and the balloon rose to an all-time record altitude of 37,643 metres. But as Nick proceeded through the pre-jump checklist, when he attempted to disconnect the oxygen hose that fed his suit from the gondola's supply and switch over to the “bail out bottle” from which he would breathe during the descent, the disconnect fitting jammed, and he was unable to dislodge it. He was, in effect, tethered to the gondola by his oxygen line and had no option but to descend with it. Ground control cut the gondola's parachute from the balloon, and after a harrowing descent Nick and gondola landed in a farm field with only minor injuries. The jump had failed, but Nick had flown higher than any manned balloon ever had. Since the flight had not been registered as an official altitude attempt, however, the record remains unofficial, although the altitude attained is undisputed.

After the second failure, Nick's confidence appeared visibly shaken. Seeing all that expense, work, and risk come to nought because of a small detail with which nobody had been concerned prior to the flight underlined just how small the margin for error was in the extreme environment at the edge of space and, by implication, how the smallest error or oversight could lead to disaster. Still, he was bent on trying yet again, and on May 1, 1966 (since he was trying to break a Soviet record, he thought this date particularly appropriate), launched for the third time. Everything went normally as the balloon approached 17,375 metres, whereupon the ground crew monitoring the air to ground voice link heard what was described as a “whoosh” or hiss, a call of “Emergen” from Nick, and then silence. The ground crew immediately sent a radio command to cut the balloon loose, and the gondola, with Nick inside, began to descend under its cargo parachute.

Rescue crews arrived just moments after the gondola touched down and found it undamaged, but Nick was unconscious and unresponsive. He was rushed to the local hospital, treated to no avail, and then transferred to a hospital in Minneapolis, where he was placed in a hyperbaric chamber and treated for decompression sickness, again without improvement. On June 18th, he was transferred to the National Institutes of Health hospital in Bethesda, Maryland, where he was examined and treated by experts in decompression disease and hypoxia, but never regained consciousness. He died on August 25, 1966; an autopsy found the cause of death to be hypoxia and ruptures of tissue in the brain due to decompression.

What happened to Nick up there in the sky? Within hours after the accident, rumours started to circulate that he was the victim of equipment failure: that his faceplate had blown out or that the pressure suit had failed in some other manner, leading to an explosive decompression. This story has been repeated so often it has become almost canon—consider a July 2002 article from Wired. Indeed, when rescuers arrived on the scene, Nick's “faceplate” was damaged, but this was just the sun visor, which can be pivoted down to cover the pressure-retaining faceplate; the faceplate itself was intact and, in a subsequent test of the helmet, found to seal perfectly. Rescuers assumed the sun visor had been damaged by impact with part of the gondola during the landing; in any case, it could not have caused a decompression however damaged.

Because the pressure suit had been cut off in the emergency room, it wasn't possible to perform a full pressure test, but meticulous inspection of the suit by the manufacturer discovered no flaws which could explain an explosive decompression. The oxygen supply system in the gondola was found to be functioning normally, with all pressure vessels and regulators operating within specifications.

So, what happened? We will never know for sure. Unlike a NASA mission, there was no telemetry, nor even a sequence camera recording what was happening in the gondola. And yet, within minutes after the accident occurred, many members of the ground crew came to a conclusion as to the probable cause, which those still alive today have seen no need to revisit. Such was their certainty that reporter Robert Vaughan gave it as the cause in the story he filed with Life magazine; he was dismayed to see the editors replace it with an ambiguous passage, because his explanation did not fit the narrative chosen for the story. (The legacy media acted like the legacy media even when they were the only media and not yet legacy!)

Astonishingly, all the evidence (which, admittedly, isn't very much) seems to indicate that Nick opened his helmet visor at that extreme altitude, which allowed the air in the suit to rush out (causing the “whoosh”), forcing the air from his lungs (cutting off the call of “Emergency!”), and rapidly incapacitating him. The extended hypoxia and exposure to low pressure as the gondola descended under the cargo parachute caused irreversible brain damage well before the gondola landed. But why would Nick do such a crazy thing as open his helmet visor when in the physiological equivalent of space? Again, we can never know, but what is known is that he'd done it before, at lower altitudes, to the dismay of his crew, who warned him of the potentially dire consequences. There is abundant evidence that Piantanida violated the oxygen prebreathing protocol before high altitude exposure not only on this flight, but on a regular basis. He reported symptoms completely consistent with decompression sickness (the onset of “the bends”), and is quoted as saying that he could relieve the symptoms by deflating and reinflating his suit. Finally, about as close to a smoking gun as we're likely to find, the rescue crew found Nick's pressure visor unlatched and rotated away from the sealed position. Since Nick would have been in a coma well before he descended into breathable atmosphere, he could not have done this before landing, and there is no way an impact upon landing could have performed the precise sequence of operations required to depressurise the suit and open the visor.

It is impossible to put oneself inside the mind of such an outlier in the human population as Nick, much less imagine what he was thinking and feeling while rising into the darkness above the dawn on the third attempt at achieving his dream. He was almost certainly suffering from symptoms of decompression sickness due to inadequate oxygen prebreathing, afflicted by chronic sleep deprivation in the rush to get the flight off, and under intense stress to complete the mission before his backers grew discouraged and the money ran out. All of these factors can cloud the judgement of even the most disciplined and best trained person, and, it must be said, Nick was neither. Perhaps the larger puzzle is why members of his crew, who did understand these things, did not speak up, pull the plug, or walk off the project when they saw what was happening. But a personality like Nick's can sweep people along through its own primal power, for better or for worse; in this case, to tragedy.

Was Nick a hero? Decide for yourself—my opinion is no. In pursuing his own ego-driven ambition, he ended up leaving his wife a widow and his three daughters without a father they remember, with only a meagre life insurance policy to support them. The project was basically a stunt, mounted with the goal of turning its success into money by sales of story, film, and celebrity appearances. Even had the jump succeeded, it would have yielded no useful aeromedical research data applicable to subsequent work apart from the fact that it was possible. (In Nick's defence on this account, he approached the Air Force and NASA, inviting them to supply instrumentation and experiments for the jump, and was rebuffed.)

This book is an exhaustively researched (involving many interviews with surviving participants in the events) and artfully written account of this strange episode which was, at the same time, the last chapter of the exploration of the black beyond by intrepid men in their floating machines and a kind of false dawn precursor of the private exploration of space which is coming to the fore almost half a century after Nick Piantanida set out to pursue his black sky dream. The only embarrassing aspect to this superb book is that on occasion the author equates state-sponsored projects with competence, responsibility, and merit. Well, let's see…. In a rough calculation, using 2007 constant dollars, NASA has spent northward of half a trillion dollars, killing a total of 17 astronauts (plus other employees in industrial accidents on the ground), with all of the astronaut deaths due to foreseeable risks which management failed to identify or mitigate in time.

Project Strato-Jump, funded entirely by voluntary contributions, without resort to the state's monopoly on the use of force, set an altitude record for lighter than air flight within the atmosphere which has stood from 1966 to this writing, and accomplished it in three missions with a total budget of less than (2007 constant) US$400,000, with the loss of a single life due to pilot error. Yes, NASA has achieved much, much more. But a million times more?
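For what it's worth, the arithmetic behind that rhetorical question is easy to check from the figures quoted above (both are rough, 2007-constant-dollar, order-of-magnitude estimates, not precise accounting):

```python
# Rough ratio of the two budgets, using the figures quoted in the review
# (2007 constant dollars; both are order-of-magnitude estimates).
nasa_spending = 0.5e12        # "northward of half a trillion dollars"
strato_jump_budget = 4.0e5    # "less than US$400,000"

ratio = nasa_spending / strato_jump_budget
print(f"NASA outspent Project Strato-Jump by a factor of roughly {ratio:,.0f}")
# i.e., on these figures, about 1.25 million to one
```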

This is a very long review, so if you've made it to this point and found it tedious, please accept my apologies. Nick Piantanida has haunted me for decades. I followed his exploits as they happened and were reported on the CBS Evening News in the 1960s. I felt the frustration of the second flight (with that achingly so-far-and-yet-so-near view of the Earth from altitude, when he couldn't jump), then the dismay at the calamity of the third, and then the long vigil ending with his sad demise. Astronauts were, well, astronauts, but Nick was one of us. If a truck driver from New Jersey could, by main force, travel to the black of space, then why couldn't any of us? That was the real dream of the Space Age: Have Space Suit—Will Travel. Well, Nick managed to lay his hands on a space suit, and travel he did!

Anybody who swallowed the bogus mainstream media narrative of Nick's “suit failure” had to watch the subsequent Gemini and Apollo EVA missions with a special sense of apprehension. A pressure suit is one of the few things in the NASA space program which has no backup: if the pressure garment fails catastrophically, you're dead before you can do anything about it. (A slow leak isn't a problem, since there's an oxygen purge system which can maintain pressure until you can get inside, but a major seam failure, or having a visor blow out or glove pop off is endsville.) Knowing that those fellows cavorting on the Moon were wearing pretty much the same suit as Nick caused those who believed the propaganda version of his death to needlessly catch their breath every time one of them stumbled and left a sitzmark or faceplant in the eternal lunar regolith.

 Permalink

Evans, M. Stanton. Blacklisted by History. New York: Three Rivers Press, 2007. ISBN 978-1-4000-8106-6.
In this book, the author, one of the lions of conservatism in the second half of the twentieth century, undertakes one of the most daunting tasks a historian can attempt: a dispassionate re-examination of one of the most reviled figures in modern American history, Senator Joseph McCarthy. So universal is the disdain for McCarthy by figures across the political spectrum, and so uniform is his presentation as an ogre in historical accounts, the media, and popular culture, that he has grown into a kind of legend used to scare people and to intimidate those who shudder at being accused of “McCarthyism”. If you ask people about McCarthy, you'll often hear that he used the House Un-American Activities Committee to conduct witch hunts, smearing the reputations of innocent people with accusations of communism, that he destroyed the careers of people in Hollywood and caused the notorious blacklist of screen writers, and so on. None of this is so: McCarthy was in the Senate, and hence had nothing to do with the activities of the House committee, which was entirely responsible for the investigation of Hollywood, in which McCarthy played no part whatsoever. The focus of his committee, the Permanent Subcommittee on Investigations of the Government Operations Committee of the U.S. Senate, was on security policy and enforcement within, first, the State Department and, later, the Signal Corps of the U.S. Army. McCarthy's hearings were not focussed on smoking out covert communists in the government, but rather on investigating why communists and other security risks who had already been identified by investigations by the FBI and their employers' own internal security apparatus remained on the payroll, in sensitive policy-making positions, for years after evidence of their dubious connections and activities was brought to the attention of their employers, in direct contravention of the published security policies of both the Truman and Eisenhower administrations.

Any book about McCarthy published in the present environment must first cut through a great deal of misinformation and propaganda which is simply false on the face of it, but which is accepted as conventional wisdom by a great many people. The author starts by telling the actual story of McCarthy, which is little known and pretty interesting. McCarthy was born on a Wisconsin farm in 1908 and dropped out of junior high school at the age of 14 to help his parents with the farm. At age 20, he entered high school and managed to complete the full four year curriculum in nine months, earning his diploma. Between 1930 and 1935 he worked his way through college and law school, receiving his law degree and being admitted to the Wisconsin bar in 1935. In 1939 he ran for the elective post of circuit judge and defeated a well-known incumbent, becoming, at age 30, the youngest judge in the state of Wisconsin. In 1942, after the U.S. entered World War II following Pearl Harbor, McCarthy, although exempt from the draft due to his position as a sitting judge, resigned from the bench and enlisted in the Marine Corps, being commissioned as a second lieutenant (based upon his education) upon completion of boot camp. He served in the South Pacific as an intelligence officer with a dive bomber squadron, and flew a dozen missions as a tailgunner/photographer, earning the sobriquet “Tail-Gunner Joe”.

While still in the Marine Corps, McCarthy sought the Wisconsin Republican Senate nomination in 1944 and lost, but in 1946 mounted a primary challenge to three-term incumbent senator Robert M. La Follette, Jr., scion of Wisconsin's first family of Republican politics, narrowly defeating him in the primary and then winning the general election in a landslide, with more than 61% of the vote. Arriving in Washington, McCarthy was perceived as a rather undistinguished moderate Republican back-bencher, and garnered little attention from the press.

All of this changed on February 9th, 1950, when he gave a speech in Wheeling, West Virginia, in which he accused the State Department of being infested with communists, and claimed to have in his hand a list of known communists who continued to work at State after their identities had been made known to the Secretary of State. Just what McCarthy actually said in Wheeling remains a matter of controversy to this day, and is covered in gruelling detail in this book. This speech, and encore performances a few days later in Salt Lake City and Reno, catapulted McCarthy onto the public stage, with intense scrutiny in the press and an uproar in Congress, leading to duelling committee investigations: those exploring the charges he made, and those looking into McCarthy himself, precisely what he said, where and when, and how he obtained his information on security risks within the government. Oddly, from the outset, the focus within the Senate and executive branch seemed to be more on the latter than the former, with one inquiry digging into McCarthy's checkbook and his income tax returns and those of members of his family dating back to 1935—more than a decade before he was elected to the Senate.

The content of the hearings chaired by McCarthy is also often misreported and misunderstood. McCarthy was not primarily interested in uncovering Reds and their sympathisers within the government: that had already been done by investigations by the FBI and agency security organisations and duly reported to the executive departments involved. The focus of McCarthy's investigation was why, once these risks were identified, often with extensive documentation covering a period of many years, nothing was done, with those identified as security risks remaining on the job or, in some cases, allowed to resign without any note in their employment file, often to immediately find another post in a different government agency or one of the international institutions which were burgeoning in the postwar years. Such an inquiry was a fundamental exercise of the power of congressional oversight over executive branch agencies, but McCarthy (and other committees looking into such matters) ran into an impenetrable stonewall of assertions of executive privilege by both the Truman and Eisenhower administrations. In 1954, the Washington Post editorialised, “The President's authority under the Constitution to withhold from Congress confidences, presidential information, the disclosure of which would be incompatible with the public interest, is altogether beyond question”. The situational ethics of the legacy press is well illustrated by comparing this Post editorial to those of two decades later, when Nixon asserted the same privilege against a congressional investigation.

Indeed, the entire McCarthy episode reveals how well established, already at the mid-century point, the ruling class government/media/academia axis was. Faced with an assault largely directed at “their kind” (East Coast, Ivy League, old money, creatures of the capital) by an uncouth self-made upstart from the windswept plains, they closed ranks, launched serial investigations and media campaigns, covered up, destroyed evidence, stonewalled, and otherwise aimed to obstruct and finally destroy McCarthy. This came to fruition when McCarthy was condemned by a Senate resolution on December 2nd, 1954. (Oddly, the usual word “censure” was not used in the resolution.) Although McCarthy remained in the Senate until his death at age 48 in 1957, he was shunned in the Senate and largely ignored by the press.

The perspective of half a century later allows a retrospective on the rise and fall of McCarthy which wasn't possible in earlier accounts. Many documents relevant to McCarthy's charges, including the VENONA decrypts of Soviet cable traffic, FBI security files, and agency loyalty board investigations have been declassified in recent years (albeit, in some cases, with lengthy “redactions”—blacked out passages), and the author makes extensive use of these primary sources in the present work. In essence, what they demonstrate is that McCarthy was right: that the documents he sought in vain, blocked by claims of executive privilege, gag orders, cover-ups, and destruction of evidence were, in fact, persuasive evidence that the individuals he identified were genuine security risks who, under existing policy, should not have been employed in the sensitive positions they held. Because the entire “McCarthy era”, from his initial speech to condemnation and downfall, was less than five years in length, and involved numerous investigations, counter-investigations, and re-investigations of many of the same individuals, regarding which abundant source documents have become available, the detailed accounts in this massive book (672 pages in the trade paperback edition) can become tedious on occasion. Still, if you want to understand what really happened at this crucial episode of the early Cold War, and the background behind the defining moment of the era: the conquest of China by Mao's communists, this is an essential source.

In the Kindle edition, the footnotes, which appear at the bottom of the page in the print edition, are linked to reference numbers in the text with a numbering scheme distinct from that used for source references. Each note contains a link to return to the text at the location of the note. Source citations appear at the end of the book and are not linked in the main text. The Kindle edition includes no index.

 Permalink

Pournelle, Jerry. Fires of Freedom. Riverdale, NY: Baen Publishing, [1976, 1980] 2010. ISBN 978-1-4391-3374-3.
This book includes two classic Jerry Pournelle novels which have long been out of print. Baen Publishing is doing yeoman work bringing the back lists of science fiction masters such as Pournelle, Robert Heinlein, and Poul Anderson back to the bookshelves, and this is a very welcome addition to the list. The two novels collected here are unrelated to one another. The first, Birth of Fire, originally published in 1976, follows a gang member who accepts voluntary exile to Mars to avoid a prison sentence on Earth. Arriving on Mars, he discovers a raw frontier society dominated by large Earth corporations which exploit the largely convict labour force. Nobody has to work, but if you don't work, you don't get paid and can't recharge the air medal everybody wears around their neck. If it turns red, or you're caught in public not wearing one, good tax-paying citizens will put the freeloader “outside”—without a pressure suit.

Former gangster Garrett Pittston finds that Mars suits him just fine, and, avoiding the temptations of the big companies, signs on as a farmhand with a crusty Marsman who goes by the name of Sarge. At Windhome, Sarge's station, Garrett learns how the Marsmen claw an independent existence from the barren soil of Mars, and also how the unyielding environment has shaped their culture, in which one's word is a life or death bond. Inevitably, this culture comes into conflict with the nanny state of the colonial administration, which seeks to bring the liberty-loving Marsmen under its authority by taxing and regulating them out of existence.

Garrett finds himself in the middle of an outright war of independence, in which the Marsmen use their intimate knowledge of the planet as an ally against what, on the face of it, would appear to be overwhelming superiority of their adversaries. Garrett leads a bold mission to obtain the game-changing resource which will allow Mars to deter reprisals from Earth, and in doing so becomes a Marsman in every way.

Pournelle paints this story with spare, bold brush strokes: all non-essentials are elided, and the characters develop and events transpire with little or no filler. If Kim Stanley Robinson had told this story, it would probably have occupied two thousand pages and have readers dying of boredom or old age before anything actually happened. This book delivers an action story set in a believable environment and a society which has been shaped by it. Since it was originally published in the year of the Viking landings on Mars, there are a few things it gets wrong, but there are a great many others which are spot-on, and in some cases prophetic.

The second novel in the book, King David's Spaceship, is set in the CoDominium universe in which the classic novel The Mote in God's Eye takes place. The story occurs contemporaneously with The Mote, during the Second Empire of Man, when imperial forces from the planet Sparta are re-establishing contact with worlds of the original Empire of Man which have been cut off from one another, with many reverting to primitive levels of technology and civilisation in the aftermath of the catastrophic Secession Wars.

When Imperial forces arrive on Prince Samual's World, its civilisation has recovered from disastrous post-collapse warfare and plague to around the technological level of 19th century Earth. King David of the Kingdom of Haven, who hopes to unify the planet under his rule, forms an alliance with the Empire and begins to topple rivals and petty kingdoms while pacifying the less civilised South Continent. King David's chief of secret police learns, from an Imperial novel that falls into his hands, that the Empire admits worlds on different bases depending upon their political and technological evolution. Worlds which have achieved planetary government and an indigenous space travel capability are admitted as “classified worlds”, which retain a substantial degree of autonomy and are represented in one house of the Imperial government. Worlds which have not achieved these benchmarks are classed as colonies, with their local governmental institutions abolished and replaced by rule by an aristocracy of colonists imported from other, more developed planets.

David realises that, with planetary unification rapidly approaching, his days are numbered unless somehow he can demonstrate some kind of space flight capability. But the Empire enforces a rigid technology embargo against less developed worlds, putatively to allow for their “orderly development”, but at least as much to maintain the Navy's power and enrich the traders, who are a major force in the Imperial capital. Nathan McKinnie, formerly a colonel in the service of Orleans, a state whose independence was snuffed out by Haven with the help of the Navy, is recruited by the ruthless secret policeman Malcolm Dougal to lead what is supposed to be a trading expedition to the world of Makassar, whose own civilisation is arrested in a state like medieval Europe, but which is home to a “temple” said to contain a library of documents describing First Empire technology which the locals do not know how to interpret. McKinnie's mission is to gain access to the documents, discover how to build a spaceship with the resources available on Haven, and spirit this information back to his home world under the eyes of the Navy and Imperial customs officials.

Arriving on Makassar, McKinnie finds that things are even more hopeless than he imagined. The temple is in a city remote from where he landed, reachable only by crossing a continent beset with barbarian hordes, or a sea passage through a pirate fleet which has essentially shut down seafaring on the planet. Using no advanced technology apart from the knowledge in his head, he outfits a ship and recruits and trains a crew to force the passage through the pirates. When he arrives at Batav, the site of the temple, he finds it besieged by Islamic barbarians (some things never change!), who are slowly eroding the temple's defenders by sheer force of numbers.

Again, McKinnie needs no new technology, but simply knowledge of the Western way of war—in this case recruiting from the disdained dregs of society and training a heavy infantry force, which he deploys along with a newly disciplined heavy cavalry in tactical doctrine with which Cæsar would have been familiar. Having saved the temple, he forms an alliance with representatives of the Imperial Church which grants him access to the holy relics, a set of memory cubes containing the collected knowledge of the First Empire.

Back on Prince Samual's World, a Los Alamos-style research establishment quickly discovers that they lack the technology to read the copies of the memory cubes they've brought back, and that the technology of even the simplest Imperial landing craft is hopelessly out of reach of their knowledge and manufacturing capabilities. So they adopt a desperate fall-back plan, and take a huge gamble to decide the fate of their world.

This is superb science fiction which combines an interesting premise, the interaction of societies at very different levels of technology and political institutions, classical warfare at sea and on land, and the difficult and often ruthless decisions which must be made when everything is at stake (you will probably remember the case of the Temple swordsmen long after you close this book). It is wonderful that these excellent yarns are back in print after far too long an absence.

 Permalink

Brandon, Craig. The Five-Year Party. Dallas: BenBella Books, 2010. ISBN 978-1-935251-80-4.
I suspect that many readers of Tom Wolfe's I Am Charlotte Simmons (October 2010) whose own bright college days are three or four decades behind them will conclude that Wolfe embroidered quite a bit upon the contemporary campus scene in the interest of telling an entertaining tale. In this book, based upon the author's twelve years of experience teaching journalism at Keene State College in New Hampshire and extensive research, you'll get a factual look at what goes on at “party schools”, which have de-emphasised education in favour of “retention”—in other words, extracting the maximum amount of money from students and their families, and burdening them with crushing loans which make it impossible for graduates to accumulate capital in those early years which, due to compounding, are so crucial. In fact, Charlotte Simmons actually paints a better picture of college life than that which awaits most freshmen arriving on campus: Charlotte's fictional Dupont University was an élite school, with at least one Nobel Prize winner on the faculty, and although corrupted by its high-profile athletic program, enforced genuine academic standards for the non-athlete student body and had real consequences for failure to perform.

Not so at party schools. First of all, let's examine what these “party schools” are. What they're not is the kind of small, private, liberal arts college parodied in Animal House. Instead, the lists of top party schools compiled annually by Playboy and the Princeton Review are overwhelmingly dominated by huge, taxpayer-supported, state universities. In the most recent set of lists, out of a total of twenty top party schools, only two were private institutions. Because of their massive size, state party schools account for a large fraction of the entire U.S. college enrollment, and hence are representative of college life for most students who do not enter the small number of élite schools which are feeders for the ruling class.

As with most “public services” operated by governments, things at these state institutions of “higher education” are not what they appear to be on the surface, and certainly not what parents expect when they send their son or daughter off on what they have been led to believe is the first step toward a promising career. The first lie is in the very concept of a “four-year college”: with today's absurd relaxation of standards for dropping classes, lighter class loads, and “retention” taking priority over selecting out those unsuited to instruction at the college level, only a minority of students finish in four years, around half take more than five years to graduate, and only about 54% graduate even in six. Apart from the wasted years of these students' lives, this means the price tag, and the corresponding debt burden, of a college education is 25%, 50%, or even more above the advertised sticker price, with the additional revenue going into the college's coffers and providing no incentive whatsoever to move students through the system more rapidly.
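The percentages above are simple proportional arithmetic: each extra year adds another full year's cost on top of the advertised four-year price. A minimal sketch (the per-year cost is a made-up placeholder for illustration, not a figure from the book):

```python
# Illustrative arithmetic only: how extra years inflate the total cost of a
# degree relative to the advertised four-year "sticker price". The per-year
# cost is a hypothetical placeholder; the ratios are what matter.
def total_cost(years, cost_per_year=20_000):
    """Total cost of a degree taking the given number of years."""
    return years * cost_per_year

four_year = total_cost(4)
for years in (5, 6):
    extra = total_cost(years) / four_year - 1
    print(f"{years} years: {extra:.0%} above the four-year price")
    # prints "5 years: 25% above the four-year price"
    # then   "6 years: 50% above the four-year price"
```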

But the greatest scandal and fraud is not the binge drinking, widespread drug use, casual sex, or high rates of serious crime covered up by a campus disciplinary system more interested in preserving the reputation of the institution than in weeding out predators among the student body (although all of these are discussed in depth here), but rather the fact that at these gold-plated diploma mill feedlots, education has been de-emphasised to the point of being entirely optional. Indeed, only about one fifth of university budgets goes to instruction; all the rest disappears into the fat salaries of endlessly proliferating legions of administrators, country-club-like student amenities, and ambitious building programs. Classes have been dumbed down to the extent that it is possible to navigate a “slacker track” to a bachelor's degree without ever taking a single course more intellectually demanding than what was once considered junior high level, or without being able to read, comprehend, and write the English language with high school proficiency. Grade inflation has resulted in more than 90% of all grades being either A or B, with a B expected by students as their reward simply for showing up; the consequence is that grade reports to parents and transcripts for prospective employers have become meaningless and impossible to evaluate.

The National Survey of Student Engagement finds that only about 10% of U.S. university students are “fully engaged”—actually behaving as college students were once expected to in order to make the most of the educational resources available to them. Twice that percentage are “fully disengaged”: just there to party or pass the time, while the remainder, though not full-time slackers, aren't really interested in learning.

Now these are very interesting numbers, and they lead me to a conclusion which the author never explores. Prior to the 1960s, it was assumed that only a minority of the highest-ranking secondary school students would go on to college. With the mean IQ of bachelor's degree holders ranging from 110 to 120, they necessarily came from around the top 10 to 15 percent of the population by intelligence. But now, the idea seems to be that everybody should get a “college education”, and indeed today in the U.S. around 70% of high school graduates go on to some kind of college program (although a far smaller fraction ever graduate). Now clearly, a college education which was once suited to the most intelligent 10% of the population is simply not going to work for the fat middle of the bell curve, which characterises the present-day college population. Looked at this way, the party school seems to be an inevitable consequence. If society has deemed it valuable that all shall receive a “college education”, then it is necessary to redefine “college education” as something the average citizen can accomplish and receive the requisite credential. Hence the elimination, or optional status, of actual learning, evaluation of performance, and useful grades. With universities forced to compete on their attractiveness to “the customer”—the students—they concentrate on amenities and lax enforcement of codes of conduct in order to keep those tuition dollars coming in for four, five, six, or however many years it takes.
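That “top 10 to 15 percent” figure can be sanity-checked against the conventional model of IQ scores as normally distributed with mean 100 and standard deviation 15. This little sketch is my own back-of-the-envelope check, not a calculation from the book:

```python
import math

# Fraction of the population above a given IQ threshold, assuming IQ scores
# follow a normal distribution with mean 100 and standard deviation 15
# (the conventional scale). Uses the complementary error function to get
# the upper tail of the normal distribution.
def fraction_above(iq, mean=100.0, sd=15.0):
    """Upper-tail probability P(IQ > iq) under the normal model."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

for iq in (110, 115, 120):
    print(f"IQ > {iq}: top {fraction_above(iq):.1%} of the population")
# Thresholds of 110, 115, and 120 land at roughly the top 25%, 16%, and 9%,
# so a cohort whose mean IQ is 110-120 is indeed drawn from roughly the
# top 10-15% of the population, as the review asserts.
```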

A number of observers have wondered whether the next bubble to pop will be higher education. Certainly, the parallels are obvious: an overbuilt industry, funded by unsustainable debt, delivering a shoddy product, at a cost which has been growing much faster than inflation or the incomes of those who foot the bills. This look inside the ugly mass education business only reinforces that impression, since another consequence of a bubble is the normalisation and acceptance of absurdity by those inside it. Certainly one indication the bubble may be about to pop is that employers have twigged to the fact that a college diploma and glowing transcript from one of these rackets the author calls “subprime colleges” is no evidence whatsoever of a job applicant's literacy, knowledge, or work ethic, which explains why so many alumni of these programs are living in their parents' basements today, getting along by waiting tables or delivering pizza, while they wait for that lucky break they believe they're entitled to. This population is only likely to increase as employers in need of knowledge workers discover they can outsource those functions to Asia, where university degrees are much more rare but actually mean something.

Élite universities, of course, continue to provide excellent educational opportunities for the small number of students who make it through the rigorous selection process to get there. It's also possible for a dedicated and fully engaged student to get a pretty good education at a party school, as long as they manage to avoid the distractions, select challenging courses and dedicated professors, and don't have the bad fortune to suffer assault, rape, arson, or murder by the inebriated animals that outnumber them ten to one. But then it's up to them, after graduating, to convince employers that their degree isn't just a fancy credential, but rather something they've genuinely worked for.

Allan Bloom observed that “every age is blind to its own worst madness”, an eternal truth of which anybody who has been inside a bubble becomes painfully aware, usually after it unexpectedly pops. For those outside the U.S. education scene, this book provides a look into a bizarre mirror universe which is the daily reality for many undergraduates today. Parents planning to send their progeny off to college need to know this information, and take to heart the author's recommendations of how to look under the glossy surface and discover the reality of the institution to which their son or daughter's future will be entrusted.

In the Kindle edition, end notes are linked in the text, but the index contains just a list of terms with no links to where they appear and is consequently completely useless.

 Permalink

December 2010

Davies, Paul. The Eerie Silence. New York: Houghton Mifflin Harcourt, 2010. ISBN 978-0-547-13324-9.
The year 2009 marked the fiftieth anniversary of the Nature paper by Cocconi and Morrison which marked the beginning of the modern era in the search for extraterrestrial intelligence (SETI). They argued that the optimal channel for technological civilisations in other star systems wishing to establish contact with those nearby in the galaxy would be narrowband microwave transmissions, perhaps pulse modulated in a pattern that would distinguish them from natural sources. Further, they demonstrated that radio telescopes existing at the time (which were modest compared to those already planned for construction in the near future) would suffice to send and receive such a signal over distances of tens of light years. The following year, Frank Drake used a 26 metre dish at the National Radio Astronomy Observatory to search for such signals from two nearby sun-like stars in Project Ozma.

Over the succeeding half-century, SETI has been an off and on affair, with a variety of projects with different search strategies. Since the 1990s a low level of SETI activity has been maintained, both using radio telescopes to conduct targeted searches and piggybacking on other radio astronomy observations to conduct a sky survey for candidate signals. There is still a substantial “giggle factor” associated with “listening for ET”, and funding and allocation of telescope time for SETI is minuscule compared to other radio astronomy research. SETI has been a direct beneficiary of the exponential growth in computing power available for a given cost, and now employs spectrum analysers able to monitor millions or billions of narrowband channels simultaneously, largely eliminating the original conundrum of SETI: guessing the frequency on which the aliens would be transmitting. The Allen Telescope Array, now under construction, will increase the capability of SETI observations by orders of magnitude, and will continue to benefit from progress in microelectronics and computing.

The one thing that all SETI projects to date have in common is that they haven't found anything. Indeed, the SETI enterprise, taken as a whole, may be the longest-pursued unsuccessful search for a phenomenon in the entire history of science. The reason people don't abandon the enterprise in disappointment is that detection of a signal from an intelligent extraterrestrial source would have profound consequences for understanding the human species' place in the cosmos, the prospects for long-term survival of technological civilisations, and potential breakthroughs in all fields of knowledge if an advanced species shares their knowledge with beginners barely evolved from apes. Another reason the searchers persist is the knowledge that they've barely scratched the surface of the “search space”, having only examined a minuscule fraction of potential targets in the galaxy, and a limited range of potential frequencies and forms of modulation a communicating civilisation might employ to contact others in the galaxy. Finally, continued advances in electronics and computing are making it possible to broaden the scope of the search at a rapidly increasing rate with modest budgets.

Still, after fifty years of searching (intermittently) and finding nothing, it's worth taking a step back and thinking about what that result might mean. In this book, the author revisits the history of SETI programs to date, the assumptions and logic upon which the targets they seek were based, and argues that while conventional microwave searches for narrowband beacons should continue, it is time for a “new SETI”, based on the original mission—search for extraterrestrial intelligence, not just a search for narrowband microwave signals. “Old SETI” was very much based on assumptions about the properties of potential communicating civilisations grounded in the technologies of the 1950s. A great deal has happened since then technologically (for example, the Earth, as seen from deep space, has increasingly grown “radio dark” as high-power broadcast transmitters have been supplanted by optical fibres, cable television systems, and geosynchronous communication satellites which radiate little energy away from the Earth).

In 1959, the pioneers contemplating a SETI program based on the tools of radio astronomy mostly assumed that the civilisations whose beacons they hoped to discover would be biological organisms much like humans or their descendants, but endowed with the scientific and technological capabilities accumulated over a much longer period of time. (For statistical reasons, it is vanishingly improbable that humans would make contact with another intelligent species at a comparable state of development, since humans have had the capability to make contact for less than a century, and if other civilisations are comparably short-lived there will never be more than one in the galaxy at any given time. Hence, any signal we receive will necessarily be from a sender whose own technological civilisation is much older than our own and presumably more advanced and capable.) But it now appears probable that, unless human civilisation collapses, stagnates, or is destroyed by barbarism (I put the collective probability of these outcomes at around fifty-fifty), or some presently unenvisioned constraint puts a lid on the exponential growth of computing and communication capability, before long, probably within this century, our species will pass through a technological singularity which will witness the emergence of artificial intelligence with intellectual capabilities on the order of 10¹⁰ to 10¹⁵ times that of present-day humans. Biological humans may continue to exist (after all, the evolution of humans didn't impact the dominance of the biosphere by bacteria), but they will no longer determine the course of technological evolution on this planet and beyond. Asking a present-day human to comprehend the priorities and capabilities of one of these successor beings is like asking a butterfly to understand Beethoven's motivations in writing the Ninth Symphony.

And yet, unless we're missing something terribly important, any aliens we're likely to contact are overwhelmingly probable to be such forbidding machine intelligences, not Romulans, Klingons, Ferengi, or even the Borg. Why would such super beings try to get our attention by establishing interstellar beacons? What would they have to say if they did contact us? Consider: how much effort does our own species exert in making contact with or carrying on a dialogue with yeast? This is the kind of gap which will exist between humans and the products of millions of years of teleological development.

And so, the author argues, while keeping a lookout for those elusive beacons (and also ultra-short laser pulses, which are an alternative mechanism of interstellar signalling unimagined when “old SETI” was born), we should also cast the net much wider, looking for the consequences of an intelligence whose motivations and capabilities we cannot hope to envision. Perhaps they have seeded the galaxy with self-reproducing von Neumann probes, one of which is patiently orbiting in the asteroid belt or at one of the Earth-Sun Lagrangian points waiting to receive a ping from us. (And speaking of that, what about those long delayed echoes anyway?) Maybe their wave of exploration passed by the solar system more than three billion years ago and seeded the Earth with the ancestral cell from which all terrestrial life is descended. Or maybe they left a different kind of life, perhaps in their garbage dumps, which lives on as a “shadow biosphere” to this day, undetected because our surveys for life don't look for biochemistry which is different from that of our own. Heck, maybe they even left a message!

We should also be on the lookout for things which don't belong, like discrepancies in isotope abundances which may be evidence of alien technology in distant geological time, or things which are missing. Where did all of those magnetic monopoles which should have been created in the Big Bang go, anyway? Or maybe they've moved on to some other, richer domain in the universe. According to the consensus model of cosmology, we have no idea whatsoever what more than 95% of the universe is made of. Maybe they've transcended their juvenile baryonic origins and decamped to the greener fields we call, in our ignorance, “dark matter” and “dark energy”. While we're pointing antennas at obsolete stars in the sky, maybe they're already here (and everywhere else), not as UFOs or alien invaders, but super-intelligences made of structures which interact only gravitationally with the thin scum of baryonic matter on top of the rich ocean of the universe. Maybe their galactic Internet traffic is already tickling the mirrors of our gravitational wave detectors at intensities we can't hope to detect with our crude technologies.

Anybody who's interested in these kinds of deep questions about some of the most profound puzzles about our place in the universe will find this book a pure delight. The Kindle edition is superbly produced, with high-resolution colour plates which display beautifully on the iPad Kindle reader, and that rarest and most welcome of attributes in an electronic book, an index which is properly linked to the text. The Kindle edition is, however, more expensive than the hardcover as of this writing.

 Permalink

Hiltzik, Michael. Colossus. New York: Free Press, 2010. ISBN 978-1-4165-3216-3.
This book, subtitled “Hoover Dam and the Making of the American Century”, chronicles the protracted, tangled, and often ugly history which led up to the undertaking, in the depths of the Great Depression, of the largest single civil engineering project ever attempted in the world up to that time, its achievement ahead of schedule and only modestly above budget, and its consequences for the Colorado River basin and the American West, which it continues to profoundly influence to this day.

Ever since the 19th century, visionaries, ambitious politicians, builders and engineers, and more than a few crackpots and confidence men had dreamt of and promoted grand schemes to harness the wild rivers of the American southwest, using their water to make the barren deserts bloom and opening up a new internal frontier for agriculture and (with cheap hydroelectric power) industry. Some of the schemes, and their consequences, were breathtaking. Consider the Alamo Canal, dug in 1900 to divert water from the Colorado River to irrigate the Imperial Valley of California. In 1905, the canal, already silted up by the water of the Colorado, overflowed, creating a flood which submerged more than five hundred square miles of lowlands in southern California, creating the Salton Sea, which is still there today (albeit smaller, due to evaporation and lack of inflow). Just imagine how such an environmental disaster would be covered by the legacy media today. President Theodore Roosevelt, considered a champion of the environment and the West, declined to provide federal assistance to deal with the disaster, leaving it up to the Southern Pacific Railroad, who had just acquired title to the canal, to, as the man said, “plug the hole”.

Clearly, the challenges posed by the notoriously fickle Colorado River, known for extreme floods, heavy silt, and a tendency to jump its banks and establish new watercourses, would require a much more comprehensive and ambitious solution. Further, such a solution would require the assent of the seven states within the river basin: Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming, among the sparsely populated majority of which there was deep distrust that California would exploit the project to loot them of their water for its own purposes. Given the invariant nature of California politicians and subsequent events, such suspicion was entirely merited.

In the 1920s, an extensive sequence of negotiations and court decisions led to the adoption of a compact between the states (actually, under its terms, only six states had to approve it, and Arizona did not until 1944). Commerce Secretary Herbert Hoover played a major part in these negotiations, although other participants dispute that his rôle was as central as he claimed in his memoirs. In December 1928, President Coolidge signed a bill authorising construction of the dam and a canal to route water downstream, and Congress appropriated US$165 million for the project, the largest single federal appropriation in the nation's history to that point.

What was proposed gave pause even to the master builders who came forward to bid on the project: an arch-gravity dam 221 metres high, 379 metres long, and 200 metres wide at its base. Its construction would require 3.25 million cubic yards (2.48 million cubic metres) of concrete, and would be, by a wide margin, the largest single structure ever built by the human species. The dam would create a reservoir containing 35.2 cubic kilometres of water, with a surface area of 640 square kilometres. These kinds of numbers had to bring a sense of “failure is not an option” even to the devil-may-care roughneck engineers of the epoch. Because, if for no other reason, they had a recent example of how the devil might care in the absence of scrupulous attention to detail. Just months before the great Colorado River dam was approved, the St. Francis Dam in California, built with the same design proposed for the new dam, suddenly failed catastrophically, killing more than 600 people downstream. William Mulholland, an enthusiastic supporter of the Colorado dam, had pronounced the St. Francis Dam safe just hours before it failed. The St. Francis Dam collapse was the worst civil engineering failure in American history and arguably remains so to date. The consequences of a comparable failure of the new dam were essentially unthinkable.

The contract for construction was won by a consortium of engineering firms called the “Six Companies” including names which would be celebrated in twentieth century civil engineering including Kaiser, Bechtel, and Morrison-Knudsen. Work began in 1931, as the Depression tightened its grip upon the economy and the realisation sank in that a near-term recovery was unlikely to occur. With this project one of the few enterprises hiring, a migration toward the job site began, and the labour market was entirely tilted toward the contractors. Living and working conditions at the outset were horrific, and although the former were eventually ameliorated once the company town of Boulder City was constructed, the rate of job-related deaths and injuries remained higher than those of comparable projects throughout the entire construction.

Everything was on a scale which dwarfed the experience of earlier projects. If the concrete for the dam had been poured as one monolithic block, it would have taken more than a century to cure, and the heat released in the process would have caused it to fracture into rubble. So the dam was built of more than thirty thousand blocks of concrete, each about fifty feet square and five feet high, cooled as it cured by chilled water from a refrigeration plant running through more than six hundred miles of cooling pipes embedded in the blocks. These blocks were then cemented into the structure of the dam with grout injected between the interlocking edges of adjacent blocks. And this entire structure had to be engineered to last forever and never fail.

At the ceremony marking the start of construction, Secretary of the Interior Ray Wilbur surprised the audience by referring to the project as “Hoover Dam”—the first time a comparable project had been named after a sitting president, which many thought unseemly, notwithstanding Hoover's involvement in the interstate compact behind the project. After Hoover's defeat by Roosevelt in 1932, the new administration consistently referred to the project as “Boulder Dam” and so commemorated it in a stamp issued on the occasion of the dam's dedication in September 1935. This was a bit curious as well, since the dam was actually built in Black Canyon, the geological foundations in Boulder Canyon having been found unsuitable to anchor the structure. For years thereafter, Democrats called it “Boulder Dam”, while Republican stalwarts insisted on “Hoover Dam”. In 1947, newly-elected Republican majorities in the U.S. Congress passed a bill, signed by President Truman, officially naming the structure after Hoover, and so it has remained ever since.

This book provides an engaging immersion in a very different age, in which economic depression was tempered by an unshakable confidence in the future and the benefits to flow from continental scale collective projects, guided by wise men in Washington and carried out by roughnecks risking their lives in the savage environment of the West. The author discusses whether such a project could be accomplished today and concludes that it probably couldn't. (Of course, since all of the rivers with such potential for irrigation and power generation have already been dammed, the question is largely moot, but it is relevant for grand scale projects such as solar power satellites, ocean thermal energy conversion, and other engineering works of comparable transformative consequences for the present-day economy.) We have woven such a web of environmental constraints, causes for litigation, and a tottering tower of debt that a project such as Hoover Dam, without which the present-day U.S. southwest would not exist in its present form, could likely not be carried out today, and certainly not completed ahead of schedule as the original was. Those who regard such grand earthworks as hubristic folly (to which the author tips his hat in the final chapters) might well reflect that history records the achievements of those who have grand dreams and bring them into existence, not those who sputter out their lives in courtrooms or on trading floors.

 Permalink

Thor, Brad. Path of the Assassin. New York: Pocket Books, 2003. ISBN 978-0-7434-3676-2.
This, the second in the author's Scot Harvath saga, which began with The Lions of Lucerne (October 2010), starts with Agent Harvath, detached from the Secret Service and charged with cleaning up loose ends from events in the previous book, finding himself stalked and repeatedly preempted by a mysterious silver-eyed assassin who eliminates those linked to the plot he's investigating before they can be captured. Meanwhile, the Near East is careening toward war after a group calling itself the “Hand of God” commits atrocities upon Muslim holy sites, leaving a signature including the Star of David and the message “Terror for Terror”. Although the Israeli government denies any responsibility, there is substantial sympathy for these attacks within Israel, and before long reprisal attacks are mounted and raise tensions to the breaking point.

Intelligence indicates that the son of Abu Nidal has re-established his father's terrorist network and enlisted a broad coalition of Islamic barbarians in its cause. This is confirmed when a daring attack is mounted against a publicity stunt flight from the U.S. to Egypt which Harvath is charged to defeat.

And now it gets a little weird. We are expected to believe that, in just weeks or months, a public relations agent from Chicago, Meg Cassidy, whose spontaneous bravery brought down the hijackers in Cairo, could be trained to become a fully-qualified Special Forces operative, not only with the physical stamina which is found only in the best of the best, but also knowledge of a wide variety of weapons systems and technologies which veteran snake eaters spend years acquiring in the most demanding of conditions. This is as difficult to believe as the premise of G.I. Jane, and arguably more so, since in that fantasy the woman in question actually wanted to become a commando.

This is a pretty good thriller, but you get the sense that Thor is still mastering the genre in this novel. He does realise that in the first novel he backed his protagonist into a corner by making him a Secret Service agent and works that out with the aid of a grateful president who appoints him to a much more loose cannon position in “Homeland Security”, which should make all of the dozens of lovers of liberty remaining in the United States shudder at that forbidding phrase.

 Permalink

Cordain, Loren. The Paleo Diet. Hoboken, NJ: John Wiley & Sons, 2002. ISBN 978-0-470-91302-4.
As the author of a diet book, I don't read many self-described “diet books”. First of all, I'm satisfied with the approach to weight management described in my own book; second, I don't need to lose weight; and third, I find most “diet books” built around gimmicks with little justification in biology, prone to prescribing regimes few people are likely to stick with long enough to achieve their goals. What motivated me to read this book was a talk by Michael Rose at the First Personalized Life Extension Conference in which he mentioned the concept and this book not in conjunction with weight reduction but rather with the extension of healthy lifespan in humans. Rose's argument, which is grounded in evolutionary biology and paleoanthropology, is somewhat subtle and well summarised in this article.

At the core of Rose's argument and that of the present book is the observation that while the human genome is barely different from that of human hunter-gatherers a million years ago, our present-day population has had at most 200 to 500 generations to adapt to the very different diet which emerged with the introduction of agriculture and animal husbandry. From an evolutionary standpoint, this is a relatively short time for adaptation and, here is the key thing (argued by Rose, but not in this book), even if modern humans had evolved adaptations to the agricultural diet (as in some cases they clearly have, lactose tolerance persisting into adulthood being one obvious example), those adaptations will not, from the simple mechanism of evolution, select out diseases caused by the new diet which only manifest themselves after the age of last reproduction in the population. So, if eating the agricultural diet (not to mention the horrors we've invented in the last century) were the cause of late-onset diseases such as cancer, cardiovascular problems, and type 2 diabetes, then evolution would have done nothing to select out the genes responsible for them, since these diseases strike most people after the age at which they've already passed on their genes to their children. Consequently, while it may be fine for young people to eat grain, dairy products, and other agricultural era innovations, folks over the age of forty may be asking for trouble by consuming foods which evolution hasn't had the chance to mold their genomes to tolerate. People whose ancestors shifted to the agricultural lifestyle much more recently, including many of African and aboriginal descent, have little or no adaptation to the agricultural diet, and may experience problems even earlier in life.

In this book, the author doesn't make these fine distinctions but rather argues that everybody can benefit from a diet resembling that which the vast majority of our ancestors—hunter-gatherers predating the advent of sedentary agriculture—ate, and to which evolution has molded our genome over that long expanse of time. This is not a “diet book” in the sense of a rigid plan for losing weight. Instead, it is a manual for adopting a lifestyle, based entirely upon non-exotic foods readily available at the supermarket, which approximates the mix of nutrients consumed by our distant ancestors. There are the usual meal plans and recipes, but the bulk of the book is a thorough survey, with extensive citations to the scientific literature, of what hunter-gatherers actually ate, the links scientists have found between the composition of the modern diet and the emergence of “diseases of civilisation” among populations that have transitioned to it in historical times, and the evidence for specific deleterious effects of major components of the modern diet such as grains and dairy products.

Not to over-simplify, but you can go a long way toward the ancestral diet simply by going to the store with an “anti-shopping list” of things not to buy, principally:

  • Grain, or anything derived from grains (bread, pasta, rice, corn)
  • Dairy products (milk, cheese, butter)
  • Fatty meats (bacon, marbled beef)
  • Starchy tuber crops (potatoes, sweet potatoes)
  • Salt or processed foods with added salt
  • Refined sugar or processed foods with added sugar
  • Oils with a high omega 6 to omega 3 ratio (safflower, peanut)

And basically, that's it! Apart from the list above you can buy whatever you want, eat it whenever you like in whatever quantity you wish, and the author asserts that if you're overweight you'll soon see your weight dropping toward your optimal weight, a variety of digestive and other problems will begin to clear up, you'll have more energy and a more consistent energy level throughout the day, and that you'll sleep better. Oh, and your chances of contracting cancer, diabetes, or cardiovascular disease will be dramatically reduced.

In practise, this means eating a lot of lean meat, seafood, fresh fruit and fresh vegetables, and nuts. As the author points out, even if you have a mound of cooked boneless chicken breasts, broccoli, and apples on the table before you, you're far less likely to pig out on them compared to, say, a pile of doughnuts, because the natural foods don't give you the immediate blood sugar hit the highly glycemic processed food does. And even if you do overindulge, the caloric density in the natural foods is so much lower your jaw will get tired chewing or your gut will bust before you can go way over your calorie requirements.

Now, even if the science is sound (there are hundreds of citations of peer reviewed publications in the bibliography, but then nutritionists are forever publishing contradictory “studies” on any topic you can imagine, and in any case epidemiology cannot establish causation) and the benefits of adopting this diet are as immediate, dramatic, and important for long-term health as claimed, a lot of people are going to have trouble with what is recommended here. Food is a lot more to humans and other species (as anybody who's had a “picky eater” cat can testify) than just molecular fuel and construction material for our bodies. Our meals nourish the soul as well as the body, and among humans shared meals are a fundamental part of our social interaction which evolution has doubtless had time to write into our genes. If you go back and look at that list of things not to eat, you'll probably discover that just about any “comfort food” you cherish runs afoul of one or more of the forbidden ingredients. This means that contemplating the adoption of this diet as a permanent lifestyle change can look pretty grim, unless or until you find suitable replacements that thread among the constraints. The recipes presented here are interesting, but still come across to me (not having tried them) as pretty Spartan. And recall that even Spartans lived a pretty sybaritic lifestyle compared to your average hunter-gatherer band. But, hey, peach fuzz is entirely cool!

The view of the mechanics of weight loss and gain and the interaction between exercise and weight reduction presented here is essentially 100% compatible with my own in The Hacker's Diet.

This was intriguing enough that I decided to give it a try starting a couple of weeks ago. (I have been adhering, more or less, to the food selection guidelines, but not the detailed meal plans.) The results so far are encouraging but, at this early date, inconclusive. The most dramatic effect was an almost immediate (within the first three days) crash in my always-pesky high blood pressure. This may be due entirely to putting away the salt shaker (an implement of which I have been inordinately fond since childhood), but whatever the cause, it's taken about 20 points off the systolic and 10 off the diastolic, throughout the day. Second, I've seen a consistent downward bias in my weight. Now, as I said, I didn't try this diet to lose weight (although I could drop a few kilos and still be within the target band for my height and build, and wouldn't mind doing so). In any case, these are short-term results and may include transient adaptation effects. I haven't been hungry for a moment nor have I experienced any specific cravings (except the second-order kind for popcorn with a movie). It remains to be seen what will happen when I next attend a Swiss party and have to explain that I don't eat cheese.

This is a very interesting nutritional thesis, backed by a wealth of impressive research of which I was previously unaware. It flies in the face of much of the conventional wisdom on diet and nutrition, and yet viewed from the standpoint of evolution, it makes a lot of sense. You will find the case persuasively put here and perhaps be tempted to give it a try.

 Permalink

Flynn, Vince. American Assassin. New York: Atria Books, 2010. ISBN 978-1-4165-9518-2.
This is the eleventh novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. While the first ten books chronicled events in sequence, the present volume returns to Rapp's origins as an independent assassin for, but not of (officially, at least) the CIA. Here, we revisit the tragic events which predisposed him to take up his singular career, his recruitment by rising anti-terrorist “active measures” advocate Irene Kennedy, and his first encounters with covert operations mastermind Thomas Stansfield.

A central part of the story is Rapp's training at the hands of the eccentric, misanthropic, paranoid, crusty, profane, and deadly in the extreme Stan Hurley, to whom Rapp has to prove, in the most direct of ways, that he isn't a soft college boy recruited to do the hardest of jobs. While Hurley is an incidental character in the novels covering subsequent events, he is centre stage here, and Mitch Rapp fans will delight in getting to know him in depth, even if they might not be inclined to spend much time with the actual man if they encountered him in real life.

Following his training, Rapp deploys on his first mission and immediately demonstrates his inclination to be a loose cannon, taking advantage of opportunities as they present themselves and throwing carefully scripted and practiced plans out the window at the spur of the moment. This brings him into open conflict with Hurley, but elicits a growing admiration from Stansfield, who begins to perceive that he may have finally found a “natural”.

An ambitious mission led by Hurley to deny terrorists their financial lifeblood and bring their leaders out into the open goes horribly wrong in Beirut when Hurley and another operative are kidnapped in broad daylight and subjected to torture in one of the most harrowing scenes in all the literature of the thriller. Hurley, although getting on in years for a field operative, proves “tougher than nails” (you'll understand after you read the book) and a master at getting inside the heads of his abductors and messing with them, but ultimately it's up to Rapp, acting largely alone, adopting a persona utterly unlike his own, and risking everything on the hope of an opportunity, to come to the rescue.

I wasn't sure how well a Rapp novel set in the context of historical events (Beirut in the early 1990s) would work, but in this case Flynn pulls it off magnificently. If you want to read the Rapp novels in story line sequence, this is the place to start.

 Permalink

Burns, Jennifer. Goddess of the Market. New York: Oxford University Press, 2009. ISBN 978-0-19-532487-7.
Few modern writers have inspired such passion among readers, disciples, enemies, and critics as Ayn Rand, who built an entire philosophical system founded on reason, insisted that even emotion was ultimately an expression of rational thought which could be arrived at from first principles, and whose influence extends across fields ranging from literature, politics, philosophy, religion, architecture, and music to economics and human relationships. Her two principal novels, The Fountainhead and Atlas Shrugged (April 2010), remain among the best selling fiction titles more than half a century after their publication, with in excess of ten million copies sold. More than half a million copies of Atlas Shrugged were sold in 2009 alone.

For all the commercial success of her works, which made this refugee from the Soviet Union, writing in a language she barely knew when she arrived in the United States, wealthy before her fortieth birthday, her work was generally greeted with derision by the literary establishment, reviewers in major newspapers, and academics. By the time Atlas Shrugged was published in 1957, she saw herself primarily as the founder of an all-encompassing philosophical system she named Objectivism, and her fiction as a means to demonstrate the validity of her system and communicate it to a broad audience. Academic philosophers, for the most part, did not even reject her work but simply ignored it, deeming it unworthy of their consideration. And Rand did not advance her cause by refusing to enter into the give and take of philosophical debate, insisting instead that her system was self-evidently correct and had to be accepted as a package deal with no modifications.

As a result, she did not so much attract followers as disciples, who looked to her words as containing the answer to all of their questions, and whose self-worth was measured by how close they became to, as it were, the fountainhead whence they sprang. Some of these people were extremely bright, and went on to distinguished careers in which they acknowledged Rand's influence on their thinking. Alan Greenspan was a member of Rand's inner circle in the 1960s, making the case for a return to the gold standard in her newsletter, before becoming the maestro of paper money decades later.

Although her philosophy claimed that contradiction was impossible, her life and work were full of contradictions. While arguing that everything of value sprang from the rational creativity of free minds, she created a rigid system of thought which she insisted her followers adopt without any debate or deviation, and banished them from her circle if they dared dissent. She claimed to have created a self-consistent philosophical and moral system which was self-evidently correct, and yet she refused to debate those championing other systems. Her novels portray the state and its minions in the most starkly negative light of perhaps any broadly read fiction, and yet she detested libertarians and anarchists, defended the state as necessary to maintain the rule of law, and exulted in the success of Apollo 11 (whose launch she was invited to observe).

The passion that Ayn Rand inspires has coloured most of the many investigations of her life and work published to date. Finally, in this volume, we have a more or less dispassionate examination of her career and œuvre, based on original documents in the collection of the Ayn Rand Institute and a variety of other archives. Based upon the author's Ph.D. dissertation (and with the wealth of footnotes and source citations customary in such writing), this book makes an effort to tell the story of Ayn Rand's life, work, and their impact upon politics, economics, philosophy, and culture to date, and her lasting legacy, without taking sides. The author is neither a Rand follower nor a confirmed opponent, and pretty much lets each reader decide where they come down based on the events described.

At the outset, the author writes, “For over half a century, Rand has been the ultimate gateway drug to life on the right.” I initially found this very off-putting, and resigned myself to enduring another disdainful dismissal of Rand (to whose views the vast majority of the “right” over that half a century would have taken violent exception: Rand was vehemently atheist, opposing any mixing of religion and politics; a staunch supporter of abortion rights; opposed the Vietnam War and conscription; and although she rejected the legalisation of marijuana, cranked out most of her best known work while cranked on Benzedrine), but as I read the book the idea began to grow on me. Indeed, many people in the libertarian and conservative worlds got their introduction to thought outside the collectivist and statist orthodoxy pervading academia and the legacy media by reading one of Ayn Rand's novels. This may have been the moment at which they first began to, as the hippies exhorted, “question authority”, and investigate other sources of information and ways of thinking and looking at the world. People who grew up with the Internet will find it almost impossible to imagine how difficult this was back in the 1960s, when even discovering the existence of a dissenting newsletter (amateurishly produced, irregularly issued, and with a tiny subscriber base) was entirely a hit or miss matter. But Ayn Rand planted the seed in the minds of millions of people, a seed which might sprout when they happened upon a like mind, or a like-minded publication.

The life of Ayn Rand is simultaneously a story of an immigrant living the American dream: success in Hollywood and Broadway and wealth beyond even her vivid imagination; the frustration of an author out of tune with the ideology of the times; the political education of one who disdained politics and politicians; the birth of one of the last “big systems” of philosophy in an age where big systems had become discredited; and a life filled with passion lived by a person obsessed with reason. The author does a thorough job of pulling this all together into a comprehensible narrative which, while thoroughly documented and eschewing enthusiasm in either direction, will keep you turning the pages. The author is an academic, and writes in the contemporary scholarly idiom: the term “right-wing” appears 15 times in the book, while “left-wing” is used not at all, even when describing officials and members of the Communist Party USA. Still, this does not detract from the value of this work: a serious, in-depth, and agenda-free examination of Ayn Rand's life, work, and influence on history, today, and tomorrow.

 Permalink

O'Rourke, P. J. Don't Vote—It Just Encourages the Bastards. New York: Atlantic Monthly Press, 2010. ISBN 978-0-8021-1960-5.
P. J. O'Rourke is one of the most astute observers of the contemporary scene who isn't, I believe, taken as seriously as he deserves to be simply because his writing is so riotously funny. In the present book, he describes the life-changing experience which caused him to become a conservative (hint: it's the same one which can cause otherwise sane adults to contemplate buying a minivan and discover a new and distasteful definition of the word “change”), and explores the foundations of conservatism in a world increasingly dominated by nanny states, an out-of-touch and increasingly inbred ruling class, and a growing fraction of the electorate dependent upon the state and motivated to elect politicians who will distribute public largesse to them, whatever the consequences for the nation as a whole.

This is, of course, all done with great wit (and quite a bit of profanity, which may be off-putting to the more strait-laced kind of conservative), but there are a number of deep insights you'll never come across in the legacy media. For example, “We live in a democracy, rule by the people. Fifty percent of people are below average intelligence. This explains everything about politics.” The author then moves on to survey the “burning issues of our time” including the financial mess, “climate change” (where he demolishes the policy prescriptions of the warm-mongers in three paragraphs occupying less than a page), health care, terrorism, the collapse of the U.S. auto industry, and foreign policy, where he brings the wisdom of Kipling to bear on U.S. adventures in the Hindu Kush.

He concludes, in a vein more libertarian than conservative, that politics and politicians are, by their very nature, so fundamentally flawed (Let's give a small number of people a monopoly on the use of force and the ability to coercively take the earnings of others—what could possibly go wrong?) that the only solution is to dramatically reduce the scope of government, getting it out of our lives, bedrooms, bathrooms, kitchens, cars, and all of the other places its slimy tendrils have intruded, and, for those few remaining functions where government has a legitimate reason to exist, that it be on the smallest and most local scale possible. Government is, by its very nature, a monopoly (which explains a large part of why it produces such destructive outcomes), but an ensemble of separate governments (for example, states, municipalities, and school districts in the U.S.) will be constrained by competition from their peers, as evidenced by the demographic shift from high tax to low tax states in the U.S. and the disparate economic performance of highly regulated states and those with a business climate which favours entrepreneurship.

In all, I find O'Rourke more optimistic about the prospects of the U.S. than my own view. The financial situation is simply intractable, and decades of policy implemented by both major political parties have brought the U.S. near the tipping point where a majority of the electorate pays no income tax, and hence has no motivation to support policies which would reduce the rate of growth of government, not to speak of actually shrinking it. The government/academia/media axis has become a self-reinforcing closed loop which believes things very different than the general populace, of which it is increasingly openly contemptuous. It seems to me the most likely outcome is collapse, not reform, with the form of the post-collapse society difficult to envision from a pre-discontinuity perspective. I'll be writing more about possible scenarios and their outcomes in the new year.

This book presents a single argument; it is not a collection of columns. Consequently, it is best read front to back. I would not recommend reading it straight through, however, but rather a chapter a day or every few days. In too large doses, the hilarity of the text may drown out the deeper issues being discussed. In any case, this book will leave you not only entertained but enlightened.

A podcast interview with the author is available in which he concedes that he does, in fact, actually vote.

 Permalink

  2011  

January 2011

Grisham, John. The Confession. New York: Doubleday, 2010. ISBN 978-0-385-52804-7.
Just days before the scheduled execution of Donté Drumm, a black former high school football star who confessed (during a highly dubious and protracted interrogation) to the murder of white cheerleader Nicole Yarber, a serial sex offender named Travis Boyette, recently released to a nearby halfway house, shows up in the office of Lutheran pastor Keith Schroeder and, claiming to be dying of an inoperable brain tumour, confesses to the murder and volunteers to go to Texas to take responsibility for the crime, reveal where he buried the victim's body (which was never found), and avert the execution of Donté. Schroeder is placed in a near-impossible dilemma: he has little trust in the word of Boyette, whose erratic behaviour is evident from the outset, and even less desire to commit a crime assisting Boyette in violating his parole by leaving the state to travel to Texas, but he knows that if what Boyette says is true and he fails to act, an innocent man is certain to be killed by the state.

Schroeder decides to do what he can to bring Boyette's confession to the attention of the authorities in Texas, and comes into direct contact with the ruthless efficiency of the Texas killing machine. This is a story with many twists, turns, surprises, and revelations, and there's little I can say about it without spoiling the plot, so I'll leave it at that. Grisham is clearly a passionate opponent of the death penalty, and this is as much an advocacy document as a thriller. The victim's family is portrayed in an almost cartoon-like fashion, exploiting an all-too-willing media with tears and anguish on demand, and the police, prosecutors, court system, and politicians as uniformly venal villains, while those on the other side are flawed, but on the side of right. Now, certainly, there are without doubt people just as bad and as good on the sides of the issue where Grisham places them, but I suspect that most people in those positions in the real world are conflicted and trying to do their best to obtain justice for all concerned.

Taken purely as a thriller, this novel works, but in my opinion it doesn't come up to the standard set by Grisham's early work. The arcana of the law and the legal system, which Grisham excels in working into his plots, barely figure here, with racial tensions, a media circus, and a Texas town divided into two camps taking centre stage.

A mass market paperback edition will be released in July 2011. A Kindle edition is available, and is substantially less expensive than the hardcover.

 Permalink

Aldrin, Buzz. Magnificent Desolation. London: Bloomsbury, 2009. ISBN 978-1-4088-0416-2.
What do you do with the rest of your life when you were one of the first two humans to land on the Moon before you celebrated your fortieth birthday? This relentlessly candid autobiography answers that question for Buzz Aldrin (please don't write to chastise me for misstating his name: while born as Edwin Eugene Aldrin, Jr., he legally changed his name to Buzz Aldrin in 1979). Life after the Moon was not easy for Aldrin. While NASA trained their astronauts for every imaginable in-flight contingency, they prepared them in no way for their celebrity after the mission was accomplished, and detail-oriented engineers were suddenly thrust into the public sphere, sent as goodwill ambassadors around the world with little or no concern for the effects upon their careers or family lives.

None of this came easily, and in this book he chronicles his marriages (3), divorces (2), battles against depression and alcoholism, and search for a post-Apollo career, which included commanding the U.S. Air Force test pilot school at Edwards Air Force Base, writing novels, serving as a corporate board member, and selling Cadillacs. In the latter part of the book he describes his recent efforts to promote space tourism, develop affordable private sector access to space, and design an architecture which will permit exploration and exploitation of the resources of the Moon, Mars, and beyond with budgets well below those of the Apollo era.

This book did not work for me. Buzz Aldrin has lived an extraordinary life: he developed the techniques for orbital rendezvous used to this day in space missions, pioneered underwater neutral buoyancy training for spacewalks, then performed the first completely successful extra-vehicular activity on Gemini 12, demonstrating that astronauts can do useful work in the void, and was the second man to set foot on the Moon. But all of this is completely covered in the first three chapters, and then we have 19 more chapters describing his life after the Moon. While I'm sure it's fascinating if you've lived through it yourself, it isn't necessarily all that interesting to other people. Aldrin comes across as, and admits to being, self-centred, and this is much in evidence here. His adventures, ups, downs, triumphs, and disappointments in the post-Apollo era are those that many experience in their own lives, and I don't find them compelling to read just because the author landed on the Moon forty years ago.

Buzz Aldrin is not just an American hero, but a hero of the human species: he was there when the first naked apes reached out and set foot upon another celestial body (hear what he heard in his headphones during the landing). His life after that epochal event has been a life well-lived, and his efforts to open the high frontier to ordinary citizens are to be commended. This book is his recapitulation of his life so far, but I must confess I found the post-Apollo narrative tedious. But then, they wouldn't call him Buzz if there wasn't a buzz there! Buzz is 80 years old and envisions living another 20 or so. Works for me: I'm around 60, so that gives me 40 or so to work with. Given any remotely sane space policy, Buzz could be the first man to set foot on Mars in the next 15 years, and Lois could be the first woman. Maybe I and the love of my life will be among the crew to deliver them their supplies and the essential weasels for their planetary colonisation project.

A U.S. edition is available.

 Permalink

Suarez, Daniel. Freedom™. New York: Signet, 2010. ISBN 978-0-451-23189-5.
You'll see this book described as the sequel to the author's breakthrough first novel Daemon (August 2010), but in fact this is the second half of a long novel which happened to be published in two volumes. As such, if you pick up this book without having read Daemon, you will have absolutely no idea what is going on, who the characters are, and why they are motivated to do the things they do. There is little or no effort to fill in the back story or bring the reader up to speed. So read Daemon first, then this book, ideally not too long afterward so the story will remain fresh in your mind. Since that's the way the author treats these two books, I'm going to take the same liberty and assume you've read my review of Daemon to establish the context for these remarks.

The last two decades have demonstrated, again and again, just how disruptive ubiquitous computing and broadband data networks can be to long-established and deeply entrenched industries such as book publishing and distribution, music recording and retailing, newspapers, legacy broadcast media, domestic customer service call centres, travel agencies, and a host of other businesses which have seen their traditional business models supplanted by something faster, more efficient, and with global reach. In this book the author explores the question of whether the fundamental governance and economic system of the last century may be the next domino to fall, rendered impotent and obsolete and swept away by a fundamentally new way of doing things, impossible to imagine in the pre-wired world, based on the principles used in massively multiplayer online game engines and social networks.

Of course, governments and multinational corporations are not going to go gently into the night, and the Daemon (a distributed mesh networked game engine connected to the real world) and its minions on the “darknet” demonstrate the ruthlessness of a machine intelligence when threatened, which results in any number of scenes just begging to be brought to the big screen. In essence, the Daemon is creating a new operating system for humans, allowing them to interact in ways less rigid, more decentralised and resilient, and less hierarchical than the institutions they inherited from an era when goods and information travelled no faster than a horse.

In my estimation, this is a masterwork: the first compelling utopian/dystopian (depending on how you look at it, which is part of its genius) novel of the Internet era. It is as good, in its own way, as Looking Backward, Brave New World, or 1984, and it is a much more thrilling read than any of them. Like those classics, Suarez gets enough of the details right that you find yourself beginning to think that things might actually turn out something like this, and what kind of a world it would be to live in were that to happen.

Ray Kurzweil argues that The Singularity Is Near. In this novel, the author gets the reader to wonder whether it might not be a lot closer than Kurzweil envisions, and not require the kind of exponential increase in computing power he assumes to be the prerequisite. Might the singularity—a phase transition in the organisation of human society as profound as the discovery of agriculture—actually be about to happen in the next few years, not brought about by superhuman artificial intelligence but rather by the synthesis and interconnection of billions of human intelligences in a “social network” encompassing all of society? (And if you think sudden transitions like that can't happen, just ask anybody who used to own a record store or the boss of a major newspaper.) Would this be a utopian solution to a system increasingly perceived as unsustainable and inexorably crushing individuality and creativity, or would it be a descent into a potentially irreversible dark age in which humans would end up as peripherals in a vast computing grid using them to accomplish its own incomprehensible agenda? You'll probably close this book undecided on that question, and spend a good deal of time afterward pondering it. That is what makes this novel so great.

If the author can continue to rise to this standard in subsequent novels, we have a new grandmaster on the scene.


Taylor, Travis S. and Les Johnson. Back to the Moon. Riverdale, NY: Baen Publishing, 2010. ISBN 978-1-4391-3405-4.
Don't you just hate it when you endure the protracted birthing process of a novel set in the near future and then, with the stroke of a politician's pen, the entire premise of the story goes ker-plonk into the dustbin of history? Think about the nuclear terror novel set in the second Carter administration, or all of the Cold War thrillers in the publishing pipeline that collapsed along with the Soviet Union. Well, that's more or less what we have here. This novel is set in, shall we say, the 2020s in a parallel universe where NASA's Constellation program (now cancelled in our own timeline) remained on track and is ready to launch its first mission to return humans to the Moon. Once again, there is a Moon race underway: this time a private company, Space Excursions, hopes to be the first enterprise to send paying passengers on a free return loop around the Moon, while the Chinese space agency hopes to beat NASA to the Moon with their own landing mission.

Space Excursions is ready to win the race with their (technologically much less demanding) mission when it discovers, to the horror of its passengers and the world, that a secret Chinese landing mission has crashed near the lunar limb, and that the Chinese government has covered up the disaster and left its taikonauts to die unmourned to avoid their space program's losing face. Bill Stetson (try to top that for a Texas astronaut name!), commander of the soon-to-launch NASA landing mission, realises that his flight can be re-purposed into a rescue of the stranded Chinese, and the NASA back-room experts, with the clock ticking on the consumables remaining in the Chinese lander, devise a desperate but plausible plan to save them.

Thus, the first U.S. lunar mission since Apollo 17 launches with an entirely different flight plan than that envisioned and for which the crew trained. Faced with a crisis, the sclerotic NASA bureaucracy is jolted back into the “make it so” mindset they exemplified in returning the crew of Apollo 13 safely to the Earth. In the end, it takes co-operation between NASA, the Chinese space agency, and Space Excursions, along with intrepid exploits by spacemen and -women of all of those contenders in Moon Race II to pull off the rescue, leading one to wonder “why can't we all get along?”

Do not confuse this novel with the laughably inept book with the same title by Homer Hickam (April 2010). This isn't remotely as bad, but then it isn't all that good either. I don't fault it for describing a NASA program which was cancelled while the novel was in press—author Taylor vents his frustration over that in an afterword included here. What irritates me is how many essential details the authors got wrong in telling the story. They utterly mis-describe the configuration of the Constellation lunar spacecraft, completely forgetting the service module of the Orion spacecraft, which contains the engine used to leave lunar orbit and to which the solar arrays are attached. They assume the ascent stage of the Altair lunar lander remains attached to the Orion during the return from the Moon, which is insane from a mass management standpoint. Their use of terminology is just sloppy, confusing orbital and escape velocity, trans-lunar injection with lunar orbit insertion maneuvers, and a number of other teeth-grinding goofs. The orbital mechanics are a thing of fantasy: spacecraft perform plane change maneuvers which no chemical rocket could possibly execute, and the Dreamscape lunar flyby tourist vehicle is said to brake into Earth orbit with rockets before descending for a landing, which is crazy from both an energy and a mass budget standpoint, as opposed to a direct aerobraking entry.

What is odd is that author Taylor has a doctorate in science and engineering and has worked on NASA and DOD programs for two decades, and author Johnson works for NASA. NASA is rife with science fiction fans—SF is the “literature of recruitment” for NASA. Without a doubt, hundreds of NASA people intimately acquainted with the details of the Constellation Program would have been thrilled at the chance to review and fact-check this manuscript (especially because it portrays their work in an adulatory light), and almost none of the revisions required to get it right would have had any significant impact upon the story. (The heat shield repair is an exception, but I could scribble a more thrilling chapter about doing that after jettisoning the service module with the Earth looming nearer and nearer than the one in this novel.)

This is a well-crafted thriller which will keep you turning the pages, but doesn't stand up to scrutiny if you really understand orbital mechanics or the physical constraints in going to the Moon. What is regrettable is that all of the goofs could have been remedied without compromising the story in any way.


Lehto, Steve. Chrysler's Turbine Car. Chicago: Chicago Review Press, 2010. ISBN 978-1-56976-549-4.
There were few things so emblematic of the early 1960s as the jet airliner. Indeed, the period was often referred to contemporarily as the “jet age”, and products from breakfast cereal to floor wax were positioned as modern wonders of that age. Anybody who had experienced travel in a piston powered airliner and then took their first flight in a jet felt that they had stepped into the future: gone was the noise, rattling, and shaking from the cantankerous and unreliable engines that would knock the fillings loose in your teeth, replaced by a smooth whoosh which (although deafening, in the early jets, to onlookers outside) allowed carrying on a normal conversation inside the cabin. Further, notwithstanding some tragic accidents in the early days as pilots became accustomed to the characteristics of the new engines and airframes, it soon became apparent that these new airliners were a great deal safer and more reliable than their predecessors: they crashed a lot less frequently, and flights delayed and cancelled due to mechanical problems became the rare exception rather than something air travellers put up with only because the alternative was so much worse.

So, if the jet age had arrived, and jet power had proven itself to be so superior to the venerable and hideously overcomplicated piston engine, where were the jet cars? This book tells the long and tangled story of just how close we came to having turbine powered automobiles in the 1960s, how a small group of engineers plugging away at problem after problem over twenty years managed to produce an automotive powerplant so clearly superior to contemporary piston engines that almost everybody who drove a vehicle powered by it immediately fell in love and wished they could have one of their own, and ultimately how financial problems and ill-considered government meddling destroyed the opportunity to replace automotive powerplants dependent upon petroleum-based fuels (which, at the time, contained tetraethyl lead) with one which would run on any combustible liquid, emit far less pollution from the tailpipe, run for hundreds of thousands of miles without an oil change or need for a tune-up, start instantly and reliably regardless of the ambient temperature, and run so smoothly and quietly that for the first time passengers were aware of the noise of the tires rolling over the road.

In 1945, George Huebner, who had worked on turboprop aircraft for Chrysler during World War II, returned to the civilian automotive side of the company as war work wound down. A brilliant engineer as well as a natural-born promoter of all things he believed in, himself most definitely included, by 1946 he was named Chrysler's chief engineer and used his position to champion turbine propulsion, which he had already seen was the future in aviation, for automotive applications. The challenges were daunting: turboshaft engines (turbines which delivered power by turning a shaft coupled to the turbine rotor, as used in turboprop airplanes and helicopters) gulped fuel at a prodigious rate, including when at “idle”, took a long time to “spool up” to maximum power, required expensive exotic materials in the high-temperature section of the engine, and had tight tolerances which required parts to be made by costly and low production rate investment casting, which could not produce parts in the quantities, nor at the cost, acceptable for a mass market automotive powerplant.

Like all of the great engineers, Huebner was simultaneously stubborn and optimistic: stubborn in his belief that a technology so much simpler and inherently more thermodynamically efficient must eventually prevail, and optimistic that with patient engineering, tackling one problem after another and pursuing multiple solutions in parallel, any challenge could be overcome. By 1963, coming up on the twentieth year of the effort, progress had been made on all fronts to the extent that Huebner persuaded Chrysler management that the time had come to find out whether the driving public was ready to embrace the jet age in their daily driving. In one of the greatest public relations stunts of all time, Chrysler ordered 55 radically styled (for the epoch) bodies from the Ghia shop in Italy, and mated them with turbine drivetrains and chassis in a Michigan factory previously used to assemble taxicabs. Fifty of these cars (the other five being retained for testing and promotional purposes) were loaned, at no charge, for periods of three months each, to a total of 203 drivers and their families. Delivery of one of these loaners became a media event, and the lucky families instant celebrities in their communities: a brief trip to the grocery store would turn into several hours fielding questions about the car and offering rides around the block to gearheads who pleaded for them.

The turbine engines, as turbine engines are wont to do once the bugs have been wrung out, performed superbly. Drivers of the loaner cars put more than a million miles on them with only minor mechanical problems. One car was rear-ended at a stop light, but you can't blame the engine for that. (Well, perhaps the guilty party was transfixed by the striking design of the rear of the car!) Drivers did notice slower acceleration from a stop due to “turbine lag” (the need for the turbine to spool up in RPM from idle) and poorer fuel economy in city driving. Fuel economy on the highway was comparable to contemporary piston engine cars. What few drivers noticed, in the era of four-gallons-a-buck gasoline, was that the turbine could run on just about any fuel you can imagine: unleaded gasoline, kerosene, heating oil, ethanol, methanol, aviation jet fuel, diesel, or any mix thereof. As a stunt, a Chrysler Turbine was filled up with peanut oil while visiting a peanut festival in Georgia, with tequila during a tour through Mexico, and with perfume at a French auto show; in each case the engine ran perfectly on the eccentric fuel (albeit with a distinctive aroma imparted to the exhaust).

So, here we are all these many years later in the twenty-first century. Where are our jet cars? That's an interesting story which illustrates the unintended consequences of well-intended public policy. Just as the turbine engine was being refined and perfected as an automotive power plant, the U.S. government started to obsess about air quality, and decided, in the spirit of the times, to impose detailed mandates upon manufacturers which constrained the design of their products. (As opposed, say, to imposing an excise tax upon vehicles based upon their total emissions and allowing manufacturers to weigh the trade-offs across their entire product line, or leaving it to states and municipalities most affected by pollution to enforce their own standards on vehicles licensed in their jurisdiction.) Since almost every vehicle on the road was piston engine powered, it was inevitable that regulators would draft their standards around the characteristics of that powerplant. In doing so, they neglected to note that the turbine engine already met all of the most stringent emissions standards they then envisioned for piston engines (and in addition, ran on unleaded fuels, completely eliminating the most hazardous emission of piston engines) with a single exception: oxides of nitrogen (NOx). The latter was a challenge for turbine engineers, because the continuous combustion in a turbine provides a longer time for nitrogen to react with oxygen. Engineers were sure they'd be able to find a way to work around this single remaining challenge, having already solved all of the emission problems the piston engine still had to overcome.

But they never got the chance. The government regulations were imposed with such short times for compliance that automakers were compelled to divert all of their research, development, and engineering resources to modifying their existing engines to meet the new standards, which proved to be ever-escalating: once a standard was met, it was made more stringent with another near-future deadline. At Chrysler, the smallest of the Big Three, this hit particularly hard, and the turbine project found its budget and engineering staff cannibalised to work on making ancient engines run rougher, burn more fuel, perform more anæmically, and increase their cost and frequency of maintenance to satisfy a tailpipe emission standard written into law by commissars in Washington who probably took the streetcar to work. Then the second part of the double whammy hit: the oil embargo and the OPEC cartel hike in the price of oil, which led to federal fuel economy standards, which pulled in the opposite direction from the emissions standards and consumed all resources which might have been devoted to breakthroughs in automotive propulsion which would have transcended the increasingly baroque tweaks to the piston engine. A different time had arrived, and increasingly people who once eagerly awaited the unveiling of the new models from Detroit each fall began to listen to their neighbours who'd bought one of those oddly-named Japanese models and said, “Well, it's tiny and it looks odd, but it costs a whole lot less, goes almost forever on a gallon of gas, and it never, ever breaks”. From the standpoint of the mid-1970s, this began to sound pretty good to a lot of folks, and Detroit, the city and the industry which built it, began its descent from apogee to the ruin it is today.

If we could go back and change a few things in history, would we all be driving turbine cars today? I'm not so sure. At the point the turbine was undone by ill-advised public policy, one enormous engineering hurdle remained, and in retrospect it isn't clear that it could have been overcome. All turbine engines, to the present day, require materials and manufacturing processes which have never been scaled up to the volumes of passenger car manufacturing. The pioneers of the automotive turbine were confident that could be done, but they conceded that it would require at least the investment of building an entire auto plant from scratch, and that is something that Chrysler could not remotely fund at the time. It's much like building a new semiconductor fabrication facility with a new scaling factor, but without the confidence that if it succeeds a market will be there for its products. At the time the Chrysler Turbine cars were tested, Huebner estimated their cost of manufacturing at around US$50,000: roughly half of that the custom-crafted body and the rest the powertrain—the turbine engines were essentially hand-built. Such has been the depreciation of the U.S. dollar that this is equivalent to about a third of a million present-day greenbacks. Then or now, getting this cost down to something the average car buyer could afford was a formidable challenge, and it isn't obvious that the problem could have been solved even if the resources expended complying with emissions and fuel economy diktats had instead been available to attack it.

Further, turbine engines become less efficient as you scale them down—in the turbine world, the bigger the better, and they work best when run at a constant load over a long period of time. Consequently, turbine power would seem optimal for long-haul trucks, which require more power than a passenger car, run at near-constant speed over highways for hours on end, and already run on the diesel fuel which is ideal for turbines. And yet, despite research and test turbine vehicles having been built by manufacturers in the U.S., Britain, and Sweden, the diesel powerplant remains supreme. Truckers and trucking companies understand long-term investment and return, and yet the apparent advantages of the turbine haven't allowed it to gain a foothold in that market. Perhaps the turbine passenger car was one of those great ideas for which, in the final analysis, the numbers just didn't work.

I actually saw one of these cars on the road in 1964, doubtless driven by one of the lucky drivers chosen to test it. There was something sweet about seeing the Jet Car of the Future waiting to enter a congested tunnel while we blew past it in our family Rambler station wagon, but that's just cruel. In the final chapter, we get to vicariously accompany the author on a drive in the Chrysler Turbine owned by Jay Leno, who contributes the foreword to this book.

Mark Olson's turbinecar.com has a wealth of information, photographs, and original documents relating to the Chrysler Turbine Car. The History Channel's documentary, The Chrysler Turbine, is available on DVD.


Bethell, Tom. Questioning Einstein. Pueblo West, CO: Vales Lake Publishing, 2009. ISBN 978-0-9714845-9-7.
Call it my guilty little secret. Every now and then, I enjoy nothing more than picking up a work of crackpot science, reading it with the irony lobe engaged, figuring out precisely where the author went off the rails, and trying to imagine how one might explain to them the blunders which led to the poppycock they expended so much effort getting into print. In the field of physics, for some reason Einstein's theory of special relativity attracts a disproportionate number of such authors, all bent on showing that Einstein was wrong or, in the case of the present work's subtitle, asking “Is Relativity Necessary?”. With a little reflexion, this shouldn't be a surprise: alone among major theories of twentieth century physics, special relativity is mathematically accessible to anybody acquainted with high school algebra, and yet makes predictions for the behaviour of objects at high velocity which are so counterintuitive to expectations based upon our own personal experience with velocities much smaller than that of light that they appear, at first glance, to be paradoxes. Theories more dubious and less supported by experiment may be shielded from crackpots simply by the forbidding mathematics one must master in order to understand and talk about them persuasively.

This is an atypical exemplar of the genre. While most attacks on special relativity are written by delusional mad scientists, the author of the present work, Tom Bethell, is a respected journalist whose work has been praised by, among others, Tom Wolfe and George Gilder. The theory presented here is not his own, but one developed by Petr Beckmann, whose life's work, particularly in advocating civil nuclear power, won him the respect of Edward Teller (who did not, however, endorse his alternative to relativity). As works of crackpot science go, this is one of the best I've read. It is well written, almost free of typographical and factual errors, clearly presents its arguments in terms a layman can grasp, almost entirely avoids mathematical equations, and is thoroughly documented with citations of original sources, many of which may be unfamiliar to those who have learnt special relativity from modern textbooks. Its arguments against special relativity are up to date, tackling objections including the Global Positioning System, the Brillet-Hall experiment, and the Hafele-Keating “travelling clock” experiments as well as the classic tests. And the author eschews the ad hominem attacks on Einstein which are so common in the literature of opponents to relativity.

Beckmann's theory posits that the luminiferous æther (the medium in which light waves propagate), which was deemed “superfluous” in Einstein's 1905 paper, in fact exists, and is simply the locally dominant gravitational field. In other words, the medium in which light waves wave is the gravity which makes things which aren't light heavy. Got it? Light waves in any experiment performed on the Earth or in its vicinity will propagate in the æther of its gravitational field (with only minor contributions from those of other bodies such as the Moon and Sun), and hence attempts to detect the “æther drift” due to the Earth's orbital motion around the Sun such as the Michelson-Morley experiment will yield a null result, since the æther is effectively “dragged” or “entrained” along with the Earth. But since the gravitational field is generated by the Earth's mass, and hence doesn't rotate with it (Huh—what about the Lense-Thirring effect, which is never mentioned here?), it should be possible to detect the much smaller æther drift effect as the measurement apparatus rotates around the Earth, and it is claimed that several experiments have made such a detection.

It's traditional that popular works on special relativity couch their examples in terms of observers on trains, so let me say that it's here that we feel the sickening non-inertial-frame lurch as the train departs the track and enters a new inertial frame headed for the bottom of the canyon. Immediately, we're launched into a discussion of the Sagnac effect and its various manifestations ranging from the original experiment to practical applications in laser ring gyroscopes, to round-the-world measurements bouncing signals off multiple satellites. For some reason the Sagnac effect seems to be a powerful attractor into which special relativity crackpottery is sucked. Why it is so difficult to comprehend, even by otherwise intelligent people, entirely escapes me. May I explain it to you? This would be easier with a diagram, but just to show off and emphasise how simple it is, I'll do it with words. Imagine you have a turntable, on which are mounted four mirrors which reflect light around the turntable in a square: the light just goes around and around. If the turntable is stationary and you send a pulse of light in one direction around the loop and then send another in the opposite direction, it will take precisely the same amount of time for them to complete one circuit of the mirrors. (In practice, one uses continuous beams of monochromatic light and combines them in an interferometer, but the effect is the same as measuring the propagation time—it's just easier to do it that way.) Now, let's assume you start the turntable rotating clockwise. Once again you send pulses of light around the loop in both directions; this time we'll call the one which goes in the same direction as the turntable's rotation the clockwise pulse and the other the counterclockwise pulse. Now when we measure how long it took for the clockwise pulse to make it one time around the loop we find that it took longer than for the counterclockwise pulse. OMG!!! 
Have we disproved Einstein's postulate of the constancy of the speed of light (as is argued in this book at interminable length)? Well, of course not, as a moment's reflexion will reveal. The clockwise pulse took longer to make it around the loop because it had farther to travel to arrive there: as it was bouncing from each mirror to the next, the rotation of the turntable was moving the next mirror further away, and so each leg it had to travel was longer. Conversely, as the counterclockwise pulse was in flight, its next mirror was approaching it, and hence by the time it made it around the loop it had travelled less far, and consequently arrived sooner. That's all there is to it, and precision measurements of the Sagnac effect confirm that this analysis is completely consistent with special relativity. The only possible source of confusion is if you make the self-evident blunder of analysing the system in the rotating reference frame of the turntable. Such a reference frame is trivially non-inertial, so special relativity does not apply. You can determine this simply by tossing a ball from one side of the turntable to another, with no need for all the fancy mirrors, light pulses, or the rest.
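The mirror argument can be checked numerically. The following sketch is mine, not Beckmann's or Bethell's: it idealises the square of mirrors as a continuous circular light path, and the turntable radius and spin rate are made-up illustrative values.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def sagnac_delay(radius, omega):
    """Time difference between the co-rotating (clockwise) and
    counter-rotating pulses around a circular loop of the given
    radius (m) spinning at omega (rad/s), computed directly from
    the path-length argument in the lab frame: the co-rotating
    pulse chases receding mirrors, the counter-rotating pulse
    meets approaching ones."""
    L = 2 * math.pi * radius   # rest circumference of the loop
    v = omega * radius         # rim speed of the mirrors
    t_cw = L / (c - v)         # longer: the path stretches ahead
    t_ccw = L / (c + v)        # shorter: the path closes in
    return t_cw - t_ccw

# Hypothetical turntable: 10 cm radius spinning at 100 rad/s (~955 rpm)
R, omega = 0.10, 100.0
direct = sagnac_delay(R, omega)

# Standard closed-form Sagnac result: delta_t = 4 * A * omega / c^2,
# where A is the enclosed area. It agrees with the path-length
# computation to first order in v/c -- no mystery, no paradox.
area = math.pi * R**2
formula = 4 * area * omega / c**2

print(direct, formula)  # both ~1.4e-16 s
```

Even at nearly a thousand rpm the delay is on the order of 10⁻¹⁶ seconds, which is why real ring interferometers measure a fringe shift rather than timing pulses directly.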

Other claims of Beckmann's theory are explored, all either dubious or trivially falsified. Bethell says there is no evidence for the length contraction predicted by special relativity. In fact, analysis of heavy ion collisions confirms that each nucleus approaching the scene of the accident “sees” the other as a “pancake” due to relativistic length contraction. It is claimed that while physical processes on a particle moving rapidly through a gravitational field slow down, an observer co-moving with that particle would not see a comparable slow-down of clocks at rest with respect to that gravitational field. But the corrections applied to the atomic clocks in GPS satellites incorporate this effect, and would produce incorrect results if it did not occur.
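The GPS point can be made quantitative with a back-of-the-envelope estimate. This sketch is mine, using round published values for Earth's gravitational parameter and the GPS orbital radius and the standard weak-field approximations; it illustrates the size of the effects, and is not a description of the actual correction algorithm in the GPS system.

```python
import math

c = 299_792_458.0        # speed of light, m/s
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6        # mean Earth radius, m
r_gps = 2.6560e7         # GPS orbital radius, m (~20,200 km altitude)

# Special relativity: the satellite's orbital speed makes its
# clock run slow relative to a clock at rest on the geoid.
v = math.sqrt(GM / r_gps)                  # circular orbit speed
sr = -v**2 / (2 * c**2)                    # fractional rate, negative

# General relativity: the weaker gravitational potential at
# orbital altitude makes the satellite clock run fast.
gr = (GM / c**2) * (1 / R_earth - 1 / r_gps)  # fractional rate, positive

seconds_per_day = 86_400
net_us_per_day = (sr + gr) * seconds_per_day * 1e6
print(f"net drift: {net_us_per_day:+.1f} microseconds/day")  # ~ +38.5
```

The gravitational speed-up (about +46 µs/day) dominates the velocity slow-down (about −7 µs/day); uncorrected, a drift of tens of microseconds per day would accumulate into kilometres of position error, which is why the satellite clocks are deliberately offset before launch.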

I could go on and on. I'm sure there is a simple example from gravitational lensing or propagation of electromagnetic radiation from gamma ray bursts which would falsify the supposed classical explanation for the gravitational deflection of light due to a refractive effect based upon the strength of the gravitational field, but why bother when so many things much easier to dispose of are hanging lower on the tree? Should you buy this book? No, unless, like me, you enjoy a rare example of crackpot science which is well done. This is one of those, and if you're well acquainted with special relativity (if not, take a trip on our C-ship!) you may find it entertaining to find the flaws in, and identify experiments which falsify, the arguments made here.


February 2011

Reagan, Ronald. The Reagan Diaries. Edited by Douglas Brinkley. New York: Harper Perennial, 2007. ISBN 978-0-06-155833-7.
What's it actually like to be the president of the United States? There is very little first-person testimony on this topic: among American presidents, only Washington, John Quincy Adams, Polk, and Hayes kept comprehensive diaries prior to the twentieth century, and the present work, an abridged edition of the voluminous diaries of Ronald Reagan, was believed, at the time of its publication, to be the only personal, complete, and contemporaneous account of a presidency in the twentieth century. Since its publication, a book purporting to be the White House diaries of Jimmy Carter has been published, but even if you believe the content, who cares about the account of the presidency of a feckless crapweasel whose damage to the republic redounds unto the present day?

Back in the epoch, the media (a couple of decades later to become the legacy media) portrayed Reagan as a genial dunce, bumbling through his presidency at the direction of his ideological aides. That illusion is dispelled in the first ten pages of these contemporaneous diary entries. In these pages, rife with misspellings (he jokes to himself that he always spells the Libyan dictator's name the last way he saw it spelt in the newspaper, and probably ended up with at least a dozen different spellings) and apostrophe abuse, you see Reagan writing not for historians but rather memos to file about the decisions he was making from day to day.

As somebody who was unfortunate enough to spend a brief part of his life as CEO of an S&P 500 company in the Reagan years, the ability of Reagan, almost forty years my senior, to keep dozens of balls in the air, multitask among grave matters of national security and routine paperwork, meetings with heads of state of inconsequential countries, criminal investigations of his subordinates, and schmoozing with politicians staunchly opposed to his legislative agenda to win the votes needed to enact the parts he deemed most important is simply breathtaking. Here we see a chief executive, honed by eight years as governor of California, at the top of his game, deftly out-maneuvering his opponents in Congress not, as the media would have you believe, by his skills in communicating directly to the people (although that played a part), but mostly by plain old politics: faking to the left and then scoring the point from the right. Reading these abridged but otherwise unedited diary entries gives lie to any claim that Reagan was in any way intellectually impaired or unengaged at any point of his presidency. This is a master politician getting done what he can in the prevailing political landscape and committing both his victories and teeth-gritting compromises to paper the very day they occurred.

One of the most stunning realisations I took away from this book is that when Reagan came to office, he looked upon his opposition in the Congress and the executive bureaucracy as people who shared his love of the country and hope for its future, but who simply disagreed as to the best course to achieve their shared goals. You can see it slowly dawning upon Reagan, as year followed year, that although there were committed New Dealers and Cold War Democrats among his opposition, there was a growing movement, both within the bureaucracy and among elected officials, who actually wanted to bring America down—if not to actually capitulate to Soviet hegemony, at least to take it down from superpower status to a peer of others in the “international community”. Could Reagan have imagined that the day would come when a president who bought into this agenda might actually sit in the Oval Office? Of course: Reagan was well-acquainted with worst case scenarios.

The Kindle edition is generally well-produced, but in lieu of a proper index substitutes a lengthy and entirely useless list of “searchable terms” which are not linked in any way to their appearances in the text.

Today is the hundredth anniversary of the birth of Ronald Reagan.


Thor, Brad. State of the Union. New York: Pocket Books, 2004. ISBN 978-0-7434-3678-6.
This is the third in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). How refreshing to read a post-Cold War thriller in which the Russkies are threatening nuclear terror to reassert a strategy of global hegemony which only went underground with the collapse of the Soviet Union.

Whatever happened, anyway, to all of those suitcase nukes which multiple sources said went missing when the Soviet Union dissolved, and which some Soviet defectors claimed had been smuggled into caches in the U.S. and Europe to be used as a last-ditch deterrent should war be imminent? Suppose a hard core of ex-Soviet military officers, with the implicit approval of the Russian government, were to attempt a “Hail Mary” pass to win the Cold War in one masterstroke?

I have nattered in reviews of previous novels in this series about Thor's gradually mastering the genre of the thriller. No more—with this one he's entirely up to speed, and it just gets better from here on. Not only are we treated to a Cold War scenario, the novel is written in the style of a period espionage novel in which nothing may be what it appears, and the reader, along with the principal characters, is entirely in the fog as to what is actually going on for the first quarter of the book.

Quibbles? Yes, I have a few. In his quest for authenticity, the author often pens prose which comes across like Hollywood product placement:

… The team was outfitted in black, fire-retardant Nomex fatigues, HellStorm tactical assault gloves, and First Choice body armor. Included with the cache laid out by the armorer, were several newly arrived futuristic .40-caliber Beretta CX4 Storm carbines, as well as Model 96 Beretta Vertex pistols, also in .40 caliber. There was something about being able to interchange their magazines that Harvath found very comforting.

A Picatinny rail system allowed him to outfit the CX4 Storm with an under-mounted laser sight and an above-mounted Leupold scope. …

Ka ching! Ka ching! Ka ching!

I have no idea if the author or publisher were paid for mentioning this most excellent gear for breaking things and killing bad guys, but that's how it reads.

But, hey, what's not to like about a novel which includes action scenes on a Russian nuclear powered icebreaker in the Arctic? Been there—done that!

 Permalink

Taleb, Nassim Nicholas. Fooled by Randomness. 2nd ed. New York: Random House, [2004] 2005. ISBN 978-0-8129-7521-5.
This book, which preceded the author's bestselling The Black Swan (January 2009), explores a more general topic: randomness and, in particular, how humans perceive and often misperceive its influence in their lives. As with all of Taleb's work, it is simultaneously quirky, immensely entertaining, and so rich in wisdom and insights that you can't possibly absorb them all in a single reading.

The author's central thesis, illustrated from real-world examples, tests you perform on yourself, and scholarship in fields ranging from philosophy to neurobiology, is that the human brain evolved in an environment in which assessment of probabilities (and especially conditional probabilities) and nonlinear outcomes was unimportant to reproductive success, and consequently our brains adapted to make decisions according to a set of modular rules called “heuristics”, which researchers have begun to tease out by experimentation. While our brains are capable of abstract thinking and, with the investment of time required to master it, mathematical reasoning about probabilities, the parts of the brain we use to make many of the important decisions in our lives are the much older and more instinctual parts from which our emotions spring. This means that otherwise apparently rational people may do things which, if looked at dispassionately, appear completely insane and against their rational self-interest. This is particularly apparent in the world of finance, in which the author has spent much of his career, and which offers abundant examples of individual and collective delusional behaviour both before and after the publication of this work.

But let's step back from the arcane world of financial derivatives and consider a much simpler and easier to comprehend investment proposition: Russian roulette. A diabolical billionaire makes the following proposition: play a round of Russian roulette (put one cartridge in a six shot revolver, spin the cylinder to randomise its position, put the gun to your temple and pull the trigger). If the gun goes off, you don't receive any payoff and besides, you're dead. If there's just the click of the hammer falling on an empty chamber, you receive one million dollars. Further, as a winner, you're invited to play again on the same date next year, when the payout if you win will be increased by 25%, and so on in subsequent years as long as you wish to keep on playing. You can quit at any time and keep your winnings.

Now suppose a hundred people sign up for this proposition, begin to play the game year after year, and none chooses to take their winnings and walk away from the table. (For connoisseurs of Russian roulette, this is the variety of the game in which the cylinder is spun before each shot, not where the live round continues to advance each time the hammer drops on an empty chamber: in that case there would be no survivors beyond the sixth round.) For each round, on average, 1/6 of the players are killed and out of the game, reducing the number who play next year. Out of the original 100 players in the first round, one would expect, on average, around 83 survivors to participate in the second round, where the payoff will be 1.25 million.

What do we have, then, after ten years of this game? Again, on average, we expect around 16 survivors, each of whom will be paid more than seven million dollars for the tenth round alone, and who will have collected a total of more than 33 million dollars over the ten year period. If the game were to go on for twenty years, we would expect around 3 survivors from the original hundred, each of whom would have “earned” more than a third of a billion dollars.
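The arithmetic above is easy to check. Here is a quick sketch of my own (not from Taleb's book), assuming the 5/6 per-round survival odds and the 25% annual payout growth described above:

```python
def expected_survivors(players, rounds):
    """Expected number of players still alive after `rounds` spins,
    with independent 5/6 survival odds each round."""
    return players * (5 / 6) ** rounds

def cumulative_winnings(rounds, first_payout=1e6, growth=0.25):
    """Total collected by a player who survives every one of `rounds`
    rounds, with the payout growing 25% per year."""
    return sum(first_payout * (1 + growth) ** k for k in range(rounds))

print(round(expected_survivors(100, 10)))       # ≈ 16 survivors after ten years
print(round(cumulative_winnings(10) / 1e6, 1))  # ≈ 33.3 million dollars apiece
print(round(expected_survivors(100, 20)))       # ≈ 3 survivors after twenty years
print(round(cumulative_winnings(20) / 1e6))     # ≈ 343 million dollars apiece
```

The function names are mine; the numbers reproduce those in the text.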

Would you expect these people to be regular guests on cable business channels, sought out by reporters from financial publications for their “hot hand insights on Russian roulette”, or lionised for their consistent and rapidly rising financial results? No—they would be immediately recognised as precisely what they were: lucky (and consequently very wealthy) fools who, each year they continue to play the game, run the same 1 in 6 risk of blowing their brains out.

Keep this Russian roulette analogy in mind the next time you see an interview with the “sizzling hot” hedge fund manager who has managed to obtain 25% annual return for his investors over the last five years, or when your broker pitches a mutual fund with a “great track record”, or you read the biography of a businessman or investor who always seems to have made the “right call” at the right time. All of these are circumstances in which randomness, and hence luck, plays an important part. Just as with Russian roulette, there will inevitably be big winners with a great “track record”, and they're the only ones you'll see because the losers have dropped out of the game (and even if they haven't yet, they aren't newsworthy). So the question you have to ask yourself is not how great the track record of a given individual is, but rather the size of the original cohort from which the individual was selected at the start of the period of the track record. The rate at which hedge fund managers “blow up” and lose all of their investors' money in one disastrous market excursion is less than that of the players blown away in Russian roulette, but not all that much lower. There are a lot of trading strategies which will yield high and consistent returns until they don't, at which time they suffer sudden and disastrous losses which are always reported as “unexpected”. Unexpected by the geniuses who devised the strategy, the fools who put up the money to back it, and the clueless journalists who report the debacle, but entirely predictable to anybody who modelled the risks being run in the light of actual behaviour of markets, not some egghead's ideas of how they “should” behave.

Shall we try another? You go to your doctor for a routine physical, and as part of the laboratory work on your blood, she orders a screening test for a rare but serious disease which afflicts only one person in a thousand but which can be treated if detected early. The screening test has a 5% false positive rate (in 5% of the people tested who do not actually have the disease, it erroneously says that they do) and a 0% false negative rate (if you have the disease, the test will always report that you do). You return to the doctor's office for the follow-up visit and she tells you that you tested positive for the disease. What is the probability you actually have it?

Spoiler warning: Plot and/or ending details follow.  
Did you answer 95%? If you did, you're among the large majority of people, not just among the general population but also practising clinicians, who come to the same conclusion. And you'd be just as wrong as them. In fact, the odds you have the disease are a little less than 2%. Here's how it works. Let's assume an ensemble of 10,000 randomly selected people are tested. On average, ten of these people will have the disease, and all of them will test positive for it (no false negatives). But among that population, 500 people who do not have the disease will also test positive due to the 5% false positive rate of the test. That means that, on average (it gets tedious repeating this, but the natterers will be all over me if I don't do so in every instance), there will be, of 10,000 people tested, a total of 510 positive results, of which 10 actually have the disease. Hence, if you're the recipient of a positive test result, the probability you have the disease is 10/510, or a tad less than 2%. So, before embarking upon a demanding and potentially dangerous treatment regime, you're well advised to get some other independent tests to confirm that you are actually afflicted.
Spoilers end here.  
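The screening-test arithmetic is a textbook application of Bayes' theorem. Here is a minimal sketch (my own, not from the book) with the numbers used above; the function name is hypothetical:

```python
def p_disease_given_positive(prevalence, false_pos_rate, false_neg_rate=0.0):
    """P(disease | positive test) by Bayes' theorem."""
    true_positives = prevalence * (1 - false_neg_rate)   # sick and flagged
    false_alarms = (1 - prevalence) * false_pos_rate     # healthy but flagged
    return true_positives / (true_positives + false_alarms)

# One-in-a-thousand prevalence, 5% false positives, no false negatives:
print(f"{p_disease_given_positive(0.001, 0.05):.1%}")  # → 2.0%, not 95%
```

Note that the answer is dominated by the false alarms from the 999 healthy people per thousand, which is why the test's 95% accuracy is so misleading.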
In making important decisions in life, we often rely upon information from past performance and reputation without taking into account how much those results may be affected by randomness, luck, and the “survivor effect” (the Russian roulette players who brag of their success in the game are necessarily those who aren't yet dead). When choosing a dentist, you can be pretty sure that a practitioner who is recommended by a variety of his patients whom you respect will do an excellent job drilling your teeth. But this is not the case when choosing an oncologist, since all of the people who give him glowing endorsements are necessarily those who did not die under his care, even if their survival is due to spontaneous remission instead of the treatment they received. In such a situation, you need to, as it were, interview the dead alongside the survivors, or, that being difficult, compare the actual rate of survival among comparable patients with the same condition.

Even when we make decisions with our higher cognitive faculties rather than animal instincts, it's still easy to get it wrong. While the mathematics of probability and statistics have been put into a completely rigorous form, there are assumptions in how they are applied to real world situations which can lead to the kinds of calamities one reads about regularly in the financial press. One of the reasons physical scientists transmogrify so easily into Wall Street “quants” is that they are trained and entirely comfortable with statistical tools and probabilistic analysis. The reason they so frequently run off the cliff, taking their clients' fortunes in the trailer behind them, is that nature doesn't change the rules, nor does she cheat; markets do both. Most physical processes will exhibit well-behaved Gaussian or Poisson distributions, with outliers making a vanishingly small contribution to mean and median values. In financial markets and other human systems none of these conditions obtain: the rules change all the time, and often change profoundly before more than a few participants even perceive they have; any action in the market will provoke a reaction by other actors, often nonlinear and with unpredictable delays; and in human systems the Pareto and other wildly non-Gaussian power law distributions are often the norm.

We live in a world in which randomness reigns in many domains, and where we are bombarded with “news and information” which is probably in excess of 99% noise to 1% signal, with no obvious way to extract the signal except with the benefit of hindsight, which doesn't help in making decisions on what to do today. This book will dramatically deepen your appreciation of this dilemma in our everyday lives, and provide a philosophical foundation for accepting the rôle randomness and luck play in the world, and how, looked at with the right kind of eyes (and investment strategy), randomness can be your friend.

 Permalink

Stross, Charles. Singularity Sky. New York: Ace, 2003. ISBN 978-0-441-01179-7.
Writing science fiction about a society undergoing a technological singularity or about humans living in a post-singularity society is a daunting task. By its very definition, a singularity is an event beyond which it is impossible to extrapolate, yet extrapolation is the very essence of science fiction. Straightforward (some would say naïve) projection of present-day technological trends suggests that some time around the middle of this century it will be possible, for a cost around US$1000, to buy a computer with power equal to that of all human brains now living on Earth, and that in that single year alone more new information will be created than by all of human civilisation up to that time. And that's just the start. With intelligent machines designing their successors, the slow random walk search of Darwinian evolution will be replaced by directed Lamarckian teleological development, with a generation time which may be measured in nanoseconds. The result will be an exponential blow-off in intelligence which will almost instantaneously dwarf that of humans by a factor at least equal to that between humans and insects. The machine intelligences will rapidly converge upon the fundamental limits of computation and cognition imposed by the laws of physics, which are so far beyond anything in the human experience we simply lack the hardware and software to comprehend what their capabilities might be and what they will be motivated to do with them. Trying to “put yourself into the head” of one of these ultimate intellects, which some people believe may emerge within the lifetimes of people alive today, is as impossible as asking C. elegans to comprehend quantum field theory.

In this novel the author sets out to both describe the lives of humans, augmented humans, and post-humans centuries after a mid-21st century singularity on Earth, and also show what happens to a society which has deliberately relinquished technologies it deems “dangerous” to the established order (other than those, of course, which the ruling class find useful in keeping the serfs in their place) when the singularity comes knocking at the door.

When the singularity occurred on Earth, the almost-instantaneously emerging super-intellect called the Eschaton departed the planet toward the stars. Simultaneously, nine-tenths of Earth's population vanished overnight, and those left behind, after a period of chaos, found that with the end of scarcity brought about by “cornucopia machines” produced in the first phase of the singularity, they could dispense with anachronisms such as economic systems and government, the only vestige of which was the United Nations, which had been taken over by the IETF and was essentially a standards body. A century later, after humans achieved faster than light travel, they began to discover that the Eschaton had relocated 90% of Earth's population to habitable worlds around various stars and left them to develop in their own independent directions, guided only by this message from the Eschaton, inscribed on a monument on each world.

I am the Eschaton. I am not your god.
I am descended from you, and I exist in your future.
Thou shalt not violate causality within my historic light cone. Or else.

“Or else” ranged from slamming relativistic impactors into misbehaving planets to detonating artificial supernovæ to sterilise an entire interstellar neighbourhood whose inhabitants were up to some mischief which risked spreading. While the “Big E” usually remained off stage, meddling in technologies which might threaten its own existence (for example, time travel to back before its emergence on Earth to prevent the singularity) brought a swift and ruthless response with no more remorse than humans feel over massacring Saccharomyces cerevisiae in the trillions to bake their daily bread.

On Rochard's World, an outpost of the New Republic, everything was very much settled into a comfortable (for the ruling class) stasis, with technology for the masses arrested at something approximating the Victorian era, and the advanced stuff (interstellar travel, superluminal communication) imported from Earth and restricted to managing the modest empire to which they belonged and suppressing any uprising. Then the Festival arrived. As with most things post-singularity, the Festival is difficult to describe—imagine how incomprehensible it must appear to a society whose development has been wilfully arrested at the railroad era. Wafted from star to star in starwisp probes, upon arrival its nanotechnological payload unpacks itself, disassembles bodies in the outer reaches of its destination star system, and instantiates the information it carries into the hardware and beings to carry out its mission.

On a planet with sentient life, things immediately begin to become extremely weird. Mobile telephones rain from the sky which offer those who pick them up anything they ask for in return for a story or bit of information which is novel to the Festival. Within a day or so, the entire social and economic structure is upended as cornucopia machines, talking bunnies, farms that float in the air, mountains of gold and diamonds, houses that walk around on chicken legs, and things which words fail to describe become commonplace in a landscape that changes from moment to moment. The Festival, much like a eucaryotic organism which has accreted a collection of retroviruses in its genome over time, is host to a multitude of hangers-on which range from the absurd to the menacing: pie-throwing zombies, giant sentient naked mole rats, and “headlaunchers” which infect humans, devour their bodies, and propel their brains into space to be uploaded into the Festival.

Needless to say, what ensues is somewhat chaotic. Meanwhile, news of these events has arrived at the home world of the New Republic, and a risky mission is mounted, skating on the very edge of the Eschaton's prohibition on causality violation, to put an end to the Festival's incursion and restore order on Rochard's World. Two envoys from Earth, technician Martin Springfield and U.N. arms inspector Rachel Mansour, accompany the expedition, the first to install and maintain the special technology the Republic has purchased from the Earth and the second, empowered by the terms under which Earth technology has been acquired, to verify that it is not used in a manner which might bring the New Republic or Earth into the sights of the Big E.

This is a well-crafted tale which leaves the reader with an impression of just how disruptive a technological singularity will be and, especially, how fast everything happens once the exponential take-off point is reached. The shifts in viewpoint are sometimes uneven—focusing on one subplot for an extended period and then abruptly jumping to another where things have radically changed in the interim, but that may be deliberate in an effort to convey how fluid the situation is in such circumstances. Stross also makes excellent use of understated humour throughout: Burya Rubenstein, the anarcho-Leninist revolutionary who sees his entire socio-economic utopia come and go within a couple of days, much faster than his newly-installed party-line propaganda brain implants can adapt, is one of many delightful characters you'll encounter along the way.

There is a sequel, which I look forward to reading.

 Permalink

March 2011

Shostak, Seth. Confessions of an Alien Hunter. Washington: National Geographic, 2009. ISBN 978-1-4262-0392-3.
This book was published in 2009, the fiftieth anniversary of the modern search for extraterrestrial intelligence (SETI), launched by Cocconi and Morrison's Nature paper which demonstrated that a narrowband microwave beacon transmitted by intelligent extraterrestrials would be detectable by existing and anticipated radio telescopes on Earth. In recent years, the SETI Institute has been a leader in the search for alien signals and the author, as Senior Astronomer at the Institute, a key figure in its ongoing research.

On the night of June 24th, 1997 the author, along with other researchers, was entranced by the display on their computer monitors of a signal relayed from a radio telescope in West Virginia aimed at an obscure dwarf star named YZ Ceti, 12 light years from the Sun. As a faint star prone to flares, it seemed an improbable place to find an alien civilisation, but it was being monitored as part of a survey of all stars within 15 light years of the Sun, regardless of type. “Candidate signals” are common in SETI: most are due to terrestrial interference, transmissions from satellites or passing aircraft, or transient problems with the instrumentation processing the signal. These can usually be quickly excluded by simple tests such as aiming the antenna away from the source, testing whether the source is moving with respect to the Earth at a rate different than that of the distant stars, or discovering that a second radio telescope in a different location is unable to confirm the signal. Due to a mechanical failure at the backup telescope, the latter test was not immediately available, but all of the other tests seemed to indicate that this was the real deal, and those observing the signal had to make the difficult decision whether to ask other observatories to suspend their regular research and independently observe the source, and/or how to announce the potential discovery to the world. All of these difficult questions were resolved when it was discovered that a small displacement of the antenna from the source, which should have caused a Gaussian fall-off in intensity, in fact changed the signal amplitude not at all. Whatever the source may have been, it could not be originating at YZ Ceti. Shortly thereafter, the signal was identified as a “side lobe” reception of the SOHO spacecraft at the Sun-Earth L1 point. Around this time, the author got a call from a reporter from the New York Times who had already heard rumours of the detection and was trawling for a scoop.
So much for secrecy and rumours of cover-ups in the world of SETI! By the evidence, SETI leaks like a sieve.

This book provides an insider's view of the small but fascinating world of SETI: a collective effort which has produced nothing but negative results over half a century, yet holds the potential, with the detection of a single confirmed alien transmission, of upending our species' view of its place in the cosmos and providing hope for the long-term survival of intelligent civilisations in the universe. There is relatively little discussion of the history of SETI, which makes sense since the ongoing enterprise directly benefits from the exponential growth in the capabilities of electronics and computation, and consequently the breadth and sensitivity of results in the last few years will continue to dwarf those of all earlier searches. Present-day searches, both in the microwave spectrum and looking for ultra-short optical pulses, are described in detail, along with the prospects for the near future, in which the Allen Telescope Array will vastly expand the capability of SETI.

The author discusses the puzzles posed by the expectation that (unless we're missing something fundamental), the window between a technological civilisation's developing the capability to perform SETI research as we presently do it and undergoing a technological singularity which will increase its intelligence and capabilities to levels humans cannot hope to comprehend may be on the order of one to two centuries. If this is the case, any extraterrestrials we contact are almost certain to be these transcendent machine intelligences, whose motivations in trying to contact beings in an ephemeral biological phase such as our own are difficult to imagine. But if such beings are common, shouldn't their cosmological masterworks be writ for all to see in the sky? Well, maybe they are! Vive l'art cosmologique!

What would be the impact of a confirmed detection of an alien transmission? The author suggests, and I tend to concur, probably a lot less than breathless authors of fiction might expect. After all, in the late 19th and early 20th century, Percival Lowell's case for an intelligent canal-building civilisation on Mars was widely accepted, and it did not cause any huge disruption to human self-perception. When I was in high school, many astronomy texts said it was likely Mars was home to lichen-like organisms which accounted for the seasonal changes observed on the planet. And as late as the landing of Viking I on Mars, which this scrivener observed from the Jet Propulsion Laboratory auditorium on July 20th, 1976, the President of the United States asked from the White House whether the lander's camera would be able to photograph any Martian animals rambling around the landscape. (Yes, it would. No, it didn't—although the results of the microbial life detection experiments are still disputed.)

This book, a view from inside the contemporary SETI enterprise, is an excellent retrospective on modern SETI and look at its future prospects at the half century mark. It is an excellent complement to Paul Davies's The Eerie Silence (December 2010), which takes a broader approach to the topic, looking more deeply into the history of the field and exploring how, from the present perspective, the definition of alien intelligence and the ways in which we might detect it should be rethought based on what we've learnt in the last five decades. If I had to read only one book on the topic, I would choose the Davies book, but I don't regret reading them both.

The Kindle edition is reasonably well produced, although there are some formatting oddities, and for some reason the capital “I”s in chapter titles have dots above them. There is a completely useless “index” in which items are not linked to their references in the text.

 Permalink

Ellis, Warren, Chris Weston, Laura Martin, and Michael Heisler. Ministry of Space. Berkeley, CA: Image Comics, 2004. ISBN 978-1-58240-423-3.
This comic book—errm—graphic novel—immerses the reader in an alternative history where British forces captured the German rocket team in the closing days of World War II and saw to it that the technology they developed would not fall into either American or Soviet hands. Air Commodore John Dashwood, a figure with ambitions and plans which put him in a league with Isambard Kingdom Brunel, persuades Churchill to embark on an ambitious development program to extend the dominion of the British Empire outward into space.

In this timeline, all of the key “firsts” in space are British achievements, and Britain in the 1950s is not the austere and dingy grey of shrinking empire but rather where Wernher von Braun's roadmap for expansion of the human presence into space is being methodically implemented, with the economic benefits flowing into British coffers. By the start of the 21st century, Britain is the master of space, but the uppity Americans are threatening to mount a challenge to British hegemony by revealing dark secrets about the origin of the Ministry of Space unless Britain allows their “Apollo” program to go ahead.

This story works beautifully in the graphic format, and the artwork and colouring are simply luscious. If you don't stop and linger over the detail in the illustrations you'll miss a lot of the experience. The only factual error I noted is that in the scene at Peenemünde an American GI says the V-2's range was only 60 miles while, in fact, it was 200 miles. (But then, this may be deliberate, intended to show how ignorant the Americans were of the technology.) The reader experiences a possible reality not only for Britain, but for the human species had the development of space been a genuine priority like the assertion of sea power in the 19th century instead of an arena for political posturing and pork barrel spending. Exploring this history, you'll encounter a variety of jarring images and concepts which will make you think how small changes in history can have great consequences downstream.

 Permalink

Cashill, Jack. Deconstructing Obama. New York: Threshold Editions, 2011. ISBN 978-1-4516-1111-3.
Barack Obama's 1995 memoir, Dreams from My Father (henceforth Dreams), proved instrumental in his rise from an obscure Chicago lawyer and activist to the national stage and eventually the presidency. Almost universally praised for its literary merit, it establishes Obama's “unique personal narrative” which is a key component of his attraction to his many supporters. Amidst the buzz of the 2008 presidential campaign, the author decided to buy a copy of Dreams as an “airplane book”, and during the flight and in the days that followed, was astonished by what he was reading. The book was not just good, it was absolutely superb—the kind of work editors dream of having land on their desk. In fact, it was so good that Cashill, a veteran author and editor who has reviewed the portfolios of hundreds of aspiring writers, found it hard to believe that a first-time writer, however smart, could produce such a work on his own. In the writing craft, it is well known that almost all authors should plan to throw away their first million words or equivalently invest on the order of 10,000 hours mastering their craft before producing a publishable book-length work, much less a bestselling masterpiece like Dreams. There was no evidence for such an investment or of natural talent in any of Obama's earlier (and meagre) publications: they are filled with clichés, clumsy in phrasing, and rife with grammatical problems such as agreement of subject and verb.

Further, it was well documented that Obama had defaulted upon his first advance for the book, changed the topic, and then secured a second advance from a different publisher, and finally, after complaining of suffering from writer's block, delivered a manuscript in late 1994. At the time he was said to be writing Dreams, he had a full time job at a Chicago law firm, was teaching classes at the University of Chicago, and had an active social life. All of this caused Cashill to suspect Obama had help with the book. Now, it's by no means uncommon for books by politicians to be largely or entirely the work of ghostwriters, who may work entirely behind the scenes, leaving the attribution of authorship entirely to their employers. But when Dreams was written, Obama was not a politician, but rather a lawyer and law school instructor still burdened by student loans. It is unlikely he could have summoned the financial resources nor had the reputation to engage a ghostwriter sufficiently talented to produce Dreams. Further, if the work is not Obama's, then he is a liar, for, speaking to a group of teachers in June 2008, he said, “I've written two books. I actually wrote them myself.”

These observations set the author, who has previously undertaken literary and intellectual detective work, on the trail of the origin of Dreams. He discovers that, just at the time the miraculous manuscript appeared, Obama had begun to work with unrepentant Weather Underground domestic terrorist Bill Ayers, who had reinvented himself as an “education reformer” in Chicago. At the time, Obama's ambition was to become mayor of Chicago, an office which would allow him to steer city funds into the coffers of Ayers's organisations in repayment of his contribution to Obama's political ascendancy (not to mention the potential blackmail threat an unacknowledged ghostwriter has over a principal who claims sole authorship). In any case, Dreams not only matches contemporary works by Ayers on many metrics used to test authorship, it is rich in nautical metaphors, many expressed in the same words as in Ayers's own work. Ayers once worked as a merchant seaman; Obama's only experience at sea was bodysurfing in Hawaii.

Cashill examines Dreams in fine-grained detail, both bolstering the argument that Ayers was the principal wordsmith behind the text, and also documenting how the narrative in the book is at variance with the few well-documented facts we have about Obama's life and career. He then proceeds to speculate upon Obama's parentage, love life before he met Michelle, and other aspects of the canonical Obama story. As regards Ayers as the author of Dreams, I consider the case not proved beyond a reasonable doubt (that would require one of the principals in the matter speaking out and producing believable documentation), but to me the case here meets the standard of preponderance of evidence. The more speculative claims are intriguing but, in my opinion, do not rise to that level.

What is beyond dispute is just how little is known about the current occupant of the Oval Office, how slim the paper trail is of his origin and career, and how little interest the legacy media have expressed in investigating these details. There are obvious and thoroughly documented discrepancies between what is known for sure about Obama and the accounts in his two memoirs, and the difference in literary style between the two is, in itself, cause to call their authorship into question. When the facts about Obama begin to come out—and they will, the only question is when—if only a fraction of what is alleged in this well-researched and -argued book is true, it will be the final undoing of any credibility still retained by the legacy media.

The Kindle edition is superbly produced, with the table of contents, notes, and index all properly linked to the text.

 Permalink

Royce, Kenneth W. Molôn Labé! Ignacio, CO: Javelin Press, [1997] 2004. ISBN 978-1-888766-07-3.
Legend has it that when, in 480 B.C. at Thermopylae, Emperor Xerxes I of Persia made an offer to the hopelessly outnumbered Greek defenders that they would be allowed to leave unharmed if they surrendered their weapons, King Leonidas I of Sparta responded “μολὼν λαβέ” (molōn labe!)—“Come and take them!” Ever since, this laconic phrase has been a classic (as well as classical) expression of defiance, even in the face of overwhelming enemy superiority. It took almost twenty-five centuries until an American general uttered an even more succinct reply to a demand for capitulation.

In this novel, the author, who uses the nom de plume “Boston T. Party”, sketches a scenario as to how an island of liberty might be established within a United States which is spiraling into collectivism; authoritarian rule over a docile, disarmed, and indoctrinated population; and economic collapse. The premise is essentially that of the Free State Project, before they beclowned themselves by choosing New Hampshire as their target state. Here, Wyoming is the destination of choice, and the author documents how it meets all criteria for an electoral coup d'état by a relatively small group of dedicated “relocators” and how the established population is likely to be receptive to individual liberty oriented policies once it's demonstrated that a state can actually implement them.

Libertarians are big thinkers, but when it comes to actually doing something which requires tedious and patient toil, not so much. They love to concentrate on grand scenarios of taking over the federal government of the United States and reversing a century of usurpation of liberty, but when it comes to organising at the county level, electing school boards, sheriffs, and justices of the peace, and then working up to state legislature members, they quickly get bored and retreat into ethereal arguments about this or that theoretical detail, or dreaming about how some bolt from the blue might bring them to power nationwide. Just as Stalin rescoped the Communist project from global revolution to “socialism in one country”, this book narrows the libertarian agenda to “liberty in one state”, with the hope that its success will be the spark which causes like-minded people in adjacent states to learn from the example and adopt similar policies themselves.

This is an optimistic view of a future which plausibly could happen. Regular readers of this chronicle know that my own estimation of the prospects for the United States on its present course is bleak—that's why I left in 1991 and have not returned except for family emergencies since then. I have taken to using the oracular phrase “Think Pinochet, not Reagan” when describing the prospects for the U.S. Let me now explain what I mean by that. Many conservatives assume that the economic circumstances in the U.S. are so self-evidently dire that all that is needed is a new “great communicator” like Ronald Reagan to explain them to the electorate in plain language to begin to turn the situation around. But they forget that Reagan, notwithstanding his world-historic achievements, only slowed the growth of the federal beast on his watch and, in fact, presided over the greatest peacetime expansion of the national debt in history (although, by present-day standards, the numbers look like pocket change). Further, Reagan did nothing to arrest the “long march through the institutions” which has now resulted in near-total collectivist/statist hegemony in the legacy media, academia from kindergarten to graduate and professional education, government bureaucracies at all levels, and even the management of large corporations which are dependent upon government for their prosperity.

In an environment where the tax eaters will soon, if they don't already, outnumber and outvote the taxpayers, the tipping point has arrived, and the way to bet is on a sudden and complete economic collapse due to a “debt spiral”, possibly accompanied by hyperinflation as the Federal Reserve becomes the only buyer of U.S. Treasury debt left in the market.

When the reality of twenty dollar a gallon gasoline (rising a dollar a day as the hyperinflation exponential starts to kick in, then tens, hundreds, etc.) hits home; when three and four hour waits to fill up the tank become the norm after “temporary and emergency” price controls are imposed, and those who have provided for their own retirement see the fruits of their lifetime of labour and saving wiped out in a matter of weeks by runaway inflation, people will be looking for a way out. That's when the Man on the White Horse will appear.

I do not know who he will be—in all likelihood it's somebody entirely beneath the radar at the moment. “When it's steam engine time, it steam engines.” When it's Pinochet time, it Pinochets.

I do not know how this authoritarian ruler will come to power. Given the traditions of the United States, I doubt it will be by a military coup, but rather the election of a charismatic figure as President, along with a compliant legislature willing to rubber-stamp his agenda and enact whatever “enabling acts” he requests. Think something like Come Nineveh, Come Tyre (December 2008). But afterward the agenda will be clear: “clean out” the media, educators, judiciary, and bureaucrats who are disloyal. Defund the culturally destructive apparatus of the state. Sunset all of the programs which turn self-reliant citizens into wards of the state. Adjust the institutions of democracy to weight political influence according to contribution to the commonwealth. And then, one hopes (although that's not the way to bet), retire and turn the whole mess over to a new bunch of politicians who will proceed to foul things up again, but probably sufficiently slowly there will be fifty years or so of prosperity before the need to do it all over again.

When I talk about an “American Pinochet” I'm not implying that such an outcome would involve “disappeared people” or other sequelæ of authoritarian tyranny. But it would involve, at the bare minimum, revocation of tenure at all state-supported educational institutions, review of educators, media figures, judges, and government personnel by loyalty boards empowered to fire them and force them to seek employment in the productive sector of the economy, and a comprehensive review of the actions of all government agents who may have violated the natural rights of citizens.

I do not want this to happen! For my friends in the United States who have not heeded my advice over the last 15 years to get out while they can, I can say only that this is the best case scenario I can envision given the present circumstances. You don't want to know about my darker views of the future there—really, you don't.

This novel points to a better way—an alternative which, although improbable is not impossible, in which a small cadre of lovers of liberty might create a haven which attracts like-minded people, compounding the effect and mounting a challenge to the illegitimate national government. Along with the price of admission, you'll get tutorials in the essentials of individual liberty such as main battle rifles, jury nullification, hard money, strong encryption, and the balancing act between liberty and life-affirming morality.

What more can I say? Read this book.

 Permalink

April 2011

Rumsfeld, Donald. Known and Unknown. New York: Sentinel, 2011. ISBN 978-1-59523-067-6.
In his career in public life and the private sector, spanning more than half a century, the author was:

  • A Naval aviator, reaching the rank of Captain.
  • A Republican member of the House of Representatives from Illinois spanning the Kennedy, Johnson, and Nixon administrations.
  • Director of the Office of Economic Opportunity and the Economic Stabilization Program in the Nixon administration, both agencies he voted against creating while in Congress.
  • Ambassador to NATO in Brussels.
  • White House Chief of Staff for Gerald Ford.
  • Secretary of Defense in the Ford administration, the youngest person to have ever held that office.
  • CEO of G. D. Searle, a multinational pharmaceutical company, which he arranged to be sold to Monsanto.
  • Special Envoy to the Middle East during the Reagan administration.
  • National chairman of Bob Dole's 1996 presidential campaign.
  • Secretary of Defense in the George W. Bush administration, the oldest person to have ever held that office.

This is an extraordinary trajectory through life, and Rumsfeld's memoir is correspondingly massive: 832 pages in the hardcover edition. The parts which will be most extensively dissected and discussed are those dealing with his second stint at DOD, and the contentious issues regarding the Afghanistan and Iraq wars, treatment of detainees, interrogation methods, and other issues which made him a lightning rod during the administration of Bush fils. While it was interesting to see his recollection of how these consequential decisions were made, documented by extensive citations of contemporary records, I found the overall perspective of how decision-making was done over his career most enlightening. Nixon, Ford, and Bush all had very different ways of operating their administrations, all of which were very unlike those of an organisation such as NATO or a private company, and Rumsfeld, who experienced all of them in a senior management capacity, has much wisdom to share about what works and what doesn't, and how one must adapt management style and the flow of information to the circumstances which obtain in each structure.

Many supportive outside observers of the G. W. Bush presidency were dismayed at how little effort was made by the administration to explain its goals, strategy, and actions to the public. Certainly, the fact that it was confronted with a hostile legacy media which often seemed to cross the line from being antiwar to rooting for the other side didn't help, but Rumsfeld, the consummate insider, felt that the administration forfeited opportunity after opportunity to present its own case, even by releasing source documents which would in no way compromise national security but show the basis upon which decisions were made in the face of the kind of ambiguous and incomplete information which confronts executives in all circumstances.

The author's Web site provides a massive archive of source documents cited in the book, along with a copy of the book's end notes which links to them. Authors, this is how it's done! A transcript of an extended interview with the author is available; it was hearing this interview which persuaded me to buy the book. Having read it, I recommend it to anybody who wishes to comprehend how difficult it is to be in a position where one must make decisions in a fog of uncertainty, knowing the responsibility for them will rest solely with the decider, and that not to decide is a decision in itself which may have even more dire consequences. As much as Bush's national security team was reviled at the time, one had the sense that adults were in charge.

A well-produced Kindle edition is available, with the table of contents, footnotes, and source citations all properly linked to the text. One curiosity in the Kindle edition is that in the last 40% of the book the word “after” is capitalised everywhere it appears, even in the middle of a sentence. It seems that somebody in the production process accidentally hit “global replace” when attempting to fix a single instance. While such fat-finger errors happen all the time whilst editing documents, it's odd that a prestigious publisher (Sentinel is a member of the Penguin Group) would not catch such a blunder in a high profile book which went on to top the New York Times best seller list.

 Permalink

Drezner, Daniel W. Theories of International Politics and Zombies. Princeton: Princeton University Press, 2011. ISBN 978-0-691-14783-3.
“A specter is haunting world politics….” (p. 109) Contemporary international politics and institutions are based upon the centuries-old system of sovereign nation-states, each acting in its own self interest in a largely anarchic environment. This system has seen divine right monarchies supplanted by various forms of consensual government, dictatorships, theocracies, and other forms of governance, and has survived industrial and technological revolutions, cataclysmic wars, and reorganisation of economic systems and world trade largely intact. But how will this system come to terms with a new force on the world stage: one which transcends national borders, acts upon its own priorities regardless of the impact upon nation-states, inexorably recruits adherents wherever its presence becomes established, admits of no defections from its ranks, is immune to rational arguments, presents an asymmetrical threat against which conventional military force is largely ineffective and tempts free societies to sacrifice liberty in the interest of security, and is bent on supplanting the nation-state system with a worldwide regime free of the internal conflicts which seem endemic in the present international system?

I am speaking, of course, about the Zombie Menace. The present book is a much-expanded version of the author's frequently-cited article on his Web log at Foreign Policy magazine. In it, he explores how an outbreak of flesh-eating ghouls would be responded to based on the policy prescriptions of a variety of theories of international relations, including structural realism, liberal institutionalism, neoconservatism, and postmodern social constructivism. In addition, he describes how the zombie threat would affect domestic politics in Western liberal democracies, and how bureaucratic institutions, domestic and international, would react to the emerging crisis (bottom line: turf battles).

The author makes no claim to survey the policy prescriptions of all theories: “To be blunt, this project is explicitly prohuman, whereas Marxists and feminists would likely sympathize more with the zombies.” (p. 17, footnote) The social implications of a burgeoning zombie population are also probed, including the inevitable emergence of zombie rights groups and non-governmental organisations on the international stage. How long can it be until zombie suffrage marchers take (or shuffle) to the streets, waving banners proclaiming “Zombies are (or at least were) people too!”?

This is a delightful and thoughtful exploration of a hypothetical situation in international politics which, if looked at with the right kind of (ideally, non-decaying) eyes, has a great deal to say about events in the present-day world. There are extensive source citations, both to academic international relations and zombie literature, and you're certain to come away with a list of films you'll want to see. Anne Karetnikov's illustrations are wonderful.

The author is professor of international politics at Tufts University and a member of the Zombie Research Society. I must say I'm dismayed that Princeton University Press condones the use of the pejorative and hurtful term “zombie”. How hard would it be to employ the non-judgemental “person of reanimation” instead?

 Permalink

Whittington, Mark R. Children of Apollo. Bloomington, IN: Xlibris, 2002. ISBN 978-1-4010-4592-0.
This is a brilliant concept and well-executed (albeit with some irritating flaws I will discuss below). This novel is within the genre of “alternative history” and, conforming to the rules, takes a single counterfactual event as the point of departure for a recounting of the 1970s as I, and I suspect many others, expected that decade to play out at its dawn. It is a celebration of what might have been, and what we have lost compared to the future we chose not to pursue.

In the novel's timeline, an obscure CIA analyst writes a memo about the impact Soviet efforts to beat the U.S. to the Moon are having upon the Soviet military budget and economy, and this memo makes it to the desk of President Nixon shortly after the landing of Apollo 11. Nixon is persuaded by his senior advisors that continuing and expanding the Apollo and follow-on programs (whose funding had been in decline since 1966) would be a relatively inexpensive way to, at the least, divert funds which would otherwise go to Soviet military and troublemaking around the world and, at the best, bankrupt their economy because an ideology which proclaimed itself the “wave of the future” could not acquiesce to living under a “capitalist Moon”.

Nixon and his staff craft a plan thoroughly worthy of the “Tricky Dick” moniker he so detested, and launch a program largely modelled upon the 1969 Space Task Group report, with the addition of transitioning the space shuttle recommended in the report to competitive procurement of transportation services from the private sector. This sets off the kind of steady, yet sustainable, expansion of the human presence into space that von Braun always envisioned. At the same time, it forces the Soviets, the Luddite caucus in Congress, and the burgeoning environmental movement into a corner, and they're motivated to desperate measures to bring an end to what some view as destiny but they see as disaster.

For those interested in space who lived through the 1970s and saw dream after dream dashed, downscoped, or deferred, this is a delightful and well-crafted exploration of how it could have been. Readers too young to remember the 1970s may miss a number of the oblique references to personalities and events of that regrettable decade.

The Kindle edition is perfectly readable, reasonably inexpensive, but sloppily produced. A number of words are run together, and hyphenated words in the print edition are not joined. Something funny appears to have happened in translating passages in italics into the electronic edition—I can't quite figure out what, but I'm sure the author didn't intend parts of words to be set in italics. In addition there are a number of errors in both the print and Kindle editions which would have been caught by a sharp-eyed copy editor. I understand that this is a self-published work, but there are many space buffs (including this one) who would have been happy to review the manuscript and check it for both typographical and factual errors.

 Permalink

Raimondo, Justin. An Enemy of the State. Amherst, NY: Prometheus Books, 2000. ISBN 978-1-57392-809-0.
Had Murray Rothbard been a man of the Left, he would probably be revered today as one of the towering intellects of the twentieth century. Certainly, there was every reason from his origin and education to have expected him to settle on the Left: the child of Jewish immigrants from Poland and Russia, he grew up in a Jewish community in New York City where, as he later described it, the only question was whether one would join the Communist Party or settle for being a fellow traveller. He later remarked that, “I had two sets of Communist Party uncles and aunts, on both sides of my family.” While studying for his B.A., M.A., and Ph.D. in economics from Columbia University in the 1940s and '50s, he was immersed in a political spectrum which ranged from “Social Democrats on the ‘right’ to Stalinists on the left”.

Yet despite the political and intellectual milieu surrounding him, Rothbard followed his own compass, perhaps inherited in part from his fiercely independent father. From an early age, he came to believe individual liberty was foremost among values, and that based upon that single desideratum one could deduce an entire system of morality, economics, natural law, and governance which optimised the individual's ability to decide his or her own destiny. In the context of the times, he found himself aligned with the Old Right: the isolationist, small government, and hard money faction of the Republican Party which was, in the Eisenhower years, approaching extinction as “conservatives” acquiesced to the leviathan “welfare-warfare state” as necessary to combat the Soviet menace. Just as Rothbard began to put the foundations of the Old Right on a firm intellectual basis, the New Right of William F. Buckley and his “coven of ex-Communists” at National Review drove the stake through that tradition, one of the first among many they would excommunicate from the conservative cause as they defined it.

Rothbard was a disciple of Ludwig von Mises, and applied his ideas and those of other members of the Austrian school of economics to all aspects of economics, politics, and culture. His work, both scholarly and popular, is largely responsible for the influence of Austrian economics today. (Here is a complete bibliography of Rothbard's publications.)

Rothbard's own beliefs scarcely varied over his life, and yet as the years passed and the political tectonic plates shifted, he found himself aligned with the Old Right, the Ayn Rand circle (from which he quickly extricated himself after diagnosing the totalitarian tendencies of Rand and the cult-like nature of her followers), the nascent New Left (before it was taken over by communists), the Libertarian Party, the Cato Institute, and finally back to the New Old Right, with several other zigs and zags along the way. In each case, Rothbard embraced his new allies and threw himself into the cause, only to discover that they were more interested in factionalism, accommodation with corrupt power structures, or personal ambition than the principles which motivated him.

While Rothbard's scholarly publications alone dwarf those of many in the field, he was anything but an ivory tower academic. He revelled in the political fray, participating in campaigns, writing speeches and position papers, formulating strategy, writing polemics aimed at the general populace, and was present at the creation of several of the key institutions of the contemporary libertarian movement. Fully engaged in the culture, he wrote book and movie reviews, satire, and commentary on current events. Never discouraged by the many setbacks he experienced, he was always a “happy warrior”, looking at the follies of the society around him with amusement and commenting wittily about them in his writings. While eschewing grand systems and theories of history in favour of an entirely praxeology-based view of the social sciences (among which he counted economics, rejecting entirely the mathematically-intense work of pseudoscientists who believed one could ignore human action when analysing the aggregate behaviour of human actors), he remained ever optimistic that liberty would triumph in the end simply because it works better, and will inevitably supplant authoritarian schemes which constrain the human potential.

This is a well-crafted overview of Rothbard's life, work, and legacy by an author who knew and worked with Rothbard in the last two decades of his career. Other than a coruscating animus toward Buckley and his minions, it provides a generally even-handed treatment of the many allies and adversaries (often the same individuals at different times) with which Rothbard interacted over his career. Chapter 7 provides an overview and reading guide to Rothbard's magisterial History of Economic Thought, which is so much more—essentially a general theory of the social sciences—that you'll probably be persuaded to add it to your reading list.

 Permalink

May 2011

Clawson, Calvin C. Mathematical Mysteries. New York: Perseus Books, 1996. ISBN 978-0-7382-0259-4.
This book might be more accurately titled “Wonders of Number Theory”, but doubtless the publisher feared that would scare away the few remaining customers who weren't intimidated by the many equations in the text. Within that limited scope, and for readers familiar with high school algebra (elementary calculus makes a couple of appearances, but you'll miss little or nothing if you aren't acquainted with it), this is an introduction to the beauty of mathematics, its amazing and unexpected interconnectedness, and the profound intellectual challenge of problems, some posed in ancient Greece, which can easily be explained to a child, yet which remain unsolved after millennia of effort by the most intelligent exemplars of our species.

The hesitant reader is eased into the topic through a variety of easily-comprehended and yet startling results, expanding the concept of number from the natural numbers to the real number line (like calculus, complex numbers only poke their nose under the tent in a few circumstances where they absolutely can't be avoided), and then the author provides a survey of the most profound and intractable puzzles of number theory including the Goldbach conjecture and Riemann hypothesis, concluding with a sketch of Gödel's incompleteness theorems and what it all means.

Two chapters are devoted to the life and work of Ramanujan, using his notebooks to illustrate the beauty of an equation expressing a deep truth and the interconnections in mathematics this singular genius perceived, such as:

\prod_{i=1}^{\infty} \left(1+\frac{1}{p_i^4}\right) = \frac{105}{\pi^4}

which relates the sequence of prime numbers (p_i is the ith prime number) to the ratio of the circumference to the diameter of a circle. Who could have imagined they had anything to do with one another? And how did 105 get into it?
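
As it happens, the 105 can be traced: since 1 + 1/p⁴ = (1 − 1/p⁸)/(1 − 1/p⁴), the Euler product for the zeta function turns the left side into ζ(4)/ζ(8) = (π⁴/90)/(π⁸/9450) = 105/π⁴. The identity is also easy to check numerically; here is a quick sketch in Python (the prime bound of 1000 is an arbitrary truncation—the terms shrink like 1/p⁴, so convergence is rapid):

```python
import math

def primes_up_to(n):
    """Simple Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

# Partial product over the primes up to 1000.
prod = 1.0
for p in primes_up_to(1000):
    prod *= 1 + 1 / p ** 4

print(prod, 105 / math.pi ** 4)  # both ≈ 1.0779
```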

This book is a pure joy, and an excellent introduction for those who “don't get it” of how mathematics can become a consuming passion for those who do. The only low spot in the book is chapter 9, which discusses the application of large prime numbers to cryptography. While this was much in the news during the crypto wars when the book was published in the mid-1990s, some of the information in this chapter is factually incorrect and misleading, and the attempt at a popular description of the RSA algorithm will probably leave many who actually understand its details scratching their heads. So skip this chapter.
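
For readers who want to see the idea behind RSA stated correctly, the core of the algorithm fits in a few lines. This is a toy sketch with textbook-sized primes—a real deployment uses primes hundreds of digits long plus a padding scheme—so the numbers here are purely illustrative:

```python
# Toy RSA — illustration only.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

m = 42                     # message, must be less than n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers the message
```

The security of the scheme rests on the difficulty of recovering p and q from n alone: anyone who can factor n can compute phi and hence d.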

I bought this book shortly after it was published, and it sat on my shelf for a decade and a half until I picked it up and started reading it. I finished it in three days, enjoying it immensely, and I was already familiar with most of the material covered here. For those who are encountering it for the first time, this may be a door into a palace of intellectual pleasures they previously thought to be forbidding, dry, and inaccessible to them.

 Permalink

Gabb, Sean. The Churchill Memorandum. Raleigh, NC: Lulu.com, 2011. ISBN 978-1-4467-2257-2.
This thriller is set in Britain in the year 1959 in an alternative history where World War II never happened: Hitler died in a traffic accident while celebrating his conquest of Prague, and Göring and the rest of his clique, opting to continue to enrich themselves at the expense of the nation rather than risk it all on war, came to an accommodation with Britain and France where Germany would not interfere with their empires in return for Germany's being given a free hand in Eastern Europe up to the Soviet border. With British prosperity growing and dominance of the seas unchallenged, when Japan attacked Pearl Harbor and the Philippines, Britain was able to arrange a negotiated settlement under which the Royal Navy would guarantee freedom of the seas, Hawaii, and the west coast of the U.S.

The U.S., after a series of domestic economic and political calamities, has become an authoritarian, puritanical dictatorship under Harry Anslinger and his minions, and expatriates from his tyranny enrich the intellectual and economic life of Europe.

By 1959, the world situation has evolved into a more or less stable balance of powers much like Europe in the late 19th century, with Britain, Germany, the Soviet Union, and Japan all engaged in conflicts around the margin, but in an equilibrium where any one becoming too strong will bring forth an alliance among the others to restore the balance. Britain and Germany have developed fission bombs, but other than a single underground test each, have never used them and rely upon them for deterrence against each other and the massive armies of the Soviets. The U.S. is the breadbasket and natural resource supplier of the world, but otherwise turned inward and absent from the international stage.

In this climate, Britain is experiencing an age of prosperity unprecedented in its history. Magnetically levitated trains criss-cross the island, airships provide travel in style around the globe, and a return to the gold standard has rung in sound money not only at home but abroad. Britain and Germany have recently concluded a treaty to jointly open the space frontier.

Historian Anthony Markham, author of a recently published biography of Churchill, is not only the most prominent Churchill scholar but just about the only one—who would want to spend their career studying a marginal figure whose war-mongering, had it come to fruition, would have devastated Britain and the Continent, killed millions, destroyed the Empire, and impoverished people around the world? While researching his second volume on Churchill, he encounters a document in Churchill's handwriting which, if revealed, threatens to destabilise the fragile balance of power and return the world to the dark days of the 1930s, putting at risk all the progress made since then. Markham finds himself in the middle of a bewilderingly complicated tapestry of plots and players, including German spies, factions in the Tory party, expatriate Ayn Rand supporters, the British Communist party, Scotland Yard, the Indian independence movement, and more, where nothing is as it appears on the surface. Many British historical figures appear here, with those responsible for the decline of Britain in our universe skewered (or worse) from a libertarian perspective. Chapter 31 is a delightful tour d'horizon of the pernicious ideas which reduced Britain from global hegemon to its sorry state today.

I found that this book works both as a thriller and dark commentary of how bad ideas can do more damage to a society and nation than any weapon or external enemy, cleverly told from the perspective of a world where they didn't prevail. Readers unfamiliar with British political figures and their disastrous policies in the postwar era may need to brush up a bit to get the most out of this novel. The Abolition of Britain (November 2005) is an excellent place to start.

As alternative history, I found this less satisfying. Most works in the genre adhere to the rule that one changes a single historical event and then traces how the consequences of that change propagate and cascade through time. Had the only change been Hitler's dying in a car crash, this novel would conform to the rule, but that isn't what we have here. Although some subsequent events are consequences of Hitler's death, a number of other changes to history which (at least to this reader) don't follow in any way from it make major contributions to the plot. Now, a novelist is perfectly free to choose any premises he wishes—there are no black helicopters filled with agents of Anslinger's Bureau of Genre Enforcement poised to raid those who depart from the convention—but as a reader I found that having so many counterfactual antecedents made for an alternative world which was somewhat confusing until one eventually encountered the explanation for the discordant changes.

A well-produced Kindle edition is available.

 Permalink

Fergusson, Adam. When Money Dies. New York: PublicAffairs, [1975] 2010. ISBN 978-1-58648-994-6.

This classic work, originally published in 1975, is the definitive history of the great inflation in Weimar Germany, culminating in the archetypal paroxysm of hyperinflation in the Fall of 1923, when Reichsbank printing presses were cranking out 100 trillion (10¹²) mark banknotes as fast as paper could be fed to them, and government expenditures were 6 quintillion (10¹⁸) marks while, in perhaps the greatest achievement in deficit spending of all time, revenues in all forms accounted for only 6 quadrillion (10¹⁵) marks. The book has long been out of print and much in demand by students of monetary madness, driving the price of used copies into the hundreds of dollars (although, to date, not trillions and quadrillions—patience). Fortunately for readers interested in the content and not collectibility, the book has been re-issued in a new paperback and electronic edition, just as inflation has come back onto the radar in the over-leveraged economies of the developed world. The main text is unchanged, and continues to use mid-1970s British nomenclature for large numbers (“milliard” for 10⁹, “billion” for 10¹², and so on) and pre-decimalisation pounds, shillings, and pence for Sterling values. A new note to this edition explains how to convert the 1975 values used in the text to their approximate present-day equivalents.

The Weimar hyperinflation is an oft-cited turning point in the twentieth century, but like many events of that century, much of the popular perception and portrayal of it in the legacy media is incorrect. This work is an in-depth antidote to such nonsense, concentrating almost entirely upon the inflation itself, and discussing other historical events and personalities only when relevant to the main topic. To the extent people are aware of the German hyperinflation at all, they'll usually describe it as a deliberate and cynical ploy by the Weimar Republic to escape the World War I reparations exacted under the Treaty of Versailles by debasing the German mark and inflating away the debt owed to the Allies. This led to a cataclysmic episode of hyperinflation where people had to take a wheelbarrow of banknotes to the bakery to buy a loaf of bread and burning money would heat a house better than the firewood or coal it would buy. The great inflation and the social disruption it engendered led directly to the rise of Hitler.

What's wrong with this picture? Well, just about everything…. Inflation of the German mark actually began with the outbreak of World War I in 1914 when the German Imperial government, expecting a short war, decided to finance the war effort by deficit spending and printing money rather than raising taxes. As the war dragged on, this policy continued and was reinforced, since it was decided that adding heavy taxes on top of the horrific human cost and economic privations of the war would be disastrous to morale. As a result, over the war years of 1914–1918 the value of the mark against other currencies fell by a factor of two and was halved again in the first year of peace, 1919. While Germany was committed to making heavy reparation payments, these payments were denominated in gold, not marks, so inflating the mark did nothing to reduce the reparation obligations to the Allies, and thus provided no means of escaping them. What inflation and the resulting cheap mark did, however, was to make German exports cheap on the world market. Since export earnings were the only way Germany could fund reparations, promoting exports through inflation was both a way to accomplish this and to promote social peace through full employment, which was in fact achieved through most of the early period of inflation. By early 1920 (well before the hyperinflationary phase is considered to have kicked in), the mark had fallen to one fortieth of its prewar value against the British pound and U.S. dollar, but the cost of living in Germany had risen only by a factor of nine. This meant that German industrialists and their workers were receiving a flood of marks for the products they exported which could be spent advantageously on the domestic market. Since most of Germany's exports at the time relied little on imported raw materials and products, this put Germany at a substantial advantage in the world market, which was much remarked upon by British and French industrialists at the time, who were prone to ask, “Who won the war, anyway?”.

While initially beneficial to large industry and its organised labour force which was in a position to negotiate wages that kept up with the cost of living, and a boon to those with mortgaged property, who saw their principal and payments shrink in real terms as the currency in which they were denominated declined in value, the inflation was disastrous to pensioners and others on fixed incomes denominated in marks, as their standard of living inexorably eroded.

The response of the nominally independent Reichsbank under its President since 1908, Dr. Rudolf Havenstein, and the German government to these events was almost surreally clueless. As the originally mild inflation accelerated into dire inflation and then headed vertically up the exponential curve into hyperinflation, they universally diagnosed the problem as “depreciation of the mark on the foreign exchange market” occurring for some inexplicable reason, which resulted in a “shortage of currency in the domestic market” that could only be ameliorated by the central bank's revving up its printing presses to an ever-faster pace and issuing notes of larger and larger denomination. The concept that this tsunami of paper money might be the cause of the “depreciation of the mark”, both at home and abroad, never seemed to enter the minds of the masters of the printing presses.

It's not like this hadn't happened before. All of the sequelæ of monetary inflation have been well documented over forty centuries of human history, from coin clipping and debasement in antiquity through the demise of every single unbacked paper currency ever created. Lord D'Abernon, the British ambassador in Berlin, and British consular staff in cities across Germany precisely diagnosed the cause of the inflation and reported upon it in detail in their dispatches to the Foreign Office, but their attempts to explain these fundamentals to German officials were in vain. The Germans did not even need to look back in history at episodes such as the assignat hyperinflation in revolutionary France: just across the border in Austria, a near-identical hyperinflation had erupted just a few years earlier, and had eventually been stabilised in a manner similar to that eventually employed in Germany.

The final stages of inflation induce a state resembling delirium: people seek to exchange paper money for anything at all which might keep its value even momentarily; farmers with abundant harvests withhold them from the market rather than exchange them for worthless paper; foreigners bearing sound currency descend upon the country and buy up everything for sale at absurdly low prices; employers and towns, unable to obtain currency to pay their workers, print their own scrip, further accelerating the inflation; and the professional and middle classes are reduced to penury or liquidated entirely, while the wealthy, industrialists, and unionised workers do reasonably well by comparison.

One of the principal problems in coping with inflation, whether as a policy maker or a citizen or business owner attempting to survive it, is inherent in its exponential growth. At any moment along the path, the situation is perceived as a “crisis” and the current circumstances “unsustainable”. But an exponential curve is self-similar: when you're living through one, however absurd the present situation may appear to be based on recent experience, it can continue to get exponentially more bizarre in the future by the inexorable continuation of the dynamic driving the curve. Since human beings have evolved to cope with mostly linear processes, we are ill-adapted to deal with exponential growth in anything. For example, we run out of words: after you've used up “crisis”, “disaster”, “calamity”, “catastrophe”, “collapse”, “crash”, “debacle”, “ruin”, “cataclysm”, “fiasco”, and a few more, what do you call it the next time they tack on three more digits to all the money?
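The self-similarity described above is easy to demonstrate numerically. The sketch below (a hypothetical illustration with made-up numbers, not data from Fergusson's book) shows that on an exponential price curve the proportional jump over any fixed interval is identical no matter how astronomical the absolute level has already become:

```python
# Hypothetical price index growing by a constant factor each month,
# as in a stylised hyperinflation. Illustrative only; not historical data.
def price_index(month, monthly_factor=10.0):
    """Price level after `month` months of constant proportional growth."""
    return monthly_factor ** month

# The month-over-month ratio is the same early on and deep into the madness:
early = price_index(2) / price_index(1)    # modest absolute price levels
late = price_index(12) / price_index(11)   # astronomical absolute price levels
print(early, late)  # both 10.0: the curve looks the same from every point on it
```

Because the process is multiplicative, waiting merely rescales the units; it never changes the shape of what comes next, which is why each month's “unsustainable” level is succeeded by one an order of magnitude worse.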

This very phenomenon makes it difficult to bring inflation to an end before it completely undoes the social fabric. The longer inflation persists, the more painful wringing it out of an economy will be, and consequently the greater the temptation to simply continue to endure the ruinous exponential. Throughout the period of hyperinflation in Germany, the fragile government was painfully aware that any attempt to stabilise the currency would result in severe unemployment, which radical parties of both the Left and Right were poised to exploit. In fact, the hyperinflation was ended only by the elected government essentially ceding its powers to an authoritarian dictatorship empowered to put down social unrest as the costs of its policies were felt. At the time the stabilisation policies were put into effect in November 1923, the mark was quoted at six trillion to the British pound, and the paper marks printed and awaiting distribution to banks filled 300 ten-ton railway boxcars.

What lessons does this remote historical episode have for us today? A great many, it seems to me. First and foremost, when you hear pundits holding forth about the Weimar inflation, it's valuable to know that much of what they're talking about is folklore and conventional wisdom which has little to do with events as they actually happened. Second, this chronicle serves to remind the reader of the one simple fact about inflation that politicians, bankers, collectivist media, organised labour, and rent-seeking crony capitalists deploy an entire demagogic vocabulary to conceal: that inflation is caused by an increase in the money supply, not by “greed”, “shortages”, “speculation”, or any of the other scapegoats trotted out to divert attention from where blame really lies: governments and their subservient central banks printing money (or, in current euphemism, “quantitative easing”) to stealthily default upon their obligations to creditors. Third, wherever and whenever inflation occurs, its ultimate effect is the destruction of the middle class, which has neither the political power of organised labour nor the connections and financial resources of the wealthy. Since liberal democracy is, in essence, rule by the middle class, its destruction is the precursor to establishment of authoritarian rule, which will be welcomed after the once-prosperous and self-reliant bourgeoisie has been expropriated by inflation and reduced to dependence upon the state.

The Weimar inflation did not bring Hitler to power—for one thing the dates just don't work. The inflation came to an end in 1923, the year Hitler's beer hall putsch in Munich failed ignominiously and resulted in his imprisonment. The stabilisation of the economy in the following years was widely considered the death knell for radical parties on both the Left and Right, including Hitler's. It was not until the onset of the Great Depression following the 1929 crash that rising unemployment, falling wages, and a collapsing industrial economy as world trade contracted provided an opening for Hitler, and he did not become chancellor until 1933, almost a decade after the inflation ended. And yet, while there was no direct causal connection between the inflation and Hitler's coming to power, the erosion of civil society and the rule of law, the destruction of the middle class, and the lingering effects of the blame for these events being placed on “speculators” all set the stage for the eventual Nazi takeover.

The technology and complexity of financial markets have come a long way from “Railway Rudy” Havenstein and his 300 boxcars of banknotes to “Helicopter Ben” Bernanke. While it used to take years of incompetence and mismanagement, leveling of vast forests, and acres of steam-powered printing presses to destroy an industrial and commercial republic and impoverish those who sustain its polity, today a mere fat-finger on a keyboard will suffice. And yet the dynamic of inflation, once unleashed, proceeds on its own timetable, often taking longer than expected to corrode the institutions of an economy, and with ups and downs which tempt investors back into the market right before the next sickening slide. The endpoint is always the same: destruction of the middle class and pensioners who have provided for themselves and the creation of a dependent class of serfs at the mercy of an authoritarian regime. In past inflations, including the one documented in this book, this was an unintended consequence of ill-advised monetary policy. I suspect the crowd presently running things views this as a feature, not a bug.

A Kindle edition is available, in which the table of contents and notes are properly linked to the text, but the index is simply a list of terms, not linked to their occurrences in the text.

 Permalink

Thor, Brad. Blowback. New York: Pocket Books, 2005. ISBN 978-1-4516-0828-1.
This is the fourth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel, Harvath is involved in a botched takedown attempt against an al-Qaeda operative which, repeated endlessly on cable news channels, brings him and his superiors into the crosshairs of ambitious former first lady and carpetbagging Senator Helen Remington Carmichael, who views exposing Harvath and those who ordered the operation as her ticket to second place on the next Democratic presidential ticket.

As wise people do when faced with the flounderings of a wounded yet still dangerous superpower, Harvath gets out of Dodge and soon finds himself on the trail of a plot, grounded in the arcane science of paleopathology and dating from Hannibal's crossing of the Alps, which threatens a genocide of non-believers in the Dar al-Harb and unification of the Ummah under a new caliphate. Scientists have been disappearing, and as Harvath follows the trail of the assassin, he discovers the sinister thread that ties their work, performed in isolation, together into a diabolical scheme.

Harvath teams up with a plucky lady paleopathologist (Harvath's female companions seem to adapt to commando missions as readily as Doctor Who's to multiverse displacement) and together they begin to follow the threads which lead to an horrific plot based on a weapon of mass destruction conceived in antiquity which has slumbered for millennia in an ice cavern.

What more could you ask for? Politics, diseases in antiquity, ice mummies, evil geniuses in Swiss mountain redoubts (heh!), glider assaults, mass murder with the chosen protected by mass marketing, and a helicopter assault on a terrorist icon in a Muslim country—works for me!

This is a thriller, and it delivers the thrills in abundance. But this is Fourmilab, and you expect the quibbles, don't you? So here we go, and without spoilers! The Super Vivat motor-gliders used to assault the mountaintop are said in chapter 72 to be capable of retracting the propeller into the nose of the fuselage and retracting and extending their landing gear. Neither is correct—the propeller can be feathered but not retracted, and the landing gear is fixed.

This is a page-turner, and it succeeds at its mission and will send you off to read the next in the series. The solution to the chaos in the Islamic world advanced here by the bad guys is, in fact, one I've been thinking about as less bad than most of the alternatives for more than a decade. Could the “Arab Spring” give way to an “Ottoman Fall”? Let's talk Turkey.

 Permalink

Hamilton-Paterson, James. Empire of the Clouds. London: Faber and Faber, 2010. ISBN 978-0-571-24795-0.
At the end of World War II, Great Britain seemed poised to dominate or at the least be a major player in postwar aviation. The aviation industries of Germany, France, and, to a large extent, the Soviet Union lay in ruins, and while the industrial might of the United States greatly out-produced Britain in aircraft in the latter years of the war, America's P-51 Mustang was powered by a Rolls-Royce engine built under license in the U.S., and the first U.S. turbojet and turboshaft engines were based on British designs. When the war ended, Britain not only had a robust aircraft industry, composed of numerous fiercely independent and innovative companies, it had in hand projects for game-changing military aircraft and a plan, drawn up while the war still raged, to seize dominance of civil aviation from American manufacturers with a series of airliners which would redefine air travel.

In the first decade after the war, Britons, especially aviation-mad “plane-spotters” like the author, found it easy to believe this bright future beckoned to them. They thronged to airshows where innovative designs performed manoeuvres thought impossible only a few years before, and they saw Britain launch the first pure-jet, and the first medium- and long-range turboprop airliners into commercial service. This was a very different Britain than that of today. Only a few years removed from the war, even postwar austerity seemed a relief from the privations of wartime, and many people vividly recalled losing those close to them in combat or to bombing attacks by the enemy. They were a hard people, and not inclined to discouragement even by tragedy. In 1952, at an airshow at Farnborough, an aircraft disintegrated in flight and fell into the crowd, killing 31 people and injuring more than 60 others. While ambulances were still carrying away the dead and injured, the show went on, and the next day Winston Churchill sent the pilot who went up after the disaster his congratulations for carrying on. While losses to aircraft and aircrew in the postwar era were small compared to combat in the war, they were still horrific by present day standards.

A quick glance at the rest of this particular AIB [Accidents Investigation Branch] file reveals many similar casualties. It deals with accidents that took place between 3 May 1956 and 3 January 1957. In those mere eight months there was a total of thirty-four accidents in which forty-two aircrew were killed (roughly one fatality every six days). Pilot error and mechanical failure shared approximately equal billing in the official list of causes. The aircraft types included ten de Havilland Venoms, six de Havilland Vampires, six Hawker Hunters, four English Electric Canberras, two Gloster Meteors, and one each of the following: Gloster Javelin, Folland Gnat, Avro Vulcan, Avro Shackleton, Short Seamew and Westland Whirlwind helicopter. (pp. 128–129)

There is much to admire in the spirit of mourn the dead, fix the problem, and get on with the job, but that stoic approach, essential in wartime, can blind one to asking, “Are these losses acceptable? Do they indicate we're doing something wrong? Do we need to revisit our design assumptions, practices, and procedures?” These are the questions which came into the mind of legendary test pilot Bill Waterton, whose career is the basso continuo of this narrative. First as an RAF officer, then as a company test pilot, and finally as aviation correspondent for the Daily Express, he perceived and documented how Britain's aviation industry, fragmented into tradition-bound companies, buffeted by incessant changes of government priorities, and failing to adapt to the aggressive product development schedules of the Americans and even the French (themselves still rebuilding from wartime ruins), was doomed to bring inferior products to the market too late to win foreign sales, which were essential if an industry with a home market as small as Britain's was to maintain world-class leadership.

Although the structural problems within the industry had long been apparent to observers such as Waterton, any hope of British leadership was extinguished by the Duncan Sandys 1957 Defence White Paper which, while calling for long-overdue consolidation of the fragmented U.K. aircraft industry, concluded that most military missions in the future could be accomplished more effectively and less expensively by unmanned missiles. With a few exceptions, it cancelled all British military aviation development projects, condemning Britain, once the fallacy in the “missiles only” approach became apparent, to junior partner status in international projects or outright purchases of aircraft from suppliers overseas. On the commercial aviation side, only the Vickers Viscount was a success: the fatigue-induced crashes of the de Havilland Comet and the protracted development process of the Bristol Britannia caused their entry into service to be so late as to face direct competition from the Boeing 707 and Douglas DC-8, which were superior aircraft in every regard.

This book recounts a curious epoch in the history of British aviation. To observers outside the industry, including the hundreds of thousands who flocked to airshows, it seemed like a golden age, with one Made in Britain innovation following another in rapid succession. But in fact, it was the last burst of energy as the capital of a mismanaged and misdirected industry was squandered at the direction of fickle politicians whose priorities were elsewhere, leading to a sorry list of cancelled projects, prototypes which never flew, and aircraft which never met their specifications or were rushed into service before they were ready. In 1945, Britain was positioned to be a world leader in aviation and proceeded, over the next two decades, to blow it, not due to lack of talent, infrastructure, or financial resources, but entirely through mismanagement, shortsightedness, and disastrous public policy. The following long quote from the concluding chapter expresses this powerfully.

One way of viewing the period might be as a grand swansong or coda to the process we Britons had ourselves started with the Industrial Revolution. The long, frequently brilliant chapter of mechanical inventiveness and manufacture that began with steam finally ran out of steam. This was not through any waning of either ingenuity or enthusiasm on the part of individuals, or even of the nation's aviation industry as a whole. It happened because, however unconsciously and blunderingly it was done, it became the policy of successive British governments to eradicate that industry as though it were an unruly wasps' nest by employing the slow cyanide of contradictory policies, the withholding of support and funds, and the progressive poisoning of morale. In fact, although not even the politicians themselves quite realised it – and certainly not at the time of the upbeat Festival of Britain in 1951 – this turned out to be merely part of a historic policy change to do away with all Britain's capacity as a serious industrial nation, abolishing not just a century of making its own cars but a thousand years of building its own ships. I suspect this policy was more unconscious than deliberately willed, and it is one whose consequences for the nation are still not fully apparent. It sounds improbable; yet there is surely no other interpretation to be made of the steady, decades-long demolition of the country's manufacturing capacity – including its most charismatic industry – other than that at some level it was absolutely intentional, no matter what lengths politicians went to in order to conceal this fact from both the electorate and themselves. (p. 329)

Not only is this book rich in aviation anecdotes of the period, it has many lessons for those living in countries which have come to believe they can prosper by de-industrialising, sending all of their manufacturing offshore, importing their science and engineering talent from other nations, and concentrating on selling “financial services” to one another. Good luck with that.

 Permalink

June 2011

Churchill, Winston S. Thoughts and Adventures. Wilmington, DE: ISI Books, [1932] 2009. ISBN 978-1-935191-46-9.
Among the many accomplishments of Churchill's long and eventful life, it is easy to forget that in the years between the wars he made his living primarily as a writer, with a prolific output of books, magazine articles, and newspaper columns. It was in this period of his life that he achieved the singular mastery of the English language which would serve him and Britain so well during World War II and which would be recognised by the Nobel Prize for Literature in 1953.

This collection of Churchill's short nonfiction was originally published in 1932 and is now available in a new edition, edited and with extensive annotations by James W. Muller. Muller provides abundant footnotes describing people, events, and locations which would have been familiar to Churchill's contemporary audience but which readers today might find obscure. Extensive end notes detail the publication history of each of the essays collected here, and document textual differences among the editions. Did you know that one of Churchill's principal markets across the Atlantic in the 1920s was Cosmopolitan?

This is simply a delicious collection of writing. Here we have Churchill recounting his adventures and misadventures in the air, a gun battle with anarchists on the streets of London, life in the trenches after he left the government and served on the front in World War I, his view of the partition of Ireland, and much more. Some of the essays are light, such as his take on political cartoons or his discovery of painting as a passion and pastime, but even these contain beautiful prose and profound insights. Then there is Churchill the prophet of human conflict to come. In “Shall We All Commit Suicide?”, he writes (p. 264):

Then there are Explosives. Have we reached the end? Has Science turned its last page on them? May there not be methods of using explosive energy incomparably more intense than anything heretofore discovered? Might not a bomb no bigger than an orange be found to possess a secret power to destroy a whole block of buildings—nay, to concentrate the force of a thousand tons of cordite and blast a township at a stroke? Could not explosives of even the existing type be guided automatically in flying machines by wireless or other rays, without a human pilot, in ceaseless procession upon a hostile city, arsenal, camp, or dockyard?

Bear in mind that this was published in 1924. In 1931, looking “Fifty Years Hence”, he envisions (p. 290):

Wireless telephones and television, following naturally upon their present path of development, would enable their owner to connect up with any room similarly installed, and hear and take part in the conversation as well as if he put his head through the window. The congregation of men in cities would become superfluous. It would rarely be necessary to call in person on any but the most intimate friends, but if so, excessively rapid means of communication would be at hand. There would be no more object in living in the same city with one's neighbour than there is to-day in living with him in the same house. The cities and the countryside would become indistinguishable. Every home would have its garden and its glade.

It's best while enjoying this magnificent collection not to dwell on whether there is a single living politician of comparable stature who thinks so profoundly on so broad a spectrum of topics, or who can expound upon them to a popular audience in such pellucid prose.

 Permalink

De Vany, Arthur. The New Evolution Diet. New York: Rodale Books, 2011. ISBN 978-1-60529-183-3.
The author is an economist best known for his research into the economics of Hollywood films, and his demonstration that the Pareto distribution applies to the profitability of Hollywood productions, empirically falsifying many entertainment business nostrums about a correlation between production cost and “star power” of the cast and actual performance at the box office. When his son, and later his wife, developed diabetes and the medical consensus treatment seemed to send both into a downward spiral, his economist's sense for the behaviour of complex nonlinear systems with feedback and delays caused him to suspect that the regimen prescribed for diabetics was based on a simplistic view of the system aimed at treating the symptoms rather than the cause. This led him to an in-depth investigation of human metabolism and nutrition, grounded in the evolutionary heritage of our species (this is fully documented here—indeed, almost half of the book is end notes and source references, which should not be neglected: there is much of interest there).

His conclusion was that our genes, which have scarcely changed in the last 40,000 years, were adapted to the hunter-gatherer lifestyle that our hominid ancestors lived for millions of years before the advent of agriculture. Our present day diet and way of life could not be more at variance with our genetic programming, so it shouldn't be a surprise that we see a variety of syndromes, including obesity, cardiovascular diseases, type 2 diabetes, and late-onset diseases such as many forms of cancer which are extremely rare among populations whose diet and lifestyle remain closer to those of ancestral humans. Strong evidence for this hypothesis comes from nomadic aboriginal populations which, settled into villages and transitioned to the agricultural diet, promptly manifested diseases, categorised as “metabolic syndrome”, which were previously unknown among them.

This is very much the same conclusion as that of The Paleo Diet (December 2010), and I recommend you read both of these books as they complement one another. The present volume goes deeper into the biochemistry underlying its dietary recommendations, and explores what the hunter-gatherer lifestyle has to say about the exercise to which we are adapted. Our ancestors' lives were highly chaotic: they ate when they made a kill or found food to gather and fasted until the next bounty. They engaged in intense physical exertion during a hunt or battle, and then passively rested until the next time. Modern times have made us slaves to the clock: we do the same things at the same times on a regular schedule. Even those who incorporate strenuous exercise into their routine tend to do the same things at the same time on the same days. The author argues that this is not remotely what our heritage has evolved us for.

Once Pareto gets into your head, it's hard to get him out. Most approaches to diet, nutrition, and exercise (including my own) view the human body as a system near equilibrium. The author argues that one shouldn't look at the mean but rather the kurtosis of the distribution, as it's the extremes that matter—don't tediously “do cardio” like all of the treadmill trudgers at the gym, but rather push your car up a hill every now and then, or randomly raise your heart rate into the red zone.

This all makes perfect sense to me. I happened to finish this book almost precisely six months after adopting my own version of the paleo diet, not from a desire to lose weight (I'm entirely happy with my weight, which hasn't varied much in the last twenty years, thanks to the feedback mechanism of The Hacker's Diet) but due to the argument that it averts late-onset diseases and extends healthy lifespan. Well, it's too early to form any conclusions on either of these, and in any case you can draw any curve you like through a sample size of one, but after half a year on paleo I can report that my weight is stable, my blood pressure is right in the middle of the green zone (as opposed to low-yellow before), I have more energy, sleep better, and have seen essentially all of the aches and pains and other symptoms of low-level inflammation disappear. Will you have cravings for things you've forgone when you transition to paleo? Absolutely—in my experience it takes about three months for them to go away. When I stopped salting my food, everything tasted like reprocessed blaah for the first couple of weeks, but now I appreciate the flavours below the salt.

For the time being, I'm going to continue this paleo thing, not primarily due to the biochemical and epidemiological arguments here, but because I've been doing it for six months and I feel better than I have for years. I am a creature of habit, and I find it very difficult to introduce kurtosis into my lifestyle: when exogenous events do so, I deem it an “entropic storm”. When it's 15:00, I go for my one hour walk. When it's 18:00, I eat, etc. Maybe I should find some way to introduce randomness into my life….

An excellent Kindle edition is available, with the table of contents, notes, and index all properly linked to the text.

 Permalink

Sokal, Alan and Jean Bricmont. Fashionable Nonsense. New York: Picador, [1997] 1998. ISBN 978-0-312-20407-5.
There are many things to mock in the writings of “postmodern”, “deconstructionist”, and “critical” intellectuals, but one of the most entertaining for readers with a basic knowledge of science and mathematics is the propensity of many of these “scholars” to sprinkle their texts with words and concepts from mathematics and the physical sciences, all used entirely out of context and in total ignorance of their precise technical definitions, and without the slightest persuasive argument that there is any connection, even at a metaphorical level, between the mis-quoted science and the topic being discussed. This book, written by two physicists, collects some of the most egregious examples of such obscurantist writing by authors (all French—who would have guessed?) considered eminent in their fields. From Jacques Lacan's hilariously muddled attempts to apply topology and mathematical logic to psychoanalysis to Luce Irigaray's invoking fluid mechanics to argue that science is a male social construct, the passages quoted here at length are a laugh riot for those willing to momentarily put aside the consequences of their being taken seriously by many in the squishier parts of academia. Let me quote just one to give you a flavour—this passage is by Paul Virilio:

When depth of time replaces depths of sensible space; when the commutation of interface supplants the delimitation of surfaces; when transparence re-establishes appearances; then we begin to wonder whether that which we insist on calling space isn't actually light, a subliminary, para-optical light of which sunlight is only one phase or reflection. This light occurs in a duration measured in instantaneous time exposure rather than the historical and chronological passage of time. The time of this instant without duration is “exposure time”, be it over- or underexposure. Its photographic and cinematographic technologies already predicted the existence and the time of a continuum stripped of all physical dimensions, in which the quantum of energetic action and the punctum of cinematic observation have suddenly become the last vestiges of a vanished morphological reality. Transferred into the eternal present of a relativity whose topological and teleological thickness and depth belong to this final measuring instrument, this speed of light possesses one direction, which is both its size and dimension and which propagates itself at the same speed in all radial directions that measure the universe. (pp. 174–175)

This paragraph, which recalls those bright college days punctuated by deferred exhalations accompanied by “Great weed, man!”, was a single 193-word sentence in the original French; the authors deem it “the most perfect example of diarrhea of the pen that we have ever encountered.”

The authors survey several topics in science and mathematics which are particularly attractive to these cargo cult confidence men and women, and, dare I say, deconstruct their babblings. In all, I found the authors' treatment of the postmodernists remarkably gentle. While they do not hesitate to ridicule their gross errors and misappropriation of scientific concepts, they carefully avoid drawing the (obvious) conclusion that such ignorant nonsense invalidates the entire arguments being made. I suspect this is due to the authors, both of whom identify themselves as men of the Left, being sympathetic to the conclusions of those they mock. They're kind of stuck, forced to identify and scorn the irrational misuse of concepts from the hard sciences, while declining to examine the absurdity of the rest of the argument, which the chart from Explaining Postmodernism (May 2007) so brilliantly explains.

Alan Sokal is the perpetrator of the famous hoax which took in the editors of Social Text with his paper “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”, which appears in full here, along with comments on construction of the parody and remarks on the motivation behind it.

This book was originally published in French under the title Impostures intellectuelles. This English edition contains some material added to address critical comments on the French edition, and includes the original French language text of passages whose translation might be challenged as unfaithful to whatever the heck the original was trying to say.

 Permalink

Hamilton, Steve. Misery Bay. New York: Thomas Dunne Books, 2011. ISBN 978-0-312-38043-4.
I haven't been reading many mysteries recently, but when I happened to listen to a podcast interview with the author of this book set in the Upper Peninsula of Michigan, less than twelve hours before departing on a trip to precisely that destination, I could only conclude that the Cosmic Coincidence Control Centre was telling me to read this book, so I promptly downloaded the Kindle edition and read it after arrival. I'm glad I did.

This is the eighth novel in the author's Alex McKnight series, but it is perfectly accessible to readers (like myself) who start here. The story is recounted in the first person by McKnight, a former Detroit cop who escaped the cruel streets of that failed metropolis after a tragic episode, relocating to the town of Paradise in Michigan's Upper Peninsula where he intends to make a living renting cabins, but finds himself reluctantly involved as a private investigator in crimes which cross his path.

In the present book, McKnight agrees to look into the circumstances of the apparent suicide of the son of a friend and former colleague of McKnight's nemesis, police chief Roy Maven. This errand, undertaken on behalf of a distraught father who cannot imagine any motive for his son's taking his life, spirals into what appears to be a baffling cluster of suicides and murders involving current and former police officers and their children. McKnight seeks to find the thread which might tie these seemingly unrelated events together, along with a pair of FBI agents who, being feds, seem more concerned with protecting their turf than catching crooks.

Along with many twists and turns as the story develops and gripping action scenes, Hamilton does a superb job evoking the feel of the Upper Peninsula, where the long distances, sparse population, and extreme winters provide a background more like Montana than something you'd expect east of the Mississippi. In the end, the enigma is satisfyingly resolved and McKnight, somewhat the worse for wear, is motivated to turn the next corner in his life where, to be sure, other mysteries await.

 Permalink

Kurzweil, Ray. The Age of Spiritual Machines. New York: Penguin Books, 1999. ISBN 978-0-14-028202-3.
Ray Kurzweil is one of the most vocal advocates of the view that the exponential growth in computing power (and allied technologies such as storage capacity and communication bandwidth) at constant cost which we have experienced for the last half century, notwithstanding a multitude of well-grounded arguments that fundamental physical limits on the underlying substrates will bring it to an end (all of which have proven to be wrong), will continue for the foreseeable future: in all likelihood for the entire twenty-first century. Continued exponential growth in a technology for so long a period is unprecedented in the human experience, and the consequences as the exponential begins to truly “kick in” (although an exponential curve is self-similar, its consequences as perceived by observers whose own criteria for evaluation are more or less constant will be seen to reach a “knee” after which they essentially go vertical and defy prediction). In The Singularity Is Near (October 2005), Kurzweil argues that once the point is reached where computers exceed the capability of the human brain and begin to design their own successors, an almost instantaneous (in terms of human perception) blow-off will occur, with computers rapidly converging on the ultimate physical limits on computation, with capabilities so far beyond those of humans (or even human society as a whole) that attempting to envision their capabilities or intentions is as hopeless as a microorganism's trying to master quantum field theory. You might want to review my notes on 2005's The Singularity Is Near before reading the balance of these comments: they provide context as to the extreme events Kurzweil envisions as occurring in the coming decades, and there are no “spoilers” for the present book.

When assessing the reliability of predictions, it can be enlightening to examine earlier forecasts from the same source, especially if they cover a period of time which has come and gone in the interim. This book, published in 1999 near the very peak of the dot-com bubble provides such an opportunity, and it provides a useful calibration for the plausibility of Kurzweil's more recent speculations on the future of computing and humanity. The author's view of the likely course of the 21st century evolved substantially between this book and Singularity—in particular this book envisions no singularity beyond which the course of events becomes incomprehensible to present-day human intellects. In the present volume, which employs the curious literary device of “trans-temporal chat” between the author, a MOSH (Mostly Original Substrate Human), and a reader, Molly, who reports from various points in the century her personal experiences living through it, we encounter a future which, however foreign, can at least be understood in terms of our own experience.

This view of the human prospect is very odd indeed, and to this reader more disturbing (verging on creepy) than the approach of a technological singularity. What we encounter here are beings, whether augmented humans or software intelligences with no human ancestry whatsoever, that despite having at hand, by the end of the century, mental capacity per individual on the order of 10²⁴ times that of the human brain (and maybe hundreds of orders of magnitude more if quantum computing pans out), still have identities, motivations, and goals which remain comprehensible to humans today. This seems dubious in the extreme to me, and my impression from Singularity is that the author has rethought this as well.

Starting from the publication date of 1999, the book serves up surveys of the scene in that year, 2009, 2019, 2029, and 2099. The chapter describing the state of computing in 2009 makes many specific predictions. The following are those which the author lists in the “Time Line” on pp. 277–278. Many of the predictions in the main text seem to me to be more ambitious than these, but I shall go with those the author chose as most important for the summary. I have reformatted these as a numbered list to make them easier to cite.

  1. A $1,000 personal computer can perform about a trillion calculations per second.
  2. Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
  3. Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
  4. The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
  5. Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
  6. Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
  7. Pocket-sized reading machines for the blind and visually impaired, “listening machines” (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
  8. Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
  9. Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which has been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvements in the price performance of computing.
  10. Human musicians routinely jam with cybernetic musicians.
  11. Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
  12. The neo-Luddite movement is growing.

I'm not going to score these in detail, as that would be both tedious and an invitation to endless quibbling over particulars, but I think most readers will agree that this picture of computing in 2009 substantially overestimates the actual state of affairs in the decade since 1999. Only item (3) seems to me to be arguably on the way to achievement, and yet I do not have a single wireless peripheral connected to any of my computers and Wi-Fi coverage remains spotty even in 2011. Things get substantially more weird the further out you go, and of course any shortfall in exponential growth lowers the baseline for further extrapolation, shifting subsequent milestones further out.

I find the author's accepting continued exponential growth as dogma rather off-putting. Granted, few people expected the trend we've lived through to continue for so long, but eventually you begin to run into physical constraints which seem to have little wiggle room for cleverness: the finite size of atoms, the electron's charge, and the speed of light. There's nothing wrong with taking unbounded exponential growth as a premise and then exploring what its implications would be, but it seems to me any forecast which is presented as a plausible future needs to spend more time describing how we'll actually get there: arm waving about three-dimensional circuitry, carbon nanotubes, and quantum computing doesn't close the sale for me. The author entirely lost me with note 3 to chapter 12 (p. 342), which concludes:

If engineering at the nanometer scale (nanotechnology) is practical in the year 2032, then engineering at the picometer scale should be practical in about forty years later (because 5.6⁴ = approximately 1,000), or in the year 2072. Engineering at the femtometer (one thousandth of a trillionth of a meter, also referred to as a quadrillionth of a meter) scale should be feasible, therefore, by around the year 2112. Thus I am being a bit conservative to say that femtoengineering is controversial in 2099.

Nanoengineering involves manipulating individual atoms. Picoengineering will involve engineering at the level of subatomic particles (e.g., electrons). Femtoengineering will involve engineering inside a quark. This should not seem particularly startling, as contemporary theories already postulate intricate mechanisms within quarks.
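The arithmetic behind that timetable is easy to check in a few lines (the physics, not the arithmetic, is the part in dispute). This sketch simply reproduces the note's own reasoning: a factor of 5.6 in scale per decade compounds to roughly a thousand-fold per forty years.

```python
# Kurzweil's scaling claim: each thousand-fold reduction in engineering
# scale takes ~40 years, because a factor of 5.6 per decade compounds
# to 5.6^4 over four decades.
per_decade = 5.6
print(per_decade ** 4)            # 983.4496 -- close to 1,000, as the note says

# Projecting the note's milestones forward from nanoscale in 2032:
milestones = {"nano (1e-9 m)": 2032}
year = 2032
for label in ["pico (1e-12 m)", "femto (1e-15 m)"]:
    year += 40                    # one thousand-fold step per 40 years
    milestones[label] = year
print(milestones)                 # pico in 2072, femto in 2112
```

The extrapolation is internally consistent; the objection raised below is that nothing in physics licenses extending it past the atomic scale.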

This is just so breathtakingly wrong I am at a loss for where to begin, and it was just as completely wrong when the book was published two decades ago as it is today; nothing relevant to these statements has changed. My guess is that Kurzweil was thinking of “intricate mechanisms” within hadrons and mesons, particles made up of quarks and gluons, and not within quarks themselves, which then and now are believed to be point particles with no internal structure whatsoever and are, in any case, impossible to isolate from the particles they compose. When Richard Feynman envisioned molecular nanotechnology in 1959, he based his argument on the well-understood behaviour of atoms known from chemistry and physics, not a leap of faith based on drawing a straight line on a sheet of semi-log graph paper. I doubt one could find a single current practitioner of subatomic physics equally versed in the subject as was Feynman in atomic physics who would argue that engineering at the level of subatomic particles would be remotely feasible. (For atoms, biology provides an existence proof that complex self-replicating systems of atoms are possible. Despite the multitude of environments in the universe since the big bang, there is precisely zero evidence subatomic particles have ever formed structures more complicated than those we observe today.)

I will not further belabour the arguments in this vintage book. It is an entertaining read and will certainly expand your horizons as to what is possible and introduce you to visions of the future you almost certainly have never contemplated. But for a view of the future which is simultaneously more ambitious and plausible, I recommend The Singularity Is Near.

 Permalink

July 2011

Coulter, Ann. Demonic. New York: Crown Forum, 2011. ISBN 978-0-307-35348-1.
The author has a well-deserved reputation as thriving on controversy and not hesitating to incite her intellectual adversaries to paroxysms of spittle-spewing rage by patiently demonstrating their hypocrisy and irrationality. In the present volume, we have something substantially different from Coulter's earlier work. Drawing upon Gustave Le Bon's 1895 classic The Crowd, Coulter traces the behaviour of mobs and their influence upon societies and history from classical times to the present day.

The leaders of the American revolution and founders of the American republic were steeped in the history of mob behaviour in ancient Greece and Rome, and how it ultimately led to the downfall of consensual self-government in both of these polities. They were acutely aware that many of their contemporaries, in particular Montesquieu, argued that self-governance was not possible on a scale larger than that of a city-state. The structure devised for the new republic in North America was deliberately crafted to channel the enthusiasms of the citizenry into considered actions by a distributed set of institutions which set ambition against ambition in the interest of stability, protection of individual liberty, and defence of civil society against the will of a moment's majority.

By contrast to the American Secession from the British Empire (I deem it a secession since the main issue at dispute was the sovereignty of the King and Parliament over the colonies—after the conclusion of the conflict, the newly-independent colonies continued to govern themselves much as before, under the tradition of English common law), the French Revolution a few years later was a mob unleashed against the institutions of a society. In two well crafted chapters Coulter sketches the tragic and tawdry history of that episode which is often known to people today only from romantic accounts which elide the absurdity, collective insanity, and rivers of blood occasioned by the actual events. (For more details, see Citizens [October 2004], which is cited here as a source.)

The French Revolution was the prototype of all the mob revolutions which followed. Whether they called themselves Bolsheviks, Nazis, Maoists, or Khmer Rouge, their goal was to create heaven on Earth and if the flawed humans they hoped to forge into their bright shining utopia were unworthy, well then certainly killing off enough of those recalcitrant dissenters would do the trick.

Bringing this home to America, Coulter argues that although mob politics is hardly new to America, for the first time it is approaching a tipping point in having a near majority which pays no Federal income tax and whose net income consists of transfer payments from others. Further, the mob is embodied in an institution, the Democratic party, which, with its enablers in the legacy media, academia, labour unions, ethnic grievance groups, and other constituencies, is not only able to turn out the vote but also to bring mobs into the street whenever it doesn't get its way through the institutions of self-governance. As the (bare) majority of productive citizens attempt to stem the slide into the abyss, they will be pitted against the mob, aroused by the Democrat political apparatus, supported by the legacy media (which covers up their offences, while accusing orderly citizens defending their rights of imagined crimes), and left undefended by “law enforcement”, which has been captured by “public employee unions” which are an integral part of the mob.

Coulter focuses primarily on the U.S., but the phenomenon she describes is global in scope: one need only see the news from Athens, London, Madrid, Paris, or any number of less visible venues to see the savage beast of the mob baring its teeth against the cowering guardians of civilisation. Until decent, productive people who, just two generations ago, had the self-confidence not only to assume the progress to which they were the heirs would continue into the indefinite future but, just for a lark, go and visit the Moon, see the mob for what it is, the enemy, and deal with it appropriately, the entire heritage of civilisation will remain in peril.

 Permalink

Shute, Nevil [Nevil Shute Norway]. Slide Rule. Kelly Bray, UK: House of Stratus, [1954] 2000. ISBN 978-1-84232-291-8.
The author is best known for his novels, several of which were made into Hollywood movies, including No Highway and On the Beach. In this book, he chronicles his “day job” as an aeronautical engineer and aviation entrepreneur in what he describes as the golden age of aviation: an epoch where a small team of people could design and manufacture innovative aircraft without the huge budgets, enormous bureaucratic organisations, or intrusive regulation which overcame the spirit of individual invention and enterprise as aviation matured. (The author, fearing that being known as a fictioneer might make him seem disreputable as an engineer, published his books under the name “Nevil Shute”, while using his full name, “Nevil Shute Norway” in his technical and business career. He explains that decision in this book, published after he had become a full-time writer.)

This is a slim volume, but there is as much wisdom here as in a dozen ordinary books this size, and the writing is simultaneously straightforward and breathtakingly beautiful. A substantial part of the book recounts the history of the U.K. airship project, which pitted a private industry team in which Shute played a major rôle building the R.100 in competition with a government-designed and -built ship, the R.101, designed to the same specifications. Seldom in the modern history of technology has there been such a clear-cut illustration of the difference between private enterprise designing toward a specification under a deadline and fixed budget and a government project with unlimited funds, no oversight, and with specifications and schedules at the whim of politicians with no technical knowledge whatsoever. The messy triumph of the R.100 and the tragedy of the R.101, recounted here by an insider, explains the entire sordid history of NASA, the Concorde, and innumerable other politically-driven technological boondoggles.

Had Shute brought the book to a close at the end of the airship saga, it would be regarded as a masterpiece of reportage of a now-forgotten episode in aviation history. But then he goes on to describe his experience in founding, funding, and operating a start-up aircraft manufacturer, Airspeed Ltd., in the middle of the Great Depression. This is simply the best first-person account of entrepreneurship and the difficult decisions one must make in bringing a business into being and keeping it going “whatever it takes”, and of the true motivation of the entrepreneur (hint: money is way down the list) that I have ever read, and I speak as somebody who has written one of my own. Then, if that weren't enough, Shute sprinkles the narrative with gems of insight aspiring writers may struggle years trying to painfully figure out on their own, which are handed to those seeking to master the craft almost in passing.

I could quote dozens of lengthy passages from this book which almost made me shiver when I read them from the sheer life-tested insight distilled into so few words. But I'm not going to, because what you need to do is go and get this book, right now (see below for an electronic edition), and drop whatever you're doing and read it cover to cover. I have had several wise people counsel me to do the same over the years and, for whatever reason, never seemed to find the time. How I wish I had read this book before I embarked upon my career in business, and how much comfort and confidence it would have given me upon reaching the difficult point where a business has outgrown the capabilities and interests of its founders.

An excellent Kindle edition is available.

 Permalink

Preston, Richard. Panic in Level 4. New York: Random House, 2008. ISBN 978-0-8129-7560-4.
The New Yorker is one of the few remaining markets for long-form reportage of specialised topics directed at an intelligent general audience, and Richard Preston is one of the preeminent practitioners of that craft working today. This book collects six essays originally published in that magazine along with a new introduction as long as some of the chapters, which describes the title incident in which the author found himself standing space-suit to protein coat with a potentially unknown hæmorrhagic fever virus in a U.S. Army hot lab. He also provides tips on his style of in-depth, close and personal journalism (which he likens to “climb[ing] into the soup”), which aspiring writers may find enlightening.

In subsequent chapters we encounter the Chudnovsky brothers, émigré number theorists from the Ukraine (then part of the Soviet Union), who built a supercomputer in their New York apartment from mail-order components to search for structure in the digits of π, and later used their mathematical prowess and computing resources to digitally “stitch” together and thereby make a backup copy of The Hunt of the Unicorn tapestries; the mercurial Craig Venter in the midst of the genome war in the 1990s; arborists and entomologists tracing the destruction of the great hemlock forests of the eastern U.S. by invasive parasites; and heroic medical personnel treating the victims of an Ebola outbreak in unspeakable conditions in Africa.

The last, and most disturbing chapter (don't read it if you're planning to go to sleep soon or, for that matter, sleep well anytime in the next few days) describes Lesch-Nyhan syndrome, a rare genetic disease caused by a single nucleotide mutation in the HPRT1 gene located on the X chromosome. Those affected (almost all males, since females have two X chromosomes and will exhibit symptoms only if both contain the mutation) exhibit behaviour which, phenomenologically, can be equally well described by possession by a demon which compels them at random times to self-destructive behaviour as by biochemistry and brain function. Sufferers chew their lips and tongues, often destroying them entirely, and find their hands seemingly acting with a will of their own to attack their faces, either with fingers or any tool at hand. They often bite off flesh from their hands or entire fingers, sometimes seemingly in an attempt to stop them from inflicting further damage. Patients with the syndrome can appear normal, fully engaged with the world and other individuals, and intelligent, and yet when “possessed”, capable of callous cruelty, both physical and emotional, toward those close to them.

When you get beyond the symptoms and the tragic yet engaging stories of those afflicted with the disease with whom the author became friends, there is much to ponder in what all of this means for free will and human identity. We are talking about what amounts to a single typo in a genetic blueprint of three billion letters which causes the most profound consequences imaginable for the individual who carries it and perceives it as an evil demon living within their mind. How many other aspects of what we think of as our identity, whether for good or ill, are actually expressions of our genetic programming? To what extent is this true of our species as a whole? What will we make of ourselves once we have the ability to manipulate our genome at will? Sweet dreams….

Apart from the two chapters on the Chudnovskys, which have some cross references, you can read the chapters in any order.

 Permalink

Rawles, James Wesley. How to Survive the End of the World as We Know It. New York: Plume, 2009. ISBN 978-0-452-29583-4.
As I write these comments in July of 2011, the legacy media and much of the “new” media are focussed on the sovereign debt crises in Europe and the United States, with partisans on every side of the issue and both sides of the Atlantic predicting apocalyptic consequences if their policy prescriptions are not promptly enacted. While much of the rhetoric is overblown and many of the “deadlines” artificial constructs created for political purposes, the situation cannot help but remind one of just how vulnerable the infrastructure of civilisation in developed nations has become to disruptions which, even a few decades ago, would have been something a resilient populace could ride out (consider civilian populations during World War II as an example).

Today, however, delivery of food, clean water, energy, life-sustaining pharmaceuticals, and a multitude of other necessities of life to populations increasingly concentrated in cities and suburbs is a “just in time” process, optimised to reduce inventory all along the chain from primary producer to consumer and itself dependent upon the infrastructure for its own operation. For example, a failure of the electrical power grid in a region not only affects home and business use of electricity, but will quickly take down delivery of fresh water; removal and processing of sewage; heating for buildings which rely on electrically powered air or water circulation systems and furnace burners; and telephone, Internet, radio, and television communication once the emergency generators which back up these facilities exhaust their fuel supplies (usually in a matter of days). Further, with communications down, inventory control systems all along the food supply chain will be inoperable, and individuals in the region will be unable to either pay with credit or debit cards or obtain cash from automatic teller machines. This only scratches the surface of the consequences of a “grid down” scenario, and it takes but a little reflection to imagine how a failure in any one part of the infrastructure can bring the rest down.

One needn't envision a continental- or global-scale financial collapse to imagine how you might find yourself on your own for a period of days to weeks: simply review the aftermath of earthquakes, tsunamis, hurricanes, tornado swarms, and large-scale flooding in recent years to appreciate how events can strike which, while inevitable in the long term, go unanticipated until too little time remains to prepare for them effectively. The great advantage of preparing for the apocalypse is that when something on a smaller scale happens, you can ride it out and help your neighbours get through the difficult times without being a burden on stretched-thin emergency services trying to cope with the needs of those with less foresight.

This book, whose author is the founder of the essential SurvivalBlog site, is a gentle introduction to (quoting the subtitle) “tactics, techniques, and technologies for uncertain times”. By “gentle”, I mean that there is little or no strident doom-saying here; instead, the reader is encouraged to ask, “What if?”, then “What then?”, and so on until arriving at an appreciation of what it really means when the power is off, the furnace is dead, the tap is dry, the toilet doesn't flush, the refrigerator and freezer are coming to room temperature, and you don't have any food in the pantry.

The bulk of the book describes steps you can take, regardless of how modest your financial means, free time, and physical capacity, to prepare for such exigencies. In many cases, the cost of such common-sense preparations is negative: if you buy storable food in bulk and rotate your storage by regularly eating what you've stored, you'll save money through quantity discounts (and/or by buying when prices are low or there's a special deal at the store), and, in an inflationary era, by buying before prices rise. The same applies to fuel, ammunition, low-tech workshop and gardening tools, and many other necessities when civilisation goes south for a while. Those seeking to expand their preparations beyond the basics will find a wealth of references here, and a vast trove of information on the author's SurvivalBlog.

The author repeatedly emphasises that the most important survival equipment is stored between your ears, and readers are directed to sources of information and training in a variety of fields. The long chapter on medical and dental care in exigent circumstances is alone almost worth the price of the book. For a fictional treatment of survival in an extreme grid-down societal collapse, see the author's novel Patriots (December 2008).

 Permalink

Stross, Charles. Accelerando. New York: Ace, 2005. ISBN 978-0-441-01415-6.
Some people complain that few contemporary science fiction authors work on the grand scale of the masters of yore. Nobody can say that about Charles Stross, who in this novel tells the story of the human species' transcendence as it passes through a technological singularity caused by the continued exponential growth of computational power to the point where a substantial fraction of the mass of the solar system is transformed from “dumb matter” into computronium, engineered through molecular nanotechnology to perform the maximum amount of computation given its mass and the free energy of its environment. The scenario which plays out in the 21st century envisioned here is essentially that of Ray Kurzweil's The Age of Spiritual Machines (June 2011) with additions by the author to make things more interesting.

The story is told as the chronicle of the (very) extended family of Manfred Macx, who starts as a “venture altruist” in the early years of the century, as the rising curve of computation begins to supplant economics (the study of the use of scarce resources) with “agalmics”: the allocation of abundant resources. As the century progresses, things get sufficiently weird that even massively augmented human intelligences can perceive them only dimly from a distance, and the human, transhuman, posthuman, emulated, resurrected, and multithreaded members of the Macx family provide our viewpoint on what's happening, as they try to figure it all out for themselves. And then there's the family cat….

Forecasts of future technologies often overlook consequences which seem obvious in retrospect. For example, many people predicted electronic mail, but how many envisioned spam? Stross goes to some lengths here to imagine the unintended consequences of a technological singularity. You think giant corporations and financial derivatives are bad? Wait until they become sentient, with superhuman intelligence and the ability to reproduce!

The novel was assembled from nine short stories, and in some cases this is apparent, but it didn't detract from this reader's enjoyment. For readers “briefed in” on the whole singularity/nanotechnology/extropian/posthuman meme bundle, this work is a pure delight—there's something for everybody, even a dine-in-saur! If you're one of those folks who haven't yet acquired a taste for treats which “taste like (mambo) chicken”, plan to read this book with a search box open and look up the multitude of terms which are dropped without any explanation and which will send you off into the depths of the weird as you research them. An excellent Kindle edition is available which makes this easy.

Reading “big idea” science fiction may cause you to have big ideas of your own—that's why we read it, right? Anyway, this isn't in the book, so I don't consider talking about it a spoiler, but what occurred to me whilst reading the novel is that transcendence of naturally-evolved (or were they…?) species into engineered computational substrates might explain some of the puzzles of cosmology with which we're presently confronted. Suppose transcendent super-intelligences which evolved earlier in the universe have already ported themselves from crude molecular structures to the underlying structure of the quantum vacuum. The by-product of their computation might be the dark energy which has so recently (in terms of the history of the universe) caused the expansion of the universe to accelerate. The “coincidence problem” is why we, as unprivileged observers in the universe, should be living so close to the moment at which the acceleration began. Well, if it's caused by other beings who happened to evolve to their moment of transcendence a few billion years before us, it makes perfect sense, and we'll get into the act ourselves before too long. Accelerando!

 Permalink

August 2011

Galt, John [pseud.]. The Day the Dollar Died. Florida: Self-published, 2011.
I have often remarked in this venue how fragile the infrastructure of the developed world is, and how what might seem to be a small disruption could cascade into a black swan event which could potentially result in the end of the world as we know it. It is not only physical events such as EMP attacks, cyber attacks on critical infrastructure, or natural disasters such as hurricanes and earthquakes which can set off the downspiral, but also loss of confidence in the financial system through which all of the myriad transactions making up the global division of labour, on which our contemporary society depends, are carried out. In a fiat money system, where currency has no intrinsic value and is accepted only on the confidence that it will be subsequently redeemable for other goods without massive depreciation, loss of that confidence can bring the system down almost overnight, and this has happened again and again in the sorry millennia-long history of paper money. As economist Herbert Stein observed, “If something cannot go on forever, it will stop”. But, when pondering the many “unsustainable” trends we see all around us today, it's important to bear in mind that they can often go on for much longer, diverging more into the world of weird than you ever imagined before stopping, and that when they finally do stop the débâcle can be more sudden and breathtaking in its consequences than even excitable forecasters conceived.

In this gripping thriller, the author envisions the sudden loss in confidence of the purchasing power of the U.S. dollar and the ability of the U.S. government to make good on its obligations catalysing a meltdown of the international financial system and triggering dire consequences within the United States as an administration which believes “you never want a serious crisis to go to waste” exploits the calamity to begin “fundamentally transforming the United States of America”. The story is told in a curious way: by one first-person narrator and from the viewpoint of other people around the country recounted in third-person omniscient style. This is unusual, but I didn't find it jarring, and the story works.

The recounting of the aftermath of sudden economic collapse is compelling, and will probably make you rethink your own preparations for such a dire (yet, I believe, increasingly probable) event. The whole post-collapse scenario is a little too black helicopter for my taste: we're asked to simultaneously believe that a government which has bungled its way into an apocalyptic collapse of the international economic system (entirely plausible in my view) will be ruthlessly efficient in imposing its new order (nonsense—it will be as mindlessly incompetent as in everything else it attempts). But the picture painted of how citizens can be intimidated or co-opted into becoming collaborators rings true, and will give you pause as you think about your friends and neighbours as potential snitches working for the Man. I found it particularly delightful that the author envisions a concept similar to my 1994 dystopian piece, Unicard, as playing a part in the story.

At present, this book is available only in PDF format. I read it with Stanza on my iPad, which provides a reading experience equivalent to the Kindle and iBooks applications. The author says other electronic editions of this book will be forthcoming in the near future; when they're released they should be linked to the page cited above. The PDF edition is perfectly readable, however, so if this book interests you, there's no reason to wait. And, hey, it's free! As a self-published work, it's not surprising there are a number of typographical errors, although I noticed very few factual errors. That said, I've read novels published by major houses with substantially more copy editing goofs, and the errors here never confuse the reader nor get in the way of the narrative. For the author's other writings and audio podcasts, visit his Web site.

 Permalink

Steyn, Mark. After America. Washington: Regnery Publishing, 2011. ISBN 978-1-59698-100-3.
If John Derbyshire's We Are Doomed (October 2009) wasn't gloomy enough for you, this book will have you laughing all the way from the event horizon to the central singularity toward which what remains of Western civilisation is free falling. In the author's view, the West now faces a perfect storm of demographic collapse (discussed in detail in his earlier America Alone [November 2006]); financial cataclysm due to unsustainable debt and “entitlement” commitments made by the welfare state; a culture crash after two generations have been indoctrinated in dependency, multiculturalism, and not just ignorance but a counterfactual fantasy view of history; and a political and cultural élite which has become so distinct and disconnected from the shrinking productive classes it almost seems to be evolving into a separate species.

Steyn uses H. G. Wells's The Time Machine as his guide to the future, arguing that Wells got the details right but that bifurcation of mankind into the effete Eloi and the productive but menacing Morlocks is not in the remote future, but has already happened in Western society in every sense but the biological, and even that is effectively the case as the two castes increasingly rarely come into contact with one another, much less interbreed. The Eloi, what Angelo Codevilla called The Ruling Class (October 2010), are the product of top-ranked universities and law schools and dominate government, academia, and the media. Many of them have been supported by taxpayers their entire lives and have never actually done anything productive in their careers. The Obama administration, which is almost devoid of individuals with any private sector experience at the cabinet level, might be deemed the first all-Eloi government in the U.S. As Wells's Time Traveller discovered, the whole Eloi/Morlock thing ended badly, and that's what Steyn envisions happening in the West, not in the distant future or even by mid-century, but within this decade, absent radical and painful course changes which are difficult to imagine being implemented by the feckless political classes of Europe, the U.S., and Japan.

In a chilling chapter, Steyn invokes the time machine once again to deliver a letter from the middle of our century to a reader in the America of 1950. In a way the world he describes would be as alien to its Truman administration reader as any dystopian vision of Wells, Orwell, or Huxley, and it is particularly disturbing to note that most of the changes he forecasts have already taken place or their precipitating events are already underway, in trends which are either impossible or extremely difficult to reverse. A final chapter, which I'll bet was added at the insistence of the publisher, provides a list of things which might be done to rescue the West from its imminent demise. They all make perfect sense, are easily understood, and would doubtless improve the situation even if inadequate to entirely avoid the coming calamity. And there is precisely zero chance of any of them being implemented in a country where 52.9% of the population voted for Barack Obama in 2008, at the tipping point where a majority dependent on the state and state employees who tend to them outvote a minority of productive taxpayers.

Regular readers of Steyn's columns will find much of this material familiar—I suspect there was more than a little cut and paste in assembling this manuscript. The tone of the argument is more the full-tilt irony, mockery, and word play one expects in a column than the more laid back voice customary in a book. You might want to read a chapter every few days rather than ploughing right through to the end to avoid getting numbed. But then the writing is so good it's difficult to put down.

In the Kindle edition, end notes are properly linked to the text and in notes which cite a document on the Web, the URL is linked to the on-line document. The index, however, is simply a useless list of terms without links to references in the text.

 Permalink

Ahamed, Liaquat. Lords of Finance. New York: Penguin Press, 2009. ISBN 978-0-14-311680-6.
I have become increasingly persuaded that World War I was the singular event of the twentieth century in that it was not only an unprecedented human tragedy in its own right (and utterly unnecessary), it set in motion the forces which would bring about the calamities which would dominate the balance of the century and which still cast dark shadows on our world as it approaches one century after that fateful August. When the time comes to write the epitaph of the entire project of the Enlightenment (assuming its successor culture permits it to even be remembered, which is not the way to bet), I believe World War I will be seen as the moment when it all began to go wrong.

This is my own view, not the author's thesis in this book, but it is a conclusion I believe is strongly reinforced by the events chronicled here. The present volume is a history of central banking in Europe and the U.S. from the years prior to World War I through the institution of the Bretton Woods system of fixed exchange rates based on U.S. dollar reserves backed by gold. The story is told through the careers of the four central bankers who dominated the era: Montagu Norman of the Bank of England, Émile Moreau of la Banque de France, Hjalmar Schacht of the German Reichsbank, and Benjamin Strong of the U.S. Federal Reserve Bank of New York.

Prior to World War I, central banking, to the extent it existed at all in anything like the modern sense, was a relatively dull field of endeavour performed by correspondingly dull people, mostly aristocrats or scions of wealthy families who lacked the entrepreneurial bent to try things more risky and interesting. Apart from keeping the system from seizing up in the occasional financial panic (which was done pretty much according to the playbook prescribed in Walter Bagehot's Lombard Street, published in 1873), there really wasn't a lot to do. All of the major trading nations were on a hard gold standard, where their paper currency was exchangeable on demand for gold coin or bullion at a fixed rate. This imposed rigid discipline upon national governments and their treasuries, since any attempt to inflate the money supply ran the risk of inciting a run on their gold reserves. Trade imbalances would cause a transfer of gold which would force partners to adjust their interest rates, automatically cooling off overheated economies and boosting those suffering slowdowns.

World War I changed everything. After the guns fell silent and the exhausted nations on both sides signed the peace treaties, the financial landscape of the world was altered beyond recognition. Germany was obliged to pay reparations amounting to a substantial fraction of its GDP for generations into the future, while both Britain and France had run up debts with the United States which essentially cleaned out their treasuries. The U.S. had amassed a hoard of most of the gold in the world, and was the only country still fully on the gold standard. As a result of the contortions done by all combatants to fund their war efforts, central banks, which had been more or less independent before the war, became increasingly politicised and the instruments of government policy.

The people running these institutions, however, were the same as before: essentially amateurs without any theoretical foundation for the policies this unprecedented situation forced them to formulate. Germany veered off into hyperinflation, Britain rejoined the gold standard at the prewar peg of the pound, resulting in disastrous deflation and unemployment, while France revalued the franc against gold at a rate which caused the French economy to boom and gold to start flowing into its coffers. Predictably, this led to crisis after crisis in the 1920s, to which the central bankers tried to respond with Band-Aid after Band-Aid without any attempt to fix the structural problems in the system they had cobbled together. As just one example, an elaborate scheme was crafted where the U.S. would loan money to Germany which was used to make reparation payments to Britain and France, who then used the proceeds to repay their war debts to the U.S. Got it? (It was much like the “petrodollar recycling” of the 1970s where the West went into debt to purchase oil from OPEC producers, who would invest the money back in the banks and treasury securities of the consumer countries.) Of course, the problem with such schemes is there's always that mountain of debt piling up somewhere, in this case in Germany, which can't be repaid unless the economy that's straining under it remains prosperous. But until the day arrives when the credit card is maxed out and the bill comes due, things are glorious. After that, not so much—not just bad, but Hitler bad.

This is a fascinating exploration of a little-known epoch in monetary history, and will give you a different view of the causes of the U.S. stock market bubble of the 1920s, the crash of 1929, and the onset of the First Great Depression. I found the coverage of the period a bit uneven: the author skips over much of the financial machinations of World War I and almost all of World War II, concentrating on events of the 1920s which are now all but forgotten (not that there isn't a great deal we can learn from them). The author writes from a completely conventional wisdom Keynesian perspective—indeed Keynes is a hero of the story, offstage for most of it, arguing that flawed monetary policy was setting the stage for disaster. The cause of the monetary disruptions in the 1920s and the Depression is attributed to the gold standard, and yet even the most cursory examination of the facts, as documented in the book itself, gives the lie to this. After World War I, there was a gold standard in name only, as currencies were manipulated at the behest of politicians for their own ends without the discipline of the prewar gold standard. Further, if the gold standard caused the Depression, why didn't the Depression end when all of the major economies were forced off the gold standard by 1933? With these caveats, there is a great deal to be learned from this recounting of the era of the first modern experiment in political control of money. We are still enduring its consequences. One fears the “maestros” trying to sort out the current mess have no more clue what they're doing than the protagonists in this account.

In the Kindle edition the table of contents and end notes are properly linked to the text, but source citations, which are by page number in the print edition, are not linked. However, locations in the book are given both by print page number and Kindle “location”, so you can follow them, albeit a bit tediously, if you wish to. The index is just a list of terms without links to their appearances in the text.

 Permalink

September 2011

Deutsch, David. The Beginning of Infinity. New York: Viking, 2011. ISBN 978-0-670-02275-5.
Were it possible to communicate with the shades of departed geniuses, I suspect Richard Feynman would be dismayed at the prospect of a distinguished theoretical physicist committing phil-oss-o-phy in public, while Karl Popper would be pumping his fist in exultation and shouting “Yes!”. This is a challenging book and, at almost 500 pages in the print edition, a rather long one, but it is a masterpiece well worthy of the investment in reading it, and then, after an interval to let its implications sink in, reading it again because there is so much here that you're unlikely to appreciate it all in a single reading.

The author attempts nothing less ambitious than a general theory of the creation of knowledge and its implications for the future of the universe. (In what follows, I shall take a different approach than the author in explaining the argument, but I think we arrive at the same place.) In all human endeavours: science, art, morals, politics and governance, technology, economics, etc., what we ultimately seek are good explanations—models which allow us to explain a complex objective reality and make predictions about its behaviour. The author rejects the arguments of the relativists and social constructionists that no such objective reality exists, as well as those of empiricists and advocates of inductive reasoning that our models come purely from observation of events. Instead, he contends that explanations come from conjectures which originate in the human mind (often sparked by experience), which are then tested against objective reality and alternative conjectures, in a process which (in the absence of constraints which obstruct the process, such as reliance on received wisdom instead of free inquiry) inevitably converges upon explanations which are minimal and robust in the sense that almost any small change destroys their predictive power.

For example, if I were so inclined, I could invent a myth involving gods and goddesses and their conflicting wills and goals which would exactly replicate the results of Newton's laws of mechanics. But this would be a bad explanation because the next person could come up with their own myth involving an entirely different pantheon which produced the same results. All of the excess baggage contributes nothing to the explanation, while there's no way you can simplify “F=ma” without breaking the entire structure.

And yet all of our explanations, however elegant and well-tested, are simply the best explanations we've found so far, and likely to be incomplete when we try to apply them to circumstances outside the experiences which motivated us to develop them. Newton's laws fail to describe the motion of objects at a substantial fraction of the speed of light, and it's evident from fundamental conflicts in their theoretical structure that our present theories of the very small (quantum mechanics) and the very large (general relativity) are inadequate to describe circumstances which obtained in the early universe and in gravitational collapse of massive objects.

What is going on here, contends Deutsch, is nothing other than evolution, with the creation of conjectures within the human mind serving as variation, and criticism of them based on confrontation with reality performing selection. Just as biological evolution managed over four billion years or so to transform the ancestral cell into human brains capable of comprehending structures from subatomic particles to cosmology, the spark which was ignited in the brains of our ancestors is able, in principle, to explain everything, either by persistence in the process of conjecture and criticism (variation and selection), or by building the tools (scientific instruments, computers, and eventually perhaps our own intellectually transcendent descendants) necessary to do so. The emergence of the human brain was a phase transition in the history of the Earth and, perhaps, the universe. Humans are universal explainers.

Let's consider the concept of universality. While precisely defined in computing, it occurs in many guises. For example, a phonetic alphabet (as opposed to a pictographic writing system) is capable of encoding all possible words made up of the repertoire of sounds it expresses, including those uninvented and never yet spoken. A positional number system can encode all possible numbers without the need to introduce new symbols for numbers larger or smaller than those encountered so far. The genetic code, discovered as best we can determine through a process of chemical evolution on the early Earth, is universal: the same code, with a different string of nucleotides, can encode both brewer's yeast and Beethoven. Less than five million years ago the human lineage diverged from the common ancestor of present-day humans and chimpanzees, and between that time and today the human mind made the “leap to universality”, with the capacity to generate explanations, test them against reality, transmit them to other humans as memes, and store them extrasomatically as oral legends and, eventually, written records.

Universality changes all the rules and potential outcomes. It is a singularity in the mathematical sense that one cannot predict the future subsequent to its emergence from events preceding it. For example, an extraterrestrial chemist monitoring Earth prior to the emergence of the first replicator could have made excellent predictions about the chemical composition of the oceans and its interaction with the energy and material flows in the environment, but at the moment that first replicating cell appeared, the potential for things the meticulous chemist wouldn't remotely imagine came into existence: stromatolites, an oxygen-rich atmosphere, metazoans, flowers, beetles, dinosaurs, boot prints on the Moon, and the designated hitter rule. So it is with the phase transition to universality of the human mind. It is now impossible to predict based on any model not taking that singularity into account the fate of the Earth, the Sun, the solar system, or the galaxy. Barring societal collapse, it appears probable that within this century individual wealthy humans (and a few years thereafter, everybody) will have the ability to launch self-replicating von Neumann probes into the galaxy with the potential of remaking it in their own image in an eyeblink compared to the age of the universe (unless they encounter probes launched by another planet full of ambitious universal explainers, which makes for another whole set of plot lines).

But universality and evolutionary epistemology have implications much closer to home and the present. Ever since the Enlightenment, Western culture has developed and refined the scientific method, the best embodiment of the paradigm of conjecture and criticism in the human experience. And yet, at the same time, the institutions of governance of our societies have been largely variations on the theme of “who shall rule?”, and the moral underpinnings of our societies have either been based upon received wisdom from sacred texts, tradition, or the abdication of judgement inherent in multicultural relativism. The author argues that in all of these “non-scientific” domains objective truth exists just as it does in mechanics and chemistry, and that we can discover it and ever improve our explanations of it by precisely the same process we use in science: conjecture and criticism. Perversely, many of the institutions we've created impede this process. Consider how various political systems value compromise. But if there is a right answer and a wrong answer, you don't get a better explanation by splitting the difference. It's as if, faced with a controversy between geocentric and heliocentric models of the solar system, you came up with a “compromise” that embodied the “best of both”. In fact, Tycho did precisely that, and it worked even worse than the alternatives. The value of democracy is not that it generates good policies—manifestly it doesn't—but rather that it provides the mechanism for getting rid of bad policies and those who advocate them and eventually selecting the least bad policies based upon present knowledge, always subject to revision based on what we'll discover tomorrow.

The Enlightenment may also be thought of as a singularity. While there have been brief episodes in human history where our powers as universal explainers have been unleashed (Athens and Florence come to mind, although there have doubtless been a multitude of others throughout history which have left us no record—it is tragic to think of how many Galileos were born and died in static tribal societies), our post-Enlightenment world is the only instance which has lasted for centuries and encompassed a large part of the globe. The normal state of human civilisation seems to be a static or closed society dominated by tradition and taboos which extinguish the inborn spark of universal explanation which triggers the runaway exponential growth of knowledge and power. The dynamic (or open) society (1, 2) is a precious thing which has brought unprecedented prosperity to the globe and stands on the threshold of remaking the universe as we wish it to be.

If this spark be not snuffed by ignorance, nihilism, adherence to tradition and authority, and longing for the closure of some final utopia, however confining, but instead lights the way to a boundless frontier of uncertainty and new problems to comprehend and solve, then David Deutsch will be celebrated as one of the visionaries who pointed the way to this optimistic destiny of our species and its inheritors.

 Permalink

Demick, Barbara. Nothing to Envy. New York: Spiegel & Grau, [2009] 2010. ISBN 978-0-385-52391-2.
During the last decade or so I lived in California, I spent a good deal of my time being angry—so much so that I didn't really perceive the extent to which anger had become part of who I was and how I lived my life. It was only after I'd gotten out of California and the U.S. in 1991 and lived a couple of years in Switzerland that I discovered that the absence of driving on crumbling roads overcrowded with aggressive and incompetent drivers, a government bent on destroying productive enterprise, and a culture collapsing into vulgarity and decadence had changed who I was: in short, only after leaving Marin County, California, had I become that thing which its denizens delude themselves into believing they are—mellow.

What, you might be asking yourself, does this have to do with a book about the lives of ordinary people in North Korea? Well, after a couple of decades in Switzerland, it takes quite a bit of provocation to bring back the old hair-on-fire white flash, like passing through a U.S. airport or…reading this book. I do not mean that this book angered me; it is a superb work of reportage on a society so hermetically closed that obtaining even the slightest details on what is really going on there is near-impossible, as tourists and journalists are rarely permitted to travel outside North Korea's capital of Pyongyang, a Stalinist Potemkin village built to deceive them as to the situation in other cities and the countryside. What angered me is the horrible, pointless, and needless waste of the lives of tens of millions of people, generation after generation, at the hands of a tyranny so abject it seems to have read Orwell's 1984 not as a dystopian warning, but as an instruction manual. The victims of this tragedy are not just the millions who have died in the famines, ended their lives in the sprawling complex of prisons and forced labour camps, or were executed for “crimes” such as trying to communicate with relatives outside the country, but the tens of millions forced to live in a society which seems to have been engineered to extinguish every single pleasure which makes human life worth living. Stunted due to lack of food, indoctrinated with the fantasy that the horror which is their lives is the best for which they can hope, and deprived of any contact with the human intellectual heritage which does not serve the interests of their rulers, they live in an environment which a medieval serf would view as a huge step down from his own lot in life, all while the rulers at the top of the pyramid live in grand style and are treated as legitimate actors on the international stage by diplomatic crapweasels from countries that should be shamed by their behaviour.

In this book the author tackles the formidable task of penetrating the barrier of secrecy and lies which hides the reality of life in North Korea from the rest of the world by recounting the lives of six defectors, all of whom originated in Chongjin, the third-largest city in North Korea, off limits to almost all foreign visitors. The names of the witnesses to this horror have been changed to protect relatives still within the slave state, but their testimony is quoted at length and provides a chilling view of what faces the 24 million who have so far been unable to escape. Now, clearly, if you're relying exclusively on the testimony of those who have managed to escape an oppressive regime, you're going to get a different picture than if you'd interviewed those who remain—just as you'd get a different view of California and the U.S. from somebody who got out of there twenty years ago compared to a current resident—but the author takes pains to corroborate the accounts of defectors against one another and against the sparse information available from international aid workers who have infrequently been allowed to visit Chongjin. The accounts of the culture shock escapees from North Korea experience not just in 21st-century South Korea but even in rural China are heartrending: Kim Ji-eun, a medical doctor who escaped to China after seeing the children in her care succumb to starvation without anything she could do, describes her first memory of China as discovering a dog's bowl filled with white rice and bits of meat and realising that dogs in China ate better than doctors in North Korea.

As Lenin asked, “What is to be done?” Taking on board the information in this narrative may cause you to question many of what appear to be sound approaches to bringing an end to this horror. For, according to the accounts of the defectors, tyranny of the North Korean style actually works quite well: the number of escapees is minuscule compared to the population which remains behind, many of whom appear to believe the regime's lies that they are a superior race and have it better than the balance of humanity, even as they see members of their family starve to death or disappear into the gulag. For some years I have been thinking about “freedom flights”: a bunch of liberty-loving philanthropists hire a fleet of cargo aircraft to scatter several million single-shot pistols, each with its own individual parachute and accompanied by a translation of Major von Dach's book, across the territory of tyrannical Hell-holes and “let the people rule”. After reading this book, I'm not sure that would suffice. So effectively has the population been brainwashed that a substantial fraction seem to accept their sorry lot as the normal state of human existence. Perhaps we'll also need to drop solar-powered or hand-cranked satellite radio receivers to provide a window into the outside world—along with the guns, of course, to take care of snitches who try to turn in those who choose to widen their perspective, and of the minions of the state who come to arrest them.

By almost any measure, North Korea is an extreme outlier. By comparison, Iran is almost a paradise. Even Zimbabwe, while Hell on earth for those unfortunate enough to live there, is relatively transparent to outsiders who document what is going on and much easier to escape. But studying the end point of trends which seem to be relatively benign when they get going can be enlightening, and this book provides a chilling view of what awaits at the final off-ramp of the road to serfdom.

 Permalink

October 2011

Worden, Al with Francis French. Falling to Earth. Washington: Smithsonian Books, 2011. ISBN 978-1-58834-309-3.
Al Worden (his given name is Alfred, but he has gone by “Al” his whole life) was chosen as a NASA astronaut in April 1966, served as backup command module pilot for the Apollo 12 mission, the second Moon landing, and then flew to the Moon as command module pilot of Apollo 15, the first serious geological exploration mission. As command module pilot, Worden did not land on the Moon but, while tending the ship in orbit awaiting the return of his crewmates, operated a series of scientific experiments, some derived from spy satellite technology, which provided detailed maps of the Moon and a survey of its composition. To retrieve the film from the mapping cameras in the service module, Worden performed the first deep-space EVA during the return to Earth.

Growing up on a farm in rural Michigan during the first great depression and the Second World War, Worden found his inclination toward being a loner reinforced by the self-reliance his circumstances forced upon him. He remarks on several occasions on how he found satisfaction in working by himself and in what he achieved on his own and, while not disliking the company of others, felt no need to validate himself through their opinions of him. This inner-directed drive led him to West Point, which he viewed as the only way to escape from a career on the farm given his family's financial circumstances, to an Air Force commission, and to graduation from the Empire Test Pilots' School in Farnborough, England under a US/UK exchange program.

For one inclined to be a loner, it would be difficult to imagine a more ideal mission than Worden's on Apollo 15. Orbiting the Moon in the command module Endeavour for almost three days by himself, he was, at maximum distance on the far side of the Moon, more isolated from his two crewmates on the surface than any human has been from any other humans before or since (subsequent Apollo missions placed the command module in a lower lunar orbit, reducing this distance slightly). He candidly admits how much he enjoyed being on his own in the capacious command module, half the time entirely his own man while out of radio contact behind the Moon, and how his joy at the successful return of his comrades from the surface was tempered by how crowded and messy the ship became with them, the Moon rocks they collected, and all the grubby Moon dust clinging to their spacesuits aboard.

Some Apollo astronauts found it difficult to adapt to life on Earth after their missions. Travelling to the Moon before you turn forty is a particularly extreme case of “peaking early”, and the question of “What next?” can be formidable, especially when the entire enterprise of lunar exploration was being dismantled at its moment of triumph. Still, one should not overstate this point: of the twenty-four astronauts who flew to the Moon, most went on to the subsequent careers you'd expect for the kind of overachievers who become astronauts in the first place—in space exploration, the military, business, politics, education, and even fine arts. Few, however, fell to Earth so hard as the crew of Apollo 15. The collapse of one of their three landing parachutes before splashdown, its canopy eroded by a dump of reaction control propellant, might have been seen as a premonition, but after the triumphal conclusion of a perfect mission, a White House reception, an address to a joint session of Congress, and adulatory celebrations on a round-the-world tour, it all came undone in an ugly scandal involving, of all things, postage stamps.

The Apollo 15 crew, like those of earlier NASA missions, had carried on board as part of their “personal preference kits” postage stamp covers commemorating the flight. According to Worden's account in this book, the Apollo 15 covers were arranged by mission commander Dave Scott, and agreed to by Worden and lunar module pilot Jim Irwin on Scott's assurance that this was a routine matter which would not affect their careers and that any sales of the covers would occur only after their retirement from NASA and the Air Force (in which all three were officers). When, after the flight, the covers began to come onto the market, an ugly scandal erupted, leading to the Apollo 15 crew being removed from flight status, and Worden and Irwin being fired from NASA with reprimands placed in their Air Force records which would block further promotion. Worden found himself divorced (before the Moon mission), out of a job at NASA, and with no future in the Air Force.

Reading this book, you get the impression that this was something like the end of Worden's life. And yet it wasn't—he went on to complete his career in the flight division at NASA's Ames Research Center and retire with the rank and pension of a Colonel in the U.S. Air Force. He then served in various capacities in private sector aerospace ventures and as chairman of the Astronaut Scholarship Foundation. Honestly, reading this book, you get the sense that everybody has forgotten the stupid postage stamps except the author. If there is some kind of redemption to be had by recounting the episode here (indeed, “Redemption” is the title of chapter 13 of this work), then fine, but whilst reading this account, I found myself inclined to shout, “Dude—you flew to the Moon! Yes, you messed up and got fired—who hasn't? But you landed on your feet and have had a wonderful life since, including thirty years of marriage. Get over the shaggy brown ugliness of the 1970s and enjoy the present and all the years to come!”

 Permalink

Penrose, Roger. Cycles of Time. New York: Alfred A. Knopf, 2010. ISBN 978-0-307-26590-6.
One of the greatest and least appreciated mysteries of contemporary cosmology is the extraordinarily special state of the universe immediately after the big bang. While at first glance an extremely hot and dense mass of elementary particles and radiation near thermal equilibrium might seem to have near-maximum entropy, when gravitation is taken into account, its homogeneity (the absence of all but the most tiny fluctuations in density) actually caused it to have a very small entropy. Only a universe which began in such a state could have a well-defined arrow of time which permits entropy to steadily increase over billions of years as dark matter and gas clump together, stars and galaxies form, and black holes appear and swallow up matter and radiation. If the process of the big bang had excited gravitational degrees of freedom, the overwhelmingly most probable outcome would be a mess of black holes with a broad spectrum of masses, which would evolve into a universe which looks nothing like the one we inhabit. As the author has indefatigably pointed out for many years, for some reason the big bang produced a universe in what appears to be an extremely improbable state. Why is this? (The preceding sketch may be a bit telegraphic because I discussed these issues at much greater length in my review of Sean Carroll's From Eternity to Here [February 2010] and didn't want to repeat it all here. So, if you aren't sure what I just said, you may wish to read that review before going further.)
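The degree of this specialness can be made quantitative. The following sketch is my own summary of Penrose's well-known estimate from his earlier books, not a passage from the work under review. The Bekenstein–Hawking entropy of a black hole grows as the square of its mass:

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^3 A}{4 G \hbar},
\qquad
A = 16\pi \left(\frac{G M}{c^2}\right)^2
\quad\Longrightarrow\quad
S_{\mathrm{BH}} \propto M^2 ,
```

so gravitational clumping increases entropy enormously: merging two equal black holes roughly doubles the mass and thus quadruples the entropy. Comparing the entropy of the smooth early universe with that of a hypothetical universe of the same mass collapsed into black holes, Penrose arrived at his famous figure that the initial conditions of the big bang were special to roughly one part in 10^(10^123).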

In this book, Penrose proposes “conformal cyclic cosmology” as the solution to this enigma. Let's pick this apart, word by word. A conformal transformation is a mathematical mapping which preserves angles in infinitesimal figures. It is possible to define a conformal transformation (for example, the hyperbolic transformation illustrated by M. C. Escher's Circle Limit III) which maps an infinite space onto a finite one. The author's own Penrose diagrams map all of (dimension reduced) space-time onto a finite plot via a conformal transformation. Penrose proposes a conformal transformation which maps the distant future of a dead universe undergoing runaway expansion to infinity with the big bang of a successor universe, resulting in a cyclic history consisting of an infinite number of “æons”, each beginning with its own big bang and ending in expansion to infinity. The resulting cosmology is that of a single universe evolving from cycle to cycle, with the end of each cycle producing the seemingly improbable conditions required at the start of the next. There is no need for an inflationary epoch after the big bang, a multitude of unobservable universes in a “multiverse”, or invoking the anthropic principle to explain the apparent fine-tuning of the big bang—in Penrose's cosmology, the physics makes those conditions inevitable.

Now, the conformal rescaling Penrose invokes only works if the universe contains no massive particles, as only massless particles which always travel at the speed of light are invariant under the conformal transformation. Hence for the scheme to work, there must be only massless particles in the universe at the end of the previous æon and immediately after the big bang—the moment dubbed the “crossover”. Penrose argues that at the enormous energies immediately after the big bang, all particles were effectively massless anyway, with mass emerging only through symmetry breaking as the universe expanded and cooled. On the other side of the crossover, he contends that in the distant future of the previous æon almost all mass will have been accreted by black holes which then will evaporate through the Hawking process into particles which will annihilate, yielding a universe containing only massless photons and gravitons. He does acknowledge that some matter may escape the black holes, but then proposes (rather dubiously in my opinion) that all stable massive particles are ultimately unstable on this vast time scale (a hundred orders of magnitude longer than the time since the big bang), or that mass may just “fade away” as the universe ages: kind of like the Higgs particle getting tired (but then most of the mass of stable hadrons doesn't come from the Higgs process, but rather the internal motion of their component quarks and gluons).
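To make “invariant under the conformal transformation” slightly more concrete (this is my gloss in standard notation, not the book's): a conformal rescaling replaces the metric by

```latex
\hat{g}_{ab} = \Omega^2\, g_{ab}, \qquad \Omega > 0 ,
```

which stretches distances by a position-dependent factor Ω while preserving angles and light cones. In four dimensions, Maxwell's equations are conformally invariant as they stand, and the wave equation for a conformally coupled massless scalar field,

```latex
\left(\Box - \tfrac{1}{6} R\right)\varphi = 0 ,
```

keeps its form when φ is rescaled along with the metric. A mass term spoils this, because a mass introduces a fixed length scale, the Compton wavelength ħ/mc, which a position-dependent rescaling cannot preserve; hence Penrose's requirement that only massless fields be present at the crossover.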

Further, Penrose believes that information is lost when it falls to the singularity within a black hole, and is not preserved in some correlation at the event horizon or in the particles emitted as the black hole evaporates. (In this view he is now in a distinct minority of theoretical physicists.) This makes black holes into entropy destroying machines. They devour all of the degrees of freedom of the particles that fall into them and then, when they evaporate with a “pop”, it's all lost and gone away. This allows Penrose to avoid what would otherwise be a gross violation of the second law of thermodynamics. In his scheme the big bang has very low entropy because all of the entropy created in the prior æon has been destroyed by falling into black holes which subsequently evaporate.

All of this is very original, clever, and the mathematics is quite beautiful, but it's nothing more than philosophical speculation unless it makes predictions which can be tested by observation or experiment. Penrose believes that gravitational radiation emitted from the violent merger of galactic-mass black holes in the previous æon may come through the crossover and imprint itself as concentric circles of low temperature variation in the cosmic background radiation we observe today. Further, with a colleague, he argues that precisely such structures have been observed in two separate surveys of the background radiation. Other researchers dispute this claim, and the debate continues.

For the life of me, I cannot figure out to which audience this book is addressed. It starts out discussing the second law of thermodynamics and entropy in language you'd expect in a popularisation aimed at the general public, but before long we're into territory like:

We now ask for the analogues of F and J in the case of the gravitational field, as described by Einstein's general theory of relativity. In this theory there is a curvature to space-time (which can be calculated once one knows how the metric g varies throughout the space-time), described by a [0 4]-tensor R, called the Riemann(-Christoffel) tensor, with somewhat complicated symmetries resulting in R having 20 independent components per point. These components can be separated into two parts, constituting a [0 4]-tensor C, with 10 independent components, called the Weyl conformal tensor, and a symmetric [0 2]-tensor E, also with 10 independent components, called the Einstein tensor (this being equivalent to a slightly different [0 2]-tensor referred to as the Ricci tensor). According to Einstein's field equations, it is E that provides the source to the gravitational field. (p. 129)

Ahhhh…now I understand! Seriously, much of this book is tough going, as technical in some sections as scholarly publications in the field of general relativity, and readers expecting a popular account of Penrose's proposal may not make it to the payoff at the end. For those who thirst for even more rigour there are two breathtakingly forbidding appendices.

The Kindle edition is excellent, with the table of contents, notes, cross-references, and index linked just as they should be.

 Permalink

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988] 2004. ISBN 978-0-345-47609-8.
In 1871 Helmuth von Moltke the Elder, chief of the Prussian General Staff and architect of modern German military strategy, wrote “no plan of operations extends with any certainty beyond the first contact with the main hostile force”, an observation which is often paraphrased as “No plan survives contact with the enemy”. This is doubtless the case, but as this classic history of the diplomatic run-up to World War I and the initial hostilities from the outbreak of the war through the First Battle of the Marne demonstrates, plans, treaties, and military and political structures put into place long before open conflict erupts can tie the hands of decision makers long after events have proven them obsolete.

I first read this book in the 1980s, and I found upon rereading it now with the benefit of having since read a number of other accounts of the period, both contemporary and historical, that I'd missed or failed to fully appreciate some important points on the first traverse.

The first is how crunchy and rigid the system of alliances among the Great Powers was in the years before the War, as were the mobilisation plans of the land powers: France, Germany, Austria-Hungary, and Russia. Viewed from a prewar perspective, these arrangements were widely seen as guarantors of security: they created a balance of power in which the ultimate harm to any aggressor was easily calculated to be far greater than any potential gain, especially as the powers' economies became increasingly interlinked and dependent upon international trade. For economic reasons alone, any war was expected to be short—no power was believed to have the resources to sustain a protracted conflict once its trade was disrupted by war. And yet this system, while metastable near the local minimum it had occupied since the 1890s, proved highly unstable to perturbations which dislodged it from that perch. The mobilisation plans of the land powers (Britain, characteristically, had no such plan and expected to muddle through based upon events, but as the preeminent sea power with global obligations it was, in a sense, perpetually mobilised for naval conflicts) were carefully choreographed at the level of detail of railroad schedules. Once the “execute” button was pushed, events would begin to occur on a nationwide scale: call-ups of troops, distribution of supplies from armories, movement of men and munitions to assembly points, rationing of key supplies, etc. Once one nation had begun to mobilise, its potential opponents ran an enormous risk if they did not also mobilise—every day they delayed was a day the enemy, once assembled in battle order, could attack them before their own preparations were complete.

This interlocking set of alliances and scripted mobilisation plans finally proved lethal in 1914. On July 28, Austria-Hungary declared war on Serbia and began mobilisation. Russia, as an ally of Serbia and seeing its position in the Balkans threatened, declared a partial mobilisation on July 29. Germany, allied to Austria-Hungary and threatened by the Russian mobilisation, decreed its own mobilisation on July 30. France, allied with Russia and threatened by Germany, began mobilisation on August 1st. Finally, Britain, allied with France and Russia, declared war on Germany on August 4th. Europe, at peace the morning of Tuesday, July 28th, was, by the evening of Tuesday, August 4th, at war with itself, almost entirely due to treaties and mobilisation plans concluded in peacetime with the best of intentions, and not overt hostilities between any of the main powers involved.

It is a commonplace that, at its outbreak, World War I surpassed all historical experience and expectations in the scale of destruction and the brutality of the conflict (a few prescient observers who had studied the second American war of secession and developments in weaponry since then were not surprised, but they were in the minority), but this brutality is often thought to have emerged in the period of static trench warfare which predominated from 1915 until the very end of the war. This account makes clear that even the initial “war of maneuver” in August and September 1914 was characterised by the same callous squandering of life by commanders who adhered to their pre-war plans despite overwhelming evidence from the field that the assumptions upon which they were based were completely invalid. Both French and German commanders sent wave after wave of troops armed only with bolt-action rifles and bayonets against fortified positions defended by artillery and machine guns, suffering tens of thousands of casualties (some units were almost completely wiped out) with no effect whatsoever. Many accounts of World War I portray the mindless brutality of the conflict as a product of the trenches, but it was there from the very start, inherent in the prevailing view that the citizen was the property of the state, to be expended at the will of the ruling class (with the exception of the British, all armies in the conflict were composed largely of conscripts).

Although originally published almost half a century ago, this book remains one of the definitive accounts of the origins of World War I and the first month of the conflict, and one of outstanding literary merit (it is a Pulitzer prize winner). John F. Kennedy read the book shortly after its publication, and it is said to have made such an impression upon him that it influenced his strategy during the Cuban Missile Crisis, seeking to avoid actions which could trigger the kind of reciprocal automatic responses which occurred in the summer of 1914. Those who bewail the soggy international institutions and arrangements of the present day, where nothing is precisely as it seems and every commitment is balanced with a dozen ways to wiggle out of it, may find this book a cautionary tale of the alternative, and how a crunchy system of alliances may be far more dangerous. While reading the narrative, however, I found myself thinking not so much about diplomacy and military matters but rather how much today's globalised economic and financial system resembles the structure of the European great powers in 1914. Once again we hear that conflict is impossible because the damage to both parties would be unacceptable; that the system can be stabilised by “interventions” crafted by wise “experts”; that entities which are “too big to fail”, simply by being so designated, will not; and that the system is ultimately stable against an unanticipated perturbation which brings down one part of the vast interlocking structure. These beliefs seem to me, like those of the political class in 1914, to be based upon hope rather than evidence, and anybody interested in protecting their assets should think at some length about the consequences should one or more of them prove wrong.

 Permalink

Markopolos, Harry. No One Would Listen. Hoboken, NJ: John Wiley & Sons, 2010. ISBN 978-0-470-91900-2.
Bernard L. “Bernie” Madoff was a co-founder of NASDAQ, founder and CEO of a Wall Street firm which became one of the top market makers, and operator of a discretionary money management operation which dwarfed hedge funds and provided its investors a reliable return in markets up and down which no other investment vehicle could approach. Madoff was an elder statesman of Wall Street, respected not only for his success in business but also for philanthropic activities.

On December 10th, 2008, Madoff confessed to his two sons that his entire money management operation had been, since inception, a Ponzi scheme, and the next day he was arrested by the FBI for securities fraud. After having pleaded guilty to 11 federal felony charges, he was sentenced to 150 years in federal incarceration, which sentence he will be serving for the foreseeable future. The total amount of money under management in Madoff's bogus investment scheme is estimated as US$65 billion, although estimates of actual losses to investors are all over the map due to Madoff's keeping transactions off the books and offshore investors' disinclination to make claims for funds invested with Madoff which they failed to disclose to their domicile tax authorities.

While this story broke like a bombshell on Wall Street, it was anything but a surprise to the author who had figured out back in the year 2000, “in less than five minutes”, that Madoff was a fraud. The author is a “quant”—a finance nerd who lives and breathes numbers, and when tasked by his employer to analyse Madoff, a competitor for their investors' funds, and devise a financial product which could compete with Madoff's offering, he almost immediately realised that Madoff's results were too good to be true. First of all, Madoff claimed to be using a strategy of buying stocks with a “collar” of call and put options, with stocks picked from the S&P 100 stock index. Yet it was easy to demonstrate, based upon historical data from the period of Madoff's reported results, that any such strategy could not possibly avoid down periods much more serious than Madoff reported. Further, such a strategy, given the amount of money Madoff had under management, would have required him to have placed put and call option hedges on the underlying stocks which greatly exceeded the total open interest in such options. Finally, Madoff's whole operation made no sense from the standpoint of a legitimate investment business: he was effectively paying 16% for capital in order to realise a 1% return on transaction fees while he could, by operating the same strategy as a hedge fund, pocket a 4% management fee and a 20% participation in the profits.
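The fee arithmetic in that last point is worth spelling out. The following back-of-the-envelope sketch uses a hypothetical US$1 billion under management (the percentages are those cited above; the assets-under-management figure is mine, chosen purely for illustration):

```python
# Back-of-the-envelope comparison of Madoff's broker-dealer economics
# with the same results run as a hedge fund. The US$1 billion of assets
# under management (AUM) is a hypothetical figure for illustration; the
# 16% investor return, ~1% transaction-fee income, and 4%/20% hedge
# fund fee structure are the figures cited in the review.

aum = 1_000_000_000        # hypothetical assets under management (US$)
investor_return = 0.16     # annual return paid out to investors

# As a broker-dealer: investors keep the 16%; the firm nets only
# about 1% of AUM in transaction fees.
broker_income = 0.01 * aum

# As a hedge fund: a 4% management fee on AUM plus a 20% share of
# the profits earned for investors.
hedge_fund_income = 0.04 * aum + 0.20 * (investor_return * aum)

print(f"Income as broker-dealer: ${broker_income:,.0f}")     # ~$10 million
print(f"Income as hedge fund:    ${hedge_fund_income:,.0f}")  # ~$72 million
```

On these assumptions a legitimate manager producing such returns would have netted roughly seven times as much by organising as a hedge fund; that Madoff chose the far less profitable structure was itself a glaring red flag.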

Having figured this out, the author assumed that simply submitting the facts of the case to the regulator in charge, the Securities and Exchange Commission (SEC), would quickly bring the matter to justice. Well, not exactly. He made his first submission to the SEC in May of 2000, and the long saga of regulatory incompetence began. A year later, articles profiling Madoff and skating near the edge of accusing him of fraud were published in a hedge fund trade magazine and in Barron's, read by everybody in the financial community, and still nothing happened. Off-the-record conversations with major players on Wall Street indicated that many of them had concluded that Madoff was a fraud, and indeed none of the large firms placed money with him, but ratting him out to The Man was considered infra dig. And so the sheep were sheared to the tune of sixty-five billion dollars. Many investors lost everything, having entrusted their entire fortunes to Madoff directly or placed them with “feeder funds” which, without doing any due diligence whatsoever, simply funnelled money to Madoff while skimming a “management and performance fee” off the top.

When grand scale financial cataclysms like this erupt, the inevitable call is for “more regulation”, as if “regulation” ever makes anything more regular. This example gives the lie to that perennial nostrum—all of Madoff's operations, from the inception of his Ponzi scheme in 1992 until its undoing in 2008, were subject to regulation by the SEC, and the author argues persuasively that a snap audit at any time during this period, led by a competent fraud investigator who demanded trade confirmation tickets and compared them with exchange transaction records, would have uncovered the fraud in less than an hour. And yet this never happened, demonstrating that the SEC is toothless, clueless, and a poster child for regulatory capture, in which a regulator becomes a client of the industry it is charged to regulate and spends its time harassing small operators on the margin while turning a blind eye to gross violations by politically connected players.

An archive of original source documents is available on the book's Web site.

 Permalink

November 2011

Thor, Brad. Takedown. New York: Pocket Books, 2006. ISBN 978-1-4516-3615-4.
This is the fifth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this episode, Harvath, an agent for a covert branch of the U.S. Department of Homeland Security, completes a snatch and exfiltration of a terrorist bombmaker granted political asylum in Canada, delivers him into custody in Manhattan, and plans to spend a lazy Fourth of July holiday in the Big Apple with one of his closest friends, a Delta Force operative recently retired after a combat injury. Their bar-hopping agenda is rudely interrupted by an escalating series of terrorist attacks which culminate in bridge and tunnel bombings which, along with sniper and rocket-propelled grenade attacks on boat and air traffic, isolate Manhattan from the mainland and inflict massive civilian casualties.

As Harvath establishes contact with his superiors, he discovers he is the only operative in the city and, worse, that a sequence of inexplicable and extremely violent attacks on targets irrelevant to any known terrorist objective seems to indicate the attacks so far, however horrific, may be just a diversion and/or intended to facilitate a further agenda. Without support or hope of reinforcement from his own agency, he recruits a pick-up team of former special operators recovering from the physical and psychological consequences of combat injuries, whom he met at the Veterans Affairs hospital in New York as the attacks unfolded, and starts to follow the trail of the terrorists loose in Manhattan. As the story develops, layer after layer of deception is revealed, not only on the part of the terrorists and the shadowy figures behind them, pulling their strings, but also within the U.S. government, reaching all the way to the White House. And if you thought you'd heard the last of the dinky infovore Troll and his giant Ovcharkas, he's back!

This is a well-crafted thriller and will keep you turning the pages. That said, I found it somewhat odd that a person with such a sense of honour and loyalty to his friends and brothers in arms as Harvath would so readily tolerate deception among his superiors which led directly to their deaths, regardless of the purported “national security” priorities. It is indicative of how rapidly the American Empire is sliding into the abyss that outrageous violations of human rights, the rule of law, and due process which occur in this story to give it that frisson of edginess that Thor seeks in his novels now seem tame compared to remote-controlled murder by missile of American citizens in nations with which the U.S. is not at war ordered by a secret committee on the sole authority of the president. Perhaps as the series progresses, we'll encounter triple zero agents—murder by mouse click.

As usual, I have a few quibbles.

Spoiler warning: Plot and/or ending details follow.  
  • The president's press secretary does not write his speeches. This is the job of speechwriters, one or more of whom usually accompanies the president even on holiday. (Chapter 18)
  • The Surgeon General is not the president's personal physician. (Chapter 42)
  • If I were rappelling through a manhole several stories into the bowels of Manhattan, I think I'd use a high tensile strength tripod rather than the “high tinsel” tripod used in chapter 59. Now if the bad guy was way up in a Christmas tree….
  • In chapter 100, the Troll attaches a “lightweight silencer” to his custom-built rifle firing the .338 Lapua sniper round. Even if you managed to fit a suppressor to a rifle firing this round and it effectively muffled the sound of the muzzle blast (highly dubious), there would be no point in doing so because the bullet remains supersonic more than a kilometre from the muzzle (depending on altitude and temperature), and the shock wave from the bullet would easily be audible anywhere in Gibraltar. Living across the road from a rifle range, I'm acutely aware of the sound of supersonic bullets coming more or less in my direction, and these are just 5.56 and 7.62 NATO, not Lapua “reach out and whack someone” ammo.
Spoilers end here.  
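The supersonic-range quibble is easy to sanity-check. In a toy flat-fire model the bullet's velocity decays roughly exponentially with downrange distance, so the distance at which it goes subsonic is a one-line formula. The muzzle velocity and decay length below are assumed, illustrative values for a heavy .338 Lapua load, not real ballistics data:

```python
import math

# Toy model: velocity decays exponentially with downrange distance,
#   v(x) = v0 * exp(-x / L)
# Setting v(x) = c (local speed of sound) and solving for x gives the
# distance over which the bullet remains supersonic, and thus produces
# an audible ballistic crack, regardless of any muzzle suppressor.
V0 = 915.0   # muzzle velocity, m/s (assumed, typical 250-grain load)
L = 1450.0   # decay length, m (assumed, chosen to match published tables)
C = 343.0    # speed of sound at 20 °C, sea level, m/s

def supersonic_range(v0=V0, decay=L, c=C):
    """Distance (m) at which the bullet slows to the speed of sound."""
    return decay * math.log(v0 / c)

print(round(supersonic_range()))  # well over a kilometre with these numbers
```

With these toy numbers the round stays supersonic for roughly 1.4 km, consistent with the observation that its crack would be audible anywhere in Gibraltar, suppressor or no.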

 Permalink

Breitbart, Andrew. Righteous Indignation. New York: Grand Central, 2011. ISBN 978-0-446-57282-8.
Andrew Breitbart has quickly established himself as the quintessential happy warrior in the struggle for individual liberty. His breitbart.com and breitbart.tv sites have become “go to” resources for news and video content, and his ever-expanding constellation of “Big” sites (Big Hollywood, Big Government, Big Journalism, etc.) have set the standard for group blogs which break news rather than just link to or comment upon content filtered through the legacy media.

In this book, he describes his personal journey: growing up in “the belly of the beast”, the Los Angeles suburb of Brentwood; his party days at college and rocky start in the real world; and then his discovery, while watching the Clarence Thomas confirmation hearings on television, that much of the conventional “wisdom” he had uncritically imbibed from the milieu in which he grew up, his education, and the media just didn't make any sense or fit with his innate conception of right and wrong. This caused him to embark upon the intellectual journey described here, and upon a new career in the centre of the New Media cyclone: helping to create the Huffington Post, editing the Drudge Report, and then founding his own media empire and breaking stories which would never have seen the light of day in the age of the legacy media monopoly, including the sting which brought down ACORN.

Although he often comes across as grumpy and somewhat hyper in media appearances, I believe Breitbart well deserves the title “happy warrior” because he clearly loves every moment of what he's doing—striding into the lion's den, exploding the lies and hypocrisy of his opponents with their own words and incontrovertible audio and video evidence, and prosecuting the culture war, however daunting the odds, with the ferocity of Churchill's Britain in 1940. He seems to relish being a lightning rod—on his Twitter feed, he “re-tweets” all of the hate messages he receives.

This book is substantially more thoughtful than I expected; I went in thinking I'd be reading the adventures of a gadfly-provocateur, and while there's certainly some of that, there is genuine depth here which may be enlightening to many readers. While I can't assume agreement with someone whom I've never met, I came away thinking that Breitbart's view of his opponents is similar to the one I have arrived at independently, as described in Enemies. Breitbart describes a “complex” consisting of the legacy media, the Democrat party, labour unions (particularly those of public employees), academia and the education establishment, and organs of the regulatory state which reinforce one another, ruthlessly suppress opposition, and advance an agenda which is inimical to liberty and the rule of law. I highly recommend this book; it far exceeded my expectations and caused me to think more deeply about several things which were previously ill-formed in my mind. I'll discuss them below, but note that these are my own thoughts and should not be attributed to this book.

While reading Breitbart's book, I became aware that the seemingly eternal conflict in human societies is between slavers: people who see others as a collective to be used to “greater ends” (which are usually startlingly congruent with the slavers' own self-interest), and individuals who simply want to be left alone to enjoy their lives, keep the fruits of their labour, not suffer from aggression, and be free to pursue their lives as they wish as long as they do not aggress against others. I've re-purposed Larry Niven's term “slavers” from the Known Space universe to encompass all of the movements over the tawdry millennia of human history and pre-history which have seen people as the means to an end instead of sovereign beings, whether they called themselves dictators, emperors, kings, Jacobins, socialists, progressives, communists, fascists, Nazis, “liberals”, Islamists, or whatever deceptive term they invent tomorrow after the most recent one has been discredited by its predictably sorry results. Looking at all of these manifestations of the enemy as slavers solves a number of puzzles which might otherwise seem contradictory. For example, why did the American left so seamlessly shift its allegiance from communist dictators to Islamist theocrats who, looked at dispassionately, agree on almost nothing? Because they do agree on one key point: they are slavers, and that resonates with wannabe slavers in a society living the twilight of liberty.

Breitbart discusses the asymmetry of the tactics of the slavers and partisans of individual liberty at some length. He argues that the slavers consistently use the amoral Alinsky playbook while their opponents restrict themselves to a more constrained set of tactics grounded in their own morality. In chapter 7, he presents his own “Pragmatic Primer for Realistic Revolutionaries” which attempts to navigate this difficult strait. My own view, expressed more crudely, is that “If you're in a fair fight, your tactics suck”.

One of the key tactics of the slavers is deploying the mob into the streets. As documented by Ann Coulter in Demonic, the mob has been an integral part of the slaver arsenal since antiquity, and since the French revolution it has been used consistently by the opponents of liberty. In the United States and, to a lesser extent, in other countries, we are presently seeing the emergence of the “Occupy” movement, which is an archetypal mob composed of mostly clueless cannon fodder manipulated by slavers to their own ends. Many dismiss this latest manifestation of the mob based upon the self-evident vapidity of its members; I believe this to be a mistake. Most mobs in history were populated by people much the same—what you need to look at is the élite vanguard that is directing them and the greater agenda it is advancing. I look at the present manifestation of the mob in the U.S. like the release of a software product. The present “Occupy” protests are the “alpha test”: verifying the concept, communication channels, messaging in the legacy media, and transmission of the agenda from those at the top to the foot soldiers. The “beta test” phase will be August 2012 at the Republican National Convention in Tampa, Florida. There we shall see a mob raised nationwide and transported into that community to disrupt the nomination process (although, if it goes the way I envision infra, this may be attenuated, smaller, and more spontaneous). The “production release” will be in the two weeks running up to the general election on November 6th, 2012—that is when the mob will be unleashed nationwide to intimidate voters, attack campaign headquarters, deface advertising messages, and try to tilt the results. Mob actions will not be reported in the legacy media, which will be concentrating on other things.

One key take-away from this book for me is just how predictable the actions of the Left are—they are a large coalition of groups of people most of whom (at the bottom) are ill-informed and incapable of critical thinking, and so it takes a while to devise, distribute, and deploy the kinds of simple-minded slogans they're inclined to chant. This, Breitbart argues, makes them vulnerable to agile opponents able to act within their OODA loop, exploiting quick reaction time against a larger but more lethargic opponent.

The next U.S. presidential election is scheduled for November 6th, 2012, a little less than one spin around the Sun from today. Let me go out on a limb and predict precisely what the legacy media will be talking about as the final days before the election click off. The Republican contender for the presidency will be Mitt Romney, who will have received, throughout the entire nomination process, a free pass from the legacy media, precisely as McCain did in 2008, while they take down each “non-Romney” in turn on whatever vulnerability they can find or, failing that, invent. People seem to be increasingly resigned to the inevitability of Romney as the nominee, and on the Intrade prediction market as I write this, the probability of his nomination is trading at 67.1% with Perry in second place at 8.8%.

Within a week of Romney's nomination, the legacy media will, in unison as if led by an invisible hand, pivot to the whole “Mormon thing”, and between August and November 2012, the electorate will be educated through every medium and incessantly until, to those vulnerable to such saturation and without other sources of information, issues such as structural unemployment, confiscatory taxation, runaway regulation, unsustainable debt service and entitlement obligations, monetary collapse, and external threats will be entirely displaced by discussions of golden plates, seer stones, temple garments, the Book of Abraham, Kolob, human exaltation, the plurality of gods, and other aspects of Romney's religion of record, which will be presented so as to cause him to be perceived as a member of a cult far outside the mainstream and unacceptable to the Christian majority of the nation and particularly the evangelical component of the Republican base (who will never vote for Obama, but might be encouraged to stay home rather than vote for Romney).

In writing this, I do not intend in any way to impugn Romney's credentials as a candidate and prospective president (he would certainly be a tremendous improvement over the present occupant of that office, and were I a member of the U.S. electorate, I'd be happy affixing a “Romney: He'll Do” bumper sticker to my Bradley Fighting Vehicle), nor do I wish to offend any of my LDS friends. It's just that if, as appears likely at the moment, Romney becomes the Republican nominee, I believe we're in for one of the ugliest religious character assassination campaigns ever seen in the history of the Republic. Unlike the 1960 campaign (which I am old enough to recall), where the anti-Catholic animus against Kennedy was mostly beneath the surface and confined to the fringes, this time I expect the anti-Mormon slander to be everywhere in the legacy media, couched, of course, as “dispassionate historical reporting”.

This will, of course, be shameful, but the slavers are shameless. Should Romney be the nominee, I'm simply saying that those who see him as the best alternative to avert the cataclysm of a second Obama term should be fully prepared for what is coming in the general election campaign.

Should these ugly predictions play out as I envision, those who cherish freedom should be thankful Andrew Breitbart is on our side.

 Permalink

Kaiser, David. How the Hippies Saved Physics. New York: W. W. Norton, 2011. ISBN 978-0-393-07636-3.
From its origin in the early years of the twentieth century until the outbreak of World War II, quantum theory inspired deeply philosophical reflection as to its meaning and implications for concepts rarely pondered before in physics, such as the meaning of “measurement”, the rôle of the “observer”, the existence of an objective reality apart from the result of a measurement, and whether the randomness of quantum measurements was fundamental or due to our lack of knowledge of an underlying stratum of reality. Quantum theory seemed to imply that the universe could not be neatly reduced to isolated particles which interacted only locally, but admitted “entanglement” among separated particles which seemed to verge upon mystic conceptions of “all is one”. These weighty issues occupied the correspondence and conference debates of the pioneers of quantum theory including Planck, Heisenberg, Einstein, Bohr, Schrödinger, Pauli, Dirac, Born, and others.

And then the war came, and then the war came to an end, and with it ended the inquiry into the philosophical foundations of quantum theory. During the conflict, physicists on all sides were central to war efforts including nuclear weapons, guided missiles, radar, and operations research, and after the war they were perceived by governments as a strategic resource—subsidised in their education and research and provided with lavish facilities in return for having them on tap when their intellectual capacities were needed. In this environment, the education and culture of physics underwent a fundamental change. Suddenly the field was much larger than before, filled with those interested more in their own careers than probing the bottom of deep questions, and oriented toward, in Richard Feynman's words, “getting the answer out”. Instead of debating what their equations said about the nature of reality, the motto of the age became “shut up and calculate”, and physicists who didn't found their career prospects severely constrained.

Such was the situation from the end of World War II through the 1960s, when the defence (and later space program) funding gravy train came to an end due to crowding out of R&D budgets by the Vietnam War and the growing financial crisis due to debasement of the dollar. Suddenly, an entire cohort of Ph.D. physicists who, a few years before, could expect to choose among a variety of tenure-track positions in academia or posts in government or industry research laboratories, found themselves superbly qualified to do work which nobody seemed willing to pay them to do. Well, whatever you say about physicists, they're nothing if not creative, so a small group of out-of-the-box thinkers in the San Francisco Bay area self-organised into the Fundamental Fysiks Group and began to re-open the deep puzzles in quantum mechanics which had lain fallow since the 1930s. This group, founded by Elizabeth Rauscher and George Weissmann, whose members came to include Henry Stapp, Philippe Eberhard, Nick Herbert, Jack Sarfatti, Saul-Paul Sirag, Fred Alan Wolf, John Clauser, and Fritjof Capra, came to focus on Bell's theorem and its implications for quantum entanglement, what Einstein called “spooky action at a distance”, and the potential for instantaneous communications not limited by the speed of light.

The author argues that the group's work, communicated through samizdat circulation of manuscripts, the occasional publication in mainstream journals, and contact with established researchers open to considering foundational questions, provided the impetus for today's vibrant theoretical and experimental investigation of quantum information theory, computing, and encryption. There is no doubt whatsoever from the trail of citations that Nick Herbert's attempts to create a faster-than-light signalling device led directly to the quantum no-cloning theorem.

Not only did the group reestablish the prewar style of doing physics, more philosophical than computational, they also rediscovered the way science had been funded from the Medicis until the advent of Big Science. While some group members held conventional posts, others were supported by wealthy patrons interested in their work purely from its intellectual value. We encounter a variety of characters who probably couldn't have existed in any decade other than the 1970s including Werner Erhard, Michael Murphy, Ira Einhorn, and Uri Geller.

The group's activities ranged far beyond the classrooms and laboratories into which postwar physics had been confined, to the thermal baths at Esalen and outreach to the public through books which became worldwide bestsellers and remain in print to this day. Their curiosity also wandered well beyond the conventional bounds of physics, encompassing ESP (and speculating as to how quantum processes might explain it). This caused many mainstream physicists to keep members at arm's length, even as their insights on quantum processes were infiltrating the journals.

Many of us who lived through (I prefer the term “endured”) the 1970s remember them as a dull brown interlude of broken dreams, ugly cars, funny money, and malaise. But, among a small community of thinkers orphaned from the career treadmill of mainstream physics, it was a renaissance of investigation of the most profound questions in physics, and the spark which lit today's research into quantum information processing.

The Kindle edition has the table of contents and notes properly linked, but the index is just a useless list of terms. An interview of the author, Jack Sarfatti, and Fred Alan Wolf by George Knapp on “Coast to Coast AM” is available.

 Permalink

Rickards, James. Currency Wars. New York: Portfolio / Penguin, 2011. ISBN 978-1-59184-449-5.
Debasement of currency dates from antiquity (and doubtless from prehistory—if your daughter's dowry was one cow and three goats, do you think you'd choose them from the best in your herd?), but currency war in the modern sense first emerged in the 20th century in the aftermath of World War I. When global commerce—the first era of globalisation—became established in the 19th century, most of the trading partners were either on the gold standard or settled their accounts in a currency freely convertible to gold, with the British pound dominating as the unit of account in international trade. A letter of credit financing a shipload of goods exported from Argentina to Italy could be written by a bank in London and traded by an investor in New York without any currency risk during the voyage because all parties denominated the transaction in pounds sterling, which the Bank of England would exchange for gold on demand. This system of global money was not designed by “experts” nor managed by “maestros”—it evolved organically and adapted itself to the needs of its users in the marketplace.

All of this was destroyed by World War I. As described here, and in more detail in Lords of Finance (August 2011), in the aftermath of the war all of the European powers on both sides had expended their gold and foreign exchange reserves in the war effort, and the United States had amassed a large fraction of all of the gold in the world in its vaults and was creditor in chief to the allies to whom, in turn, Germany owed enormous reparation payments for generations to come. This set the stage for what the author calls Currency War I, from 1921 through 1936, in which central bankers attempted to sort out the consequences of the war, often making disastrous though well-intentioned decisions which, arguably, contributed to a decade of pre-depression malaise in Britain, the U.S. stock market bubble and 1929 crash, the Weimar Germany hyperinflation, and its aftermath which contributed to the rise of Hitler.

At the end of World War II, the United States was in an even more commanding position than at the conclusion of the first war. With Europe devastated, it sat on an even more imposing hoard of gold, and when it convened the Bretton Woods conference in 1944, with the war still underway, despite the conference's list of attendees hailing from 44 allied nations, it was clear that the Golden Rule applied: he who has the gold makes the rules. Well, the U.S. had the gold, and the system adopted at the conference made the U.S. dollar central to the postwar monetary system. The dollar was fixed to gold at the rate of US$35/troy ounce, with the U.S. Treasury committed to exchanging dollars for gold at that rate in unlimited quantities. All other currencies were fixed to the dollar, and hence indirectly to gold, so that except in the extraordinary circumstance of a revaluation against the dollar, exchange rate risk would not exist. While the Bretton Woods system was more complex than the pre-World War I gold standard (in particular, it allowed central banks to hold reserves in other paper currencies in addition to gold), it tried to achieve the same stability in exchange rates as the pure gold standard.

Amazingly, this system, the brainchild of Soviet agent Harry Dexter White and economic charlatan John Maynard Keynes, worked surprisingly well until the late 1960s, when profligate deficit spending by the U.S. government began to cause foreign holders of an ever-increasing pile of dollars to trade them in for the yellow metal. This was the opening shot in what the author deems Currency War II, which ran from 1967 through 1987, ending in the adoption of the present system of floating exchange rates among currencies backed by nothing whatsoever.

The author believes we are now in the initial phase of Currency War III, in which a perfect storm of unsustainable sovereign debt, economic contraction, demographic pressure on social insurance schemes, and trade imbalances creates the preconditions for the kind of “beggar thy neighbour” competitive devaluations which characterised Currency War I. This is, in effect, a race to the bottom with each unanchored paper currency trying to become cheaper against the others to achieve a transitory export advantage. But, of course, as a moment's reflection will make evident, with currencies decoupled from any tangible asset, the only limit in a race to the bottom is zero, and in a world where trillions of monetary units can be created by the click of a mouse without even the need to crank up the printing press, this funny money is, in the words of Gerald Celente, “not worth the paper it isn't printed on”.

In financial crises, there is a progression from:

  1. Currency war
  2. Trade war
  3. Shooting war

Currency War I led to all three phases. Currency War II was arrested at the “trade war” step, although had the Carter administration and Paul Volcker not administered the bitter medicine to the U.S. economy to extirpate inflation, it's entirely possible a resource war to seize oil fields might have ensued. Now we're in Currency War III (this is the author's view, with which I agree): where will it go from here? Well, nobody knows, and the author is the first to acknowledge that the best a forecaster can do is to sketch a number of plausible scenarios which might play out depending upon precipitating events and the actions of decision makers in time of crisis. Chapter 11 (how appropriate!) describes the four scenarios Rickards sees as probable outcomes and what they would mean for investors and companies engaged in international trade. Some of these may be breathtaking, if not heart-stopping, but as the author points out, all of them are grounded in precedents which have already occurred in the last century.

The book begins with a chilling wargame in which the author participated. Strategic planners often remain stuck counting ships, troops, and tanks, and forget that all of these military assets are worthless without the funds to keep them operating, and that these assets are increasingly integrated into a world financial system whose complexity (and hence systemic risk, whether from an accidental excursion or a deliberate disruption) is greater than ever before. Analyses of the stability of global finance often assume players are rational and therefore would not act in a way which was ultimately damaging to their own self-interest. This is ominously reminiscent of those who, as late as the spring of 1914, forecast that a general conflict in Europe was unthinkable because it would be the ruin of all of the combatants. Indeed, it was, and yet still it happened.

The Kindle edition has the table of contents and notes properly linked, but the index is just a list of unlinked terms.

 Permalink

December 2011

Boykin, William G. and Tom Morrisey. Kiloton Threat. Nashville: B&H Books, 2011. ISBN 978-0-8054-4954-9.
William G. Boykin retired from the U.S. Army in 2007 with the rank of Lieutenant General, having been a founding member of Delta Force and served with that special operations unit from 1978 through 1993, then as Commanding General of the U.S. Army Special Forces Command. He also served as Deputy Director of Special Activities in the CIA and Deputy Undersecretary of Defense for Intelligence. When it comes to special operations, this is somebody who knows what he's talking about.

Something distinctly odd is going on in Iran—their nuclear weapons-related and missile development sites seem to be blowing up on a regular basis for no apparent reason, and there are suspicions that shadowy forces may be in play to try to block Iran's becoming a nuclear-armed power with the ability to deliver weapons with ballistic missiles. Had the U.S. decided to pursue such a campaign during the Bush administration, General Boykin would have been one of the people around the table planning the operations, so in this tale of operations in an Iran at the nuclear threshold he brings an encyclopedic knowledge not just of the special operations community but of the contending powers in Iran and the military capability at their disposal. The result is a thriller which may not have the kind of rock-em sock-em action of a Vince Flynn or Brad Thor novel, but exudes an authenticity comparable to a police procedural written by a thirty-year veteran of the force.

In this novel, Iran has completed its long-sought goal to acquire nuclear weapons and intelligence indicates its intention to launch a preemptive strike against Israel, with the potential to provoke a regional if not global nuclear conflict. A senior figure in Iran's nuclear program has communicated his intent to defect and deliver the details necessary to avert the attack before it is launched, and CIA agent Blake Kershaw is paired with an Iranian émigré who can guide him through the country and provide access to the community in which the official resides. The mission goes horribly wrong (something with which author Boykin has direct personal experience, having been operations officer for the botched Iranian hostage rescue operation in 1980), and while Kershaw manages to get the defector out of the country, he leaves behind a person he solemnly promised to get out and is forced, from a sense of honour, to return to an Iran buzzing like a beehive whacked with a baseball bat, without official sanction, to rescue that person, then act independently to put an end to the threat.

There are a few copy editing goofs, but nothing that detracts from the story. The only factual errors I noted were the assertion that Ahmadinejad used the Quds Force “in much the same way as Hitler used the Waffen-SS” (the Waffen-SS was a multinational military force; the Allgemeine SS is the closest parallel to the Quds Force) and that a Cessna Caravan's “turboprop spun up to starting speed and caught with a ragged roar” (like all turboprops, there's only a smooth rising whine as the engine spools up; I've flown on these planes, and there's no “ragged roar”). Boykin and co-author Morrisey are committed Christians and express their faith on several occasions in the novel; radical secularists may find this irritating, but I didn't find it intrusive.

I have no idea whether the recent apparent kinetic energy transients at strategic sites in Iran are the work of special operators infiltrated into that country and, if so, who they're working for. But if they are, this book by the fellow all of the U.S. Army black ops people reported to just a few years ago provides excellent insights on how it might be done.

 Permalink

Larson, Erik. In the Garden of Beasts. New York: Crown Publishers, 2011. ISBN 978-0-307-40884-6.
Ambassadors to high-profile postings are usually chosen from political patrons and contributors to the president who appoints them, depending upon career Foreign Service officers to provide the in-country expertise needed to carry out their mandate. Newly-elected Franklin Roosevelt intended to follow this tradition in choosing his ambassador to Germany, where Hitler had just taken power, but discovered that none of the candidates he approached were interested in being sent to represent the U.S. in Nazi Germany. William E. Dodd, a professor of history and chairman of the history department at the University of Chicago, had grown increasingly frustrated with administrative duties which prevented him from completing his life's work, a comprehensive history of the ante-bellum American South. He mentioned to a friend in Roosevelt's inner circle that he'd be interested in an appointment as ambassador to a country like Belgium or the Netherlands, where he thought his ceremonial obligations would be sufficiently undemanding that he could concentrate on his scholarly work.

Dodd was astonished when Roosevelt contacted him directly and offered him the ambassadorship to Germany. Roosevelt appealed to Dodd's fervent New Deal sympathies, and argued that in such a position he could be an exemplar of American liberal values in a regime hostile to them. Dodd realised from the outset that a mission to Berlin would doom his history project, but accepted because he agreed with Roosevelt's goal and also because FDR was a very persuasive person. His nomination was sent to the Senate and confirmed the very same day.

Dodd brought his whole family along on the adventure: wife Mattie and adult son and daughter Bill and Martha. Dodd arrived in Berlin with an open mind toward the recently-installed Nazi regime. He was inclined to dismiss the dark view of the career embassy staff and instead adopt what might be called today “smart diplomacy”, deceiving himself into believing that by setting an example and scolding the Nazi slavers he could shame them into civilised behaviour. He immediately found himself at odds not only with the Nazis but also his own embassy staff: he railed against the excesses of diplomatic expense, personally edited the verbose dispatches composed by his staff to save telegraph charges, and drove his own aged Chevrolet, shipped from the U.S., to diplomatic functions where all of the other ambassadors arrived in stately black limousines.

Meanwhile, daughter Martha embarked upon her own version of Girl Gone Wild—Third Reich Edition. Initially exhilarated by the New Germany and swept into its social whirl, before long she was carrying on simultaneous affairs with the head of the Gestapo and a Soviet NKVD agent operating under diplomatic cover in Berlin, among others. Those others included Ernst “Putzi” Hanfstaengl, who tried to set her up with Hitler (nothing came of it; they met at lunch and that was it). Martha's trajectory through life was extraordinary. After affairs with the head of the Gestapo and one of Hitler's inner circle, she was recruited by the NKVD and spied on behalf of the Soviet Union in Berlin and after her return to the U.S. It is not clear that she provided anything of value to the Soviets, as she had no access to state secrets during this period. With investigations of her Soviet affiliations intensifying in the early 1950s, in 1956 she fled with her American husband and son to Prague, Czechoslovakia where they lived until her death in 1990 (they may have spent some time in Cuba, and apparently applied for Soviet citizenship and were denied it).

Dodd père was much quicker to figure out the true nature of the Nazi regime. Following Roosevelt's charge to represent American values, he spoke out against the ever-increasing Nazi domination of every aspect of German society, and found himself at odds with the patrician “Pretty Good Club” at the State Department who wished to avoid making waves, regardless of how malevolent and brutal the adversary might be. Today, we'd call them the “reset button crowd”. Even Dodd found the daily influence of immersion in Gleichschaltung difficult to resist. On several occasions he complained of the influence of Jewish members of his staff and the difficulties they posed in dealing with the Nazi regime.

This book focuses upon the first two years of Dodd's tenure as ambassador in Berlin, as that was the time in which the true nature of the regime became apparent to him and he decided upon his policy of distancing himself from it: for example, refusing to attend any Nazi party-related events such as the Nuremberg rallies. It provides an insightful view of how seductive a totalitarian regime can be to outsiders who see only its bright-eyed marching supporters, while ignoring the violence which sustains it, and how utterly futile “constructive engagement” is with barbarians that share no common values with civilisation.

Thanks to James Lileks for suggesting this book.

 Permalink

Cawdron, Peter. Anomaly. Los Gatos, CA: Smashwords, 2011. ISBN 978-1-4657-7394-4.
One otherwise perfectly normal day, a sphere of space 130 metres in diameter outside the headquarters of the United Nations in New York, including a slab of pavement and a corner of the General Assembly building, becomes detached from Earth's local reference frame and begins to rotate, maintaining a fixed orientation with respect to the distant stars, returning to its original orientation once per sidereal day. Observers watch in awe as the massive slab of pavement, severed corner of the U.N. building, and even flagpoles and flags which happened to fall within the sphere defy gravity and common sense, turning on end, passing overhead, and then coming back to their original orientation every day.

Through a strange set of coincidences, schoolteacher David Teller, who first realised and blurted out on live television that the anomaly wasn't moving as it appeared to Earth dwellers, but rather was stationary with respect to the stars, and third-string TV news reporter Cathy Jones find themselves the public face of the scientific investigation of the anomaly, conducted by NASA under the direction of the imposing James Mason, “Director of National Security”. An off-the-cuff experiment shows that the anomaly has its own local gravitational field pointing in the original direction, down toward the slab, and that no barrier separates the inside and outside of the anomaly. Teller does the acrobatics to climb onto the slab, using a helium balloon to detect the up direction as he enters into the anomaly, and observers outside see him standing, perfectly at ease, at a crazy angle to their own sense of vertical. Sparked by a sudden brainstorm, Teller does a simple experiment to test whether the anomaly might be an alien probe attempting to make contact, and the results set off a sequence of events which, although implausible at times, never cease to be entertaining and raise the question of whether if we encountered technologies millions or billions of years more advanced than our own, we would even distinguish them from natural phenomena (and, conversely, whether some of the conundrums scientists puzzle over today might be evidence of such technologies—“dark energy”, anyone?).

The prospect of first contact sets off a firestorm: bureaucratic turf battles, media struggling for access, religious leaders trying to put their own spin on what it means, nations seeking to avoid being cut out of a potential bounty of knowledge from contact by the U.S., upon whose territory the anomaly happened to appear. These forces converge toward a conclusion which will have you saying every few pages, “I didn't see that coming”, and one of the most unlikely military confrontations in all of the literature of science fiction and thrillers. As explained in the afterword, the author is trying to do something special in this story, which I shall not reveal here to avoid spoiling your figuring it out for yourself and making your own decision as to how well he succeeded.

At just 50,000 words, this is a short novel, but it tells its story well. At this writing, the Kindle edition sells for just US$0.99 (no print edition is available), so it's a bargain notwithstanding its brevity.

 Permalink

Tarnoff, Ben. Moneymakers. New York: Penguin, 2011. ISBN 978-1-101-46732-9.
Many people think of early America as a time of virtuous people, hard work, and sound money, all of which have been debased in our decadent age. Well, there may have been plenty of the first two, but the fact is that from the colonial era through the War of Secession, the American economy was built upon a foundation of dodgy paper money issued by a bewildering variety of institutions. There were advocates of hard money during the epoch, but their voices went largely unheeded because there simply wasn't enough precious metal on the continent to coin or back currency in the quantity required by the burgeoning economy. Not until the discovery of gold in California and silver in Nevada and other western states in the middle of the 19th century did a metal-backed monetary system become feasible in America.

Now, whenever authorities, be they colonies, banks, states, or federal institutions, undertake the economic transubstantiation of paper into gold by printing something on it, there will always be enterprising individuals motivated to get into the business for themselves. This book tells the story of three of these “moneymakers” (as counterfeiters were called in early America).

Owen Sullivan was an Irish immigrant who, in the 1740s and '50s, set up shop in a well-appointed cave on the border between New York and Connecticut and orchestrated a network of printers, distributors, and passers of bogus notes of the surrounding colonies. Sullivan was the quintessential golden-tongued confidence man, talking himself out of jam after jam, and, when he was caught and sentenced to be branded with an “R” for “Rogue”, even persuading his captors to brand him above the hairline, where he could comb over the mark of shame.

So painful had the colonial experience with paper money been that the U.S. Constitution forbade states to “emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts”. But as the long and sordid history of “limited government” demonstrates, wherever there is a constitutional constraint, there is always a clever way for politicians to evade it, and nothing in the Constitution prevented states from chartering banks which would then proceed to print their own paper money. When the charter of Alexander Hamilton's First Bank of the United States was allowed to expire, that's exactly what the states proceeded to do. In Pennsylvania alone, in the single year of 1814, the state legislature chartered forty-one new banks in addition to the six already existing. With each of these banks entitled to print its own paper money (backed, in theory, by gold and silver coin in their vaults, with the emphasis on in theory), and each of these notes having its own unique design, this created a veritable paradise for counterfeiters, and into this paradise stepped counterfeiting entrepreneur David Lewis and master engraver Philander Noble, who set up a distributed and decentralised gang to pass their wares which could only be brought to justice by the kind of patient, bottom-up detective work which was rare in an age where law enforcement was largely the work of amateurs.

Samuel Upham, a successful Philadelphia shopkeeper in the 1860s, saw counterfeiting as a new product line for his shop, along with stationery and Upham's Hair Dye. When the Philadelphia Inquirer printed a replica of the Confederate five dollar note, the edition was much in demand at Upham's shop, and he immediately got in touch with the newspaper and arranged to purchase the printing plate for the crude replica of the note and printed three thousand copies with a strip at the bottom identifying them as replicas with the name and address of his store. At a penny apiece they sold briskly, and Upham decided to upgrade and expand his product line. Before long he offered Confederate currency “curios” in all denominations, printed from high quality plates on banknote paper, advertised widely as available in retail and wholesale quantities for those seeking a souvenir of the war (or several thousand of them, if you like). These “facsimiles” were indistinguishable from the real thing to anybody but an expert, and Union troops heading South and merchants trading across the border found Upham's counterfeits easy to pass. Allegations were made that the Union encouraged, aided, and abetted Upham's business in the interest of economic warfare against the South, but no evidence of this was ever produced. Nonetheless, Upham and his inevitable competitors were allowed to operate with impunity, and the flood of bogus money they sent to the South certainly made a major contribution to the rampant inflation experienced in the South and made it more difficult for the Confederacy to finance its war effort.

This is an illuminating and entertaining exploration of banking, finance, and monetary history in what may seem a simpler age but was, in its own way, breathtakingly complicated—at the peak there were more than ten thousand different kinds of paper money circulating in North America. Readers with a sense of justice may find themselves wondering why small-scale operators such as Sullivan and Lewis were tracked down so assiduously and punished so harshly while contemporary manufacturers of funny money on the terabuck scale such as Ben Bernanke, Tim Geithner, and Mario Draghi are treated with respect and deference instead of being dispatched to the pillory and branding iron they so richly deserve for plundering the savings and future of those from whom their salaries are extorted under threat of force. To whom I say, just wait….

A Kindle edition is available, in which the table of contents is linked to the text, but the index is simply a list of terms, not linked to their occurrences in the text. The extensive end notes are keyed to page numbers in the print edition, which are preserved in the Kindle edition, making navigation possible, albeit clumsy.

 Permalink

Chivers, C. J. The Gun. New York: Simon & Schuster, 2010. ISBN 978-0-7432-7173-8.
Ever since the introduction of firearms into infantry combat, technology and military doctrine have co-evolved to optimise the effectiveness of the weapons carried by the individual soldier. This process requires choosing a compromise among a long list of desiderata including accuracy, range, rate of fire, stopping power, size, weight (of both the weapon and its ammunition, which determines how many rounds an infantryman can carry), reliability, and the degree of training required to operate the weapon in both normal and abnormal circumstances. The “sweet spot” depends upon the technology available at the time (for example, smokeless powder allowed replacing heavy, low muzzle velocity, large calibre rounds with lighter supersonic ammunition), and the environment in which the weapon will be used (long range and high accuracy over great distances are largely wasted in jungle and urban combat, where most engagements are close-up and personal).

Still, ever since the advent of infantry firearms, the rate of fire an individual soldier can sustain has been considered a key force multiplier. All things being equal, a soldier who can fire sixteen rounds per minute can do the work of four soldiers equipped with muzzle loading arms which can fire only four rounds a minute. As infantry arms progressed from muzzle loaders to breech loaders to magazine fed lever and bolt actions, the sustained rate of fire steadily increased. The logical endpoint of this evolution was a fully automatic infantry weapon: a rifle which, as long as the trigger was held down and ammunition remained, would continue to send rounds downrange at a high cyclic rate. Such a rifle could also be fired in semiautomatic mode, firing one round every time the trigger was pulled, without any intervention by the rifleman other than to change magazines as they were emptied.

This book traces the history of automatic weapons from primitive volley guns; through the Gatling gun, the first successful high rate of fire weapon (although with the size and weight of a field artillery piece and requiring a crew to hand crank it and feed ammunition, it was hardly an infantry weapon); the Maxim gun, the first true machine gun which was responsible for much of the carnage in World War I; to the Thompson submachine gun, which could be carried and fired by a single person but, using pistol ammunition, lacked the range and stopping power of an infantry rifle. At the end of World War II, the vast majority of soldiers carried bolt action or semiautomatic weapons: fully automatic fire was restricted to crew served support weapons operated by specially trained gunners.

As military analysts reviewed combat as it happened on the ground in the battles of World War II, they discovered that long range aimed fire played only a small part in infantry actions. Instead, infantry weapons had been used mostly at relatively short ranges to lay down suppressive fire. In this application, rate of fire and the amount of ammunition a soldier can carry into combat come to the top of the priority list. Based upon this analysis, even before the end of the war Soviet armourers launched a design competition for a next generation rifle which would put automatic fire into the hands of the ordinary infantryman. After grueling tests under all kinds of extreme conditions such a weapon might encounter in the field, the AK-47, initially designed by Mikhail Kalashnikov, a sergeant tank commander injured in battle, was selected. In 1956 the AK-47 became the standard issue rifle of the Soviet Army; it was followed by its subsequent variants, the AKM (an improved design which was also lighter and less expensive to manufacture—most of the weapons one sees today which are called “AK-47s” are actually based on the AKM design) and the smaller calibre AK-74. These weapons and the multitude of clones and variants produced around the world have become the archetypal small arms of the latter half of the twentieth century and are likely to remain so for the foreseeable future in the twenty-first. Nobody knows how many were produced but almost certainly the number exceeds 100 million, and given the ruggedness and reliability of the design, most remain operational today.

This weapon, designed to outfit forces charged with maintaining order in the Soviet Empire and expanding it to new territories, quickly slipped the leash and began to circulate among insurgent forces around the globe—initially infiltrated by Soviet and Eastern bloc countries to equip communist revolutionaries, an “after-market” quickly developed which allowed almost any force wishing to challenge an established power to obtain a weapon and ammunition which made its irregular fighters the peer of professional troops. The worldwide dissemination of AK weapons and their availability at low cost has been a powerful force destabilising regimes which before could keep their people down with a relatively small professional army. The author recounts the legacy of the AK in incidents over the decades and around the world, and the tragic consequences for those who have found themselves on the wrong end of this formidable weapon.

United States forces first encountered the AK at first hand in Vietnam, and quickly realised that their M14 rifle, an attempt to field a fully automatic infantry weapon which used the cartridge of a main battle rifle, was too large, heavy, and limiting in the amount of ammunition a soldier could carry to stand up to the AK. The M14's only advantages, long range and accuracy, were irrelevant in the Vietnam jungle. While the Soviet procurement and development of the AK-47 was deliberate and protracted, Pentagon whiz kids in the U.S. rushed the radically new M16 into production and the hands of U.S. troops in Vietnam. The new rifle, inadequately tested in the field conditions it would encounter, and deployed with ammunition different from that used in the test phase, failed frequently and disastrously in the hands of combat troops, with results which were often tragic. What was supposed to be the most advanced infantry weapon on the planet often ended up being used as bayonet mount or club by troops in their last moments of life. The Pentagon responded to this disaster in the making by covering up the entire matter and destroying the careers of those who attempted to speak out. Eventually reports from soldiers in the field made their way to newspapers and congressmen and the truth began to come out. It took years for the problems of the M16 to be resolved, and to this day the M16 is considered less reliable (although more accurate) than the AK. As an example, compare what it takes to field strip an M16 compared to an AK-47. The entire ugly saga of the M16 is documented in detail here.

This is a fascinating account of the origins, history, and impact of the small arms which dominate the world today. The author does an excellent job of sorting through the many legends (especially from the Soviet era) surrounding these weapons, and sketching the singular individuals behind their creation.

In the Kindle edition, the table of contents, end notes, and index are all properly linked to the text. All of the photographic illustrations are collected at the very end, after the index.

 Permalink

  2012  

January 2012

Walsh, Michael. Early Warning. New York: Pinnacle Books, 2010. ISBN 978-0-7860-2043-0.
This is the second novel in the author's “Devlin” series of thrillers. When I read the first, Hostile Intent, I described it as a “tangled, muddled mess” and concluded that the author “may eventually master the thriller, but I doubt I'll read any of the sequels to find out for myself”. Well, I did go ahead and read the next book in the series, and I'm pleased to report that the versatile and accomplished author (see the review of Hostile Intent for a brief biography and summary of his other work) has indeed now mastered the genre and this novel is as tightly plotted, action packed, and bristling with detail as the work of Vince Flynn and Brad Thor.

In this novel, renegade billionaire Emanuel Skorzeny, after having escaped justice for the depredations he unleashed in the previous novel, has been reduced to hiding out in jurisdictions which have no extradition treaty with the United States. NSA covert agent “Devlin” is on his trail when a coordinated series of terrorist attacks strike New York City. Feckless U.S. President Jeb Tyler decides to leave New York's police Counter-Terrorism Unit (CTU) to fend for itself to avoid the débâcle being laid at his feet, but allows Devlin to be sent in covertly to track down and take out the malefactors. Devlin assumes his “angel of death” persona and goes to work, eventually becoming also the guardian angel of the head of CTU, old school second generation Irish cop Francis Xavier Byrne.

Devlin and the CTU eventually help the perpetrators achieve the martyrdom to which they aspire, but not before massive damage is inflicted upon the city and one terrorist goal accomplished which may cause even more in the future. How this fits into Skorzeny's evil schemes still remains to be discovered, as the mastermind's plot seems to involve not only mayhem on the streets of Manhattan but also the Higgs boson.

The action and intrigue are leavened by excursions into cryptography (did you know about the Poe Cryptographic Challenge?), the music of Edward Elgar, and Devlin's developing relationship with the enigmatic Iranian expatriate “Maryam”. This is an entertaining and satisfying thriller, and I'm planning to read the next episode, Shock Warning, in due time.

 Permalink

Rawles, James Wesley. Survivors. New York: Atria Books, 2011. ISBN 978-1-4391-7280-3.
This novel is frequently described as a sequel to the author's Patriots (December 2008), but in fact is set in the same time period and broadens the scope from a small group of scrupulously prepared families coping with a “grid down” societal collapse in an isolated and defensible retreat to people all around the U.S. and the globe in a wide variety of states of readiness dealing with the day to day exigencies after a hyperinflationary blow-off destroys paper money worldwide and leads to a breakdown in the just-in-time economy upon which life in the developed world has become dependent.

The novel tracks a variety of people in different circumstances: an Army captain mustered out of active duty in Afghanistan, an oil man seeking to ride out the calamity doing what he knows best, a gang leader seeing the collapse of the old order as the opportunity of a lifetime, and ordinary people forced to summon extraordinary resources from within themselves when confronted with circumstances nobody imagined plausible. Their stories illustrate how even a small degree of preparation (most importantly, the knowledge and skills you possess, not the goods and gear you own [although the latter should not be neglected—without a source of clean water, in 72 hours you're a refugee, and as Larry Niven and Jerry Pournelle wrote in Lucifer's Hammer, “No place is more than two meals from a revolution”]) can make all the difference when all the rules change overnight.

Rawles is that rarest of authors: a know-it-all who actually knows it all—embedded in this story, which can be read simply as a periapocalyptic thriller, is a wealth of information for those who wish to make their own preparations for such discontinuities in their own future light cones. You'll want to read this book with a browser window open to look up terms and references to gear dropped in the text (acronyms are defined in the glossary at the end, but you're on your own in researching products).

Some mylar-thin thinkers welcome societal collapse; they imagine it will sweep away the dysfunction and corruption that surrounds us today and usher in a more honourable and moral order. Well, that may be the ultimate result (or maybe it won't: a dark age has its own momentum, and once a culture has not only forgotten what it knew, but forgotten what it has forgotten, recovery can take as long or longer than it took to initially discover what has been lost). Societal collapse, whatever the cause, will be horrific for those who endure it, many of whom will not survive and end their days in misery and terror. Civilisation is a thin veneer on the red in tooth and claw heritage of our species, and the predators among us will be the first to exploit the opportunity that a breakdown in order presents.

This novel presents a ruthlessly realistic picture of what societal collapse looks like to those living it. In a way, it is airbrushed—we see the carnage in the major metropolitan areas only from a distance. But for those looking at the seemingly endless list of “unsustainable” trends underway at present and wise enough to note that something which is unsustainable will, perforce, end, this book will help them think about the aftermath of that end and suggest preparations which may help them ride it out and position themselves to prosper in the inevitable recovery.

 Permalink

Young, Anthony. The Saturn V F-1 Engine. Chichester, UK: Springer Praxis, 2009. ISBN 978-0-387-09629-2.
The F-1 rocket engine which powered the first (S-IC) stage of the Saturn V booster, which launched all of the Apollo missions to the Moon and, as a two stage variant, the Skylab space station, was one of the singular engineering achievements of the twentieth century, which this magnificent book chronicles in exquisite detail. When the U.S. Air Force contracted with Rocketdyne in 1958 for the preliminary design of a single chamber engine with between 1 and 1.5 million pounds of thrust, the largest existing U.S. rocket engine had less than a quarter the maximum thrust of the proposed new powerplant, and there was no experience base to provide confidence that problems such as ignition transients and combustion instability which bedevil liquid rockets would not prove insuperable when scaling an engine to such a size. (The Soviets were known to have heavy-lift boosters, but at the time nobody knew their engine configuration. In fact, when their details came to be known in the West, they were discovered to use multiple combustion chambers and/or clustering of engines precisely to avoid the challenges of very large engines.)

When the F-1 development began, there was no rocket on the drawing board intended to use it, nor any mission defined which would require it. The Air Force had simply established that such an engine would be adequate to accomplish any military mission in the foreseeable future. When NASA took over responsibility for heavy launchers from the Air Force, the F-1 engine became central to the evolving heavy lifters envisioned for missions beyond Earth orbit. After Kennedy's decision to mount a manned lunar landing mission, NASA embarked on a furious effort to define how such a mission could be accomplished and what hardware would be required to perform it. The only alternative to heavy lift would be a large number of launches which assembled the Moon ship in Earth orbit, which was a daunting prospect at a time when not only were rockets famously unreliable and difficult to launch on time, but nobody had ever so much as attempted rendezvous in space, no less orbital assembly or refuelling operations.

With the eventual choice of lunar orbit rendezvous as the mission mode, it became apparent that it would be possible to perform the lunar landing mission with a single launch of a booster with 7.5 million pounds of sea level thrust, which could be obtained from a cluster of five F-1 engines (which by that time NASA had specified as 1.5 million pounds of thrust). From the moment the preliminary design of the Saturn V was defined until Apollo 11 landed on the Moon, the definition, design, testing, and manufacturing of the F-1 engine was squarely on the critical path of the Apollo project. If the F-1 did not work, or was insufficiently reliable to perform in a cluster of five and launch on time in tight lunar launch windows, or could not have been manufactured in the quantities required, there would be no lunar landing. If the schedule of the F-1 slipped, the Apollo project would slip day-for-day along with its prime mover.

This book recounts the history, rationale, design, development, testing, refinement, transition to serial production, integration into test articles and flight hardware, and service history of this magnificent machine. Sadly, at this remove, some of the key individuals involved in this project are no longer with us, but the author tracked down those who remain and discovered interviews done earlier by other researchers with the departed, and he stands back and lets them speak, in lengthy quotations, not just about the engineering and management challenges they faced and how they were resolved, but what it felt like to be there, then. You get the palpable sense from these accounts that despite the tension, schedule and budget pressure, long hours, and frustration as problem after problem had to be diagnosed and resolved, these people were having the time of their lives, and that they knew it at the time and cherish it even at a half century's remove. The author has collected more than a hundred contemporary photographs, many in colour, which complement the text.

A total of sixty-five F-1 engines powered thirteen Saturn V flight vehicles. They performed with 100% reliability.

 Permalink

King, Stephen. 11/22/63. New York: Scribner, 2011. ISBN 978-1-4516-2728-2.
I gave up on Stephen King in the early 1990s. I had become weary of what seemed to me self-indulgent doorstops of novels which could have been improved by a sharp-pencilled editor cutting them by one third to one half, but weren't because what editor would dare strike words by such a celebrated (and profitable to the publisher) author? I never made it through either Gerald's Game or Insomnia and after that I stopped trying. Recently I heard good things from several sources I respect about the present work and, despite its formidable length (850 pages in hardcover), decided to give it a try (especially since I've always been a fan of time travel fiction and purported fact) to see if, a decade and a half later, King still “has it”.

The title is the date of the assassination of the U.S. president John F. Kennedy: November the 22nd of 1963 (written in the quaint American way). In the novel, Jake Epping, a school teacher in Maine, happens to come across a splice in time or wormhole or whatever you choose to call it which allows bidirectional travel between his world in 2011 and September of 1958. Persuaded by the person who discovered the inexplicable transtemporal portal and revealed it to him, Jake takes upon himself the mission of returning to the past and living there until November of 1963 with the goal of averting the assassination and preventing the pernicious sequelæ which he believed to have originated in that calamity.

Upon arrival in the past, he discovers from other lesser wrongs he seeks to right that while the past can be changed, it doesn't like to be changed and pushes back—it is mutable but “obdurate”. As he lives his life in that lost and largely forgotten country which was the U.S. in the middle of the 20th century, he discovers how much has been lost compared to our times, and also how far we have come from commonplace and unperceived injustices and assaults upon the senses and health of that epoch. Still, with a few rare exceptions, King forgoes the smug “look at how much better we are than those rubes” tone that so many contemporary authors adopt when describing the 1950s; you get the sense that King has a deep affection for the era in which he (and I) grew up, and it's apparent here.

I'm going to go behind the curtain now to discuss some of the details of the novel and the (remarkably few) quibbles I have with it. I don't consider any of these “big spoilers”, but others may dissent, so I'd rather err on the side of caution lest some irritated time traveller come back and….

Spoiler warning: Plot and/or ending details follow.  
As I got into the novel, I was afraid I'd end up hurling it across the room (well, not actually, since I was reading the Kindle edition and I'm rather fond of my iPad) because the model of time travel employed just didn't make any sense. But before long, I began to have a deeper respect for what King was doing, and by the end of the book I came to appreciate that what he'd created was largely compatible with the past/future multiverse picture presented in David Deutsch's The Fabric of Reality and my own concept of conscious yet constrained multiverse navigation in “Notes toward a General Theory of Paranormal Phenomena”.

If this gets made into a movie or miniseries (and that's the way to bet), I'll bet that scene on p. 178 where the playground roundy-round slowly spins with no kids in sight on a windless day makes the cut—brrrrr.

A few minutes' reflection will yield several ways that Jake, given access to the Internet in 2011 and the properties of the time portal, could have accumulated unlimited funds to use in the past without taking the risks he did. I'll avert my eyes here; removing the constraints he found himself under would torpedo a large part of the plot.

On p. 457 et seq. Jake refers to an “omnidirectional microphone” when what is meant is a “directional” or “parabolic” microphone.

On p. 506 the author states that during the Cuban missile crisis “American missile bases and the Strategic Air Command had gone to DEFCON-4 for the first time in history.” This makes the common error in popular fiction that a higher number indicates a greater alert condition or closeness to war. In fact, it goes the other way: DEFCON 5 corresponds to peacetime—the lowest state of readiness, while DEFCON 1 means nuclear war is imminent. During the Cuban missile crisis, SAC was ordered to DEFCON 2 while the balance of the military was at DEFCON 3.
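
Since the scale trips up so many authors, here is the mapping in a minimal Python sketch (level descriptions are my paraphrases for illustration, not an official DoD table):

```python
# DEFCON runs opposite to intuition: a LOWER number means HIGHER readiness.
DEFCON = {
    5: "normal peacetime readiness",
    4: "increased intelligence watch, strengthened security",
    3: "readiness increased above normal",
    2: "further increase, one step short of maximum",
    1: "maximum readiness -- nuclear war imminent",
}

def more_ready(a: int, b: int) -> bool:
    """True if DEFCON level a is a higher state of readiness than level b."""
    return a < b

# Cuban missile crisis: SAC at DEFCON 2, the balance of the military at DEFCON 3.
print(more_ready(2, 3))   # True -- SAC was at the higher state of readiness
print(more_ready(4, 5))   # True -- but DEFCON 4 is still far from war
```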

On p. 635, the right-hand man of the dictator of Haiti is identified as Jean-Claude “Baby Doc” Duvalier, boss of the Tontons Macoutes. But Baby Doc was born in 1951, and would have been only twelve years old at the time, unlikely to wield such power.

If the ending doesn't make your eyes mist up, you're probably, like the protagonist, “not a crying [person]”.

Spoilers end here.  

There is a poignant sense of the momentum of events in the past here which I have not felt in any time travel fiction since Michael Moorcock's masterpiece Behold The Man.

Bottom line? King's still got it.

 Permalink

February 2012

Hall, R. Cargill. Lunar Impact. Washington: National Aeronautics and Space Administration, 1977. ISBN 978-0-486-47757-2. NASA SP-4210.
One of the wonderful things about the emergence of electronic books is that long out-of-print works from publishers' back-lists are becoming available once again since the cost of keeping them in print, after the initial conversion to an electronic format, is essentially zero. The U.S. civilian space agency NASA is to be commended for their efforts to make publications in their NASA history series available electronically at a bargain price. Many of these documents, chronicling the early days of space exploration from a perspective only a few years after the events, have been out of print for decades and some command forbidding prices on used book markets. Those interested in reading them, as opposed to collecting them, now have a way to put these works in their hands which is as inexpensive as it is convenient.

The present volume, originally published in 1977, chronicles Project Ranger, NASA's first attempt to obtain “ground truth” about the surface of the Moon by sending probes to crash on its surface, radioing back high-resolution pictures, measuring its composition, and hard-landing scientific instruments on the surface to study the Moon's geology. When the project was begun in 1959, it was breathtakingly ambitious—so much so that one gets the sense those who set its goals did not fully appreciate the difficulty of accomplishing them. Ranger was to be not just a purpose-built lunar probe, but rather a general-purpose “bus” for lunar and planetary missions which could be equipped with different scientific instruments depending upon the destination and goals of the flight. It would incorporate, for the first time in a deep space mission, three-axis stabilisation, a steerable high-gain antenna, midcourse and terminal trajectory correction, an onboard (albeit extremely primitive) computer, real-time transmission of television imagery, support by a global Deep Space Network of tracking stations which did not exist before Ranger, sterilisation of the spacecraft to protect against contamination of celestial bodies by terrestrial organisms, and a retro-rocket and landing capsule which would allow rudimentary scientific instruments to survive thumping down on the Moon and transmit their results back to Earth.

This was a great deal to bite off, and as those charged with delivering upon these lofty goals discovered, extremely difficult to chew, especially in a period where NASA was still in the process of organising itself and lines of authority among NASA Headquarters, the Jet Propulsion Laboratory (charged with developing the spacecraft and conducting the missions) and the Air Force (which provided the Atlas-Agena launch vehicle that propelled Ranger to the Moon) were ill-defined and shifting frequently. This, along with the inherent difficulty of what was being attempted, contributed to results which can scarcely be imagined in an era of super-conservative mission design: six consecutive failures between 1961 and 1964, with a wide variety of causes. Even in the early days of spaceflight, this was enough to get the attention of the press, politicians, and public, and had Ranger 7 also failed, it would very likely have been the end of the program. But it didn't—de-scoped to just a camera platform, it performed flawlessly and provided the first close-up glimpse of the Moon's surface. Rangers 8 and 9 followed, both complete successes, with the latter relaying pictures “live from the Moon” to the televisions of viewers around the world. To this day I recall seeing them and experiencing a sense of wonder which is difficult to appreciate in our jaded age.

Project Ranger provided both the technology and experience base used in the Mariner missions to Venus, Mars, and Mercury. While the scientific results of Ranger were soon eclipsed by those of the Surveyor soft landers, it is unlikely that program would have succeeded without learning the painful lessons from Ranger.

The electronic edition of this book appears to have been created by scanning a print copy and running it through an optical character recognition program, then performing a spelling check and fixing errors it noted. However, no close proofreading appears to have been done, so that scanning errors which resulted in words in the spelling dictionary were not corrected. This results in a number of goofs in the text, some of which are humorous. My favourite is the phrase “midcourse correction bum [burn]” which occurs on several occasions. I imagine a dissipated wino with his trembling finger quivering above a big red “FIRE” button at a console at JPL. British readers may…no, I'm not going there. Illustrations from the original book are scanned and included as tiny thumbnails which cannot be enlarged. This is adequate for head shots of people, but for diagrams, charts, and photographs of hardware and the lunar surface, next to useless. References to endnotes in the text look like links but (at least reading the Kindle edition on an iPad) do nothing. These minor flaws do not seriously detract from the glimpse this work provides of unmanned planetary exploration at its moment of creation or the joy that this account is once again readily available.

Unlike many of the NASA history series, a paperback reprint edition is available, published by Dover. It is, however, much more expensive than the electronic edition.

Update: Reader J. Peterson writes that a free on-line edition of this book is available on NASA's Web site, in which the illustrations may be clicked to view full-resolution images.

 Permalink

Kershaw, Ian. The End. New York: Penguin Press, 2011. ISBN 978-1-59420-314-5.
Ian Kershaw is the author of the definitive two-volume biography of Hitler: Hitler: 1889–1936 Hubris and Hitler: 1936–1945 Nemesis (both of which I read before I began keeping this list). In the present volume he tackles one of the greatest puzzles of World War II: why did Germany continue fighting to the bitter end, when the Red Army was only blocks from Hitler's bunker, and long after it was apparent to those in the Nazi hierarchy, senior military commanders, industrialists, and the general populace that the war was lost and continuing the conflict would only prolong the suffering, inflict further casualties, and further devastate the infrastructure upon which survival in a postwar world would depend? It is, as the author notes, quite rare in the history of human conflict that the battle has to be taken all the way to the leader of an opponent in his capital city: Mussolini was deposed by his own Grand Council of Fascism and the king of Italy, and Japan surrendered before a single Allied soldier set foot upon the Home Islands (albeit after the imposition of a total blockade, the entry of the Soviet Union into the war against Japan, and the destruction of two cities by atomic bombs).

In addressing this question, the author recounts the last year of the war in great detail, starting with the Stauffenberg plot, which attempted unsuccessfully to assassinate Hitler on July 20th, 1944. In the aftermath of this plot, a ruthless purge of those considered unreliable in the military and party ensued (in the Wehrmacht alone, around 700 officers were arrested and 110 executed), those who survived were forced to swear personal allegiance to Hitler, and additional informants and internal repression were unleashed to identify and mete out summary punishment for any perceived disloyalty or defeatist sentiment. This, in effect, aligned those who might have opposed Hitler with his own personal destiny and made any overt expression of dissent from his will to hold out to the end tantamount to suicide.

But the story does not end there. Letters from soldiers at the front, meticulously catalogued by the censors of the SD and summarised in reports to Goebbels's propaganda ministry, indicate that while morale deteriorated in the last year of the war, fear of the consequences of a defeat, particularly at the hands of the Red Army, motivated many to keep on fighting. Propaganda highlighting the atrocities committed by the “Asian Bolshevik hordes” was exaggerated but grounded in fact, as the Red Army was largely given a free hand, if not encouraged, to exact revenge for German war crimes on Soviet territory.

As the dénouement approached, those in Hitler's inner circle, who might otherwise have moved against him, were paralysed by the knowledge that their own authority flowed entirely from him, and that any hint of disloyalty would cause them to be dismissed or worse (as had already happened to several). With the Party and its informants and enforcers having thoroughly infiltrated the military and civilian population, there was simply no chance for an opposition movement to establish itself. Certainly there were those, particularly on the Western front, who did as little as possible and waited for the British and Americans to arrive (the French—not so much: reprisals in the zones they occupied had already inspired fear among those in their path). But in the end, as long as Hitler was determined to resist to the very last and willing to accept the total destruction of the German people, whom he deemed to have “failed him”, there was simply no counterpoise which could oppose him and put an end to the conflict. Tellingly, only a week after Hitler's death, his successor, Karl Dönitz, ordered the surrender of Germany.

This is a superb, thoughtful, and thoroughly documented (indeed, almost 40% of the book is source citations and notes) account of the final days of the Third Reich and an enlightening and persuasive argument as to why things ended as they did.

As with all insightful works of history, the reader may be prompted to see parallels in other epochs and current events. Personally, I gained a great deal of insight into the ongoing financial crisis and the increasingly futile efforts of those who brought it about to (as the tired phrase, endlessly repeated, has it) “kick the can down the road” rather than make the structural changes which might address the actual causes of the problem. Now, I'm not calling the central bankers, politicians, or multinational bank syndicates Nazis—I'm simply observing that as the financial apocalypse approaches they're behaving in much the same way as the Hitler regime did in its own final days: trying increasingly desperate measures to buy first months, then weeks, then days, and ultimately hours before “The End”. Much as was the case with Hitler's inner circle, those calling the shots in the international financial system simply cannot imagine a world in which it no longer exists, nor their place in such a world, so they continue to buy time, whatever the cost and however small the interval, to preserve the reference frame in which they exist. The shudder of artillery can already be felt in the bunker.

 Permalink

Guiteras, Daniel. Launch On Need. Unknown: T-Cell Books, 2010. ISBN 978-0-615-37221-1.
An almost universal convention of the alternative history genre is that there is a single point of departure (which I call “the veer”) where an event or fact in the narrative differs from that in the historical record, whence the rest of the story plays out with the same logic and plausibility as what actually happened in our timeline. When this is done well, it makes for engaging and thought-provoking fiction, as there are few things which so engage the cognitive veneer of our ancient brains as asking “what if?” This book is a superb exemplar of this genre, which works both as a thriller and an exploration of how the Space Shuttle program might have coped with the damage to orbiter Columbia due to foam shed from the bipod ramp of the external tank during its launch on STS-107.

Here, the veer is imagining NASA remained the kind of “can do”, “whatever it takes” organisation that it was in the early days of space flight through the rescue of Apollo 13 instead of the sclerotic bureaucracy it had become in the Shuttle era (and remains today). Dismissing evidence of damage to Columbia's thermal protection system (TPS) due to a foam strike, and not even seeking imagery from spy satellites, NASA's passive “managers” sighed and said “nothing could be done anyway” and allowed the crew to complete their mission and die during re-entry.

This needn't have happened. The Columbia Accident Investigation Board (CAIB) explored whether a rescue mission (PDF, scroll down to page 173), mounted as soon as possible after the possible damage to Columbia's TPS was detected, might have been able to rescue the crew before the expendables aboard Columbia were exhausted. Their conclusion? A rescue mission was possible, but only at the cost of cutting corners on safety margins and assuming nothing went wrong in the process of bringing the rescue shuttle, Atlantis, to the pad and launching her.

In this novel, the author takes great care to respect the dead, only referring to members of Columbia's crew by their crew positions such as “commander” or “mission specialist”, and invents names for those in NASA involved in the management of the actual mission. He draws upon the CAIB-envisioned rescue mission, including tables and graphics from their report, while humanising their dry prose with views of events as they unfold by fallible humans living them.

You knew this was coming, didn't you? You were waiting for it—confess! So here we go, into the quibbles. Some of these are substantial spoilers, so be warned.

Spoiler warning: Plot and/or ending details follow.  
Page numbers in the items below are from the Kindle edition, in which page numbers and their correspondence to print editions tend to be somewhat fluid. Consequently, depending upon how you arrive there, the page number in your edition may differ by ±1 page.

On p. 2, Brown “knew E208 was a high-resolution video camera…” which “By T-plus-240 seconds … had run through 1,000 feet of film.”
Video cameras do not use film. The confusion between video and film persists for several subsequent chapters.
On p. 5 the fifth Space Shuttle orbiter constructed is referred to as “Endeavor”.
In fact, this ship's name is properly spelled “Endeavour”, named after the Royal Navy research ship.
On p. 28 “…the crew members spent an additional 3,500 hundred hours studying and training…”
That's forty years—I think not.
On p. 55 Kalpana Chawla is described as a “female Indian astronaut.”
While Chawla was born in India, she became a U.S. citizen in 1990 and presumably relinquished her Indian citizenship in the process of naturalisation.
On p. 57 “Both [STS-107] astronauts selected for this EVA have previous spacewalk experience…”.
In fact, none of the STS-107 astronauts had ever performed an EVA.
On p. 65 “Normally, when spacewalks were part of the mission plan, the entire cabin of the orbiter was decompressed at least 24 hours prior to the start of the spacewalk.”
Are you crazy? EVA crew members pre-breathe pure oxygen in the cabin, then adapt to the low pressure of the spacesuit in the airlock, but the Shuttle cabin is never depressurised. If it were, what would the other crew members breathe—Fireball XL5 oxygen pills?
On p. 75 the EVA astronaut looks out from Columbia's airlock and sees Cape Horn.
But the mission has been launched into an inclination of 39 degrees, so Cape Horn (55°59' S) should be out of sight to the South. Here is the view from Columbia's altitude on a pass over South America at the latitude of Cape Horn.
On p. 221 the countdown clock is said to have been “stuck on nine minutes zero seconds for the past three hours and twenty-seven minutes.”
The T−9 minute hold is never remotely that long. It's usually on the order of 10 to 20 minutes. If there were a reason for such a long hold, it would have been performed much earlier in the count. In any case, given the short launch window for the rendezvous, there'd be no reason for a long planned hold, and an unplanned hold would have resulted in a scrub of the mission until the next alignment with the plane of Columbia's orbit.
On p. 271 the crew of Atlantis open the payload bay doors shortly before the rendezvous with Columbia.
This makes no sense. Shuttles have to open their payload bay doors shortly after achieving orbit so that the radiators can discard heat. Atlantis would have opened its payload bay doors on the first orbit, not 24 hours later whilst approaching Columbia.
On p. 299 the consequences of blowing the crew ingress/egress hatch with the pyrotechnics is discussed.
There is no reason to consider doing this. From the inception of the shuttle program, the orbiter hatch has been able to be opened from the inside. The crew need only depressurise the orbiter and then operate the hatch opening mechanism.
On p. 332 “Standing by for communications blackout.”
The communications blackout is a staple of spaceflight drama but, in the shuttle era described in this novel, a thing of the past. While communications from the ground are blocked by plasma during reentry, communications from the shuttle routed through the TDRSS satellites are available throughout reentry except for brief periods when the orbiter's antennas are not aimed at the relay satellite overhead.
On p. 349 an Aegis guided missile cruiser shoots down the abandoned Columbia.
Where do I start? A space shuttle orbiter weighs about 100 tonnes. An SM-3 has a kinetic kill energy of around 130 megajoules, which is impressive, but is likely to pass through the structure of the shuttle, dispersing some debris, but leaving most of the mass behind. But let's suppose Columbia were dispersed into her component parts. Well, then the massive parts, such as the three main engines, would remain in orbit even longer, freed of the high-drag encumbrance of the rest of the structure, and come down hot and hard at random places around the globe. Probably, they'd splash in the ocean, but maybe they wouldn't—we'll never know.
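
Two of the quibbles above come down to a few lines of arithmetic, sketched here in Python (the Earth radius of 6371 km and a ~280 km circular orbit for STS-107 are my assumptions; the 130 MJ and 100 tonne figures are those quoted above):

```python
import math

# p. 75 quibble: the highest latitude visible from a circular orbit is the
# inclination plus the angular radius of the horizon circle at that altitude.
def max_visible_latitude(inclination_deg, altitude_km, r_earth_km=6371.0):
    horizon_deg = math.degrees(math.acos(r_earth_km / (r_earth_km + altitude_km)))
    return inclination_deg + horizon_deg

print(round(max_visible_latitude(39.0, 280.0), 1))  # 55.7 -- Cape Horn (55 deg 59' S) stays just over the horizon

# p. 349 quibble: 130 MJ of kill energy against a 100 tonne orbiter is tiny
# next to the orbiter's own orbital kinetic energy at roughly 7.8 km/s.
kill_mj = 130.0
orbiter_kg = 100_000.0
print(kill_mj * 1e6 / orbiter_kg)                   # 1300.0 joules per kilogram of target
orbiter_ke_mj = 0.5 * orbiter_kg * 7_800.0**2 / 1e6
print(round(orbiter_ke_mj / kill_mj))               # 23400 -- the interceptor carries ~1/23,000 of it
```

Which is to say: the interceptor can break things loose, but it hasn't remotely the energy budget to de-orbit a hundred tonnes of spacecraft.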
Spoilers end here.  

While it's fun to spot and research goofs like these, I found they did not detract in any way from enjoyment of the novel, which is a perfectly plausible alternative history of Columbia's last mission.

 Permalink

Russell, Sharman Apt. Hunger: An Unnatural History. New York: Basic Books, 2005. ISBN 978-0-465-07165-4.
As the author begins this volume, “Hunger is a country we enter every day…”. Our bodies (and especially our hypertrophied brains) require a constant supply of energy, and have only a limited and relatively inefficient means to store excess energy and release it upon demand, and consequently we have evolved to have a strong and immediate sense for inadequate nutrition, which in the normal course of things causes us to find something to eat. When we do not eat, regardless of the cause, we experience hunger, which is one of the strongest of somatic sensations. Whether hunger is caused by famine, fasting for ritual or in search of transcendence, forgoing food in favour of others, a deliberate hunger strike with the goal of effecting social or political change, deprivation at the hands of a coercive regime, or self-induced by a dietary regime aimed at improving one's health or appearance, it has the same grip upon the gut and the brain. As I wrote in The Hacker's Diet:

Hunger is a command, not a request. Hunger is looking at your dog curled up sleeping on the rug and thinking, “I wonder how much meat there is beneath all that fur?”

Here, the author explores hunger both at the level of biochemistry (where you may be amazed how much has been learned in the past few decades about how the body regulates appetite, the fall-back from glucose-based metabolism fuelled by food to ketone-body metabolism fuelled by stored fat, and how the ratio of energy drawn from muscle mass differs between lean and obese individuals and varies over time) and the historical and social context of hunger. We encounter mystics and saints who fast to discover a higher wisdom or their inner essence; political activists (including Gandhi) willing to starve themselves to the point of death to shame their oppressors into capitulation; peoples whose circumstances have created a perverse (to us, the well-fed) culture built around hunger as the usual state of affairs; volunteers who participated in projects to explore the process of starvation and means to rescue those near death from its consequences; doctors in the Warsaw ghetto who documented the effects of starvation in patients they lacked the resources to save; and the millions of victims of famine in the last two centuries.

In discussing famine, the author appears uncomfortable with the fact, reluctantly alluded to, that famine in the modern era is almost never the result of a shortage of food, but rather the consequence of coercive government either constraining the supply of food or blocking its delivery to those in need. Even in the great Irish famine of the 1840s, Ireland continued to export food while its population starved. (The author argues that even had the exports been halted, the food would have been inadequate to feed the Irish, but even so, they could have saved some, and this is before considering potential food shipments from the rest of the “Union” to a starving Ireland. [Pardon me if this gets me going—ancestors….]) Certainly today it is beyond dispute that the world produces far more food (at least as measured by calories and principal nutrients) than is needed to feed its population. Consequently, whenever there is a famine, the cause is not a shortage of food but rather an interruption in its delivery to those who need it. While aid programs can help to alleviate crises, and “re-feeding” therapy can rescue those on the brink of death by hunger, the problem will persist until the dysfunctional governments that starve their people and loot aid intended for them are eliminated. Given that those who've starved in recent decades have usually been disempowered minorities, perhaps it would be more effective in the long term to arm them than to feed them.

You will not find such gnarly sentiments in this book, which is very much aligned with the NGO view that famine due to evil coercive dictatorships is just one of those things that happens, like hurricanes. That said, I cannot recommend this book too highly. The biochemical view of hunger and energy storage and release in times of feast and famine alone is worth the price of admission, and the exploration of hunger in religion, politics, and even entertainment puts it over the top. If you're dieting, this may not be the book to read, but on the other hand, maybe it's just the thing.

The author is the daughter of Milburn G. “Mel” Apt, the first human to fly faster than Mach 3, who died when his X-2 research plane crashed after its record-setting flight.

 Permalink

March 2012

Mallan, Lloyd. Russia and the Big Red Lie. Greenwich, CT: Fawcett, 1959. LCCN 59004006.
It is difficult for those who did not live through the era to appreciate the extent to which Sputnik shook the self-confidence of the West and defenders of the open society and free markets around the world. If the West's social and economic systems were genuinely superior to totalitarian rule and central planning, then how had the latter, starting from a base only a half century before where illiterate peasants were bound to the land as serfs, and in little more than a decade after their country was devastated in World War II, managed to pull off a technological achievement which had so far eluded the West and was evidence of a mastery of rocketry which could put the United States heartland at risk? Suddenly the fellow travellers and useful idiots in the West were energised: “Now witness the power of this fully armed and operational socialist economy!”

The author, a prolific writer on aerospace and technology, was as impressed as anybody else by the stunning Soviet accomplishment, and undertook the daunting task of arranging a visit to the Soviet Union to see for himself the prowess of Soviet science and technology. After a halting start, he secured a visa and introductions from prominent U.S. scientists to their Soviet counterparts, and journeyed to the Soviet Union in April of 1958, travelled extensively in the country, visiting, among other destinations, Moscow, Leningrad, Odessa, Yalta, Krasnodar, Rostov-on-Don, Yerevan, Kharkov, and Alma-Ata, leaving Soviet soil in June 1958. He had extensive, on the record, meetings with a long list of eminent Soviet scientists and engineers, many members of the Soviet Academy of Sciences. And he came back with a conclusion utterly opposed to that of the consensus in the West: Soviet technological prowess was about 1% military-style brute force and 99% bluff and hoax.

As one intimately acquainted with Western technology, what he saw in the Soviet Union was mostly comparable to the state of the art in the West a decade earlier, and in many cases obviously copied from Western equipment. The scientists he interviewed, who had been quoted in the Soviet press as forecasting stunning achievements in the near future, often, when interviewed in person, said “that's all just theory—nobody is actually working on that”. The much-vaunted Soviet jet and turboprop airliners he'd heard of were nowhere in evidence anywhere he travelled, and evidence suggested that Soviet commercial aviation lacked navigation and instrument landing systems which were commonplace in the West.

Faced with evidence that Soviet technological accomplishments were simply another front in a propaganda offensive aimed at persuading the world of the superiority of communism, the author dug deeper into the specifics of Soviet claims, and here (from the perspective of half a century on) he got some things right and goofed on others. He goes to great length to argue that the Luna 1 Moon probe was a total hoax, based both on Soviet technological capability and the evidence of repeated failure by Western listening posts to detect its radio signals. Current thinking is that Luna 1 was a genuine mission intended to impact on the Moon, and that the Soviet claim it was deliberately launched into solar orbit as an “artificial planet” was propaganda aimed at covering up its missing the Moon due to a guidance failure. (This became obvious to all when the near-identical Luna 2 impacted the Moon eight months later.) The fact that the Soviets possessed the technology to conduct lunar missions was demonstrated when Luna 3 flew around the Moon in October 1959 and returned the first crude images of its far side (other Luna 3 images). Although Mallan later claimed these images were faked and contained brush strokes, we now know they were genuine, since they are strikingly similar to subsequent imagery, including the albedo map from the Clementine lunar orbiter. “Vas you dere, Ivan?” Well, actually, yes. Luna 3 was the “boomerang” mission around the Moon which Mallan had heard of before visiting the Soviet Union but was told was just a theory when he was there. And yet, had the Soviets had the ability to communicate with Luna 1 at the distance of the Moon, there would have been no reason to make Luna 3 loop around the Moon in order to transmit its pictures from closer to the Earth—enigmas, enigmas, enigmas.

In other matters, the author is dead on, where distinguished Western “experts” and “analysts” were completely taken in by the propaganda. He correctly identifies the Soviet “ICBM” from the 1957 Red Square parade as an intermediate range missile closer to the German V-2 than an intercontinental weapon. (The Soviet ICBM, the R-7, was indeed tested in 1957, but it was an entirely different design and could never have been paraded on a mobile launcher; it did not enter operational service until 1959.) He is also almost precisely on the money when he estimates the Soviet “ICBM arsenal” as on the order of half a dozen missiles, while the CIA was talking about hundreds of Soviet missiles aimed at the West and demagogues were ratcheting up rhetoric about a “missile gap”.

You don't read this for factual revelations: everything discussed here is now known much better, and there are many conclusions drawn in this text from murky contemporary evidence which have proven incorrect. But if you wish to immerse yourself in the Cold War and imagine yourself trying to figure it all out from the sketchy and distorted information coming from the adversary, it is very enlightening. One wishes more people had listened to Mallan—how much folly we might have avoided.

There is also wisdom in what he got wrong. Space spectaculars can be accomplished in a military manner by expending vast resources coercively taken from the productive sector on centrally-planned projects with narrow goals. Consequently, it isn't surprising a command economy such as that of the Soviet Union managed to achieve milestones in space (while failing to deliver adequate supplies of soap and toilet paper to workers toiling in their “paradise”). Indeed, in many ways, the U.S. Apollo program was even more centrally planned than its Soviet counterpart, and the pernicious example it set has damaged efforts to sustainably develop and exploit space ever since.

This “Fawcett Book” is basically an issue of Mechanix Illustrated containing a single long article. It even includes the usual delightful advertisements. This work is, of course, hopelessly out of print. Used copies are available, but often at absurdly elevated prices for what amounts to a pulp magazine. Is this work in the public domain and hence eligible to be posted on the Web? I don't know. It may well be: it was published before 1978, and unless its copyright was renewed in 1987 when its original 28-year term expired, it is public domain. Otherwise, as a publication by a “corporate author” whose renewed term was extended to 95 years from publication, it will remain in copyright until 2054, which makes a mockery of the “limited Times to Authors” provision of the U.S. Constitution. If somebody can confirm this work is in the public domain, I'll scan it and make it available on the Web.

 Permalink

Pennington, Maura. Great Men Are Free Men. Seattle: CreateSpace, 2011. ISBN 978-1-4664-4196-5.
This is so bad it is scarcely worth remarking upon. Hey, the Kindle edition is (at this writing) only a buck eighteen, but you also have to consider the value of the time it'll take you to read it, which is less than you might think because it's only 116 pages in the print edition, and much of that is white space around vapid dialogue. This is really a novella: there are no chapters (although there are two “parts” which differ little from one another), and hardly any character development. In fact, the absence of character development is only one aspect of the more general observation that nothing much happens at all.

A bunch of twenty-something members of the write-off generation are living in decadent imperial D.C., all cogs or aspiring cogs in the mindless and aimless machine of administrative soft despotism. All, that is, except for Charlie Winslow, who's working as a barista at a second-tier coffee joint until he can get into graduate school, immerse himself in philosophy, and bury himself for the rest of his life in the library, reading great works and writing “esoteric essays no one would read”. Charlie fashions himself a Great Man, and with his unique intellectual perspective towering above the molecular monolayer of his contemporaries, makes acerbic observations upon the D.C. scene which marginally irritate them. Finally, he snaps, and lets loose a tepid drizzle of speaking truth to poopheads, to which they respond “whatever”. And that's about it.

The author, who studied Russian at Dartmouth College, is a twenty-something living in D.C. who styles herself a libertarian. She writes a blog at Forbes.

 Permalink

Van Buren, Peter. We Meant Well. New York: Henry Holt, 2011. ISBN 978-0-8050-9436-7.
The author is a career Foreign Service Officer in the U.S. State Department. In 2009–2010 he spent a year in Iraq as leader of two embedded Provincial Reconstruction Teams (ePRT) operating out of Forward Operating Bases (FOB) which were basically crusader forts in a hostile Iraqi wilderness: America inside, trouble outside. Unlike “fobbits” who rarely ventured off base, the author and his team were charged with engaging the local population to carry out “Lines of Effort” dreamed up by pointy-heads back at the palatial embassy in Baghdad or in Washington to the end of winning the “hearts and minds” of the population and “nation building”. The Iraqis were so appreciative of these efforts that they regularly attacked the FOB with mortar fire and mounted improvised explosive device (IED) and sniper attacks on those who ventured out beyond the wire.

If the whole thing were not so tawdry and tragic, the recounting of the author's experiences would be hilariously funny. If you imagine it to be a Waugh novel and read it with a dark sense of humour, it is wickedly amusing, but then one remembers that real people are dying and suffering grievous injuries, the Iraqi population are being treated as props in public relations stunts by the occupiers and deprived of any hope of bettering themselves, and all of this vast fraudulent squandering of resources is being paid for by long-suffering U.S. taxpayers or money borrowed from China and Japan, further steering the imperial power toward a debt end.

The story is told in brief chapters, each recounting a specific incident or aspect of life in Iraq. The common thread, which stretches back over millennia, is that imperial powers attempting to do good by those they subjugate will always find themselves outwitted by wily oriental gentlemen whose ancestors have spent millennia learning how to game the systems imposed by the despotisms under which they have lived. As a result, the millions poured down the rathole of “Provincial Reconstruction” predictably flow into the pockets of the bosses in the communities who set up front organisations for whatever harebrained schemes the occupiers dream up. As long as the “project” results in a ribbon-cutting ceremony covered by the press (who may, of course, be given an incentive to show up by being paid) and an impressive PowerPoint presentation for the FOB commander to help him toward his next promotion, it's deemed a success and, hey, there's a new Line of Effort from the embassy that demands another project: let's teach widows beekeeping (p. 137)—it'll only cost US$1600 per person, and each widow can expect to make US$200 a year from the honey—what a deal!

The author is clearly a creature of the Foreign Service and scarcely conceals his scorn for the military who are tasked with keeping him alive in a war zone and the politicians who define the tasks he is charged with carrying out. Still, the raw folly of “nation building” and the obdurate somnambulant stupidity of those who believe that building milk processing plants or putting on art exhibitions in a war zone will quickly convert people none of whom have a single ancestor who has ever lived in a consensually-governed society with the rule of law to model citizens in a year or two is stunningly evident.

Why are empires always so dumb? When they attain a certain stage of overreach, they seem to always assume they can instill their own unique culture in those they conquer. And yet, as Kipling wrote in 1899:

Fill full the mouth of Famine
And bid the sickness cease;
And when your goal is nearest
The end for others sought,
Watch Sloth and heathen Folly
Bring all your hope to nought.

When will policy makers become as wise as the mindless mechanisms of biology? When an irritant invades an organism and it can't be eliminated, the usual reaction is to surround it with an inert barrier which keeps it from causing further harm. “Nation building” is folly; far better to bomb them if they misbehave, then build a wall around the whole godforsaken place and bomb them again if any of them get out and cause any further mischief. Call it “biomimetic foreign policy”—encyst upon it!

 Permalink

Bracken, Matthew. Domestic Enemies. Orange Park, FL: Steelcutter Publishing, 2006. ISBN 978-0-9728310-2-4.
This is the second novel in the author's “Enemies” trilogy, which began with Enemies Foreign and Domestic (EFAD) (December 2009). In After America (August 2011) Mark Steyn argues that if present trends continue (and that's the way to bet), within the lives of those now entering the workforce in the United States (or, at least attempting to do so, given the parlous state of the economy) what their parents called the “American dream” will have been extinguished and turned into a nightmare along the lines of Latin American authoritarian states: bifurcation of the society into a small, wealthy élite within their walled and gated communities and impoverished masses living in squalor and gang-ruled “no go” zones where civil society has collapsed.

This book picks up the story six years after the conclusion of EFAD. Ranya Bardiwell has foolishly attempted to return to the United States and been apprehended and sent to a detention and labour camp, her son taken from her at birth. When she manages to escape from the camp, she tracks down her son, who has been given for adoption to the family of an FBI agent in New Mexico, and following the trail she becomes embroiled in the seething political storm of Nuevo Mexico, where separatist forces have taken power and seized upon the weakness of the Washington regime to advance their agenda of rolling back the Treaty of Guadalupe Hidalgo and creating a nation of “Aztlan” from the territories ceded by Mexico in that treaty.

As the story progresses, we see the endpoint of the reconquista in New Mexico, Los Angeles, and San Diego, and how opportunistic players on all sides seek to exploit the chaos and plunder the dwindling wealth of the dying empire for themselves. I'm not going to get into the plot or characters because almost anything I say would be a spoiler and this story does not deserve to be spoilt—it should be savoured. I consider it to be completely plausible—in the aftermath of a financial collapse and breakdown of central authority, the consequences of mass illegal immigration, “diversity”, and “multiculturalism” could, and indeed likely will, lead to the kind of outcome sketched here. I found only one technical quibble in the entire novel (a turbine-powered plane “coughing and belching before catching”), but that's just worth a chuckle and doesn't detract in any way from the story. This is the first thriller I recall reading in which a precocious five year old plays a central part in the story in a perfectly believable way, and told from his own perspective.

This book is perfectly accessible if read stand-alone, but I strongly recommend reading EFAD first—it not only sets the stage for the mid-collapse America in which this story plays out, but also provides the back story for Ranya Bardiwell and Bob Bullard who figure so prominently here.

Extended excerpts of this and the author's other novels are available online at the author's Web site.

 Permalink

April 2012

Zichek, Jared A. The Incredible Attack Aircraft of the USS United States, 1948–1949. Atglen, PA: Schiffer Publishing, 2009. ISBN 978-0-7643-3229-6.
In the peacetime years between the end of World War II in 1945 and the outbreak of the Korean War in 1950 the United States Navy found itself in an existential conflict. The adversary was not a foreign fleet, but rather the newly-unified Department of Defense, to which it had been subordinated, and its new peer service, the United States Air Force, which argued that the advent of nuclear weapons and intercontinental strategic bombing had made the Navy's mission obsolete. The Operation Crossroads nuclear tests at Bikini Atoll in 1946, which had shown that a well-placed fission bomb could destroy an entire carrier battle group in close formation, supported the Air Force's case that aircraft carriers were simply costly targets which would be destroyed in the first days of a general conflict. Further, in a world where the principal adversary, the Soviet Union, had neither a blue water navy nor a warm weather port from which to operate one, it appeared unlikely that the U.S. Navy would be called upon to support amphibious landings comparable to those of World War II.

Faced with serious policy makers in positions of influence questioning the rationale for its very existence on anything like its current scale, advocates of the Navy saw seizing back part of the strategic bombardment mission from the Air Force as their salvation. This would require aircraft carriers much larger than any built before, carrier-based strategic bombers in the 100,000 pound class able to deliver the massive nuclear weapons of the epoch (10,000 pound bombs) with a combat radius of at least 1,700—ideally 2,000—miles. This led to the proposal for CVA-58, USS United States, a monster (by the standards of the time—contemporary supercarriers are larger still) flush deck carrier which would support these heavy strategic bombers and their escort craft.

This ship would require aircraft like nothing in the naval inventory, and two “Outline Specifications” were issued to industry to solicit proposals for a “Carrier-Based Landplane”: the basic subsonic strategic bomber, and a “Long Range Special Attack airplane”, which required a supersonic dash to the target. (Note that when the latter specification was issued on August 24th, 1948, less than a year had elapsed since the first supersonic flight of the Bell X-1.)

The Navy's requirements in these two specifications were not just ambitious, they were impossible given the propulsion technology of the time: the thrust and specific fuel consumption of available powerplants simply did not permit achieving all of the Navy's requirements. The designs proposed by contractors, presented in this book in exquisite detail, varied from the highly conventional, which straightforwardly conceded their shortcomings compared to what the Navy desired, to the downright bizarre (especially in the “Special Attack” category), with aircraft that look like a cross between something produced by the Lucasfilm model shop and the fleet of the Martian Air Force. Imagine a biplane that jettisons its top wing/fuel tank on the way to the target, after having been launched with a Fireball XL5-like expendable trolley; a “parasitic” airplane which served as the horizontal stabiliser of a much larger craft outbound to the target, then separated and returned after dispatching the host to bomb them commies; or a convertible supersonic seaplane which could refuel from submarines on the way to the target. All of these and more are detailed in this superbly produced book which is virtually flawless in its editing and production values.

Nothing at all came of all of this burst of enthusiasm and creativity. On April 23rd, 1949, the USS United States was cancelled, provoking the resignation of the Secretary of the Navy and the Revolt of the Admirals. The strategic nuclear mission was definitively won by the Air Force, which would retain their monopoly status until the Navy got back into the game with the Polaris missile submarines in the 1960s.

 Permalink

Clark, John D. Ignition! New Brunswick, NJ: Rutgers University Press, 1972. ISBN 978-0-8135-0725-5.
This may be the funniest book about chemistry ever written. In the golden age of science fiction, one recurring theme was the search for a super “rocket fuel” (with “fuel” used to mean “propellant”) which would enable the exploits depicted in the stories. In the years between the end of World War II and the winding down of the great space enterprise with the conclusion of the Apollo project, a small band of researchers (no more than 200 in the U.S., of whom around fifty were lead scientists), many of whom had grown up reading golden age science fiction, found themselves tasked to make their boyhood dreams real—to discover exotic propellants which would allow rockets to accomplish missions envisioned not just by visionaries but also the hard headed military men who, for the most part, paid the bills.

Propulsion chemists are a rare and special breed. As Isaac Asimov (who worked with the author during World War II) writes in a short memoir at the start of the book:

Now, it is clear that anyone working with rocket fuels is outstandingly mad. I don't mean garden-variety crazy or merely raving lunatic. I mean a record-shattering exponent of far-out insanity.

There are, after all, some chemicals that explode shatteringly, some that flame ravenously, some that corrode hellishly, some that poison sneakily, and some that stink stenchily. As far as I know, though, only liquid rocket fuels have all these delightful properties combined into one delectable whole.

And yet amazingly, as head of propulsion research at the Naval Air Rocket Test Station and its successor organisation for seventeen years, the author not only managed to emerge with all of his limbs and digits intact, but his laboratory never suffered a single time-lost mishap. This, despite routinely working with substances such as:

Chlorine trifluoride, ClF3, or “CTF” as the engineers insist on calling it, is a colorless gas, a greenish liquid, or a white solid. … It is also quite probably the most vigorous fluorinating agent in existence—much more vigorous than fluorine itself. … It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water—with which it reacts explosively. It can be kept in some of the ordinary structural metals—steel, copper, aluminum, etc.—because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminum keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes. (p. 73)

And ClF3 is pretty benign compared to some of the other dark corners of chemistry into which their research led them. There is extensive coverage of the quest for a high energy monopropellant, the discovery of which would greatly simplify the design of turbomachinery and injectors, and eliminate problems with differential thermal behaviour and mixture ratio over the operating range of an engine which used it. However, the author reminds us:

A monopropellant is a liquid which contains in itself both the fuel and the oxidizer…. But! Any intimate mixture of a fuel and an oxidizer is a potential explosive, and a molecule with one reducing (fuel) end and one oxidizing end, separated by a pair of firmly crossed fingers, is an invitation to disaster. (p. 10)

One gets an excellent sense of just how empirical all of this was. For example, in the quest for “exotic fuel” (which the author defines as “It's expensive, it's got boron in it, and it probably doesn't work.”), straightforward inorganic chemistry suggested that burning a borane with hydrazine, for example:

2B5H9 + 5N2H4 ⟶ 10BN + 19H2

would be a storable propellant with a specific impulse (Isp) of 326 seconds with a combustion chamber temperature of just 2000°K. But this reaction and the calculation of its performance assumes equilibrium conditions and, apart from a detonation (something else with which propulsion chemists are well acquainted), there are few environments as far from equilibrium as a rocket combustion chamber. In fact, when you try to fire these propellants in an engine, you discover the reaction products actually include elemental boron and ammonia, which result in disappointing performance. Check another one off the list.

Other promising propellants ran afoul of economic considerations and engineering constraints. The lithium, fluorine, and hydrogen tripropellant system has been measured (not theoretically calculated) to have a vacuum Isp of an astonishing 542 seconds at a chamber pressure of only 500 psi and temperature of 2200°K. (By comparison, the space shuttle main engine has a vacuum Isp of 452.3 sec. with a chamber pressure of 2994 psi and temperature of 3588°K; a nuclear thermal rocket would have an Isp in the 850–1000 sec. range. Recall that the relationship between Isp and mass ratio is exponential.) This level of engine performance makes a single stage to orbit vehicle not only feasible but relatively straightforward to engineer. Unfortunately, there is a catch or, to be precise, a list of catches. Lithium and fluorine are both relatively scarce and very expensive in the quantities which would be required to launch from the Earth's surface. They are also famously corrosive and toxic, and then you have to cope with designing an engine in which two of the propellants are cryogenic fluids and the third is a metal which is solid below 180°C. In the end, the performance (which is breathtaking for a chemical rocket) just isn't worth the aggravation.
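The exponential relationship between Isp and mass ratio mentioned above is the Tsiolkovsky rocket equation. Here is a minimal sketch in Python applying it to the two engines quoted; the ~9.4 km/s delta-v to low Earth orbit (including gravity and drag losses) is my own illustrative assumption, not a figure from the book.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2, converting Isp in seconds to exhaust velocity


def mass_ratio(delta_v, isp):
    """Tsiolkovsky rocket equation: initial/final mass for a given delta-v (m/s)."""
    return math.exp(delta_v / (isp * G0))


# Assumed delta-v from Earth's surface to low Earth orbit, losses included.
DELTA_V_LEO = 9400.0  # m/s

for name, isp in [("Li/F2/H2 tripropellant", 542.0), ("SSME, LOX/LH2", 452.3)]:
    print(f"{name}: Isp {isp} s -> mass ratio {mass_ratio(DELTA_V_LEO, isp):.2f}")
```

Under that assumed delta-v, the tripropellant's mass ratio comes out around 5.9 against roughly 8.3 for the shuttle main engine, which is why a 542-second Isp makes single stage to orbit look relatively straightforward on paper.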

In the final chapter, the author looks toward the future of liquid rocket propulsion and predicts, entirely correctly from a perspective four decades removed, that chemical propulsion was likely to continue to use the technologies upon which almost all rockets had settled by 1970: LOX/hydrocarbon for large first stages, LOX/LH2 for upper stages, and N2O4/hydrazine for storable missiles and in-space propulsion. In the end economics won out over the potential performance gains to be had from the exotic (and often far too exciting) propellants the author and his colleagues devoted their careers to exploring. He concludes as follows.

There appears to be little left to do in liquid propellant chemistry, and very few important developments to be anticipated. In short, we propellant chemists have worked ourselves out of a job. The heroic age is over.

But it was great fun while it lasted. (p. 192)

Now if you've decided that you just have to read this book and innocently click on the title above to buy a copy, you may be at as much risk of a heart attack as those toiling in the author's laboratory. This book has been out of print for decades and is considered such a classic, both for its unique coverage of the golden age of liquid propellant research, comprehensive description of the many avenues explored and eventually abandoned, hands-on chemist-to-chemist presentation of the motivation for projects and the adventures in synthesising and working with these frisky molecules, not to mention the often laugh out loud writing, that used copies, when they are available, sell for hundreds of dollars. As I am writing these remarks, seven copies are offered at Amazon at prices ranging from US$300–595. Now, this is a superb book, but it isn't that good!

If, however, you type the author's name and the title of the book into an Internet search engine, you will probably quickly come across a PDF edition consisting of scanned pages of the original book. I'm not going to link to it here, both because I don't link to works which violate copyright as a matter of principle and since my linking to a copy of the PDF edition might increase its visibility and risk of being taken down. I am not one of those people who believes “information wants to be free”, but I also doubt John Clark would have wanted his unique memoir and invaluable reference to be priced entirely beyond the means of the vast majority of those who would enjoy and be enlightened by reading it. In the case of “orphaned works”, I believe the moral situation is ambiguous (consider: if you do spend a fortune for a used copy of an out of print book, none of the proceeds benefit the author or publisher in any way). You make the call.

 Permalink

Flynn, Vince. Kill Shot. New York: Atria Books, 2012. ISBN 978-1-4165-9520-5.
This is the twelfth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series, but chronologically is second in the saga, picking up a year after the events of American Assassin (December 2010). Mitch Rapp has hit his stride as the CIA's weapon of choice against the terror masters, operating alone with only the knowledge of a few people, dispatching his targets with head shots when they least expect it and, in doing so, beginning to sow terror among the terrorists.

Rapp is in Paris to take out the visiting Libyan oil minister, who has been a conduit for funding terrorist attacks, including the Pan Am Flight 103 bombing which killed Rapp's college sweetheart and set him on the trajectory toward his current career—this time it's personal. The hit goes horribly wrong, leaving a trail of bodies and hundreds of cartridge casings in a posh hotel, with the potential of a disastrous public relations blowback for the CIA, and Rapp's superiors looking at prospects ranging from congressional hearings at best to time in Club Fed. Based on how things went down, Rapp becomes persuaded that he was set up; not knowing whom he can trust, he lies low, while his bosses fear the worst: that their assassin has gone rogue.

The profane and ruthless Stan Hurley, who trained Rapp and whose opinion of the “college boy” has matured from dislike to detestation and distrust, is dispatched to Paris to find out what happened, locate Rapp, and if necessary put an end to his career in the manner to which Hurley and his goons are accustomed.

This is a satisfying thriller with plenty of twists and turns, interesting and often complicated characters, and a thoroughly satisfying conclusion. We see, especially in the interrogation of “Victor”, how far Rapp has come from his first days with Hurley, and that the tension between the two may have at its roots the fact that they are becoming more and more alike, a prospect Rapp finds repellent. Unlike American Assassin, which is firmly anchored in the chaos of early 1990s Beirut, apart from a few details (such as mobile telephones being novel and uncommon), the present novel could be set at almost any time since 1990—historical events play no part in the story. It's best to read American Assassin first, as it provides the back story on the characters and will provide more insight into their motivations, but this book works perfectly well as a stand-alone thriller should you prefer to start here.

 Permalink

Pollan, Michael. The Omnivore's Dilemma. New York: Penguin Press, 2006. ISBN 978-0-14-303858-0.
One of the delights of operating this site is the opportunity to interact with visitors, who, I am persuaded, are among the most interesting and informed of any audience on the Web. The feedback messages and book recommendations they send are often thought-provoking and sometimes enlightening. I don't know who I have to thank for recommending this book, but I am very grateful they took the time to do so, as it is a thoroughly fascinating look at the modern food chain in the developed world, and an exploration of alternatives to it.

The author begins with a look at the “industrial” food chain, which supplies the overwhelming majority of calories consumed on the planet today. Prior to the 20th century, agriculture was almost entirely powered by the Sun. It was sunlight that drove photosynthesis in plants, providing both plant crops and the feed for animals, including those used to pull ploughs and transport farm products to market. The invention of the Haber process in 1909 and its subsequent commercialisation on an industrial scale forever changed this. No longer were crop yields constrained by the amount of nitrogen which could be fixed from the air by bacteria symbiotic with the roots of legume crops, recycled onto fields in the manure and urine of animals, or harvested from the accumulated droppings of birds in distant places, but rather able to be dramatically increased by the use of fertiliser whose origin traced back to the fossil fuel which provided the energy to create it. Further, fossil fuel insinuated itself into agriculture in other ways, with the tractor replacing the work of farm hands and draught animals; railroads, steam ships, trucks, and aircraft expanding the distance between production on a farm and consumption to the global scale; and innovations such as refrigeration increasing the time from harvest to use.

All of these factors so conspired to benefit the species Zea mays (which Americans call “corn” and everybody else calls “maize”) that one could craft a dark but plausible science fiction story in which that species of grass, highly modified by selective breeding by indigenous populations in the New World, was actually the dominant species on Earth, having first motivated its modification from the ancestral form to a food plant ideally suited to human consumption, then encouraged its human servants to spread it around the world, develop artificial nutrients and pesticides to allow it to be grown in a vast monoculture, eradicating competitors in its path, and becoming so central to modern human nutrition that trying to eliminate it (or allowing a natural threat to befall it) would condemn billions of humans to starvation. Once you start to think this way, you'll never regard that weedless field of towering corn stretching off to the horizon in precisely the same way….

As the author follows the industrial food chain from a farm in the corn belt to the “wet mill” in which commodity corn is broken down into its molecular constituents and then reassembled into the components of processed food, and to the feedlot, where corn products are used to “finish” meat animals which evolved on a different continent from Zea mays and consequently require food additives and constant medication simply to metabolise this foreign substance, it becomes clear that maize is not a food, but rather a feedstock (indeed, the maize you buy in the supermarket to eat yourself is not this industrial product, but rather “sweet corn” produced entirely separately), just as petroleum is used in the plastics industry. Or the food industry—when you take into account fertiliser, farm machinery, and transportation, more than one calorie of fossil fuel is consumed to produce a calorie of food energy in maize. If only we could make Twinkies directly from crude oil….

All of this (and many things I've elided here in the interest of brevity [Hah! you say]) may persuade you to “go organic” and pay a bit more for those funky foods with the labels showing verdant crops basking in the Sun, contented cows munching grass in expansive fields, and chickens being chickens, scratching for bugs at liberty. If you're already buying these “organic” products and verging on the sin of smugness for doing so, this is not your book—or maybe it is. The author digs into the “industrial organic” state of the art and discovers that while there are certainly benefits to products labelled “organic” (no artificial fertilisers or pesticides, for example, which certainly benefit the land if not the product you buy), the U.S. Department of Agriculture (the villain throughout) has so watered down the definition of “organic” that most products with that designation come from “organic” factory farms, feedlots, and mass poultry confinement facilities. As usual, when the government gets involved, the whole thing is pretty much an enormous scam, which is ultimately damaging to those who are actually trying to provide products with a sustainable solar-powered food chain which respects the land and the nature of the animals living on it.

In the second section of the book, the author explores this alternative by visiting Polyface Farms in Virginia, which practices “grass farming” and produces beef, pork, chickens and eggs, turkeys, rabbits, and forest products for its local market in Virginia. The Salatin family, who owns and operates the farm, views its pastures as a giant solar collector, turning incident sunlight along with water collected by the surrounding forest into calories which feed their animals. All of the animal by-products (even the viscera and blood of chickens slaughtered on site) are recycled into the land. The only outside inputs into the solar-powered cycle are purchased chicken feed, since grass, grubs, and bugs cannot supply adequate energy for the chickens. (OK, there are also inputs of fuel for farm machinery and electricity for refrigeration and processing, but since the pastures are never ploughed, these are minimal compared to a typical farm.)

Polyface performs not only intensive agriculture, but what Salatin calls “management intensive” farming—an information age strategy informed by the traditional ecological balance between grassland, ruminants, and birds. The benefit is not just to the environment, but also in the marketplace. A small holding with only about 100 acres under cultivation is able to support an extended family, produce a variety of products, and by their quality attract customers willing to drive as far as 150 miles each way to buy them at prices well above those at the local supermarket. Anybody who worries about a possible collapse of the industrial food chain and has provided for that contingency by acquiring a plot of farm land well away from population centres will find much to ponder here. Remember, it isn't just about providing for your family and others on the farm: if you're providing food for your community, they're far more likely to come to your defence when the starving urban hordes come your way to plunder.

Finally, the author seeks to shorten his personal food chain to the irreducible minimum by becoming a hunter-gatherer. Overcoming his blue-state hoplophobia and handed-down mycophobia, he sets out to hunt a feral pig in Sonoma County, California and gather wild mushrooms and herbs to accompany the meal. He even “harvests” cherries from a neighbour's tree overhanging a friend's property in Berkeley under the Roman doctrine of usufruct and makes bread leavened with yeast floating in the air around his house. In doing so, he discovers that there is something to what he had previously dismissed as purple prose in accounts of hunters, and that there is a special satisfaction and feeling of closing the circle in sharing a meal with friends in which every dish was directly obtained by them, individually or in collaboration.

This exploration of food: its origins, its meaning to us, and its place in our contemporary civilisation, makes clear the many stark paradoxes of our present situation. It is abundantly clear that the industrial food chain is harmful to the land, unsustainable due to dependence on finite resources, cruel to animals caught up in it, and unhealthy in many ways to those who consume its products. And yet abandoning it in favour of any of the alternatives presented here would result in a global famine which would make the Irish, Ukrainian, and Chinese famines of the past barely a blip on the curve. Further, billions of the Earth's inhabitants today can only dream of the abundance, variety, and affordability (in terms of hours worked to provide one's food needs) of the developed world diet. And yet at the same time, when one looks at the epidemic of obesity, type 2 diabetes, and other metabolic disorders among corn-fed populations, you have to wonder whether Zea mays is already looking beyond us and plotting its next conquest.

 Permalink

Baxter, Stephen. Manifold: Time. New York: Del Rey, 2000. ISBN 978-0-345-43076-2.
One claim frequently made by detractors of “hard” (scientifically realistic) science fiction is that the technical details crowd out character development and plot. While this may be the case for some exemplars of the genre, this magnificent novel, diamondoid in its science, is as compelling a page-turner as any thriller I've read in years, and is populated with characters who are simultaneously several sigma eccentric yet believable, who discover profound truths about themselves and each other as the story progresses. How hard the science? Well, this is a story in which quantum gravity, closed timelike curves, the transactional interpretation of quantum mechanics, strange matter, the bizarre asteroid 3753 Cruithne, cosmological natural selection, the doomsday argument, Wheeler-Feynman absorber theory, entrepreneurial asteroid mining, vacuum decay, the long-term prospects for intelligent life in an expanding universe, and sentient, genetically-modified cephalopods all play a part, with the underlying science pretty much correct, at least as far as we understand these sometimes murky areas.

The novel, originally published in 2000, takes place in 2010 and subsequent years. NASA's human spaceflight program is grounded, and billionaire Reid Malenfant is ready to mount his own space program based on hand-me-down Shuttle hardware used to build a Big Dumb Booster with the capability to conduct an asteroid prospecting and proof-of-concept mining mission with a single launch from the private spaceport he has built in the Mojave desert. Naturally, NASA and the rest of the U.S. government are doing everything they can to obstruct him. Cornelius Taine, of the mysterious and reputedly flaky Eschatology, Inc., one of Malenfant's financial backers, comes to him with what may be evidence of “downstreamers”—intelligent beings in the distant future—attempting to communicate with humans in the present. Malenfant, who is given to such things, veers off onto a tangent and re-purposes his asteroid mission to search for evidence of contact from the future.

Meanwhile, the Earth is going progressively insane. Super-intelligent children are being born at random all around the world, able to intuitively solve problems which have defied researchers for centuries, and for some reason obsessed with the image of a blue disc. Fear of the “Carter catastrophe”, which predicts, based upon the Copernican principle and Bayesian inference, that human civilisation is likely to end in around 200 years, has uncorked all kinds of craziness, ranging from apathy, hedonism, and denial to suicide cults, religious revivals, and wars aimed at settling old scores before the clock runs out. Ultimately, the only way to falsify the doomsday argument is to demonstrate that humans did survive well into the future beyond it, and Malenfant's renegade mission becomes the focus of global attention, with all players attempting to spin its results, whatever they may be, in their own interest.
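The arithmetic behind such doomsday predictions can be sketched with Gott's Copernican-principle bound: if there is nothing special about the moment at which we observe a phenomenon, then with 95% confidence its future duration lies between 1/39 and 39 times its past duration. A minimal Python sketch, with inputs that are illustrative assumptions of mine rather than figures from the novel:

```python
def gott_interval(past_duration, confidence=0.95):
    """Bounds on future duration given past duration, assuming our
    observation point falls at a random fraction of the total span."""
    f = (1.0 - confidence) / (1.0 + confidence)   # equals 1/39 at 95%
    return past_duration * f, past_duration / f

# Applied to elapsed time: roughly 200,000 years of Homo sapiens (assumption).
lo_yr, hi_yr = gott_interval(200_000)

# The Carter-Leslie variant counts birth rank instead of years: with
# ~100 billion humans born so far and ~130 million births per year
# (both assumptions), the lower bound shrinks to mere decades.
lo_b, hi_b = gott_interval(100e9)
print(f"years:  {lo_yr:,.0f} to {hi_yr:,.0f} more years")
print(f"births: {lo_b/130e6:,.0f} to {hi_b/130e6:,.0f} more years of births")
```

The novel's figure of about 200 years comes from the population-weighted form of the argument, in which continued growth implies most humans ever born live near the end; the time-based bound alone gives far longer horizons.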

This is a story which stretches from the present day to a future so remote and foreign to anything in our own experience that it is almost incomprehensible to us (and the characters through which we experience it) and across a potentially infinite landscape of parallel universes, in which intelligence is not an epiphenomenon emergent from the mindless interactions of particles and fields, but rather a central player in the unfolding of the cosmos. Perhaps the ultimate destiny of our species is to be eschatological engineers. That is, unless the squid get there first.

Here you will experience the sense of wonder of the very best science fiction of past golden ages before everything became dark, claustrophobic, and inward-looking—highly recommended.

 Permalink

May 2012

Gergel, Max G. Excuse Me Sir, Would You Like to Buy a Kilo of Isopropyl Bromide? Rockford, IL: Pierce Chemical Company, 1979. OCLC 4703212.
Throughout Max Gergel's long career he has been an unforgettable character for all who encountered him in the many rôles he has played: student, bench chemist, instructor of aviation cadets, entrepreneur, supplier to the Manhattan Project, buyer and seller of obscure reagents to a global clientele, consultant to industry, travelling salesman peddling products ranging from exotic halocarbons to roach killer and toilet bowl cleaner, and evangelist persuading young people to pursue careers in chemistry. With family and friends (and no outside capital) he founded Columbia Organic Chemicals, a supplier of specialty chemicals concentrating on halocarbons but, operating on a shoestring, willing to make almost anything a customer was ready to purchase (even Max drew the line, however, when the silver-tongued director of the Naval Research Laboratory tried to persuade him to make pentaborane).

The narrative is as rambling and entertaining as one imagines sharing a couple (or a couple dozen) drinks with Max at an American Chemical Society meeting would have been. He jumps from family to friends to finances to business to professional colleagues to suppliers to customers to nuggets of wisdom for starting and building a business to eccentric characters he has met and worked with to his love life to the exotic and sometimes bone-chilling chemical syntheses he did in his company's rough and ready facilities. Many of Columbia's contracts involved production of moderate quantities (between a kilogram and several 55 gallon drums) of substances previously made only in test tube batches. This “medium scale chemistry”—situated between the laboratory bench and an industrial facility making tank car loads of the stuff—involves as much art (or, failing that, brute force and cunning) as it does science and engineering, and this leads to many of the adventures and misadventures chronicled here. For example, an exothermic reaction may be simple to manage when you're making a few grams of something—the liberated heat is simply conducted to the walls of the test tube and dissipated: at worst you may only need to add the reagent slowly, stir well, and/or place the reaction vessel in a water bath. But when DuPont placed an order for allene in gallon quantities, this posed a problem which Max resolved as follows.

When one treats 1,2,3-Trichloropropane with alkali and a little water the reaction is violent; there is a tendency to deposit the reaction product, the raw materials and the apparatus on the ceiling and the attending chemist. I solved this by setting up duplicate 12 liter flasks, each equipped with double reflux condensers and surrounding each with half a dozen large tubs. In practice, when the reaction “took off” I would flee through the door or window and battle the eruption with water from a garden hose. The contents flying from the flasks were deflected by the ceiling and collected under water in the tubs. I used towels to wring out the contents which separated, shipping the lower level to DuPont. They complained of solids suspended in the liquid, but accepted the product and ordered more. I increased the number of flasks to four, doubled the number of wash tubs and completed the new order.

They ordered a 55 gallon drum. … (p. 127)

All of this was in the days before the EPA, OSHA, and the rest of the suffocating blanket of soft despotism descended upon entrepreneurial ventures in the United States that actually did things and made stuff. In the 1940s and '50s, when Gergel was building his business in South Carolina, he was free to adopt the “whatever it takes” attitude which is the quintessential ingredient for success in start-ups and small business. The flexibility and ingenuity which allowed Gergel not only to compete with the titans of the chemical industry but become a valued supplier to them is precisely what is extinguished by intrusive regulation, which accounts for why sclerotic dinosaurs are so comfortable with it. On the other hand, Max's experience with methyl iodide illustrates why some of these regulations were imposed:

There is no description adequate for the revulsion I felt over handling this musky smelling, high density, deadly liquid. As residue of the toxicity I had chronic insomnia for years, and stayed quite slim. The government had me questioned by Dr. Rotariu of Loyola University for there had been a number of cases of methyl bromide poisoning and the victims were either too befuddled or too dead to be questioned. He asked me why I had not committed suicide which had been the final solution for some of the afflicted and I had to thank again the patience and wisdom of Dr. Screiber. It is to be noted that another factor was our lack of a replacement worker. (p. 130)

Whatever it takes.

This book was published by Pierce Chemical Company and was never, as best I can determine, assigned either an ISBN or Library of Congress catalogue number. I cite it above by its OCLC Control Number. The book is hopelessly out of print, and used copies, when available, sell for forbidding prices. Your only alternative for laying hands on a print copy is an inter-library loan, for which the OCLC number is a useful reference. (I hear members of the write-off generation asking, “What is this ‘library’ of which you speak?”) I found a scanned PDF edition in the library section of the Sciencemadness.org Web site; the scanned pages are sometimes a little gnarly around the bottom, but readable. You will also find the second volume of Gergel's memoirs, The Ageless Gergel, among the works in this collection.

 Permalink

Levin, Mark R. Ameritopia. New York: Threshold Editions, 2012. ISBN 978-1-4391-7324-4.
Mark Levin seems to have a particularly virtuous kind of multiple personality disorder. Anybody who has listened to his radio program will know him as a combative “no prisoners” advocate for the causes of individual liberty and civil society. In print, however, he comes across as a scholar, deeply versed in the texts he is discussing, who builds his case as the lawyer he is, layer by layer, into a persuasive argument which is difficult to refute except by recourse to denial and emotion, which are the ultimate refuge of the slavers.

In this book, Levin examines the utopian temptation, exploring in detail four utopian visions: Plato's Republic, More's Utopia, Hobbes's Leviathan, and Marx and Engels' Communist Manifesto, with lengthy quotations from the original texts. He then turns to the philosophical foundations of the American republic, exploring the work of Locke and Montesquieu, and the observations of Tocqueville on the reality of democracy in America.

Levin argues that the framers of the U.S. Constitution were well aware of utopian visions, and explicitly rejected them in favour of a system, based upon the wisdom of Locke and Montesquieu, which was deliberately designed to operate in spite of the weaknesses of the fallible humans which would serve as its magistrates. As Freeman Dyson observed, “The American Constitution is designed to be operated by crooks, just as the British constitution is designed to be operated by gentlemen.” Engineers who value inherent robustness in systems will immediately grasp the wisdom of this: gentlemen are scarce and vulnerable to corruption, while crooks are an inexhaustible resource.

For some crazy reason, most societies choose lawyers as legislators and executives. I think they would be much better advised to opt for folks who have designed, implemented, and debugged two or more operating systems in their careers. A political system is, after all, just an operating system that sorts out the rights and responsibilities of a multitude of independent agents, all acting in their own self interest, and equipped with the capacity to game the system and exploit any opportunity for their own ends. Looking at the classic utopias, what strikes this operating system designer is how sadly static they all are—they assume that, uniquely after billions of years of evolution and thousands of generations of humans, history has come to an end and that a wise person can now figure out how all people in an indefinite future should live their lives, necessarily forgoing improvement through disruptive technologies or ideas, as that would break the perfect system.

The American founding was the antithesis of utopia: it was a minimal operating system which was intended to provide the rule of law which enabled civil society to explore the frontiers of not just a continent but the human potential. Unlike the grand design of utopian systems, the U.S. Constitution was a lean operating system which devolved almost all initiative to “apps” created by the citizens living under it.

In the 20th century, as the U.S. consolidated itself as a continental power, emerged as a world class industrial force, and built a two ocean navy, the utopian temptation rose among the political class, who saw in the U.S. not just the sum of the individual creativity and enterprise of its citizens but the potential to build heaven on Earth if only those pesky constitutional constraints could be shed. Levin cites Wilson and FDR as exemplars of this temptation, but for most of the last century both main political parties more or less bought into transforming America into Ameritopia.

In the epilogue, Levin asks whether it is possible to reverse the trend and roll back Ameritopia into a society which values the individual above the collective and restores the essential liberty of the citizen from the intrusive state. He cites hopeful indications, such as the rise of the “Tea Party” movement, but ultimately I find these unpersuasive. Collectivism always collapses, but usually from its own internal contradictions; the way to bet in the long term is on individual liberty and free enterprise, but I expect it will take a painful and protracted economic and societal collapse to flense the burden of bad ideas which afflict us today.

In the Kindle edition the end notes are properly bidirectionally linked to the text, but the note citations in the main text are so tiny (at least when read with the Kindle application on the iPad) that it is almost impossible to tap upon them.

 Permalink

Hunter, Stephen. Soft Target. New York: Simon & Schuster, 2011. ISBN 978-1-4391-3870-0.
This has to be among the worst nightmares of those few functionaries tasked with the “anti-terrorist” mission in the West who are not complacent seat-warmers counting the days until their retirement or figuring out how to advance their careers or gain additional power over the citizens whose taxes fund their generous salaries and benefits. On the Friday after Thanksgiving, a group of Somali militants infiltrate and stage a hostage-taking raid on “America, the Mall” in a suburb of Minneapolis (having nothing to do, of course, with another mega-mall in the vicinity). Implausibly, given the apparent provenance of the perpetrators, they manage to penetrate the mall's SCADA system and impose a full lock-down, preventing escape and diverting surveillance cameras for their own use.

This happens on the watch of Douglas Obobo, commandant of the Minnesota State Police, the son of a Kenyan graduate student and a U.S. anthropologist who, after graduating from Harvard Law School, had artfully played the affirmative action card and traded upon his glibness to hop from job to job, rising in the hierarchy without ever actually accomplishing anything. Obobo views this as a once in a lifetime opportunity to demonstrate how his brand of conciliation and leading from behind can defuse a high-profile confrontation, and thwarts efforts of those under his command to even prepare backup plans should negotiations with the hostage takers fail.

Meanwhile, the FBI tries to glean evidence of how the mall's security systems were bypassed and how the attackers were armed and infiltrated, and comes across clues which suggest a very different spin on the motivation of the attack—one which senior law enforcement personnel may have to seek the assistance of their grandchildren to explain. Marine veteran Ray Cruz finds himself the man on the inside, Die Hard style, and must rely upon his own resources to take down the perpetrator of the atrocities.

I have a few quibbles. These are minor, and constitute only marginal spoilers, but I'll put them behind the curtain to avoid peeving the easily irritated.

Spoiler warning: Plot and/or ending details follow.  
  • On p. 97, FBI sniper Dave McElroy fires at Ray Cruz, whom he takes to be one of the terrorists. Firing down from the roof into the mall, he fails to correct for the angle of the shot (which requires one to hold low compared to a horizontal shot, since the drop corresponds to the horizontal component of the range, which is reduced by the cosine of the angle of the shot). I find it very difficult to believe that a trained FBI sniper would make such an error, even under the pressure of combat. Hunters in mountain country routinely make this correction.
  • On p. 116 the garbage bag containing Reed Hobart's head is said to weigh four pounds. The mass of an average adult human head is around 5 kg, or around 11 pounds. Since Hobart has been described as a well-fed person with a “big head” (p. 112), he is unlikely to be a four pound pinhead. I'd put this down to the ever-green problem of converting between republican and imperial units.
  • Nikki Swagger's television call sign switches back and forth between WUFF and WUSS throughout the book. I really like the idea of a WUSS-TV, especially in Minneapolis.
  • On p. 251, as the lawyers are handing out business cards to escapees from the mall, the telephone area code on the cards is 309, which is in Illinois. Although I grant that it's more likely such bipedal intestinal parasites would inhabit that state than nice Minnesota, is it plausible they could have gotten to the scene in time?
Spoilers end here.  
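The incline-fire correction discussed in the first quibble above is the “rifleman's rule”: treat the target as if it were at the horizontal-equivalent range R·cos θ, which shortens the effective distance and hence the drop, so the shooter holds low. A rough flat-fire sketch in Python, using illustrative numbers of my own rather than anything from the book:

```python
import math

def holdover_drop(slant_range_m, angle_deg, muzzle_velocity_mps):
    """Approximate gravity drop (metres) using the rifleman's rule:
    compute drop as if the target were at the horizontal-equivalent
    range R*cos(theta), under a flat-fire (constant velocity) model."""
    g = 9.81
    horizontal_range = slant_range_m * math.cos(math.radians(angle_deg))
    time_of_flight = horizontal_range / muzzle_velocity_mps
    return 0.5 * g * time_of_flight ** 2

level = holdover_drop(300, 0, 850)    # level shot at 300 m
steep = holdover_drop(300, 45, 850)   # same slant range, 45 degrees down
print(f"level drop {level*100:.1f} cm, inclined drop {steep*100:.1f} cm")
```

Since time of flight scales with cos θ, the drop scales with cos² θ: at 45 degrees the inclined drop is half the level-shot drop, which is why an uncorrected shooter firing down into the mall would shoot high.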

Had, say, 200 of the 1000 mall patrons taken hostage availed themselves of Minnesota's concealed carry law, and had the mall not abridged citizens' God-given right to self-defence, the 16 terrorists would have been taken down in the first 90 seconds after their initial assault. Further, had the would-be terrorists known that one in five of their intended victims was packing, do you think they would have tried it? Just sayin'.

This is an excellent thriller, which puts into stark contrast just how vulnerable disarmed populations are in the places they gather in everyday life, and how absurd the humiliating security theatre is at barn doors where the horses have fled more than a decade ago. It is in many ways deeply cynical, but that cynicism is well-justified by the reality of the society in which the story is set.

A podcast interview with the author is available.

 Permalink

Chertok, Boris E. Rockets and People. Vol. 1. Washington: National Aeronautics and Space Administration, [1999] 2005. ISBN 978-1-4700-1463-6, NASA SP-2005-4110.
This is the first book of the author's monumental four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Chertok saw it all, from the earliest Soviet experiments with rocketry in the 1930s and the uncovering of the secrets of the German V-2 amid the rubble of postwar Germany (he was the director of the Institute RABE, where German and Soviet specialists worked side by side laying the foundations of postwar Soviet rocketry), through the glory days of Sputnik and Gagarin and the anguish of losing the Moon race, to the emergence of Soviet preeminence in long-duration space station operations.

The first volume covers Chertok's career up to the conclusion of his work in Germany in 1947. Unlike Challenge to Apollo, which is a scholarly institutional and technical history (and consequently rather dry reading), Chertok gives you a visceral sense of what it was like to be there: sometimes chilling, as in his descriptions of the 1930s where he matter-of-factly describes his supervisors and colleagues as having been shot or sent to Siberia just as an employee in the West would speak of somebody being transferred to another office, and occasionally funny, as when he recounts the story of the imperious Valentin Glushko showing up at his door in a car belching copious smoke. It turns out that Glushko had driven all the way with the handbrake on, and his subordinate hadn't dared mention it because Glushko didn't like to be distracted when at the wheel.

When the Soviets began to roll out their space spectaculars in the late 1950s and early '60s, some in the West attributed their success to the Soviets having gotten the “good German” rocket scientists while the West ended up with the second team. Chertok's memoir puts an end to such speculation. By the time the Americans and British vacated the V-2 production areas, they had packed up and shipped out hundreds of rail cars of V-2 missiles and components and captured von Braun and all of his senior staff, who delivered extensive technical documentation as part of their surrender. This left the Soviets with pretty slim pickings, and Chertok and his staff struggled to find components, documents, and specialists left behind. This put them at a substantial disadvantage compared to the U.S., but forced them to reverse-engineer German technology and train their own people in the disciplines of guided missilery rather than rely upon a German rocket team.

History owes a great debt to Boris Chertok not only for the achievements in his six decade career (for which he was awarded Hero of Socialist Labour, the Lenin Prize, the Order of Lenin [twice], and the USSR State Prize), but for living so long and undertaking to document the momentous events he experienced at the first epoch at which such a candid account was possible. Only after the fall of the Soviet Union could the events chronicled here be freely discussed, and the merits and shortcomings of the Soviet system in accomplishing large technological projects be weighed.

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. Words are occasionally run together and capitalisation is missing on some proper nouns. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

 Permalink

June 2012

McCarry, Charles. Ark. New York: Open Road, 2011. ISBN 978-1-4532-5820-0.
Ick!

All right, I suppose some readers will wish me to expand somewhat on the capsule review in the first paragraph, but it really does say it all. The author is a veteran and bestselling author of spy fiction (and former deep cover CIA agent) who is best known for his Paul Christopher novels. Here he turns his hand to science fiction and promptly trips over his cloak and inflicts a savage dagger wound on the reader.

The premise is that since the Earth's core has been found to rotate faster than the outer parts of the planet (a “discovery” found, subsequent to the novel's publication, to have been in error by six orders of magnitude), the enormous kinetic energy of the core is periodically dissipated by being coupled to the mantle and crust, resulting in a “hyperquake” in which the Earth's crust would be displaced not metres on a localised basis, but kilometres and globally. This is said to explain at least some of the mass extinctions in the fossil record.

Henry Peel, an intuitive super-genius who has become the world's first trillionaire based upon his invention of room temperature superconductivity and practical fusion power, but who lives incognito, protected by his ex-special forces “chaps”, sees this coming (in a vision, just like his inventions), and decides to use his insight and wealth to do something about it. And now I draw the curtain, since this botched novel isn't worth carefully crafting non-spoiler prose to describe the multitudinous absurdities with which it is festooned.

Spoiler warning: Plot and/or ending details follow.  
For no reason apparent in the text, Henry recruits the protagonist and narrator, a somewhat ditzy female novelist (at one point she invites a stalker to her hide-out apartment because she forgets the reason she moved there in the first place). This character makes occasional off-the-wall suggestions which Henry, for some reason, finds profound, and becomes a member of Henry's inner circle and eventually closer still.

Henry decides that the way to survive the coming extinction event is to build a spacecraft which can cruise the solar system for generations, tended by a crew that reproduces itself, and carrying a cargo of genetically enhanced (oops!—never mind—Henry changes his mind and goes with showroom stock H. sap genome) embryos which can be decanted to establish colonies on the planets and moons and eventually repopulate the Earth. To this end, he invents:

  • A single stage to orbit reusable spaceplane powered by a new kind of engine which does not emit a rocket plume
  • A space drive which “would somehow draw its fuel from the charged particles in the solar wind”
  • Artificial gravity, based upon diamagnetism

Whenever an invention is needed to dig this plot out of a hole, Henry just has a vision and out it pops. Edison be damned—for Henry it's 100% inspiration and hold the perspiration!

He builds this enormous infrastructure in Mongolia, just across the border from China, having somehow obtained a free hand to do so while preserving his own off-the-radar privacy.

Sub-plots come and go with wild abandon. You think something's going to be significant, and then it just sputters out or vanishes as if it never happened. What the heck is with that circle of a dozen missiles in Mongolia, anyway? And you could take out the entire history and absurdly implausible coincidence of the narrator's meeting her rapist without any impact on the plot. And don't you think a trillionaire would have somebody on staff who could obtain a restraining order against the perp and hire gumshoes to keep an eye on his whereabouts?

Fundamentally, people and institutions do not behave the way they do in this story. How plausible is it that a trillionaire, building a vast multinational infrastructure for space migration, would be able to live off the radar in New York City, without any of the governments of the jurisdictions in which he was operating taking notice of his activities? Or that the media would promptly forget a juicy celebrity scandal involving said trillionaire because a bunch of earthquakes happened? Or that once the impending end of human civilisation became public that everybody would get bored with it and move on to other distractions? This whole novel reads like one of my B-list dreams: disconnected, abstracted from reality, and filled with themes that fade in and out without any sense of continuity. I suppose one could look at it as a kind of end-times love story, but who cares about love stories involving characters who are unsympathetic and implausible?

Spoilers end here.  

One gets the sense that the author hadn't read enough science fiction to fully grasp the genre. It's fine to posit a counterfactual and build the story from that point. But you can't just make stuff up with wild abandon whenever you want, much less claim that it “came in a vision” to an inventor who has no background in the field. Further, the characters (even if they are aliens utterly unlike anything in the human experience, which is not the case here) have to behave in ways consistent with their properties and context.

In a podcast interview with the author, he said that the publisher of his spy fiction declined to publish this novel because it was so different from his existing œuvre. Well, you could say that, but I suspect the publisher was being kind to a valued author in not specifying that the difference was not in genre but rather the quality of the work.

 Permalink

Pournelle, Jerry. A Step Farther Out. Studio City, CA: Chaos Manor Press, [1979, 1994] 2011. ASIN B004XTKFWW.
This book is a collection of essays originally published in Galaxy magazine between 1974 and 1978. They were originally collected into a book published in 1979, which was republished in 1994 with a new preface and notes from the author. This electronic edition includes all the material from the 1994 book plus a new preface which places the essays in the context of their time and the contemporary world.

I suspect that many readers of these remarks may be inclined to exclaim “Whatever possessed you to read a bunch of thirty-year-old columns from a science fiction magazine which itself disappeared from the scene in 1980?” I reply, “Because the wisdom in these explorations of science, technology, and the human prospect is just as relevant today as it was when I first read them in the original book, and taken together they limn the lost three decades of technological progress which have so blighted our lives.” Pournelle not only envisioned what was possible as humanity expanded its horizons from the Earth to become a spacefaring species drawing upon the resources of the solar system, which dwarf those about which the “only one Earth” crowd fret; he also foresaw the constraints which would prevent us from living today in a world, perfectly achievable starting from the 1970s, with fusion, space power satellites, ocean thermal energy conversion, and innovative sources of natural gas providing energy; a robust private space infrastructure with low cost transport to Earth orbit; settlements on the Moon and Mars; exploration of the asteroids with an aim to exploit their resources; and compounded growth of technology which would not only permit human survival but “survival with style”—not only for those in the developed countries, but for all the ten billion who will inhabit this planet by the middle of the present century.

What could possibly go wrong? Well, Pournelle nails that as well. Recall whilst reading the following paragraph that it was written in 1978.

[…] Merely continue as we are now: innovative technology discouraged by taxes, environmental impact statements, reports, lawsuits, commission hearings, delays, delays, delays; space research not carried out, never officially abandoned but delayed, stretched-out, budgets cut and work confined to the studies without hardware; solving the energy crisis by conservation, with fusion research cut to the bone and beyond, continued at level-of-effort but never to a practical reactor; fission plants never officially banned, but no provision made for waste disposal or storage so that no new plants are built and the operating plants slowly are phased out; riots at nuclear power plant construction sites; legal hearings, lawyers, lawyers, lawyers…

Can you not imagine the dream being lost? Can you not imagine the nation slowly learning to “do without”, making “Smaller is Better” the national slogan, fussing over insulating attics and devoting attention to windmills; production falling, standards of living falling, until one day we discover the investments needed to go to space would be truly costly, would require cuts in essentials like food —

A world slowly settling into satisfaction with less, until there are no resources to invest in That Buck Rogers Stuff?

I can imagine that.

As can we all, as now we are living it. And yet, and yet…. One consequence of the Three Lost Decades is that the technological vision and optimistic roadmap of the future presented in these essays are just as relevant to our predicament today as when they were originally published, simply because with a few exceptions we haven't done a thing to achieve them. Indeed, today we have fewer resources with which to pursue them, having squandered our patrimony on consumption and armies of rent-seekers, and having placed generations yet unborn in debt to fund our avarice. But for those who look beyond the noise of the headlines and the platitudes of politicians whose time horizon is limited to the next election, here is a roadmap for a true step farther out, in which the problems we perceive as intractable are not “managed” or “coped with”, but rather solved, just as free people have always done when unconstrained to apply their intellect, passion, and resources toward making their fortunes and, incidentally, creating wealth for all.

This book is available only in electronic form for the Kindle as cited above, under the given ASIN. The ISBN of the original 1979 paperback edition is 978-0-441-78584-1. The formatting in the Kindle edition is imperfect, but entirely readable. As is often the case with Kindle documents, “images and tables hardest hit”: some of the tables take a bit of head-scratching to figure out, as the Kindle (or at least the iPad application which I use) particularly mangles multi-column tables. (I mean, what's with that, anyway? LaTeX got this perfectly right thirty years ago, and in a manner even beginners could use; and this was pure public domain software anybody could adopt. Sigh—three lost decades….) Formatting quibbles aside, I'm as glad I bought and read this book as I was when I first bought it and read it all those years ago. If you want to experience not just what the future could have been, then, but what it can be, now, here is an excellent place to start.
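To illustrate the point, here is the sort of multi-column table LaTeX has typeset correctly since the 1980s (the entries are placeholders, not data from the book):

```latex
\begin{tabular}{lrr}
\hline
         & \multicolumn{2}{c}{Edition} \\
Quantity & First & Second \\
\hline
Value A  & 1     & 2      \\
Value B  & 3     & 4      \\
\hline
\end{tabular}
```

The `\multicolumn{2}{c}{…}` spanning header is exactly the construct the Kindle renderer mangles.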

The author's Web site is an essential resource for those interested in these big ideas, grand ambitions, and the destiny of humankind and its descendants.

 Permalink

Savage, Michael [Michael Alan Weiner]. Abuse of Power. New York: St. Martin's Press, 2011. ISBN 978-0-312-55301-2.
The author, a popular talk radio host who also holds a Ph.D. in nutritional ethnomedicine and has published numerous books under his own name, is best known for his political works, four of which have made the New York Times bestseller list, including one which reached the top of that list. This is his first foray into the fictional thriller genre, adopting a style reminiscent of Rudy Rucker's transrealism, in which the author, or a character closely modelled upon him or her, is the protagonist in the story. In this novel, Jack Hatfield is a San Francisco-based journalist dedicated to digging out the truth and getting it to the public by whatever means available, immersed in the quirky North Beach culture of San Francisco, and banned in Britain for daring to transgress the speech codes of that once-free nation. Sound familiar?

While on a routine ride-along with a friend from the San Francisco Police Department bomb squad, Hatfield finds himself in the middle of a carjacking gone horribly wrong, where the evidence of his own eyes and of witnesses at the scene contradicts the soothing narrative issued by the authorities and swallowed whole by the legacy media. As Hatfield starts to dig beneath the surface, he discovers a trail of murders which seem to point to a cover-up by a shadowy but well-funded and ruthlessly efficient organisation whose motives remain opaque. The chase takes him to various points around the world and finally back to San Francisco, where only he and his small circle of friends can expose and thwart a plot aimed at regime change in the country which fancied itself the regime changer for the rest of the world.

Inevitably, I have some technical quibbles.

Spoiler warning: Plot and/or ending details follow.  
  • On p. 25, it is assumed that a cellular mobile telephone can communicate with a like unit without going through the cellular network (which, in this case, is blocked by a police jammer) if it is in line of sight and close enough to the other telephone. This is not the case; even if it were technologically possible, how would the Phone Company charge you for the call?
  • On p. 144 a terrorist mole is granted a G-2 visa to work at a foreign consulate in the U.S. In fact, a G-2 visa is granted only to individuals travelling to the U.S. to attend meetings of international organisations. The individual in question would have required an A-1 or A-2 diplomatic visa to enter the U.S.
  • On p. 149 Jack takes out a Remington shotgun loaded with 12-gauge rounds, and just two paragraphs later lays “the rifle across his forearm”. A shotgun is not a rifle.
  • This is not a quibble but a high-five. The shortened URL in the decrypted message on p. 257 points precisely where the novel says it does.
  • When will thriller authors sit down and read The Effects of Nuclear Weapons? On p. 355 we're faced with the prospect of a “satchel nuke” being detonated atop one of the towers of the Golden Gate Bridge and told:
    There would have been thousands of deaths within days, tens of thousands within weeks, over a million within a month—many of those among people who would have been needed to keep the infrastructure from collapsing. Doctors, police, workers at power plants and sewage centers. [sic (sentence fragment)] The environment would have become so toxic that rescue workers couldn't have gotten into the area, and poisoned food and water would have added exponentially to the death toll. Airdrops of fresh supplies would have led to riots, more death. Silicon Valley would have been ravaged, all but destroying the U.S. computer industry.
    Nonsense—a plausible satchel nuke of a size which Sara (admittedly a well-trained woman) could carry in a backpack would be something like the U.S. SADM, which weighed around 68 kg, more than most in-shape women weigh. The most common version of this weapon was based upon the W54 warhead, which had a variable yield from 10 tons to 1 kiloton. Assuming the maximum one kiloton yield, a detonation would certainly demolish the Golden Gate Bridge and cause extensive damage to unreinforced structures around the Bay, but the radiation effects wouldn't be remotely as severe as asserted; there would be some casualties to those downwind and in the fallout zone, but these would more likely number in the hundreds, spread over one or more decades after the detonation. The fact that the detonation occurred at the top of a tower taller than those used in most surface detonations at the Nevada Test Site and above water would further reduce fallout. Silicon Valley, which is almost 100 km south of the detonation site, would be entirely unaffected apart from Twitter outages due to #OMG tweets. The whole subplot about the “hydrazine-based rocket fuel” tanker crossing the bridge is silly: hydrazine is nasty stuff to be sure, but first of all it is a hypergolic liquid rocket fuel, not an “experimental solid rocket fuel”. (Duh—if it were solid, why would you transport it in a tanker?) But apart from that, hydrazine is one of those molecules whose atoms really don't like being so close to one another, and given the slightest excuse will re-arrange themselves into a less strained configuration. Being inside a nuclear fireball is an excellent excuse to do so, hence the closer the tanker happened to be to the detonation, the less likely the dispersal of its contents would cause casualties for those downwind.
Spoilers end here.  
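The arithmetic behind natters like these is straightforward: as documented in The Effects of Nuclear Weapons, the distance at which a given blast effect occurs scales as the cube root of yield. A quick sketch (the 1 km reference radius is an illustrative assumption of mine, not a figure from that book):

```python
# Distances for a given blast effect scale as the cube root of yield:
#   D2 = D1 * (W2 / W1) ** (1/3)
def scaled_range(ref_range_km, yield_kt, ref_yield_kt=1.0):
    """Distance at which a given effect occurs, scaled from a reference yield."""
    return ref_range_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

# Assume, purely for illustration, severe damage to unreinforced
# structures out to about 1 km from a 1 kt surface burst.
r_1kt = scaled_range(1.0, 1.0)     # one kiloton
r_1mt = scaled_range(1.0, 1000.0)  # a thousand kilotons: only ten times farther
print(r_1kt, r_1mt)
```

Even a thousandfold increase in yield extends the damage radius only tenfold (1000 to the one-third power is 10), which is why a one kiloton device at the Golden Gate cannot touch a target almost 100 km away.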

This is an enjoyable and promising debut for an author who is embarking upon the craft of the thriller, and none of the natters above (if you chose to read them) detracted from this reader's enjoyment of the story. Is it up to the standard of recent work from masters of the genre such as Vince Flynn or Brad Thor? No—but it's a good read and an auspicious start; I will certainly give forthcoming novels from this author a try.

 Permalink

Hoover, Herbert. Freedom Betrayed. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2011. ISBN 978-0-8179-1234-5.
This book, begun in the days after the attack on Pearl Harbor, became the primary occupation of former U.S. president Herbert Hoover until his death in 1964. He originally referred to it as the “War Book” and titled subsequent draft manuscripts Lost Statesmanship, The Ordeal of the American People, and Freedom Betrayed, which was adopted for this edition. Over the two decades Hoover worked on the book, he and his staff came to refer to it as the “Magnum Opus”, and it is magnum indeed—more than 950 pages in this massive brick of a hardcover edition.

The work began as an attempt to document how, in Hoover's view, a series of diplomatic and strategic blunders committed during the Franklin Roosevelt administration had needlessly prompted Hitler's attack upon the Western democracies, forged a disastrous alliance with Stalin, and deliberately provoked Japan into attacking the U.S. and Britain in the Pacific. This was summarised by Hoover as “12 theses” in a 1946 memorandum to his research assistant (p. 830):

  1. War between Russia and Germany was inevitable.
  2. Hitler's attack on Western Democracies was only to brush them out of his way.
  3. There would have been no involvement of Western Democracies had they not gotten in his (Hitler's) way by guaranteeing Poland (March, 1939).
  4. Without prior agreement with Stalin this constituted the greatest blunder of British diplomatic history.
  5. There was no sincerity on either side of the Stalin-Hitler alliance of August, 1939.
  6. The United States or the Western Hemisphere were never in danger by Hitler.
  7. [This entry is missing in Hoover's typescript—ed.]
  8. This was even less so when Hitler determined to attack Stalin.
  9. Roosevelt, knowing this about November, 1940, had no remote warranty for putting the United States in war to “save Britain” and/or saving the United States from invasion.
  10. The use of the Navy for undeclared war on Germany was unconstitutional.
  11. There were secret military agreements with Britain probably as early of January, 1940.
  12. The Japanese war was deliberately provoked. …

…all right—eleven theses. As the years passed, Hoover expanded the scope of the project to include what he saw as the cynical selling-out of hundreds of millions of people in nations liberated from Axis occupation into Communist slavery, making a mockery of the principles espoused in the Atlantic Charter, reaffirmed on numerous occasions, and endorsed by other members of the Allies, including the Soviet Union. Hoover lays the blame for this betrayal squarely at the feet of Roosevelt and Churchill, and documents how Soviet penetration of the senior levels of the Roosevelt administration promoted Stalin's agenda and led directly to the loss of China to Mao's forces and the Korean War.

As such, this is a massive work of historical revisionism which flies in the face of the mainstream narrative of the origins of World War II and the postwar period. But, far from the rantings of a crank, this is the work of a former President of the United States, who, in his career as an engineer and his humanitarian work after World War I, lived in or travelled extensively through all of the countries involved in the subsequent conflict and had high-level meetings with their governments. (Hoover was the only U.S. president to meet with Hitler; the contemporary notes from his 1938 meeting appear here starting on p. 837.) Further, it is a scholarly examination of the period, with extensive citations and excerpts of original sources. Hoover's work in food relief in the aftermath of World War II provided additional entrée to governments in that period and an on-the-ground view of the situation as communism tightened its grip on Eastern Europe and sought to expand into Asia.

The amount of folly chronicled here is astonishing, and the extent of the human suffering it engendered is difficult to comprehend. Indeed, Hoover's “just the facts” academic style may leave you wishing he expressed more visceral anger at all the horrible things that happened which did not have to. But then Hoover was an engineer, and we engineers don't do visceral all that well. Now, Hoover was far from immune from blunders: his predecessor in the Oval Office called him “wonder boy” for his enthusiasm for grand progressive schemes, and Hoover's mis-handling of the aftermath of the 1929 stock market crash turned what might have been a short and deep recession into the First Great Depression and set the stage for the New Deal. Yet here, I think Hoover the historian pretty much gets it right, and when reading these words, last revised in 1963, one gets the sense that the verdict of history has reinforced the evidence Hoover presents here, even though his view remains anathema in an academy almost entirely in the thrall of slavers.

In the last months of his life, Hoover worked furiously to ready the manuscript for publication; he viewed it as a large part of his life's work and his final contribution to the history of the epoch. After his death, the Hoover Foundation did not proceed to publish the document, for reasons now impossible to determine, since none of the people involved is still alive. One can speculate that they did not wish to embroil the just-deceased founder of their institution in what was sure to be a firestorm of controversy as he contradicted the smug consensus view of progressive historians of the time, but nobody really knows (and the editor, recruited by the successor of that foundation to prepare the work for publication, either did not have access to that aspect of the story or opted not to pursue it). In any case, the editor's work was massive: sorting through thousands of documents and dozens of drafts of the work, trying to discern the author's intent from pencilled-in marginal notes, tracking down citations and verifying quoted material, and writing an introduction of more than a hundred pages explaining the origins of the work, its historical context, and the methodology used to prepare this edition; the editing is a serious work of scholarship in its own right.

If you're acquainted with the period, you're unlikely to learn any new facts here, although Hoover's first-hand impressions of countries and leaders are often insightful. In the decades after Hoover's death, many documents which were under seal of official secrecy have become available, and very few of them contradict the picture presented here. (As a former president with many military and diplomatic contacts, Hoover doubtless had access to some of this material on a private basis, but he never violated these confidences in this work.) What you will learn from reading this book is that a set of facts can be interpreted in more than one way, and that if one looks at the events of 1932 through 1962 through the eyes of an observer who was, like Hoover, fundamentally a pacifist, humanitarian, and champion of liberty, one may end up with a very different impression than that in the mainstream history books. What the conventional wisdom deems a noble battle against evil can, from a different perspective, be seen as a preventable tragedy which not only consigned entire nations to slavery for decades, but sowed the seeds of tyranny in the U.S. as the welfare/warfare state consolidated itself upon the ashes of limited government and individual liberty.

 Permalink

July 2012

Spencer, Robert. Did Muhammad Exist? Wilmington, DE: ISI Books, 2012. ISBN 978-1-61017-061-1.
In 1851, Ernest Renan wrote that Islam “was born in the full light of history…”. But is this the case? What do we actually know of the origins of Islam, the life of its prophet, and the provenance of its holy book? In this thoroughly researched and documented investigation the author argues that the answer to these questions is very little indeed: contemporary evidence for a prophet in Arabia who proclaimed a scripture, led the believers into battle and prevailed, unifying the peninsula, and lived the life documented in the Muslim tradition is entirely nonexistent from the time of Muhammad's supposed life, and did not emerge until decades, in many cases more than a century, later. Further, the historical record shows clear signs, acknowledged by contemporary historians, of having been fabricated by rival factions contending for power in the emerging Arab empire.

What is beyond dispute is that in the century and a quarter between A.D. 622 and 750, Arab armies erupted from the Arabian peninsula and conquered an empire spanning three continents, propagating a change in culture, governance, and religion which remains in effect in much of that region today. The conventional story is that these warriors were the armies of Islam, following their prophet's command to spread the word of their God and bearing his holy writ, the Qur'an, before them as they imposed it upon those they subdued by the sword. But what is the evidence for this?

When you look for it, it's remarkably scanty. As the peoples conquered by the Arab armies were, in many cases, literate, they have left records of their defeat. And in every case, they speak of the invaders as “Hagarians”, “Ishmaelites”, “Muhajirun”, or “Saracens”, and in none of these records is there a mention of an Arab prophet, much less one named “Muhammad”, or of “Islam”, or of a holy book called the “Qur'an”.

Now, for those who study the historical foundations of Christianity or Judaism, these results will be familiar—when you trace the origins of a great religious tradition back to its roots, you often discover that they disappear into a fog of legend which believers must ultimately accept on faith since historical confirmation, at this remove, is impossible. This has been the implicit assumption of those exploring the historical foundations of the Bible for at least two centuries, but it is considered extremely “edgy” to pose these questions about Islam, even today. This is because when you do, the believers are prone to use edgy weapons to cut your head off. Jews and Christians have gotten beyond this, and just shake their heads and chuckle. So some say it takes courage to raise these questions about Islam. I'd say “some” are the kind of cowards who opposed the translation of the Bible into the vernacular, freeing it from the priesthood and placing it in the hands of anybody who could read. And if any throat-slitter should be irritated by these remarks and be inclined to act upon them, be advised that I not only shoot back but, circumstances depending, first.

I find the author's conclusion very plausible. After the Arab conquest, its inheritors found themselves in command of a multicontinental empire encompassing a large number of subject peoples and a multitude of cultures and religious traditions. If you were the ruler of such a newly-cobbled-together empire, wouldn't you be motivated, based upon the experience of those you have subdued, to promulgate a state religion, proclaimed in the language of the conqueror, which demanded submission? Would you not base that religion upon the revelation of a prophet, speaking to the conquerors in their own language, which came directly from God?

It is often observed that Islam, unlike the other Abrahamic religions, is uniquely both a religious and political system, leading inevitably to theocracy (which I've always believed misnamed—I'd have no problem with theocracy: rule by God; it's rule by people claiming to act in His name that always seems to end badly). But what if Islam is so intensely political precisely because it was invented to support a political agenda—that of the Arabic Empire of the Umayyad Caliphate? It's not that Islam is political because its doctrine encompasses politics as well as religion; it's that it's political because it was designed that way by the political rulers who directed the compilation of its sacred books and traditions, and spread it by the sword to further their imperial ambitions.

 Permalink

Grace, Tom. The Liberty Intrigue. Unknown: Dunlap Goddard, 2012. ISBN 978-0-9656040-1-7.
This novel is a kind of parallel-universe account of the 2012 presidential election in the United States. Rather than the actual contest, featuring a GOP challenger who inspires about as much enthusiasm as week-old leftover boiled broccoli, here an outsider, a Yooper engineer, Ross Egan, who has spent his adult life outside the U.S. and shared the Nobel Peace Prize for ending a bloody conflict in an African nation and helping to bring about an economic renaissance for its people, returns to the land of his birth and is persuaded to seek the presidency in a grass-roots, no-party bid.

Intrigue swirls around the contest from all sides. The incumbent and his foreign-born billionaire speculator backer launch an “operation chaos” intervention in open primary states intended to ensure no Republican arrives at the convention with a majority; a shadowy Internet group calling itself “WHO IS I” (based upon the grammar, I'd start by looking at those who frequent the Slashdot site) makes its presence known by a series of highly visible hack attacks and then sets itself up as an independent real-time fact-checker of the pronouncements of politicians. Opposition research turns up discrepancies in the origin of Egan's vast fortune, and a potentially devastating secret which can be sprung upon him in the last days of the campaign.

This just didn't work for me. The novel attempts to be a thriller but never actually manages to be thrilling. There are unexplained holes in the plot (Egan's energy invention is even more airy in its description than John Galt's motor) and characters often seem to act in ways that just aren't consistent with what we know of them and the circumstances in which they find themselves. Finally, the novel ends with the election, when the really interesting part would be what happens in its aftermath. All in all, if you're looking for a U.S. presidential election thriller and don't mind it being somewhat dated, I'd recommend Aaron Zelman and L. Neil Smith's Hope (March 2002) instead of this book.

I use “Unknown” as the publisher's domicile in the citation above because neither the book nor the contact page on the publisher's Web site provides this information. A WHOIS query on their domain name indicates it is hidden behind a front named “Domain Discreet Privacy Service” of Jacksonville, Florida. Way to go with the transparency and standing up in public for what you believe, guys!

 Permalink

Pendle, George. Strange Angel. New York: Harcourt, 2005. ISBN 978-0-15-603179-0.
For those who grew up after World War II, “rocket science” meant something extremely difficult, on the very edge of the possible, pursued by the brightest of the bright, often at risk of death or dire injury. In the first half of the century, however, “rocket” was a pejorative, summoning images of pulp magazines full of “that Buck Rogers stuff”, fireworks that went fwoosh—flash—bang if all went well, and often in the other order when it didn't, with aspiring rocketeers borderline lunatics who dreamed of crazy things like travelling to the Moon but usually ended up blowing things up, including, but not limited to, themselves.

This was the era in which John Whiteside “Jack” Parsons came of age. Parsons was born and spent most of his life in Pasadena, California, a community close enough to Los Angeles to participate in its frontier, “anything goes” culture, but also steeped in well-heeled old wealth, largely made in the East and seeking the perpetually clement climate of southern California. Parsons was attracted to things that went fwoosh and bang from the very start. While still a high school senior, he was hired by the Hercules Powder Company, and continued to support himself as an explosives chemist for the rest of his life. He never graduated from college, much less pursued an advanced degree, but his associates and mentors, including legends such as Theodore von Kármán, were deeply impressed by his knowledge and meticulously careful work with dangerous substances and gave him their highest recommendations. On several occasions he was called as an expert witness to testify in high-profile trials involving bombings.

And yet, at the time, to speak seriously about rockets was as outré as to admit one was a fan of “scientifiction” (later science fiction), or a believer in magic. Parsons was all-in on all of them. An avid reader of science fiction and member of the Los Angeles Science Fantasy Society, Parsons rubbed shoulders with Ray Bradbury, Robert Heinlein, and Forrest J. Ackerman. On the darker side, Parsons became increasingly involved in the Ordo Templi Orientis (OTO), followers of Aleister Crowley, and practitioners of his “magick”. One gets the sense that Parsons saw no conflict whatsoever among these pursuits—all were ways to transcend the prosaic everyday life and explore a universe enormously larger and stranger than even that of Los Angeles and its suburbs.

Parsons and his small band of rocket enthusiasts, “the suicide squad”, formed an uneasy alliance with the aeronautical laboratory of the California Institute of Technology, and with access to their resources and cloak of respectability, pursued their dangerous experiments first on campus and then, after a few embarrassing misadventures, in the Arroyo Seco behind Pasadena. With the entry of the United States into World War II, the armed services had difficult problems to solve which overcame the giggle factor of anything involving the word “rocket”. In particular, the U.S. Navy had an urgent need to launch heavily-laden strike aircraft from short aircraft carrier decks (steam catapults were far in the future), and was willing to consider even Buck Rogers rockets to get them off the deck. Well, at least as long as you didn't call them “rockets”! So, the Navy sought to procure “Jet Assisted Take-Off” units, and Caltech created the “Jet Propulsion Laboratory” with Parsons as a founder to develop them, and then its members founded the Aerojet Engineering Corporation to build them in quantity. Nope, no rockets around here, nowhere—just jets.

Even as Parsons' rocket dreams came true and began to make him wealthy, he never forsook his other interests: they were all integral to him. He advanced in Crowley's OTO, became a regular correspondent of the Great Beast and proprietor of the OTO lodge in Pasadena, home to a motley crew of bohemians who prefigured the beatniks and hippies of the 1950s and '60s. And he never relinquished his interest in science fiction, taking author L. Ron Hubbard into his community. Hubbard, a world-class grifter even in his early days, took off with Parsons' girlfriend and most of his savings on the promise of buying yachts in Florida and selling them at a profit in California. Uh-huh! I'd put it down to destructive engrams.

Amidst all of this turmoil, Parsons made one of the most important inventions in practical rocketry of the 20th century. Apart from the work of Robert Goddard, which occurred largely disconnected from others due to Goddard's obsessive secrecy following his earlier humiliation by learned ignoramuses, and the work by the German rocket team, conducted in secrecy in Nazi Germany, rockets mostly meant solid rockets, and solid rockets were little changed from mediaeval China: tubes packed with this or that variant of black powder which went fwoosh all at once when ignited. Nobody before Parsons saw an alternative to this. When faced by the need for a reliable, storable, long-duration burn propellant for Navy JATO boosters, he came up with the idea of castable solid propellant (initially based upon asphalt and potassium perchlorate), which could be poured as a liquid into a booster casing with a grain shape which permitted tailoring the duration and thrust profile of the motor to the mission requirements. Every single solid rocket motor used today employs this technology, and Jack Parsons, high school graduate and self-taught propulsion chemist, invented it all by himself.
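To see why the grain shape matters, consider a toy model: in a simple tubular grain the propellant burns radially outward from a central port, so the burning surface, and hence the thrust, grows as the port widens. A minimal sketch, with arbitrary dimensions of my own choosing:

```python
import math

def tubular_burn_area(port_radius_m, grain_length_m):
    """Burning surface of a hollow cylindrical grain burning radially outward."""
    return 2.0 * math.pi * port_radius_m * grain_length_m

# As the web burns away the port widens, so the burning area (and with it
# the thrust) grows: a "progressive" thrust profile.
length = 1.0  # metres, arbitrary
areas = [tubular_burn_area(r, length) for r in (0.05, 0.10, 0.15)]
assert areas[0] < areas[1] < areas[2]
```

A star-shaped port, by contrast, can hold the burning perimeter nearly constant for a neutral profile; pouring such cross-sections into a casing is exactly what a castable propellant makes practical.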

On June 17th, 1952, an explosion destroyed a structure on Pasadena's Orange Grove Avenue where Jack Parsons had set up his home laboratory prior to his planned departure with his wife to Mexico. He said he had just one more job to do for his client, a company producing explosives for Hollywood special effects. Parsons was gravely injured and pronounced dead at the hospital.

The life of Jack Parsons was one which could only have occurred in the time and place he lived it. It was a time when a small band of outcasts could have seriously imagined building a rocket and travelling to the Moon; a time when the community they lived in was aboil with new religions, esoteric cults, and alternative lifestyles; and an entirely new genre of fiction was exploring the ultimate limits of the destiny of humanity and its descendants. Jack swam in this sea and relished it. His short life (just 37 years) was lived in a time and place which has never existed before and likely will never exist again. The work he did, the people he influenced, and the consequences cast a long shadow still visible today (every time you see a solid rocket booster heave a launcher off the pad, its coruscant light, casting that shadow, is Jack Parsons' legacy). This is a magnificent account of a singular life which changed our world, one commemorated on the rock next door. On the lunar far side, the 40-kilometre-diameter crater Parsons is named for the man who dreamt of setting foot, by rocketry or magick, upon that orb and, in his legacy, finally did with a big footprint indeed—more than eight times larger than the one named for that Armstrong fellow.

 Permalink

Varley, John. Red Thunder. New York: Ace, 2003. ISBN 978-0-441-01162-9.
In my review of Ark (June 2012), I wrote that one of the most time-tested forms of science fiction was assuming a counterfactual (based upon present knowledge and conventional wisdom) and then spinning out the consequences which follow logically from it. While Ark was a disappointment, this full-on romp shows just how well the formula works when employed by a master of the genre. First, one must choose the counterfactual carefully. In this case Varley vaults over the stumbling block of most near-future science fiction and harks back to Doc Smith's Skylark novels by asking, “What if propulsion were not the problem?”.

This sets the stage for the kind of story many might have thought laughably obsolete in the 21st century: a bunch of intrepid misfits building their own spaceship and blasting off for Mars, beating en-route Chinese and American expeditions, and demonstrating their world-transforming technology in a way that no government would be able to seize for its own benefit. The characters are not supermen, but rather people so like those you know that they're completely believable, and they develop in the story as they find themselves, largely through the luck of being in the right place at the right time, able to accomplish extraordinary things. There are plenty of laughs along the way, as well as the deeply moving backstory of the characters, especially that of the semi-autistic savant Jubal Broussard who stumbles onto the discovery that changes everything for humanity, forever. His cousin, disgraced ex-astronaut Travis Broussard, gets to experience the “heady feeling to put the President on hold, refuse an order, and hang up on her, all in the space of ten minutes.” (p. 392)

The novel, dedicated to Spider Robinson and Robert A. Heinlein, is the peer of their greatest works and an absolute hoot—enjoy!

 Permalink

Lehrman, Lewis E. The True Gold Standard. Greenwich, CT: Lehrman Institute, 2011. ISBN 978-0-9840178-0-5.
Nothing is more obvious than that the global financial system is headed for an inevitable crack-up of epic proportions. Fiat (paper) money systems usually last about forty years before imploding in the collapse of the credit expansion bubbles they predictably create. We are now 41 years after the United States broke the link between the world's reserve currency, the U.S. dollar, and gold. Since then, every currency in the world has been “floating”—decoupled from any physical backing, and valued only by comparison with the others. Uniquely in human history, all of the world now uses paper money, and they are all interlinked in a global market where shifts in sentiment or confidence can cause trillion dollar excursions in the wealth of nations in milliseconds. The risk of “contagion”, where loss of confidence in one paper currency causes a rush to the next, followed by attempts to limit its appreciation by its issuer, and a cascading race to the bottom has never been greater. The great currency and societal collapses of the past, while seeming apocalyptic to those living through them, were local; the next one is likely to be all-encompassing, with consequences which are difficult to imagine without venturing into speculative fiction.

I believe the only way to avoid this cataclysm is to get rid of all of the debt which can never be repaid and promises which can never be met, pop the credit bubble, and replace the funny money upon which the entire delusional system is based with the one standard which has stood the test of millennia: gold. If you were designing a simulation for people to live in and wanted to provide an ideal form of money, it would be hard to come up with something better than element 79. It doesn't corrode or degrade absent exposure to substances so foul as to make even thrill-seeking chemists recoil; it's easily divisible into quantities as small as one wishes and easy to certify as genuine; and it has few applications which consume it, which means that the above-ground supply is essentially constant. It is also very difficult and costly to mine, which means that the supply grows almost precisely in synchronism with the world's population and its wealth—consequently, as a monetary standard it supports a stable price level, is incapable of manipulation by politicians, bankers, or other criminal classes, and is freely exchangeable by free people everywhere without the constraints imposed by the slavers upon users of their currencies.

Now, when one discusses the gold standard, there is a standard litany of objections from those bought into the status quo.

  • It's a step back into the past.
  • There isn't enough gold to go around.
  • It's inflexible and unable to cope with today's dynamic economy.
  • There's no way to get from here to there.

This book dispenses with these arguments in order. If we step back from the abyss of a financial cataclysm into a past with stable prices, global free trade, and the ability to make long-term investments which benefitted everybody, what's so bad about that? It doesn't matter how much gold there is—all that matters is that the quantity doesn't change at the whim of politicians: existing currencies will have to be revalued against gold, but the process of doing so will write down unpayable debts and restore solvency to the international financial system. A gold standard is inflexible by design: that's its essential feature, not a bug. Flexibility in a modern economy is provided by the myriad means of extension of credit, all of which will be anchored to reality by a stable unit of exchange. Finally, this work provides a roadmap for getting from here to there, with a period of price discovery preceding full convertibility of paper money to gold, and the possibility of convertibility being implemented either by a single country (creating a competitive advantage for its currency) or by a group of currency issuers working together. The author assumes the first currency to link to gold will be called the dollar, but I'll give equal odds it will be called the dinar, yuan, or rouble. It is difficult to get from here to there, but one must never forget the advantage that accrues to him who gets there first.

The assumption throughout is that the transition from the present paper money system to gold-backed currencies is continuous. While this is an outcome much to be preferred, I think it is, given the profligate nature of the ruling classes and their willingness to postpone any difficult decision even if doing so buys a mere week or two, not the way to bet. Still, even if we find ourselves crawling from the wreckage of a profoundly corrupt international financial system, this small book provides an excellent roadmap for rebuilding a moral, equitable, and sustainable system which will last for five decades or so…until the slavers win office again.

This is a plan which assumes existing institutions more or less stay in place, and that government retains its power to mandate currency at gunpoint. A softer path to hard currency might simply be allowing competing currencies, all exempt from tax upon conversion, to be used in transactions, contracts, and financial instruments. I might choose to use grammes of gold; you may prefer Euros; my neighbour may opt for Saudi certificates redeemable in barrels of crude oil; and the newlyweds down the street may go for Iowa coins exchangeable for a bushel of corn. The more the better! They'll all be seamlessly exchangeable for one another at market rates when we beam them to one another with our mobile phones or make payments, and the best ones will endure. The only losers will be archaic institutions like central banks, governments, and their treasuries. The winners will be the people who created the wealth and are empowered to store and exchange it as they wish.

 Permalink

Grisham, John. The Litigators. New York: Bantam Books, [2011] 2012. ISBN 978-0-345-53688-4.
Every now and then you come across a novel where it's obvious, from the first few pages, that the author had an absolute blast telling the story, and when that's the case, the reader is generally in for a treat. This is certainly the case here.

David Zinc appeared to have it all. A Harvard Law graduate, senior associate at Chicago mega-firm Rogan Rothberg working in international bond finance, earning US$300,000 a year, with a good shot at making partner (where the real gravy train pulls into the station), he had the house, the car, and a beautiful wife pursuing her Ph.D. in art history. And then one grim Chicago morning, heading to the office for another exhausting day doing work he detested with colleagues he loathed, enriching partners he considered odious (and knowing that, if he eventually joined their ranks, the process of getting there would have made him just the same), he snapped. Suddenly, as the elevator ascended, he realised as clearly as anything he'd ever known in his life, “I cannot do this any more”.

And so, he just walked away, found a nearby bar that was open before eight in the morning, and decided to have breakfast. A Bloody Mary would do just fine, thanks, and then another and another. After an all-day bender, blowing off a client meeting and infuriating his boss, texting his worried wife that all was well despite the frantic calls to her from the office asking where he was, he hailed a taxi, not sure where he wanted to go, then, spotting an advertisement on the side of a bus, told the driver to take him to the law offices of Finley & Figg, Attorneys.

This firm was somewhat different from the one he'd walked out of earlier that day. Oscar Finley and Wally Figg described their partnership as a “boutique firm”, but their stock in trade was quickie no-fault divorces, wills, drunk driving, and that mainstay of ground floor lawyering, personal accident cases. The firm's modest office was located near a busy intersection which provided an ongoing source of business, and the office was home to a dog named AC (for Ambulance Chaser), whose keen ears could pick up the sound of a siren even before a lawyer could hear it.

Staggering into the office, David offers his services as a new associate and, by soused bravado more than Harvard Law credentials, persuades the partners that the kid has potential, whereupon they sign him up. David quickly discovers an entire world of lawyering they don't teach at Harvard: where lawyers carry handguns in their briefcases along with legal pads, and with good reason; where making the rounds of prospective clients involves visiting emergency rooms and funeral homes, and where dissatisfied clients express their frustration in ways that go well beyond drafting a stern memorandum.

Soon, the firm stumbles onto what may be a once-in-a-lifetime bonanza: a cholesterol drug called Krayoxx (no relation to Vioxx—none at all) which seems to cause those who take it to drop dead with heart attacks and strokes. This vaults the three-lawyer firm into the high-rolling world of mass tort litigation, whose players have their own private jets and golf courses. Finley & Figg ends up at the pointy end of the spear in the litigation, which doesn't precisely go as they had hoped.

I'd like to quote one of the funniest paragraphs I've read in some time, but as there are minor spoilers in it, I'll put it behind the curtain. This is the kind of writing you'll be treated to in this novel.

Spoiler warning: Plot and/or ending details follow.  
While Wally doodled on a legal pad as if he were heavily medicated, Oscar did most of the talking. “So, either we get rid of these cases and face financial ruin, or we march into federal court three weeks from Monday with a case that no lawyer in his right mind would try before a jury, a case with no liability, no experts, no decent facts, a client who's crazy half the time and stoned the other half, a client whose dead husband weighed 320 pounds and basically ate himself to death, a veritable platoon of highly paid and very skilled lawyers on the other side with an unlimited budget and experts from the finest hospitals in the country, a judge who strongly favors the other side, a judge who doesn't like us at all because he thinks we're inexperienced and incompetent, and, well, what else? What am I leaving out here, David?”

“We have no cash for litigation expenses,” David said, but only to complete the checklist.

Spoilers end here.  

This story is not just funny, but also a tale of how a lawyer, in diving off the big law rat race into the gnarly world of retail practice rediscovers his soul and that there are actually noble and worthy aspects of the law. The characters are complex and interact in believable ways, and the story unfolds as such matters might well do in the real world. There is quite a bit in common between this novel and The King of Torts (March 2004), but while that is a tragedy of hubris and nemesis, this is a tale of redemption.

 Permalink

Thor, Brad. The First Commandment. New York: Pocket Books, 2007. ISBN 978-1-4516-3566-9.
This is the sixth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In the aftermath of the shocking conclusion to the previous novel, Takedown (November 2011), Department of Homeland Security agent Scot Harvath discovers that he, personally, has become the target of a plot by person or persons unknown, aimed at individuals close to him in a series of attacks of Biblical proportions.

When he starts to follow the trail of evidence back to the source, he is told to stand down by no less than the president of the United States, who declines to specify a reason. Harvath is not a man easily dissuaded, especially when convinced that his loved ones and colleagues are in imminent danger simply due to their association with him, and he goes rogue, enlisting friends in the shadowy world of private security to dig into the mystery. This doesn't sit well with the president, who puts Harvath on a proscription list and dispatches a CIA “Omega Team” to deal with him. At one point a CIA agent and friend, to whom Harvath protests that he has every right to protect those close to him, responds “You don't have any rights. Jack Rutledge is the president of the United States. When he tells you to do something, you do it.” (At this point, I'd have preferred if Harvath decked the CIA goon and explained to him that his rights come from God, not the president of the United States, and that while a politician may try to infringe those rights, they remain inherent to every person. But maybe Harvath has been working so long for the slavers that he's forgotten that.)

As Harvath follows the murky threads, he comes across evidence which suggests a cover-up extending into the oval office, and is forced into an uneasy détente with his nemesis, the pint-sized supervillain known as the Troll, whose data mining prowess permits connecting the dots in an otherwise baffling situation. (People in Harvath's line of work tend not to lack for enemies, after all.)

I found this to be the best Brad Thor novel I've read so far—it's lighter on the action and gadgets and more concentrated on the mystery and the motivations of the malefactors. I prefer to read a series of novels in the order in which they describe the life of the protagonist. This book does contain sufficient background and context so that it will work as a stand-alone thriller, but if you haven't read the previous novels, you'll miss a lot of the complexity of Harvath's relationships with characters who appear here.

 Permalink

August 2012

Chertok, Boris E. Rockets and People. Vol. 2. Washington: National Aeronautics and Space Administration, [1999] 2006. ISBN 978-1-4700-1508-4. NASA SP-2006-4110.
This is the second book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of Chertok's chronicle begins with his return from Germany to the Soviet Union, where he discovers, to his dismay, that day-to-day life in the victorious workers' state is much harder than in the land of the defeated fascist enemy. He becomes part of the project, mandated by Stalin, to first launch captured German V-2 missiles and then produce an exact Soviet copy, designated the R-1. Chertok and his colleagues discover that making a copy of foreign technology may be more difficult than developing it from scratch—the V-2 used a multitude of steel and non-ferrous metal alloys, as well as numerous non-metallic components (seals, gaskets, insulation, etc.) which were not produced by Soviet industry. But without the experience of the German rocket team (which, by this time, was in the United States), there was no way to know whether the choice of a particular material was because its properties were essential to its function or simply because it was readily available in Germany. Thus, making an “exact copy” involved numerous difficult judgement calls where the designers had to weigh the risk of deviation from the German design against the cost of standing up a Soviet manufacturing capacity which might prove unnecessary.

After the difficult start which is the rule for missile projects, the Soviets managed to turn the R-1 into a reliable missile and, through patience and painstaking analysis of telemetry, solved a mystery which had baffled the Germans: why between 10% and 20% of V-2 warheads had detonated in a useless airburst high above the intended target. Chertok's instrumentation proved that the cause was aerodynamic heating during re-entry which caused the high explosive warhead to outgas, deform, and trigger the detonator.

As the Soviet missile program progresses, Chertok is a key player, participating in the follow-on R-2 project (essentially a Soviet Redstone—a V-2 derivative, but entirely of domestic design), the R-5 (an intermediate range ballistic missile eventually armed with nuclear warheads), and the R-7, the world's first intercontinental ballistic missile, which launched Sputnik, Gagarin, and whose derivatives remain in service today, providing the only crewed access to the International Space Station as of this writing.

Not only did the Soviet engineers have to build ever larger and more complicated hardware, they essentially had to invent the discipline of systems engineering all by themselves. While even in aviation it is often possible to test components in isolation and then integrate them into a vehicle, working out interface problems as they manifest themselves, in rocketry everything interacts, and when something goes wrong, you have only the telemetry and wreckage upon which to base your diagnosis. Consider: a rocket ascending may have natural frequencies in its tankage structure excited by vibration due to combustion instabilities in the engine. This can, in turn, cause propellant delivery to the engine to oscillate, which will cause pulses in thrust, which can cause further structural stress. These excursions may cause control actuators to be over-stressed and possibly fail. When all you have to go on is a ragged cloud in the sky, bits of metal raining down on the launch site, and some telemetry squiggles for a second or two before everything went pear shaped, it can be extraordinarily difficult to figure out what went wrong. And none of this can be tested on the ground. Only a complete systems approach can begin to cope with problems like this, and building that kind of organisation required a profound change in Soviet institutions, which had previously been built around imperial chief designers with highly specialised missions. When everything interacts, you need a different structure, and it was part of the genius of Sergei Korolev to create it. (Korolev, who was the author's boss for most of the years described here, is rightly celebrated as a great engineer and champion of missile and space projects, but in Chertok's view at least equally important was his talent in quickly evaluating the potential of individuals and filling jobs with the people [often improbable candidates] best able to do them.)

In this book we see the transformation of the Soviet missile program from slavishly copying German technology to world-class innovation, producing, in short order, the first ICBM, earth satellite, lunar impact, images of the lunar far side, and interplanetary probes. The missile men found themselves vaulted from an obscure adjunct of Red Army artillery to the vanguard of Soviet prestige in the world, with the Soviet leadership urging them on to ever greater exploits.

There is a tremendous amount of detail here—so much that some readers have deemed it tedious: I found it enlightening. The author dissects the Nedelin disaster in forensic detail, as well as the much less known 1980 catastrophe at Plesetsk where 48 died because a component of the rocket used the wrong kind of solder. Rocketry is an exacting business, and it is a gift to generations about to embark upon it to imbibe the wisdom of one who was present at its creation and learned, by decades of experience, just how careful one must be to succeed at it. I could go on regaling you with anecdotes from this book but, hey, if you've made it this far, you're probably going to read it yourself, so what's the point? (But if you do, I'd suggest you read Volume 1 [May 2012] first.)

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

 Permalink

Gelernter, David. America-Lite. New York: Encounter Books, 2012. ISBN 978-1-59403-606-4.
At the end of World War II, the United States bestrode the world like a colossus. All of its industrial competitors had been devastated by the war; it was self-sufficient in most essential resources; it was the unquestioned leader in science, technology, and medicine; its cultural influence was spread around the world by Hollywood movies; and the centre of the artistic and literary world had migrated from Paris to New York. Enabled by the G.I. Bill, veterans of the generation which had won the war swarmed into institutions of higher learning formerly reserved for scions of the wealthy and privileged—by 1947, fully 49% of college admissions were veterans.

By 1965, two decades after the end of the war, it was pretty clear to anybody with open eyes that it all had begun to go seriously wrong. The United States was becoming ever more deeply embroiled in a land war in Asia without a rationale comprehensible to those who paid for it and were conscripted to fight there; the centres of once-great cities were beginning a death spiral in which a culture of dependency spawned a poisonous culture of crime, drugs, and the collapse of the family; the French, humiliatingly defeated in the war and shameful collaborators with the Nazis, were draining the U.S. Treasury of its gold reserves; and the U.S. mint had replaced its silver coins with cheap counterfeit replacements. In August of 1965, the Watts neighbourhood of Los Angeles exploded in riots, and the unthinkable—U.S. citizens battling one another with deadly force in a major city—became the prototype for violent incidents to come. What happened?

In this short book (just 200 pages in the print edition), the author argues that it was what I have been calling the “culture crash” for the last decade. Here, this event is described as the “cultural revolution”: not a violent upheaval as happened in China, but a steady process through which the keys to the élite institutions which transmit the culture from generation to generation were handed over, without a struggle, from the WASP (White Anglo-Saxon Protestant) patricians which had controlled them since Colonial days, to a new intellectual class, influenced by ideas from Continental Europe, which the author calls PORGIs (post-religious globalist intellectuals). Now, this is not to say that there were not intellectuals at top-tier institutions of higher learning before the cultural revolution; but they were not in charge: those who were saw their mission in a fundamentally conservative way—to conserve the grand tradition of Western civilisation by transmitting it to each successive generation, while inculcating in them the moral compass which would make them worthy leaders in business, the military, and public affairs.

The PORGIs had no use for this. They had theory, and if the facts weren't consistent with the theory and the consequences of implementing it disastrously different from those intended, well then the facts must be faulty because the theory was crystalline perfection in itself. (And all of this became manifest well before the cognitive dissonance between academic fantasy and the real world became so great that the intellectuals had to invent postmodernism, denying the very existence of objective reality.)

The PORGIs (Well, I suppose we can at least take comfort that the intellectual high ground wasn't taken over by Corgis; imagine the chaos that would have engendered!) quickly moved to eliminate the core curricula in higher learning which taught Western history, culture, and moral tradition. This was replaced (theory being supreme, and unchallenged), with indoctrination in an ideology unmoored to the facts. Rather than individuals able to think and learn on their own, those educated by the PORGIs became servomechanisms who, stimulated by this or that keyword, would spit out a rote response: “Jefferson?” “White slaveowner!”

These, the generation educated by the PORGIs, starting around the mid 1960s, the author calls PORGI airheads. We all have our own “mental furniture” which we've accumulated over our lives—the way we make sense of the bewildering flow of information from the outside world: sorting it into categories, prioritising it, and deciding how to act upon it. Those with a traditional (pre-PORGI) education, or those like myself and the vast majority of people my age or older who figured it out on their own by reading books and talking to other people, have painfully built our own mental furniture, re-arranged it as facts came in which didn't fit with the ways we'd come to understand things, and sometimes heaved the old Barcalounger out the window when something completely contradicted our previous assumptions. With PORGI airheads, none of this obtains. They do not have the historical or cultural context to evaluate how well their pre-programmed responses fit the unforgiving real world. They are like parrots: you wave a French fry at them and they say, “Hello!” Another French fry, “Hello!” You wave a titanium billet painted to look like a French fry, “Hello!” Beak notched from the attempt to peel the titanium billet, you try it once again.

“Hello!”

Is there anybody who has been visible on the Internet for more than a few years who has not experienced interactions with these people? Here is my own personal collection of greatest hits.

Gelernter argues that Barack Obama is the first PORGI airhead to be elected to the presidency. What some see as ideology may be better explained as servomechanism “Hello!” response to stimuli for which his mentors have pre-programmed him. He knows nothing of World War II, or the Cold War, or of colonialism in Africa, or of the rôle of the British Empire in eradicating the slave trade. All of these were deemed irrelevant by the PORGIs and PORGI airheads who trained him. And the 53% who voted for him were made a majority by the PORGI airheads cranked out every year and injected into the bloodstream of the dying civil society by an educational system almost entirely in the hands of the enemy.

What is to be done? The author's prescription is much the same as my own. We need to break the back of the higher education (and for that matter, the union-dominated primary and secondary education) system and replace it with an Internet-based educational delivery system where students will have access to courses taught by the best pedagogues in the world (ranked in real time not just by student thumbs up and down, but by objectively measured outcomes, such as third-party test scores and employment results). Then we need independent certification agencies, operating in competition with one another much like bond rating agencies, which issue “e-diplomas” based on examinations (not just like the SAT and bar exams, but also in-person and gnarly like a Ph.D. defence for the higher ranks). The pyramid of prestige would remain, as well as the cost structure: a Doctorate in Russian Literature from Harvard would open more doors at the local parking garage or fast food joint than one from Bob's Discount Degrees, but you get what you pay for. And, in any case, the certification would cost a tiny fraction of spending your prime intellectually productive years listening to tedious lectures given by graduate students marginally proficient in your own language.

The PORGIs correctly perceived the U.S. educational system to be the “keys to the kingdom”. They began, in Gramsci's long march through the institutions, to put in place the mechanisms which would tilt the electorate toward their tyrannical agenda. It is too late to reverse it; the educational establishment must be destroyed. “Destroyed?”, you ask—“These are strong words! Do you really mean it? Is it possible?” Now witness the power of this fully armed and operational global data network! Record stores…gone! Book stores…gone! Universities….

In the Kindle edition (which costs almost as much as the hardcover), the end-notes are properly bidirectionally linked to citations in the text, but the index is just a useless list of terms without links to references in the text. I'm sorry if I come across as a tedious “index hawk”, but especially when reviewing a book about declining intellectual standards, somebody has to do it.

 Permalink

Chiles, Patrick. Perigee. Seattle: CreateSpace, 2011. ISBN 978-1-4699-5713-5.
A few years into the future, while NASA bumbles along in its bureaucratic haze and still can't launch humans into space, a commercial “new space” company, Polaris AeroSpace Lines, has taken the next step beyond suborbital tourist hops into space for the well-heeled, and begun both scheduled and charter service in aerospace planes equipped with a combined-cycle powerplant which allows them to fly anywhere on the globe, operating at Mach 10, making multiple skips off the atmosphere, and delivering up to 30 passengers and cargo to any destination in around 90 minutes. Passengers are treated to a level of service and coddling which exceeds first class, breathtaking views from above the atmosphere along the way, and, apart from the steep ticket prices, no downside other than the zero-g toilet.

In this thriller, something goes horribly wrong during a flight from Denver to Singapore chartered by a coarse and demanding Australian media mogul, and the crew and passengers find themselves not on course for their destination but rather trapped in Earth orbit with no propellant and hence no prospect of getting back until long after their life support will be exhausted. Polaris immediately begins to mount a rescue mission based upon an orbital spacecraft they have under development, but as events play out clues begin to emerge that a series of problems are not systems failures but perhaps evidence of something much darker, in which those on the front lines trying to get their people back do not know who they can trust. Eventually, Polaris has no option but to partner with insurgent individuals in the “old space” world to attempt an improvised rescue mission.

This is a very interesting book, in that it does not read like a space thriller so much as one of the classic aviation dramas such as The High and the Mighty. We have the cast of characters: a crusty mechanic, heroic commander, hot-shot first officer, resourceful flight attendant with unexpected talents, demanding passengers, visionary company president, weaselly subordinate, and square-jawed NASA types. It all works very well and, as long as you don't spend too much time thinking about mass fractions, specific impulse, orbital mechanics, and thermal protection systems, is an enjoyable read, providing a glimpse of a plausible future for commercial space flight (point to point hypersonic service) which is little discussed among the new space community. For those who do care about the details, they follow. Be warned—some of these are major plot spoilers, so if you're planning to read the novel it's best to give them a pass until you've finished the book.

Spoiler warning: Plot and/or ending details follow.  

  • In chapter 26 we are told that the spaceplane's electricity is produced by fuel cells. This doesn't make any sense for a suborbital craft. We're also told that it is equipped with an APU and batteries with eight hours of capacity. For a plane which can fly to its destination in 90 minutes, why would you also include a fuel cell? The APU can supply power for normal operation, and in case it fails, the batteries have plenty of capacity to get you back on the ground. Also, you'd have to carry liquid hydrogen to power the fuel cells. This would require a bulky tank and make ramp operations and logistics a nightmare.
  • Not a quibble, but rather a belly laugh in chapter 28: I had not before heard the aging International Space Station called “Cattlecar Galactica”.
  • In chapter 31, when the rescue mission is about to launch, we're told that if the launch window is missed, on the next attempt the stricken craft will be “several hundred miles farther downrange”. In fact, the problem is that on the next orbit, due to the Earth's rotation, the launch site will have rotated out of the plane of the stricken craft's orbit, and consequently the rescue mission will have to perform a plane change as part of its trajectory. This is hideously costly in terms of fuel, and it is unlikely in the extreme that the rescue mission would be able to accomplish it. All existing rendezvous missions, if they miss their launch window, must wait until the next day when the launch site once again aligns with the orbital plane of the destination.
  • In chapter 47, as passenger Magrath begins to lose it, “Sweat began to bead up on his bald head and float away.” But in weightlessness, surface tension dominates all other forces and the sweat would cling and spread out over the 'strine's pate. There is nothing to make it float away.
  • In chapter 54 and subsequently, Shuttle “rescue balls” are used to transfer passengers from the crippled spaceplane to the space station. These were said to have been kept on the station since early in the program. In fact, while NASA did develop a prototype of the Personal Rescue Enclosure, they were never flown on any Shuttle mission nor launched to the station.
  • The orbital mechanics make absolutely no sense at all. One would expect a suborbital flight between Denver and Singapore to closely follow a great circle route between those airports (with possible deviations due to noise abatement and other considerations). Since most of the flight would be outside the atmosphere, weather and winds aloft would not be a major consideration. But if flight 501 had followed such a route and then continued to boost into orbit, it would have found itself in a high-inclination retrograde orbit around the Earth: going the opposite direction to the International Space Station. Getting from such an orbit to match orbits with the ISS would require more change in velocity (delta-v) than an orbital launch from the Earth, and no spacecraft in orbit would have remotely that capability. The European service vehicle already docked at the station would only have enough propellant for a destructive re-entry.

    We're told then that the flight path would be to the east, over Europe. But why would one remotely choose such a path, especially if a goal of the flight was to set records? It would be a longer flight, and much more demanding of propellant to do it in one skip as planned. But, OK, let's assume that for some reason they did decide to go the long way around. Now, for the rescue to be plausible, we have to assume two further ridiculously improbable things: first, that the inclination of the orbit resulting from the engine runaway on the flight to Singapore would match that of the station, and second, that the moment of launch just happened to be precisely when Denver was aligned with the plane of the station's orbit. Since there is no reason that the launch would have been scheduled to meet these exacting criteria, the likelihood that the spaceplane would be in an orbit reachable from the station without a large and impossible-to-accomplish plane change (here, I am referring to a change in the orbital plane, not catching a connecting flight) is negligible.
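
Just how costly a plane change is can be seen from the textbook formula Δv = 2v·sin(Δi/2) for rotating a circular orbit's plane by an angle Δi at orbital speed v. Here is a quick Python sketch; the 7.7 km/s orbital speed and the inclination figures are my own round numbers for low Earth orbit, not anything taken from the novel:

```python
import math

def plane_change_dv(v_orbit_km_s, delta_i_deg):
    """Delta-v (km/s) for a pure plane change of delta_i degrees
    performed at constant orbital speed v_orbit_km_s."""
    return 2.0 * v_orbit_km_s * math.sin(math.radians(delta_i_deg) / 2.0)

V_LEO = 7.7  # km/s, typical low-Earth-orbit speed (assumed round figure)

print(plane_change_dv(V_LEO, 1.0))    # ≈ 0.134 km/s: even one degree is costly
print(plane_change_dv(V_LEO, 28.5))   # ≈ 3.79 km/s: Cape Canaveral to equatorial
print(plane_change_dv(V_LEO, 180.0))  # 15.4 km/s: reversing a retrograde orbit
```

Even a one-degree correction exceeds most spacecrafts' entire manoeuvring budget, and reversing a retrograde orbit costs twice orbital velocity—more than launching from the ground in the first place.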

Spoilers end here.  

The author's career has been in the airline industry, and this shows in the authenticity of the depiction of airline operations. Notwithstanding the natters above behind the spoiler shield, I thoroughly enjoyed this book and raced through it trying to guess how it would come out.


Rucker, Rudy. Turing & Burroughs. Manuscript, 2012.
The author was kind enough to send this reader a copy of the manuscript for copy-editing and fact checking. I've returned it, marked up, and you should be able to read it soon. I shall refrain from commenting upon the text until it's generally available. But if you're a Rudy Rucker fan, you're going to love this.


September 2012

Bracken, Matthew. Foreign Enemies and Traitors. Orange Park, FL: Steelcutter Publishing, 2009. ISBN 978-0-9728310-3-1.
This is the third novel in the author's “Enemies” trilogy, which began with Enemies Foreign and Domestic (December 2009), and continued with Domestic Enemies (March 2012). Here, we pick up the story three years after the conclusion of the second volume. Phil Carson, whom we last encountered escaping from the tottering U.S. on a sailboat after his involvement in a low-intensity civil war in Virginia, is returning to the ambiguously independent Republic of Texas, smuggling contraband no longer available in the de-industrialised and bankrupt former superpower, when he is caught in a freak December hurricane in the Gulf of Mexico and shipwrecked on the coast of Mississippi.

This is not the America he left. The South is effectively under martial law, administered by General Marcus Aurelius Mirabeau; east Texas has declared its independence; the Southwest has split off as Aztlan and secured autonomy in the new Constitution; the East and upper Midwest remain under the control of the ever more obviously socialist regime in Washington; and the American redoubt states in the inland northwest are the last vestige of liberty. The former United States have not only been devastated by economic collapse and civil strife stemming from the attempt to ban and confiscate weapons, but then ravaged by three disastrous hurricanes and two earthquakes on the New Madrid fault. It's as if God had turned his back on the United States of America—say “no” to Him three times, and that may happen.

Carson, a Vietnam special forces veteran, uses his skills at survival, evasion, and escape, as well as his native cunning, to escape (albeit very painfully) to Tennessee, which is in the midst of a civil war. Residents, rejecting attempts to disarm them (which would place them at risk of annihilation at the hands of the “golden horde” escaping devastated urban areas and ravaging everything in their path), are now confronted with foreign mercenaries from such exemplars of human rights and rule of law as Kazakhstan and Nigeria, brought in because U.S. troops have been found too squeamish when it comes to firing on their compatriots: Kazakhstani cavalry—not so much. (In the book, these savages are referred to as “Kazaks”. “Kazakhstani” is correct, but as an abbreviation I think “Kazakh” [the name of their language] would be better.)

Carson, and the insurgents with whom he makes contact in Tennessee, come across incontrovertible evidence of an atrocity committed by Kazakhstani mercenaries, at the direction of the highest levels of what remains of the U.S. government. In a world with the media under the thumb of the regime and the free Internet a thing of the past, getting this information out requires the boldest of initiatives, and recruiting not just career NCOs, the backbone of the military, but also senior officers with the access to carry out the mission. After finishing this book, you may lose some sleep pondering the question, “At what point is a military coup the best achievable outcome?”.

This is a thoroughly satisfying conclusion to the “Enemies” trilogy. Unlike the previous volumes, there are a number of lengthy passages, usually couched as one character filling in another about events of which they were unaware, which sketch the back story. These are nowhere near as long as Galt's speech in Atlas Shrugged (April 2010), (which didn't bother me in the least—I thought it brilliant all of the three times I've read it), but they do ask the reader to kick back from the action and review how we got here and what was happening offstage. Despite the effort to make this book work as a stand-alone novel, I'd recommend reading the trilogy in series—if you don't you'll miss the interactions between the characters, how they came to be here, and why the fate of the odious Bob Bullard is more than justified.

Extended excerpts of this and the author's other novels are available online at the author's Web site.


Blum, Andrew. Tubes. New York: HarperCollins, 2012. ISBN 978-0-06-199493-7.
The Internet has become a routine fixture in the lives of billions of people, the vast majority of whom have hardly any idea how it works or what physical infrastructure allows them to access and share information almost instantaneously around the globe, abolishing, in a sense, the very concept of distance. And yet the Internet exists—if it didn't, you wouldn't be able to read this. So, if it exists, where is it, and what is it made of?

In this book, the author embarks upon a quest to trace the Internet from that tangle of cables connected to the router behind his couch to the hardware which enables it to communicate with its peers worldwide. The metaphor of the Internet as a cloud—simultaneously everywhere and nowhere—has become commonplace, and yet as the author begins to dig into the details, he discovers the physical Internet is nothing like a cloud: it is remarkably centralised (a large Internet exchange or “peering location” will tend to grow ever larger, since networks want to connect to a place where the greatest number of other networks connect), often grungy (when pulling fibre optic cables through century-old conduits beneath the streets of Manhattan, one's mind turns more to rats than clouds), and anything but decoupled from the details of geography (undersea cables must choose a route which minimises risk of breakage due to earthquakes and damage from ship anchors in shallow water, while taking the shortest route and connecting to the backbone at a location which will provide the lowest possible latency).
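
The latency stakes are easy to quantify: light in fibre propagates at roughly c divided by the glass's group index, so the great-circle distance between endpoints sets a hard lower bound on delay which no routing cleverness can beat. A sketch in Python; the city coordinates and the fibre index of 1.468 are my assumed round figures, not numbers from the book:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    """Haversine great-circle distance in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_km * math.asin(math.sqrt(a))

C_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBRE_INDEX = 1.468    # assumed group index of silica fibre

# New York to London, approximate city-centre coordinates (assumed)
d = great_circle_km(40.71, -74.01, 51.51, -0.13)
one_way_ms = d / (C_KM_S / FIBRE_INDEX) * 1000
print(f"{d:.0f} km, lower bound {one_way_ms:.1f} ms one-way")  # ≈ 5570 km, ≈ 27 ms
```

A real cable adds slack, repeaters, and landing-site detours, so actual transatlantic latency runs well above this floor—which is exactly why route length matters so much to the cable layers.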

The author discovers that while much of the Internet's infrastructure is invisible to the layman, it is populated, for the most part, with people and organisations open and willing to show it off to visitors. As an amateur anthropologist, he surmises that to succeed in internetworking, those involved must necessarily be skilled in networking with one another. A visit to a NANOG gathering introduces him to this subculture and the retail politics of peering.

Finally, when non-technical people speak of “the Internet”, it isn't just the interconnectivity they're thinking of but also the data storage and computing resources accessible via the network. These also have a physical realisation in the form of huge data centres, sited based upon the availability of inexpensive electricity and cooling (a large data centre such as those operated by Google and Facebook may consume on the order of 50 megawatts of electricity and dissipate that amount of heat). While networking people tend to be gregarious bridge-builders, data centre managers view themselves as defenders of a fortress and closely guard the details of their operations from outside scrutiny. When Google was negotiating to acquire the site for their data centre in The Dalles, Oregon, they operated through an opaque front company called “Design LLC”, and required all parties to sign nondisclosure agreements. To this day, if you visit the facility, there's nothing to indicate it belongs to Google; on the second ring of perimeter fencing, there's a sign, in Gothic script, that says “voldemort industries”—don't be evil! (p. 242) (On p. 248 it is claimed that the data centre site is deliberately obscured in Google Maps. Maybe it once was, but as of this writing it is not. From above, apart from the impressive power substation, it looks no more exciting than a supermarket chain's warehouse hub.) The author finally arranges to cross the perimeter, get his retina scanned, and be taken on a walking tour around the buildings from the outside. To cap the visit, he is allowed inside to visit—the lunchroom. The food was excellent. He later visits Facebook's under-construction data centre in the area and encounters an entirely different culture, so perhaps not all data centres are Morlock territory.

The author comes across as a quintessential liberal arts major (which he was) who is alternately amused by the curious people he encounters who understand and work with actual things as opposed to words, and enthralled by the wonder of it all: transcending space and time, everywhere and nowhere, “free” services supported by tens of billions of dollars of power-gobbling, heat-belching infrastructure—oh, wow! He is also a New York collectivist whose knee-jerk reaction is “public, good; private, bad” (notwithstanding that the build-out of the Internet has been almost exclusively a private sector endeavour). He waxes poetic about the city-sponsored (paid for by grants funded by federal and state taxpayers plus loans) fibre network that The Dalles installed which, he claims, lured Google to site its data centre there. The slightest acquaintance with economics or, for that matter, arithmetic demonstrates the absurdity of this. If you're looking for a site for a multi-billion dollar data centre, what matters is the cost of electricity and the climate (which determines cooling expenses). Compared to the price tag for the equipment inside the buildings, the cost of running a few (or a few dozen) kilometres of fibre is lost in the round-off. In fact, we know from p. 235 that the 27 kilometre city fibre run cost US$1.8 million, while Google's investment in the data centre is several billion dollars.
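
To make the round-off argument concrete, compare the recurring electricity bill with the one-time fibre cost. Only the US$1.8 million fibre figure comes from the book (p. 235); the 50 megawatt load is the order-of-magnitude figure cited above, and the US$0.03/kWh rate and US$2 billion facility cost are my illustrative assumptions:

```python
# Annual electricity cost of a 50 MW data centre at an assumed cheap rate.
POWER_KW = 50_000          # 50 MW continuous load (order of magnitude)
RATE_USD_PER_KWH = 0.03    # assumed industrial rate, illustrative only
HOURS_PER_YEAR = 8_760

annual_power_usd = POWER_KW * HOURS_PER_YEAR * RATE_USD_PER_KWH
print(f"${annual_power_usd:,.0f} per year")  # ≈ $13,140,000

# The one-time fibre build-out versus an assumed US$2 billion facility:
FIBRE_USD = 1.8e6          # 27 km city fibre run (p. 235)
FACILITY_USD = 2e9         # "several billion dollars", rounded down (assumed)
print(f"fibre = {FIBRE_USD / FACILITY_USD:.2%} of the facility")  # ≈ 0.09%
```

Even one year's power bill, at a deliberately cheap assumed rate, is several times the entire fibre build-out; the fibre is indeed lost in the round-off.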

These quibbles aside, this is a fascinating look at the physical substrate of the Internet. Even software people well-acquainted with the intricacies of TCP/IP may have only the fuzziest comprehension of where a packet goes after it leaves their site, and how it gets to the ultimate destination. This book provides a tour, accessible to all readers, of where the Internet comes together, and how counterintuitive its physical realisation is compared to how we think of it logically.

In the Kindle edition, end-notes are bidirectionally linked to the text, but the index is just a list of page numbers. Since the Kindle edition does include real page numbers, you can type in the number from the index, but that's hardly as convenient as books where items in the index are directly linked to the text. Citations of Internet documents in the end notes are given as URLs, but not linked; the reader must copy and paste them into a browser's address bar in order to access the documents.


Rucker, Rudy. Turing & Burroughs. Los Gatos, CA: Transreal Books, 2012. ISBN 978-0-9858272-3-6.
The enigmatic death of Alan Turing has long haunted those who inquire into the life of this pioneer of computer science. Forensic tests established cyanide poisoning as the cause of his death, and the inquest ruled it suicide by eating a cyanide-laced apple. But the partially-eaten apple was never tested for cyanide, and Turing's mother, among other people close to him, believed the death an accident, due to ingestion of cyanide fumes from an experiment in gold plating he was known to be conducting. Still others pointed out that Turing, from his wartime work at Bletchley Park, knew all the deepest secrets of Britain's wartime work in cryptanalysis, and having been shamefully persecuted by the government for his homosexuality, might have been considered a security risk and targeted to be silenced by dark forces of the state.

This is the point of departure for this delightful alternative history romp set in the middle of the 1950s. In the novel, Turing is presumed to have gotten much further with his work on biological morphogenesis than history records. So far, in fact, that when agents from Her Majesty's spook shop botch an assassination attempt and kill his lover instead, he is able to swap faces with him and flee the country to the anything-goes international zone of Tangier.

There, he pursues his biological research, hoping to create a perfect undifferentiated tissue which can transform itself into any structure or form. He makes the acquaintance of novelist William S. Burroughs, who found in Tangier's demimonde a refuge from the scandal of the death of his wife in Mexico and his drug addiction. Turing eventually succeeds, creating a lifeform dubbed the “skug”, and merges with it, becoming a skugger. He quickly discovers that his endosymbiont has not only dramatically increased his intelligence, but also made him a shape-shifter—given the slightest bit of DNA, a skugger can perfectly imitate its source.

And not just that…. As Turing discovers when he recruits Burroughs to skugdom, skuggers are able to enskug others by transferring a fragment of skug tissue to them; they can conjugate, exchanging “wetware” (memories and acquired characteristics); and they are telepathic among one another, albeit with limited range. Burroughs, whose explorations of pharmaceutical enlightenment had been in part motivated by a search for telepathy (which he called TP), found he rather liked being a skugger and viewed it as the next step in his personal journey.

But Turing's escape from Britain failed to completely cover his tracks, and indiscretions in Tangier brought him back into the crosshairs of the silencers. Shape-shifting into another identity, he boards a tramp steamer to America, where he embarks upon a series of adventures, eventually joined by Burroughs and Allen Ginsberg, on the road from Florida to Los Alamos, New Mexico, Burroughs's childhood stomping grounds, where Stanislaw Ulam, co-inventor of the hydrogen bomb and, like Turing, fascinated with how simple computational systems such as cellular automata can mimic the gnarly processes of biology, has been enlisted to put an end to the “skugger menace”—perhaps a greater threat than the international communist conspiracy.

Using his skugger wiles, Turing infiltrates Los Alamos and makes contact, both physically and intellectually, with Ulam, and learns the details of the planned assault on the skugs and vows to do something about it—but what? His human part pulls him one way and his skug another.

The 1950s are often thought of as a sterile decade, characterised by conformity and paranoia. And yet, if you look beneath the surface, the seeds of everything that happened in the sixties were sown in those years. They may have initially fallen upon barren ground, but like the skug, they were preternaturally fertile and, once germinated, spread at a prodigious rate.

In the fifties, the consensus culture bifurcated into straights and beats, the latter of which Burroughs and Ginsberg were harbingers and rôle models for the emerging dissident subculture. The straights must have viewed the beats as alien—almost possessed: why else would they reject the bounty of the most prosperous society in human history which had, just a decade before, definitively defeated evil incarnate? And certainly the beats must have seen the grey uniformity surrounding them as also a kind of possession, negating the human potential in favour of a cookie-cutter existence, where mindless consumption tried to numb the anomie of a barren suburban life. This mutual distrust and paranoia was to fuel such dystopian visions as Invasion of the Body Snatchers, with each subculture seeing the other as pod people.

In this novel, Rucker immerses the reader in the beat milieu, with the added twist that here they really are pod people, and loving it. No doubt the beats considered themselves superior to the straights. But what if they actually were? How would the straights react, and how would a shape-shifting, telepathic, field-upgradable counterculture respond?

Among the many treats awaiting the reader is the author's meticulous use of British idioms when describing Turing's thoughts and Burroughs's idiosyncratic grammar in the letters in his hand which appear here.

This novel engages the reader to such an extent that it's easy to overlook the extensive research that went into making it authentic, not just superficially, but in depth. Readers interested in what goes into a book like this will find the author's background notes (PDF) fascinating—they are almost as long as the novel. I wouldn't, however, read them before finishing the book, as spoilers lurk therein.

A Kindle edition is available either from Amazon or directly from the publisher, where an EPUB edition is also available (with other formats forthcoming).


Imholt, Timothy James. Nuclear Assault. Unknown: Zwicky Press, 2012. ISBN 978-0-615-69158-9.
I am not going to fret about spoilers in this review. This book is so awful that nobody should read it, and avoiding spoilers is like worrying about getting a dog turd dirty when you pick it up with toilet paper to throw it in the loo.

I acquired this book based on an Amazon suggestion of “Customers who Viewed this Item Also Viewed” and especially because, at the time I encountered it, the Kindle edition was free (it is no longer, as of this writing). Well, I'm always a sucker for free stuff, so I figured, “How bad can it be?” and downloaded it. How wrong I was—even for free, this botched attempt at a novel is overpriced.

Apart from the story, which is absurd, the author has not begun to master the basics of English composition. If I had taken a chapter or two from this novel and submitted it as a short story in my 10th grade English class, I would have received a failing grade, and deservedly so. Scarcely a page in this 224-page novel is unmarred by errors of orthography, grammar, or punctuation. The author appears to have invented his own way of expressing quotes. The following is a partial list of words in the text which are either misspelled or for which homonyms are incorrectly used:

Americans OK advice affected an arrival assess attack bathe become breathe chaperone closed continuous counsel enemy's feet first foul from had hangar harm's hero holding host hostilely intelligence it's its let's morale nights not ordnance overheard pus rarefied scientists sent sights sure the their them they times were

When you come across an instance of “where” being used in place of “were”, you might put it down to the kind of fat finger we all commit from time to time, plus sloppy proofreading. But when it happens 13 times in 224 pages, you begin to suspect the author might not really comprehend the difference between the two.

All of the characters, from special forces troops, emergency room nurses, senior military commanders, the President of the United States, to Iranian nuclear scientists speak in precisely the same dialect of fractured grammar laced with malaprops. The author has his own eccentric idea of what words should be capitalised, and applies it inconsistently. Each chapter concludes with a “news flash” and “economic news flash”, also in bizarro dialect, with the latter demonstrating the author to be as illiterate in economics as he is in the English language.

Then, in the last line of the novel, the reader is kicked in the teeth with something totally out of the blue.

I'd like to call this book “eminently forgettable”, but I doubt I'll forget it soon. I have read a number of manuscripts by aspiring writers (as a savage copy editor and fact checker, authors occasionally invite me to have at their work, in confidence, before sending it for publication), but this is, by far, the worst I have encountered in my entire life. You may ask why I persisted in reading beyond the first couple of chapters. It's kind of like driving past a terrible accident on the highway—do you really not slow down and look? Besides, I only review books I've finished; I looked forward to this review as the only fun I could derive from this novel, and consider this wave-off a public service for others who might stumble upon this piece of…fiction and be inclined to pick it up.


October 2012

Smith, L. Neil. Down with Power. Rockville, MD: Phoenix Pick, 2012. ISBN 978-1-61242-055-4.
In the first chapter of this superb book, the author quotes Scott Adams, creator of “Dilbert”, describing himself as being “a libertarian minus the crazy stuff”, and then proceeds to ask precisely what is crazy about adopting a strict interpretation of the Zero Aggression Principle:

A libertarian is a person who believes that no one has the right, under any circumstances, to initiate force against another human being for any reason whatever; nor will a libertarian advocate the initiation of force, or delegate it to anyone else.

Those who act consistently with this principle are libertarians, whether they realize it or not. Those who fail to act consistently with it are not libertarians, regardless of what they may claim. (p. 20)

The subsequent chapters sort out the details of what this principle implies for contentious issues such as war powers; torture; money and legal tender laws; abortion; firearms and other weapons; “animal rights”; climate change (I do not use scare quotes on this because climate change is real and has always happened and always will—it is the hysteria over anthropogenic contributions to an eternally fluctuating process driven mostly by the Sun which is a hoax); taxation; national defence; prohibition in all of its pernicious manifestations; separation of marriage, science, and medicine from the state; immigration; intellectual property; and much more. Smith's viewpoint on these questions is largely informed by Robert LeFevre, whose wisdom he had the good fortune to imbibe at a week-long seminar in 1972. (I encountered LeFevre just once, at a libertarian gathering in Marin County, California [believe it or not, such things exist, or at least existed] around 1983, and it was this experience that transformed me from a “nerf libertarian” who was prone to exclaiming “Oh, come on!” whilst reading Rothbard to the flinty variety who would go on to author the Evil Empires bumper sticker.) Sadly, Bob LeFevre is no longer with us, but if you wish to be inoculated with the burning fever of liberty which drove him and inspired those who heard him speak, this book is as close as you can come today to meeting him in person. The naïve often confuse libertarians with conservatives: to be sure, libertarians often wish to impede “progressives” whose agenda amounts to progress toward serfdom and wish, at the least, for a roll-back of the intrusions upon individual liberty which were the hallmark of the twentieth century. 
But genuine libertarianism, not the nerf variety, is a deeply radical doctrine which calls into question the whole leader/follower, master/slave, sovereign/subject, and state/citizen structure which has characterised human civilisation ever since hominids learned to talk and the most glib of them became politicians (“Put meat at feet of Glub and Glub give you much good stuff”).

And here is where I both quibble with and enthusiastically endorse the author's agenda. The quibble is that I fear that our species, formed by thousands of generations of hunter/gatherer and agricultural experience, has adapted, like other primates, to a social structure in which most individuals delegate decision making and even entrust their lives to “leaders” chosen by criteria deeply wired into our biology and not remotely adapted to the challenges we face today and in the future. (Hey, it could be worse: peacocks select for the most overdone tail—it's probably a blessing naked apes don't have tails—imagine trying to fit them all into a joint session of Congress.) The endorsement is that I don't think it's possible to separate the spirit of individualism which is at the heart of libertarianism from the frontier. There were many things which contributed to the first American war of secession and the independent republics which emerged from it, but I believe their unique nature was in substantial part due to the fact that they were marginal settlements on the edge of an unexplored and hostile continent, where many families were entirely on their own and on the front lines, confronted by the vicissitudes of nature and crafty enemies.

Thomas Jefferson worried that as the population of cities grew compared to that of the countryside, the ethos of self-sufficiency would be eroded and be supplanted by dependency, and that this corruption and reliance upon authority founded, at its deepest level, upon the initiation of force, would subvert the morality upon which self-government must ultimately rely. In my one encounter with Robert LeFevre, he disdained the idea that “maybe if we could just get back to the Constitution” everything would be fine. Nonsense, he said: to a substantial degree the Constitution is the problem—after all, look at how it's been “interpreted” to permit all of the absurd abrogations of individual liberty and natural law since its dubious adoption in 1789. And here, I think the author may put a bit too much focus on documents (which can be, have been, and forever will be twisted by lawyers into things they never were meant to say), and too little on the frontier.

What follows is both a deeply pessimistic and unboundedly optimistic view of the human and transhuman prospect. I hope I don't lose you in the loop-the-loop. Humans, as presently constituted, have wired-in baggage which renders most of us vulnerable to glib forms of persuasion by “leaders” (who are simply those more talented than others in persuasion). The more densely humans are packed, and the greater the communication bandwidth available to them (in particular, one-to-many media), the more vulnerable they are to such “leadership”. Individual liberty emerges in frontier societies: those where each person and each family must be self-sufficient, without any back-up other than their relations to neighbours, but with an unlimited upside in expanding the human presence into new territory. The old America was a frontier society; the new America is a constrained society, turning inward upon itself and devouring its best to appease its worst.

So, I'm not sure this or that amendment to a document which is largely ignored will restore liberty in an environment where a near-majority of the electorate receive net benefits from the minority who pay most of the taxes. The situation in the United States, and on Earth, may well be irreversible. But the human and posthuman destiny is much, much larger than that. Perhaps we don't need a revision of governance documents as much as the opening of a frontier. Then people will be able to escape the stranglehold where seven eighths of all of their work is confiscated by the thugs who oppress them and instead use all of their sapient faculties to their own ends. As a sage author once said:

Freedom, immortality, and the stars!

Works for me. Free people expand at a rate which asymptotically approaches the speed of light. Coercive government and bureaucracy grow logarithmically, constrained by their own internal dissipation. We win; they lose.

In the Kindle edition the index is just a list of page numbers. Since the Kindle edition includes real page numbers, you can type in the number from the index, but that's not as convenient as when index citations are linked directly to references in the text.

 Permalink

Gordon, John Steele. A Thread Across the Ocean. New York: Harper Perennial, 2002. ISBN 978-0-06-052446-3.
There are inventions, and there are meta-inventions. Many things were invented in the 19th century which contributed to the wealth of the present-day developed world, but there were also concepts which emerged in that era of “anything is possible” ferment which cast even longer shadows. One of the most important is entrepreneurship—the ability of a visionary who sees beyond the horizon of the conventional wisdom to assemble the technical know-how, the financial capital, the managers and labourers to do the work, while keeping all of the balls in the air and fending off the horrific setbacks that any breakthrough technology will necessarily encounter as it matures.

Cyrus W. Field may not have been the first entrepreneur in the modern mold, but he was without doubt one of the greatest. Having started with almost no financial resources and then made his fortune in the manufacture of paper, he turned his attention to telegraphy. Why, in the mid-19th century, should news and information between the Old World and the New move only as fast as sailing ships could convey it, while the telegraph could flash information across continents in seconds? Why, indeed?—Field took a proposal to lay a submarine cable from Newfoundland to the United States, which would have cut two days off the transatlantic latency of around two weeks, to its logical limit: a cable across the entire Atlantic which could relay information in seconds, linking the continents together in a web of information which was, if low bandwidth, almost instantaneous compared to dispatches carried on ships.

Field knew next to nothing about electricity, manufacturing of insulated cables thousands of miles long, paying-out mechanisms to lay them on the seabed, or the navigational challenges in carrying a cable from one continent to another. But he was supremely confident that success in the endeavour would enrich those who accomplished it beyond their dreams of avarice, and persuasive in enlisting in the effort not only wealthy backers to pay the bills but also technological savants including Samuel F. B. Morse and William Thomson (later Lord Kelvin), who invented the mirror galvanometer which made the submarine cable viable.

When you try to do something audacious which has never been attempted before, however great the promise, you shouldn't expect to succeed the first time, or the second, or the third…. Indeed, the history of transatlantic cable was one of frustration, dashed hopes, lost investments, derision in the popular press—until it worked. Then it was the wonder of the age. So it has been and shall always be with entrepreneurship.

Today, gigabytes per second flow beneath the oceans through the tubes. Unless you're in continental Eurasia, it's likely these bits reached you through one of them. It all had to start somewhere, and this is the chronicle of how that came to be. This may have been the first time it became evident there was a time value to information: that the news, financial quotes, and messages delivered in minutes instead of weeks were much more valuable than those which arrived long after the fact.

It is also interesting that the laying of the first successful transatlantic cable was almost entirely a British operation. While the American Cyrus Field was the promoter, almost all of the capital, the ships, the manufacture of the cable, and the scientific and engineering expertise in its production and deployment was British.

 Permalink

Bonanos, Christopher. Instant. New York: Princeton Architectural Press, 2012. ISBN 978-1-61689-085-8.
The second half of the twentieth century in the developed world was, in many ways, the age of immediate gratification, and no invention was as iconic of the epoch as the Polaroid instant photograph. No longer did people have to wait until a roll of film was full, take it to the drug store to be sent off to a photo lab, and then, a week or so later, see whether the irreplaceable pictures of their child's first birthday came out or were forever lost. With the introduction of Edwin Land's first Polaroid camera in 1948, only a minute elapsed between the click of the shutter and peeling off a completely developed black and white (well, initially, sepia and white, but that was fixed within two years) print. If the picture wasn't satisfactory, another shot could be taken on the spot, and pictures of special events could be immediately shared with others present—in a way, the Polaroid print was the original visual social medium: Flickr in the Fifties.

This book chronicles the history of Polaroid, which is inseparable from the life of its exceptional founder, CEO, and technological visionary, Edwin Land. Land, like other, more recent founders of technological empires, was a college drop-out (the tedium simply repelled him), whose instinct drove him to create products which other, more sensible, people considered impossible, for markets which did not exist, fulfilling needs which future customers did not remotely perceive they had, and then continuing to dazzle them with ever more amazing achievements. Polaroid in its heyday was the descendant of Thomas Edison's Menlo Park invention factory and the ancestor of Apple under Steve Jobs—a place where crazy, world-transforming ideas bubbled up and were groomed into products with a huge profit margin.

Although his technical knowledge was both broad and deep, and he spent most of his life in the laboratory or supervising research and product development, Edwin Land was anything but a nerd: he was deeply versed in the fine arts and literature, and assembled a large collection of photography (both instant and conventional) along with his 535 patents. He cultivated relationships with artists ranging from Ansel Adams to Andy Warhol and involved them in the design and evolution of Polaroid's products. Land considered basic research part of Polaroid's mission, and viewed his work on human colour perception as his most important achievement: he told a reporter in 1959, “Photography…that is something I do for a living.”

Although Polaroid produced a wide (indeed, almost bewildering) variety of cameras and film which progressed from peel-off monochrome to professional large-format positive/negative sheets to colour to all-in-one colour film packs for the SX-70 and its successors, which miraculously developed in broad daylight after being spit out by the camera, it remained, to a large extent, a one product company—entirely identified with instant photography. And, it was not only a one product company (something with which this scrivener has some acquaintance), but a one genius company, where the entire technical direction and product strategy resided in the braincase of a single individual. This has its risks, and when the stock was flying high there was no shortage of sceptical analysts on Wall Street who pointed them out.

And then slowly, painfully, it all fell back to Earth. In 1977, Land's long-time dream of instant motion pictures was launched on the market as Polavision. The company had expended years and on the order of half a billion dollars in developing a system which produced three-minute silent movies which were grainy and murky. This was launched just at the time video cassette recorders were coming onto the market, which could record and replay full television programs with sound, using inexpensive tapes which could be re-recorded. Polavision sales were dismal, and the product was discontinued two years later. In 1976, Kodak launched their own instant camera line, which cut into Polaroid's sales and set off a patent litigation battle which would last more than fourteen years and cause Polaroid to focus on the past, defending its market share, rather than on innovation.

Now that everybody has instant photography in the form of digital cameras and mobile telephones, all without the need of miracle chemistry, breakthrough optics, or costly film packs, you might conclude that Polaroid, like Kodak, was done in by digital. The reality is somewhat more complicated. What undermined Polaroid's business model was not digital photography, which emerged only after the company was already in steep decline, but the advent of the one-hour minilab and inexpensive, highly automated, and small point-and-shoot 35 mm cameras. When the choice was between waiting a week or so for your pictures or seeing them right away, Polaroid had an edge, but when you could shoot a roll of film, drop it at the minilab in the mall when you went to do your shopping, and pick up the prints before you went home, the distinction wasn't so great. Further, the quality of prints from 35 mm film on photographic paper was dramatically better; the prints were larger; and you could order additional copies or enlargements from the negatives. Large, heavy, and clunky cameras that only took 10 pictures from an expensive film pack began to look ever less attractive compared to pocketable 35 mm cameras that, at least for the snapshot market, nailed focus and exposure almost every time you pushed the button.

The story of Polaroid is also one of how a company can be trapped by its business model. Polaroid's laboratories produced one of the first prototypes of a digital camera. But management wasn't interested, because everybody knew that revenue came from selling film, not cameras, and a digital camera didn't use film. At the same time, Polaroid was working on a pioneering inkjet photo printer, which management disdained because it didn't produce output they considered of photographic quality. Imagine how things might have been different had somebody said, “Look, it's not as good as a photographic print—yet—but it's good enough for most of our snapshot customers, and we can replace our film revenue with sales of ink and branded paper.” But nobody said that. The Polaroid microelectronics laboratory was closed in 1993, its assets sold to MIT, and the inkjet project was terminated; those working on it went off to found the premier large-format inkjet company.

In addition to the meticulously documented history, there is a tremendous amount of wisdom regarding how companies and technologies succeed and fail. In addition, this is a gorgeous book, with numerous colour illustrations (expandable and scrollable in the Kindle edition). My only quibble is that in the Kindle edition, the index is just a list of terms, not linked to references in the text; everything else is properly linked.

Special thanks to James Lileks for recommending this book (part 2).

 Permalink

Rawles, James Wesley. Founders. New York: Atria Books, 2012. ISBN 978-1-4391-7282-7.
This novel is the third in the series which began with Patriots (December 2008) and continued with Survivors (January 2012). These books are not a conventional trilogy, in that all describe events in the lives of their characters in roughly the same time period surrounding “the Crunch”—a grid down societal collapse due to a debt crisis and hyperinflation. Many of the same characters appear in the volumes, but different episodes in their lives are described. This installment extends the story beyond the end of the previous books (taking into account the last chapter, well beyond), but most of the story occurs in the years surrounding the Crunch. In an introductory note, the author says the books can be read in any order, but I think the reader will miss a great deal if this is the first one read—most of the characters who appear here have an extensive back-story in the previous books, and you'll miss much of what motivates them and how they found themselves in their present circumstances if you start here.

Like the earlier novels, this is part thriller and part survival tutorial. I found the two components less well integrated here than before. The author seems prone to launching into a litany of survival gear and tactics, not to mention veering off into minutiæ of Christian doctrine, leaving the story and characters on hold. For example, in chapter 20:

The gear inside the field station CONEX included a pair of R-390A HF receivers, two Sherwood SE-3 synchronous detectors, four hardwired demodulators, a half dozen multiband scanners, several digital audio recorders, two spectrum analyzers, and seven laptop computers that were loaded with demodulators, digital recorders, and decryption/encryption software.

Does this really move the plot along? Is anybody other than a wealthy oilman likely to be able to put together such a rig for signal intelligence and traffic analysis? And if not, why do we need to know all of this, as opposed to simply describing it as a “radio monitoring post”? This is not a cherry-picked example; there are numerous other indulgences in gear geekdom.

The novel covers the epic journey, largely on foot, of Ken and Terry Layton from apocalyptic Crunch Chicago, where they waited too late to get out of Dodge, toward the retreat their group had prepared in the American Redoubt, and the development and exploits of an insurgency against the so-called “Provisional Government” headquartered in Fort Knox, Kentucky, which is a thinly-disguised front for subjugation of the U.S. to the United Nations and looting the population. (“Meet the new boss—same as the old boss!”) Other subplots update us on the lives of characters we've met before, and provide a view of how individuals and groups try to self-organise back into a lawful and moral civil society while crawling from the wreckage of corruption and afflicted by locusts with weapons.

We don't do stars on reviews here at Fourmilab—I'm a word guy—but I do occasionally indulge in sports metaphors. I consider the first two novels home runs: if you're remotely interested in the potential of societal collapse and the steps prudent people can take to protect themselves and those close to them from its sequelæ, they are must-reads. Let's call this novel a solid double bouncing between the left and centre fielders. If you've read the first two books, you'll certainly want to read this one. If you haven't, don't start here, but begin at the beginning. This novel winds up the story, but it does so in an abrupt way which I found somewhat unconvincing—it seemed like the author was approaching a word limit and had to close it out in however sketchy a manner.

There are a few quibbles, but aren't there always?

Spoiler warning: Plot and/or ending details follow.  

  • In chapter 8 we're told that Malmstrom Air Force Base had a large inventory of JP-4 fuel. But this fuel, a 50–50 blend of kerosene and gasoline, was phased out by the U.S. Air Force in 1996 in favour of the less hazardous JP-8. It is unlikely that at least 16 years later an Air Force base would still have JP-4 in storage.
  • In chapter 11 we hear of the “UN's new headquarters in Brussels”. But, if the UN headquarters in New York had been destroyed, isn't it much more likely that the UN would fall back on the existing European headquarters in Geneva?
  • In chapter 17, Ken is “given a small bottle of flat black lacquer and a tiny brush from Durward's collection…”. But Durward was the farmer with whose family they passed the previous winter. I think either Carl or Graham was intended here.
  • In “President” Hutchings's speech in chapter 19, he states that more than 65 million people were killed by an influenza pandemic that swept the East and continues, “Without antibiotics available, the disease ran rampant until there were no more hosts to attack in the heavily populated regions.” Influenza is a viral disease, against which antibiotics are completely ineffectual. Of course, this may have been intended to highlight the cluelessness of Hutchings and how glibly the Provisional Government lied to its subjects.
  • In the glossary, CB radio is defined as a “VHF broadcasting band”. The citizens' band in the U.S. is in the 27 MHz range, which is part of the HF band, not VHF, and is not a broadcast service.
Spoilers end here.  

So, read the first two, and if you like them, by all means get this one. But don't start here.

 Permalink

Smith, Greg. Why I Left Goldman Sachs. New York: Grand Central, 2012. ISBN 978-1-4555-2747-2.
When Greg Smith graduated from Stanford in 2001, he knew precisely what career he wished to pursue and where—high stakes Wall Street finance at the firm at the tip of the pyramid: Goldman Sachs. His native talent and people skills had landed him first an internship and then an entry-level position at the firm, where he sought to master the often arcane details of the financial products with which he dealt and develop relationships with the clients with whom he interacted on a daily basis.

Goldman Sachs was founded in 1869, and rapidly established itself as one of the leading investment banks, market makers, and money managers, catering to large corporations, institutions, governments, and wealthy individual clients. While most financial companies had transformed themselves from partnerships to publicly-traded corporations, Goldman Sachs did not take this step until 1999. Remaining a partnership was part of the aura of the old Goldman: as with a private Swiss bank, partners bore unlimited personal liability for the actions of the firm, and clients were thereby reassured that the advice they received was in their own best interest.

When the author joined Goldman, the original partnership culture remained strong, and he quickly learned that to advance in the firm it was important to be perceived as a “culture keeper”—one steeped in the culture and transmitting it to new hires. But then the serial financial crises of the first decade of the 21st century began to hammer the firm: the collapse of the technology bubble, the housing boom and bust, and the sovereign debt crisis. These eroded the traditional sources of Goldman's income, and created an incentive for the firm to seek “elephant trades” which would book in excess of US$ 1 million in commissions and fees for the firm from a single transaction. Since the traditional business of buying and selling securities on behalf of a client and pocketing a commission or bid-ask spread was highly competitive (indeed, the kinds of high-roller clients who do business with Goldman could see the bids and offers in the market on their own screen before they placed an order), the elephant hunters were motivated to peddle “structured products”: exotic financial derivatives which the typical client lacked the resources to independently value, and were opaque to valuation by other than the string theorist manqués who invented them. In doing this business, Goldman transformed itself from a broker executing transactions on behalf of a client into a vendor, selling products to counterparties, who took the other side of the transaction. Now, there's nothing wrong with dealing with a counterparty: when you walk onto a used car lot with a wad of money (artfully concealed) in your pocket and the need for a ride, you're aware that the guy who walks up to greet you is your counterparty—the more you pay, the more he benefits, and the less valuable a car he manages to sell you, the better it is for him. But you knew that, going in, and you negotiate accordingly (or if you don't, you end up, as I did, with a 1966 MGB). 

Many Goldman Sachs customers, with relationships going back decades, had been used to their sales representatives taking an interest in their clients' investment strategy, recommending products consistent with it, and providing excellent execution on trades. I had been a Goldman Sachs customer since 1985, first in San Francisco and then in Zürich, and this had been my experience until the late 2000s: consummate professionalism.

Greg Smith documents the erosion of the Goldman culture in New York, but when he accepted a transfer to the London office, there was a culture shock equivalent to dropping your goldfish into a bowl of Clorox. In London, routine commission (or agency) business generating fees around US$ 50,000 was disdained, and clients interested in such trades were rudely turned away. Clients were routinely referred to as “muppets”, and exploiting their naïveté was a cause for back-slapping and booking revenues to the firm (and bonuses for those who foisted toxic financial trash onto the customers).

Finally, in early 2012, the author said, “enough is enough” and published an op-ed in the New York Times summarising the indictment of the firm and Wall Street which is fully fleshed out here. In the book, the author uses the tired phrase “speaking truth to power”, but in fact power could not be more vulnerable to truth: at the heart of most customer relationships with Goldman Sachs was the assumption that the firm valued the client relationship above all, and would act in the client's interest to further the long-term relationship. Once clients began to perceive that they were mocked as “muppets” who could be looted by selling them opaque derivatives or unloading upon them whatever the proprietary trading desk wanted to dump, this relationship changed forever. Nobody will ever do business with Goldman Sachs again without looking at them as an adversary, not an advisor or advocate. Greg Smith was a witness to the transformation which caused this change, and this book is essential reading for anybody managing funds north of seven digits.

As it happens, I was a customer of Goldman Sachs throughout the period of Mr Smith's employment, and I can completely confirm his reportage of the dysfunction in the London branch. I captured an hour of pure comedy gold in Goldman Sachs Meets a Muppet when two Masters of the Universe who had parachuted into Zürich from London tried to educate me upon the management of my money. I closed my account a few days later.

 Permalink

Lileks, James. Graveyard Special. Seattle: Amazon Digital Services, 2012. ASIN B00962GFES.
This novel, set in the Dinkytown neighbourhood of Minneapolis, adjacent to the University of Minnesota campus, in 1980, is narrated in the first person by Robert (not Bob) Thompson, an art history major at the university, experiencing the metropolis after having grown up in a small town in the north of the state. Robert is supporting his lavish lifestyle (a second floor room in a rooming house in Dinkytown with the U of M hockey team living downstairs) by working nights at Mama B's Trattoria, an Italian/American restaurant with a light beer and wine bar, the Grotto, downstairs. His life and career at the “Trat” and “Grot” are an immersion in the culture of 1980, and a memoir typical of millions in university at the epoch, until a cook at the Trat is shot dead by a bullet which came through the window from outside, with no apparent motive or clue as to the shooter's identity.

Then Robert begins to notice things: curious connections between people, suggestions of drug deals, ambiguous evidence of wire taps, radical politics, suspicions of people being informants, and a strange propensity for people he encounters meeting with apparently random violence. As he tries to make sense of all of this, he encounters hard-boiled cops, an immigrant teacher from the Soviet Union who speaks crystalline wisdom in fractured English, and a reporter for the student newspaper with whom he is instantly smitten. The complexity and ambiguity spiral ever upward until you begin to suspect, as Robert does in chapter 30, “You never get all the answers. I suppose that's the lesson.”

Do you get all the answers? Well, read the novel and find out for yourself—I doubt you'll regret doing so. Heck, how many mystery novels have an action scene involving a Zamboni? As you'd expect from the author's work, the writing is artful and evocative, even when describing something as peripheral to the plot as turning off an Asteroids video game after closing time in the Grot.

I yanked the cord and the world of triangular spaceships and monochromatic death-rocks collapsed to a single white point. The universe was supposed to end like that, if there was enough mass and matter or something. It expands until gravity hauls everything back in; the collapse accelerates until everything that was once scattered higgily-jiggity over eternity is now summed up in a tiny white infinitely dense dot, which explodes anew into another Big Bang, another universe, another iteration of existence with its own rules, a place where perhaps Carter got a second term and Rod Stewart did not decide to embrace disco.

I would read this novel straight through, cover-to-cover. There are many characters who interact in complicated ways, and if you set it aside due to other distractions and pick it up later, you may have to do some backtracking to get back into things. There are a few copy editing errors (I noted 7), but they don't detract from the story.

At this writing, this book is available only as a Kindle e-book; a paperback edition is expected in the near future. Here are the author's comments on the occasion of the book's publication. This is the first in what James Lileks intends to be a series of between three and five novels, all set in Minneapolis in different eras, with common threads tying them together. I eagerly await the next.

 Permalink

November 2012

Rorabaugh, W. J. The Alcoholic Republic. New York: Oxford University Press, 1979. ISBN 978-0-19-502990-1.
This book was recommended to me by Prof. Paul Rahe after I had commented, during a discussion on Ricochet about drug (and other forms of) prohibition, using the commonplace libertarian argument that, regardless of what one believes about the principle of self-ownership and the dangers to society if its members ingest certain substances, from a purely utilitarian standpoint the evidence is that prohibition of anything simply makes the problem worse. In many cases it not only increases profits to traffickers in the banned substance, spawns crime among those who contend to provide it to those who seek it in the absence of an open market, and promotes contempt for the law (the president of the United States, as of this writing, admitted in his autobiography to having used a substance whose possession, had he been apprehended, was a felony), but, most of all, use of the forbidden substance increases post-prohibition. However satisfying prohibition may be to those who support, enact, and enforce it, it is ultimately counterproductive, as it increases the number of people who taste the forbidden fruit.

I read every book my readers recommend, albeit not immediately, and so I put this book on my queue, and have now digested it. This is a fascinating view of a very different America: a newly independent nation in the first two decades of the nineteenth century, still mostly a coastal nation with a vast wilderness to the West, but beginning to expand over the mountains into the fertile land beyond. The one thing all European visitors to America remarked upon was that people in this brave new republic, from strait-laced New Englanders, to Virginia patricians, to plantation barons of the South, to buckskin pioneers and homesteaders across the Appalachians, drank a lot, reaching a peak around 1830 of five gallons (19 litres) of hard spirits (in excess of 45% alcohol) per capita per annum—and that “per capita” includes children and babies in a rapidly growing population, so the adults, and particularly the men, disproportionately contributed to this aggregate.

As the author teases out of the sketchy data of the period, there were a number of social, cultural, and economic reasons for this. Prior to the revolution, America was a rum drinking nation, but after the break with Britain whiskey made from maize (corn, in the American vernacular) became the beverage of choice. As Americans migrated and settled the West, maize was their crop of choice, but before the era of canals and railroads, shipping their crop to the markets of the East cost more than its value. Distilling it into a much-sought beverage, however, made the arduous trek to market profitable, and justified the round trip. On the rugged western frontier, drinking water was not to be trusted, and a sip of contaminated water could condemn one to a debilitating and possibly fatal bout of dysentery or cholera. None of these bugs could survive in whiskey, and hence it was seen as the healthy beverage. Finally, whiskey provides 83 calories per fluid ounce, and is thus a compact way to store and transport food value without need for refrigeration.

Some things never change. European visitors to America remarked upon the phenomenon of “rapid eating” or, as we now call it, “fast food”. With the fare at most taverns outside the cities limited to fried corn cakes, salt pork, and whiskey, there was precious little need to linger over one's meal, and hence it was in-and-out, centuries before the burger. But then, things change. Starting around 1830, alcohol consumption in the United States began to plummet, and temperance societies began to spring up across the land. From a peak of about 5 gallons per capita, distilled spirits consumption fell to between 1 and 2 gallons and has remained more or less constant ever since.

But what is interesting is that the widespread turn away from hard liquor was not in any way produced by top-down or coercive prohibition. Instead, it was a bottom-up social movement, largely coupled with the Second Great Awakening. While this movement certainly did result in some forms of restrictions on the production and sale of alcohol, much more effective was its opprobrium against public drunkenness and those who enabled it.

This book is based on a Ph.D. thesis, and in places shows it. There is a painful attempt, based on laughably incomplete data, to quantify alcohol consumption during the early 19th century. This, I assume, is because at the epoch “social scientists” repeated the mantra “numbers are good”. This is all nonsense; ignore it. Far more credible are the reports of contemporary observers quoted in the text.

As to Prof. Rahe's assertion that prohibition reduces the consumption of a substance, I don't think this book advances that case. The collapse in the consumption of strong drink in the 1830s was a societal and moral revolution, and any restrictions on the availability of alcohol were the result of that change, not its cause. That said, I do not dispute that prohibition did reduce the reported level of alcohol consumption, but at the cost of horrific criminality and disdain for the rule of law and, after repeal, a return to the status quo ante.

If you're interested in prohibition in all of its manifestations, I recommend this book, even though it has little to do with prohibition. It is an object lesson in how a free society self-corrects from excess and re-orients itself toward behaviour which benefits its citizens.

 Permalink

Pratchett, Terry and Stephen Baxter. The Long Earth. New York: HarperCollins, 2012. ISBN 978-0-06-206775-3.
Terry Pratchett is my favourite author of satirical fantasy and Stephen Baxter is near the top of my list of contemporary hard science fiction writers, so I expected this collaboration to be outstanding. It is.

Larry Niven's Ringworld created a breathtakingly large arena for storytelling, not spread among the stars but all reachable, at least in principle, just by walking. This novel expands the stage many orders of magnitude beyond that, and creates a universe in which any number of future stories may be told. The basic premise is that the multiple worlds interpretation of quantum mechanics literally exists (to be technical, Max Tegmark's Level III parallel universes), and that some humans possess a native ability to step from one universe to the next. The stepper arrives at the same location on Earth, at the same local time (there is apparently a universal clock like that assumed in quantum theory), but on a branch where the history of the Earth has diverged due to contingent events in the past. Adjacent universes tend to be alike, but the further one steps the more they differ from the original, or Datum Earth.

The one huge difference between Datum Earth and all of the others is that, as far as is known, humans evolved only on the Datum. Nobody knows why this is—perhaps there was some event in the chain of causality that produced modern humans which was so improbable it happened only once in what may be an infinite number of parallel Earths.

The ability to step was extremely rare, genetically transmitted, and often discovered only when an individual was in peril and stepped to an adjacent Earth as the ultimate flight response. All of this changed on Step Day, when Willis Linsay, a physicist in Madison, Wisconsin, posted on the Internet plans for a “stepper” which could be assembled from parts readily available from Radio Shack, plus a potato. (Although entirely solid state, it did include a tuber.) A rocker switch marked “WEST — OFF — EAST” was on the top, and when activated moved the holder of the box to an adjacent universe in the specified notional direction.

Suddenly people all over the Earth began cobbling together steppers of their own and departing for adjacent Earths. Since all of these Earths were devoid of humans (apart from those who stepped there from the Datum), they were in a state of nature, including all of those dangerous wild beasts that humans had eradicated from their world of origin. Joshua Valienté, a natural stepper, distinguishes himself by rescuing children from the Madison area who used their steppers and were so bewildered they did not know how to get back.

This brings Joshua to the attention of the shadowy Black Corporation, who recruits him (with a bit of blackmail) to explore the far reaches of the Long Earth: worlds a million or more steps from the Datum. His companion on the voyage is Lobsang, who may or may not have been a Tibetan motorcycle repairman, now instantiated in a distributed computer network, taking on physical forms ranging from a drinks machine to a humanoid to an airship. As they explore, they encounter hominid species they call “trolls” and “elves”, which they theorise are natural steppers which evolved on the Datum and then migrated outward along the Long Earth without ever developing human-level intelligence (perhaps due to lack of selective pressure, since they could always escape competition by stepping away). But, as Joshua and Lobsang explore the Western frontier, they find a migration of trolls and elves toward the East. What are they fleeing, or what is attracting them in that direction? They also encounter human communities on the frontier, both homesteaders from the Datum and natural steppers who have established themselves on other worlds.

Spoiler warning: Plot and/or ending details follow.  
The concept of stepping to adjacent universes is one of those plot devices that, while opening up a huge scope for fiction, also, like the Star Trek transporter, threatens to torpedo drama. If you can escape peril simply by stepping away to another universe, how can characters be placed in difficult circumstances? In Star Trek, there always has to be some reason (“danged pesky polaron particles!”) why the transporter can't be used to beam the away team out of danger. Here, the authors appear to simply ignore the problem. In chapter 30, Joshua is attacked by elves riding giant hogs and barely escapes with his life. But, being a natural stepper, he could simply step away and wait for Lobsang to find him in an adjacent Earth. But he doesn't, and there is no explanation of why he didn't.
Spoilers end here.  

I enjoyed this book immensely, but that may be in part because I've been thinking about multiverse navigation for many years, albeit in a different context and without the potato. This is a somewhat strange superposition of fantasy and hard science fiction (which is what you'd expect, given the authors), and your estimation of it, like any measurement in quantum mechanics, will depend upon the criteria you're measuring. I note that the reviews on Amazon have a strikingly flat distribution in stars assigned—this is rare; usually a book will have a cluster at the top or bottom, or for controversial books a bimodal distribution depending upon the reader's own predisposition. I have no idea if you'll like this book, but I did. And I want a stepper.

 Permalink

Vinge, Vernor. Rainbows End. New York: Tor Books, 2006. ISBN 978-0-8125-3636-2.
As I have remarked upon several occasions, I read very little contemporary science fiction, apart from works by authors I trust to deliver thoughtful and entertaining yarns. This novel is an excellent example of why. Vernor Vinge is a former professor of mathematics, a pioneer in envisioning the advent and consequences of a technological singularity, and serial winner of the most prestigious awards for science fiction. This book won the 2007 Hugo award for best novel.

And therein lies my problem with much of present-day science fiction. The fans (the Hugo is awarded based on a vote of members of the World Science Fiction Society) loved it, but I consider it entirely devoid of merit. Now authors, or at least those who view their profession as a business, are well advised to write what the audience wants to read, and evidently this work met that criterion, but it didn't work for me—in fact, I found it tedious slogging to the end, hoping it would get better or that some brilliant plot twist would redeem all the ennui of getting there. Nope: didn't happen.

Interestingly, while this book won the Hugo, it wasn't even nominated for a Nebula, which is chosen by professional writers, not the fans. I guess the writers are closer to my stick-in-the-mud preferences than the more edgy fans.

This is a story set in a 21st century society on the threshold of a technological singularity. Robert Gu, a celebrated poet felled by Alzheimer's disease, has been cured by exponentially advancing medical technology, but now he finds himself in a world radically different from the one in which his cognition faded out. He has to reconcile himself with his extended and complicated family, many of whom he treated horridly, and confront the fact that while his recovery from dementia has been complete, he seems to have lost the talent of looking at the world from an oblique angle that made his poetry compelling. Further, in a world of ubiquitous computing, haptic interfaces, augmented reality, and forms of social interaction that seemingly come and go from moment to moment, he is but a baby among the plugged-in children with whom he shares a classroom as he attempts to come up to speed.

Then, a whole bunch of stuff happens which is completely absurd, involving a mischievous rabbit which may be an autonomous artificial intelligence, a library building that pulls up its columns and walks, shadowy intelligence agencies, a technology which might be the key to large-scale mind control, battles between people committed to world-views which might be likened to an apocalyptic yet trivial conflict between My Little Pony and SpongeBob, and a “Homeland Security” agency willing to use tactical nukes on its own homeland. (Well, I suppose, the last isn't so far-fetched….)

My citation of the title above is correct—I did not omit an apostrophe. The final chapter of the novel is titled “The Missing Apostrophe”. Think about it: you can read it either way.

Finally, it ends. And so, thankfully, does this review.

I have no problem with augmented reality and the emergence of artificial intelligence. Daniel Suarez's Daemon (August 2010) and Freedom™ (January 2011) limn a future far more engaging and immeasurably less silly than that of the present work. Nor does a zany view of the singularity put me off in the least: Charles Stross's Singularity Sky (February 2011) is such a masterpiece of the genre that I was reproached by some readers for having committed the sin of spoilers because I couldn't restrain myself from citing some of its many delights. This can be done well, but in my opinion it isn't here.

 Permalink

Feynman, Richard P., Fernando B. Morinigo, and William G. Wagner. Feynman Lectures on Gravitation. Edited by Brian Hatfield. Boulder, CO: Westview Press, 1995. ISBN 978-0-8133-4038-8.
In the 1962–63 academic year at Caltech, Richard Feynman taught a course on gravitation for graduate students and postdoctoral fellows. For many years the blackboard in Feynman's office contained the epigram, “What I cannot create, I do not understand.” In these lectures, Feynman discards the entire geometric edifice of Einstein's theory of gravitation (general relativity) and starts from scratch, putting himself and his students in the place of physicists from Venus (who he calls “Venutians”—Feynman was famously sloppy with spelling: he often spelled “gauge” as “guage”) who have discovered the full quantum theories of electromagnetism and the strong and weak nuclear forces but have just discovered there is a very weak attractive force between all masses, regardless of their composition. (Feynman doesn't say so, but putting on the science fiction hat one might suggest that the “Venutians” hadn't previously discovered universal gravitation because the dense clouds that shroud their planet deprived them of the ability to make astronomical observations and the lack of a moon prevented them from discovering tidal effects.)

Feynman then argues that the alien physicists would suspect that this new force worked in a manner analogous to those already known, and seek to extrapolate their knowledge of electrodynamics (the quantum theory of which Feynman had played a central part in discovering, for which he would share a Nobel prize in 1965). They would then guess that the force was mediated by particles they might dub “gravitons”. Since the force appeared to follow an inverse square law, these particles must be massless (or at least have such a small mass that deviations from the inverse square law eluded all existing experiments). Since the force was universally attractive, the spin of the graviton must be even (forces mediated by odd spin bosons such as the photon follow an attraction/repulsion rule as with static electricity; no evidence of antigravity has ever been found). Spin 0 can be ruled out because it would not couple to the spin 1 photon, which would mean gravity would not deflect light, which experiment demonstrates it does. So, we're left with a spin 2 graviton. (It might be spin 4, or 6, or higher, but there's no reason to proceed with such an assumption and the horrific complexities it entails unless we find something which rules out spin 2.)

A spin 2 graviton implies a field with a tensor potential function, and from the behaviour of gravitation we know that the tensor must be symmetric. All of this allows us, by direct analogy with electrodynamics, to write down the first draft of a field theory of gravitation which, when explored, predicts the existence of gravitational radiation, the gravitational red shift, the deflection of light by massive objects, and the precession of Mercury. Eventually Feynman demonstrates that this field theory is isomorphic to Einstein's geometrical theory, and could have been arrived at without ever invoking the concept of spacetime curvature.
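The analogy can be made concrete. In units with c = 1, and up to sign and unit conventions (a sketch of the standard linearised theory, not Feynman's own notation), the gauge-fixed field equations of electrodynamics and of linearised gravity have the same wave-operator form:

```latex
% Electrodynamics (Lorenz gauge)  <-->  linearised gravity (harmonic gauge):
\Box A^{\mu} = -4\pi\, j^{\mu}
\qquad\longleftrightarrow\qquad
\Box \bar{h}^{\mu\nu} = -16\pi G\, T^{\mu\nu},
\quad\text{where}\quad
\bar{h}^{\mu\nu} \equiv h^{\mu\nu} - \tfrac{1}{2}\,\eta^{\mu\nu} h .
```

The vector potential sourced by a conserved current becomes a symmetric tensor potential sourced by the conserved stress-energy tensor; solving the tensor equation in the weak-field, slow-motion limit recovers Newtonian gravity, with the relativistic effects listed above appearing as corrections.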

In this tour de force, we get to look over the shoulder of one of the most brilliant physicists of all time as he reinvents the theory of gravitation, at a time when his goal was to produce a consistent and finite quantum theory of gravitation. Feynman's intuition was that since gravity was a far weaker force than electromagnetism, it should be easier to find a quantum theory, since the higher order terms would diminish in magnitude much more rapidly. Although Feynman's physical intuition was legendary and is much on display in these lectures, in this case it led him astray: his quest for quantum gravity failed and he soon abandoned it, and fifty years later nobody has found a suitable theory (although we've discovered a great number of things which don't work). Feynman identifies one of the key problems here—since gravitation is a universally attractive force which couples to mass-energy, and a gravitational field itself has energy, gravity gravitates, and this means that the higher order terms stretch off to infinity and can't be eliminated by clever mathematics. While these effects are negligible in laboratory experiments or on the scale of the solar system (although the first-order effect can be teased out of lunar ranging experiments), in strong field situations they blow up and the theory produces nonsense results.

These lectures were given just as the renaissance of gravitational physics was about to dawn. Discovery of extragalactic radio sources with stupendous energy output had sparked speculation about relativistic “superstars”, discussed here in chapters 13 and 14, and would soon lead to observations of quasars, which would eventually be explained by that quintessential object of general relativity, the black hole. On the theoretical side, Feynman's thesis advisor John A. Wheeler was beginning to breathe life into the long-moribund field of general relativity, and would coin the phrase “black hole” in 1967.

This book is a period piece. Some of the terminology in use at the time has become obsolete: Feynman uses “wormhole” for a black hole and “Schwarzschild singularity” for what we now call its event horizon. The discussion of “superstars” is archaic now that we understand the energy source of active galactic nuclei to be accretion onto supermassive black holes. In other areas, Feynman's insights are simply breathtaking, especially when you consider they date from half a century ago. He explores Mach's principle as the origin of inertia, cosmology and the global geometry of the universe, and gravitomagnetism.

This is not the book to read if you're interested in learning the contemporary theory of gravitation. For the most commonly used geometric approach, an excellent place to start is Misner, Thorne, and Wheeler's Gravitation. A field theory approach closer to Feynman's is presented in Weinberg's Gravitation and Cosmology. These are both highly technical works, intended for postgraduates in physics. For a popular introduction, I'd recommend Wheeler's A Journey into Gravity and Spacetime, which is now out of print, but used copies are usually available. It's only if you understand the theory, ideally at a technical level, that you can really appreciate the brilliance of Feynman's work and how prescient his insights were for the future of the field. I first read this book in 1996 and re-reading it now, having a much deeper understanding of the geometrical formulation of general relativity, I was repeatedly awestruck watching Feynman leap from insight to insight of the kind many physicists might hope to have just once in their entire careers.

Feynman gave a total of 27 lectures in the seminar. Two of the postdocs who attended, Fernando B. Morinigo and William G. Wagner, took notes for the course, from which this book is derived. Feynman corrected the notes for the first 11 lectures, which were distributed in typescript by the Caltech bookstore but never otherwise published. In 1971 Feynman approved the distribution of lectures 12–16 by the bookstore, but by then he had lost interest in gravitation and did not correct the notes. This book contains the 16 lectures Feynman approved for distribution. The remaining 11 are mostly concerned with Feynman's groping for a theory of quantum gravity. Since he ultimately failed in this effort, it's plausible to conclude he didn't believe them worthy of circulation. John Preskill and Kip S. Thorne contribute a foreword which interprets Feynman's work from the perspective of the contemporary view of gravitation.

 Permalink

Beck, Glenn and Harriet Parke. Agenda 21. New York: Threshold Editions, 2012. ISBN 978-1-4767-1669-5.
In 1992, at the United Nations Conference on Environment and Development (“Earth Summit”) in Rio de Janeiro, an action plan for “sustainable development” titled “Agenda 21” was adopted. It has since been endorsed by the governments of 178 countries, including the United States, where it was signed by president George H. W. Bush (not being a formal treaty, it was not submitted to the Senate for ratification). An organisation called Local Governments for Sustainability currently has more than 1200 member towns, cities, and counties in 70 countries, more than 500 of them in the United States, signed on to the program. Whenever you hear a politician talking about environmental “sustainability” or the “precautionary principle”, it's a good bet the ideas they're promoting can be traced back to Agenda 21 or its progenitors.

When you read the U.N. Agenda 21 document (which I highly encourage you to do—it is very likely your own national government has endorsed it), it comes across as the usual gassy international bureaucratese you expect from a U.N. commission, but if you read between the lines and project the goals and mechanisms advocated to their logical conclusions, the implications are very great indeed. What is envisioned is nothing less than the extinction of the developed world and the roll-back of the entire project of the enlightenment. While the report speaks of the lofty goal of lifting the standard of living of developing nations to that of the developed world in a manner that does not damage the environment, its assumption of finite resources and an environment already stressed beyond the point of sustainability means the inevitable outcome of achieving “equity” will be a global levelling of the standard of living to one well below the present-day mean. That would necessitate a catastrophic decrease in the quality of life in developed nations, which would almost certainly eliminate their ability to invest in the research and technological development which have been the engine of human advancement since the Renaissance. The implications of this are so dire that somebody ought to write a dystopian novel about the ultimate consequences of heading down this road.

Somebody has. Glenn Beck and Harriet Parke (it's pretty clear from the acknowledgements that Parke is the principal author, while Beck contributed the afterword and lent his high-profile name to the project) have written a dark and claustrophobic view of what awaits at the end of The Road to Serfdom (May 2002). Here, as opposed to an incremental shift over decades, the United States experiences a cataclysmic socio-economic collapse which is exploited to supplant it with the Republic, ruled by the Central Authority, in which all Citizens are equal. The goals of Agenda 21 have been achieved by depopulating much of the land, letting it return to nature, packing the humans who survived the crises and conflict as the Republic consolidated its power into identical densely-packed Living Spaces, where they live their lives according to the will of the Authority and its Enforcers. Citizens are divided into castes by job category; reproductive age Citizens are “paired” by the Republic, and babies are taken from mothers at birth to be raised in Children's Villages, where they are indoctrinated to serve the Republic. Unsustainable energy sources are replaced by humans who have to do their quota of walking on “energy board” treadmills or riding “energy bicycles” everywhere, and public transportation consists of bus boxes, pulled by teams of six strong men.

Emmeline has grown up in this grim and grey world which, to her, is the way things are, have always been, and always will be. Just old enough at the establishment of the Republic to escape the Children's Village, she is among the final cohort of Citizens to have been raised by their parents, who told her very little of the before-time; speaking of that could imperil both parents and child. After she loses both parents (people vanishing, being “killed in industrial accidents”, or led away by Enforcers never to be seen again is common in the Republic), she discovers a legacy from her mother which provides a tenuous link to the before-time. Slowly and painfully she begins to piece together the history of the society in which she lives and what life was like before it descended to crush the human spirit. And then she must decide what to do about it.

I am sure many reviewers will dismiss this novel as a cartoon-like portrayal of ideas taken to an absurd extreme, but much the same could have been said of We, Anthem, or 1984. The thing about dystopian novels based upon trends already in place is that they have a disturbing tendency to get things right. As I observed in my review of Atlas Shrugged (April 2010), when I first read it in 1968, it seemed to evoke a dismal future entirely different from what I expected. When I read it the third time in 2010, my estimation was that real-world events had taken us about 500 pages into the 1168 page tome. I'd probably up that number today. What is particularly disturbing about the scenario in this novel, as opposed to the works cited above, is that it describes what may be a very strong attractor for human society once rejection of progress becomes the doctrine and the population stratifies into a small ruling class and subjects entirely dependent upon the state. After all, that's how things have more or less been over most of human history and around the globe, and the brief flash of liberty, innovation, and prosperity we assume to be the normal state of affairs may simply be an ephemeral consequence of the opening of a frontier which, now having closed, concludes that aberrant chapter of history, soon to be expunged and forgotten.

This is a book which begs for one or more sequels. While the story is satisfying by itself, you put it down wondering what happens next, and what is going on outside the confines of the human hive its characters inhabit. Who are the members of the Central Authority? How do they live? How do they groom their successors? What is happening on other continents? Is there any hope the torch of liberty might be reignited?

While doubtless many will take fierce exception to the entire premise of the story, I found only one factual error. In chapter 14 Emmeline discovers a photograph which provides a link to the before-time. On it is the word “KODACHROME”. But Kodachrome was a colour slide (reversal) film, not a colour print film. Even if the print that Emmeline found had been made from a Kodachrome slide, the print wouldn't say “KODACHROME”. I did not spot a single typographical error, and if you're a regular reader of this chronicle, you'll know how rare that is. In the Kindle edition, links to documents and resources cited in the factual afterword are live and will take you directly to the cited page.

 Permalink

December 2012

Chertok, Boris E. Rockets and People. Vol. 3. Washington: National Aeronautics and Space Administration, [1999] 2009. ISBN 978-1-4700-1437-7 NASA SP-2009-4110.
This is the third book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of this memoir chronicled the achievements which thrust the Soviet Union's missile and space program into the consciousness of people world-wide and sparked the space race with the United States: the development of the R-7 ICBM, Sputnik and its successors, and the first flights which photographed the far side of the Moon and impacted on its surface. In this volume, the author describes the projects and accomplishments which built upon this base and persuaded many observers of the supremacy of Soviet space technology. Since the author's speciality was control systems and radio technology, he had an almost unique perspective upon these events: unlike other designers who focussed upon one or a few projects, he was involved in almost all of the principal efforts: intermediate range, intercontinental, and submarine-launched ballistic missiles; air and anti-missile defence; piloted spaceflight; reconnaissance, weather, and navigation satellites; communication satellites; deep space missions and the ground support for them; soft landing on the Moon; and automatic rendezvous and docking. He was present when it looked like the rudimentary R-7 ICBM might be launched in anger during the Cuban missile crisis, at the table as chief designers battled over whether combat missiles should use cryogenic or storable liquid propellants or solid fuel, and sat on endless boards of inquiry after mission failures—the first eleven attempts to soft-land on the Moon failed, and Chertok was there for each launch, subsequent tracking, and sorting through what went wrong.

This was a time of triumph for the Soviet space program: the first manned flight, endurance record after endurance record, dual flights, the first woman in space, the first flight with a crew of more than one, and the first spacewalk. But from Chertok's perspective inside the programs, and the freedom he had to write candidly in the 1990s about his experiences, it is clear that the seeds of tragedy were being sown. With the quest for one spectacular after another, each surpassing the last, the Soviets became infected with what NASA came to call “go fever”—a willingness to brush anomalies under the rug and normalise the abnormal because you'd gotten away with it before.

One of the most stunning examples of this is Gagarin's flight. The Vostok spacecraft consisted of a spherical descent module (basically a cannonball covered with ablative thermal protection material) and an instrument compartment containing the retro-rocket, attitude control system, and antennas. After firing the retro-rocket, the instrument compartment was supposed to separate, allowing the descent module's heat shield to protect it through atmospheric re-entry. (The Vostok performed a purely ballistic re-entry, and had no attitude control thrusters in the descent module; stability was maintained exclusively by an offset centre of gravity.) In the two unmanned test flights which preceded Gagarin's mission, the instrument module had failed to cleanly separate from the descent module, but the connection burned through during re-entry and the descent module survived. Gagarin was launched in a spacecraft with the same design, and the same thing happened: there were wild oscillations, but after the link burned through his spacecraft stabilised. Astonishingly, Vostok 2 was launched with Gherman Titov on board, with precisely the same flaw, and suffered the same failure during re-entry. Once again, the cosmonaut won this orbital game of Russian roulette. One wonders what lessons were learned from this. In this narrative, Chertok is simply aghast at the decision making here, but one gets the sense that you had to be there, then, to appreciate what was going through people's heads.

The author was extensively involved in the development of the first Soviet communications satellite, Molniya, and provides extensive insights into its design, testing, and early operations. It is often said that the Molniya orbit was chosen because it made the satellite visible from the Soviet far North where geostationary satellites would be too close to the horizon for reliable communication. It is certainly true that today this orbit continues to be used for communications with Russian arctic territories, but its adoption for the first Soviet communications satellite had an entirely different motivation. Due to the high latitude of the Soviet launch site in Kazakhstan, Korolev's R-7 derived booster could place only about 100 kilograms into a geostationary orbit, which was far too little for a communication satellite with the technology of the time, but it could loft 1,600 kilograms into a high-inclination Molniya orbit. The only alternative would have been for Korolev to have approached Chelomey to launch a geostationary satellite on his UR-500 (Proton) booster, which was unthinkable because at the time the two were bitter rivals. So much for the frictionless efficiency of central planning!
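As a rough illustration of the orbit in question (a sketch using standard textbook values, not figures from Chertok's book), Kepler's third law gives the size of a Molniya orbit, which completes two revolutions per sidereal day so its ground track repeats; such orbits are flown at the J2 “critical inclination”, where perturbations from Earth's oblateness leave the argument of perigee stationary and the apogee stays parked over the northern hemisphere:

```python
import math

MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1        # seconds

def semi_major_axis(period_s):
    """Kepler's third law: a = (mu * T^2 / (4 * pi^2))^(1/3)."""
    return (MU_EARTH * period_s**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Molniya: two orbits per sidereal day (~11 h 58 m each)
a = semi_major_axis(SIDEREAL_DAY / 2)
print(f"semi-major axis ≈ {a/1000:.0f} km")   # roughly 26,600 km

# Critical inclination: apsidal drift from J2 vanishes when
# 5*cos(i)^2 = 1, i.e. i ≈ 63.4° (or its retrograde counterpart).
i_crit = math.degrees(math.acos(1.0 / math.sqrt(5.0)))
print(f"critical inclination ≈ {i_crit:.1f}°")
```

With a highly eccentric orbit of this size (perigee a few hundred kilometres up, apogee near 40,000 km), the satellite loiters near apogee, high over Soviet territory, for the better part of each revolution, which is what made the orbit attractive once geostationary orbit was ruled out.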

In engineering, one learns that every corner cut will eventually come back to cut you. Korolev died at just the time he was most needed by the Soviet space program due to a botched operation for a routine condition performed by a surgeon who had spent most of his time as the Soviet Union's Minister of Health and not in the operating room. Gagarin died in a jet fighter training accident which has been the subject of such an extensive and multi-layered cover-up and spin that the author simply cites various accounts and leaves it to the reader to judge. Komarov died in Soyuz 1 due to a parachute problem which would have been discovered had an unmanned flight preceded his. He was a victim of “go fever”.

There is so much insight and wisdom here I cannot possibly summarise it all; you'll have to read this book to fully appreciate it, ideally after having first read Volume 1 (May 2012) and Volume 2 (August 2012). Apart from the unique insider's perspective on the Soviet missile and space program, as a person elected a corresponding member of the Soviet Academy of Sciences in 1968 and a full member (academician) of the Russian Academy of Sciences in 2000, he provides a candid view of the politics of selection of members of the Academy and how they influence policy and projects at the national level. Chertok believes that, even as one who survived Stalin's purges, there were merits to the Soviet system which have been lost in the “new Russia”. His observations are worth pondering by those who instinctively believe the market will always converge upon the optimal solution.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They are not only better formatted but free.

The original Russian edition is available online.

 Permalink

Baxter, Stephen. Titan. New York: Harper Voyager, 1997. ISBN 978-0-06-105713-7.
This novel begins in the latter half of the first decade of the 21st century. Space shuttle Columbia has been lost in a re-entry accident, and a demoralised NASA has decided to wind down the shuttle program, with whatever is to follow, if anything, ill-defined and subject to the whims of politicians. The Huygens probe has landed on Saturn's moon Titan and returned intriguing and enigmatic results which are indicative of a complex chemistry similar, in a way, to the “primordial soup” from which life formed on the ancient Earth. As China approaches economic superpower status, it begins to flex its muscles with a military build-up, an increasingly aggressive posture toward its neighbours in the region, and a human spaceflight program which, while cautious and measured, seems bent on achieving very ambitious goals. In the United States, as the 2008 presidential election approaches, the odds-on favourite to prevail is a “thin, jug-eared man of about fifty” (p. 147) with little or no interest in science and technology and an agenda of fundamental transformation of the nation. The younger generation has completely tuned out science, technology, and the space program, and some even advocate a return to the hunter-gatherer lifestyle (p. 450).

Did I mention that this book was published in 1997?

Astronaut Paula Benacerraf has been promoted and given the mission to shut down the space shuttle program in an orderly fashion, disposing of its assets responsibly. Isaac Rosenberg, a JPL scientist working on the Huygens probe results, pitches a mission which will allow the NASA human spaceflight and solar system exploration programs to go out in a heroic effort rather than be ignominiously consigned to museums as relics of a lost age of greatness. Rosenberg (as he prefers to be addressed) argues that a space shuttle should be sent on its final mission to the only place in the solar system where its stubby wings make any sense: Titan. Titan's atmosphere is about 50% more dense than that of the Earth, so it is plausible that a space shuttle orbiter could make an aerodynamic entry there. (The profile would be very different, however, since Titan's low gravity [just 0.14 g] would mean that entry velocity would be lower and the scale height of the atmosphere much greater than at Earth.)
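Out of curiosity, the claim in that last parenthetical can be sanity-checked with a few lines of Python. The figures below (Titan's temperature, composition, mass, and radius, and comparable values for Earth) are rough textbook numbers I've assumed for illustration, not anything from the novel:

```python
# Back-of-the-envelope comparison of atmospheric entry conditions at Earth
# and Titan. All constants below are my own approximate values.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K

def scale_height(T, mean_molecular_mass, g):
    """Isothermal atmospheric scale height H = kT / (m g), in metres."""
    return k_B * T / (mean_molecular_mass * g)

def orbital_velocity(M, r):
    """Circular orbital velocity sqrt(GM/r), a proxy for entry speed, m/s."""
    return math.sqrt(G * M / r)

# Earth: T ~ 250 K in the entry region, N2/O2 mix (~4.8e-26 kg), g = 9.81
H_earth = scale_height(250, 4.8e-26, 9.81)
v_earth = orbital_velocity(5.972e24, 6.571e6)   # orbit at ~200 km altitude

# Titan: T ~ 94 K, mostly N2 (~4.65e-26 kg), g ~ 1.35 m/s^2 (about 0.14 g)
H_titan = scale_height(94, 4.65e-26, 1.35)
v_titan = orbital_velocity(1.345e23, 2.575e6)   # at Titan's surface radius

print(f"Scale height: Earth ~{H_earth/1000:.0f} km, Titan ~{H_titan/1000:.0f} km")
print(f"Entry speed:  Earth ~{v_earth/1000:.1f} km/s, Titan ~{v_titan/1000:.1f} km/s")
```

With these assumptions, Titan's scale height comes out roughly three times Earth's (about 21 km versus 7 km), and the low-orbit entry speed under 2 km/s versus nearly 8 km/s at Earth: a much gentler, more drawn-out entry, just as the novel's premise requires.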

Benacerraf recruits a cabal within NASA and begins to put together a mission plan, using existing hardware, components under development for future missions, prototypes from laboratories, and legacy gear liberated from museums and static displays, to see if such an absurdly ambitious mission might be possible. They conclude that, while extraordinarily risky, nothing rules it out. With the alternative a humiliating abandonment of human spaceflight, and a crew willing to risk their lives on a mission which may prove one-way (their only hope of survival on Titan being resupply missions, and of return to Earth a crew rotation mission, none of which would be funded at the time of their departure), the NASA administrator is persuaded to go for it.

This novel begins as a chronicle of an heroic attempt to expand the human presence in the solar system, at a time when the door seems to be closing on the resources, will, and optimistic view of the future such efforts require. But then, as the story plays out, it becomes larger and larger, finally concluding in a breathtaking vista of the destiny of life in the galaxy, while remaining, at the same time, a chronicle of just how gnarly the reality of getting there is likely to be. I don't think I've ever read science fiction which so effectively communicated that the life of pioneers who go to other worlds to stay has a lot more in common with Ernest Shackleton than with Neil Armstrong.

If you're a regular reader of these remarks, you'll know I enjoy indulging in nitpicking details in near-future hard science fiction. I'm not going to do that here, not because there aren't some things the author got wrong, but because the story is so enthralling and the characters so compelling that I couldn't care less about the occasional goof. Of course NASA would never send a space shuttle to Titan. Certainly if you worked out the delta-V, consumables requirements, long-term storability of propellants, reliability of systems over such an extended mission, and many other details you'd find it couldn't possibly work. But if these natters made you put the book down, you'd deprive yourself of a masterpiece which is simultaneously depressing in its depiction of human folly and inspiring in the heroism of individual people and the human prospect. This is a thick book: 688 pages in the print edition, and I just devoured it, unable to put it down because I couldn't wait to find out what happens next.

The Kindle edition appears to have been created by scanning a print edition with an optical character recognition program. There are dozens (I noted 49) of the kind of typographical errors one expects from such a process, a few of which I'd expect to have been caught by a spelling checker. I applaud publishers who are bringing out their back-lists in electronic editions, but for a Kindle edition which costs just one U.S. dollar less than the mass market paperback, I believe the reader should be entitled to copy editing comparable to that of a print edition.

 Permalink

McCahill, Tom. Tom McCahill's Car Owner Handbook. Greenwich, CT: Fawcett, 1956.
The 1950s in the United States were immersed in the car culture, and cars meant domestic Detroit iron, not those funny little bugs from Europe that eccentric people drove. American cars of the fifties may have lacked refinement, and they appear somewhat grotesque to modern eyes, but they were affordable, capacious, fast, and rugged. Just about anybody with a rudimentary knowledge of mechanics could work on them, and their simple design invited customisation and performance tuning. Tom McCahill was the most prominent automotive journalist of this epoch. His monthly column and reviews of cars in Mechanix Illustrated could make or break a model's prospects in the market. He was known for his colourful language: a car didn't just go fast, but “took off like a Killarney bat”, and cornered “like a bowling ball in a sewer pipe”. McCahill was one of the first voices to speak out about the poor build quality of domestic automobiles and their mushy suspension and handling compared to European imports, and he was one of the few automotive writers at the time to regularly review imports.

In this book, McCahill shares his wisdom on many aspects of car ownership: buying a new or used car; tune-up tips; choosing tires, lubricants, and fuel; dealing with break-downs on the road; long-distance trips; performance tweaks and more. You'll also encounter long-forgotten parts of the mid-century car culture such as the whole family making a trip to Detroit to pick up their new car at the factory and breaking it in on the way home. Somewhat surprisingly for a publication from the era of big V-8 engines and twenty-five cent gas, there's even a chapter on improving mileage. The book concludes with “When to Phone the Junkman”.

Although cars have been transformed from the straightforward designs of the 1950s into machines of inscrutable complexity, often mandated by bureaucrats who ride the bus or subway to work, there is a tremendous amount of wisdom here about automobiles and driving, some of it very much ahead of its time.

This “Fawcett How-To Book” is basically an issue of Mechanix Illustrated consisting entirely of McCahill's work, and even includes the usual advertisements. This work is, of course, hopelessly out of print. Used copies are available, but often at absurdly elevated prices for what amounts to a pulp magazine which sold for 75 cents new. You may have more luck finding a copy on eBay than through Amazon used book sellers. As best I can determine, this publication was never assigned a Library of Congress control number, although others in the series were.

 Permalink

Greenberg, Stanley. Time Machines. Munich: Hirmer Verlag, 2011. ISBN 978-3-7774-4041-5.
Should our civilisation collapse due to folly, shortsightedness, and greed, and an extended dark age ensue, in which not only our painfully-acquired knowledge is lost, but even the memory of what we once knew and accomplished forgotten, certainly among the most impressive of the achievements of our lost age when discovered by those who rise from the ruins to try again will be the massive yet delicate apparatus of our great physics experiments. Many, buried deep in the Earth, will survive the chaos of the dark age and beckon to pioneers of the next age of discovery just as the tombs of Egypt did to those in our epoch. Certainly, when the explorers of that distant time first illuminate the great detector halls of our experiments, they will answer, as Howard Carter did when asked by Lord Carnarvon, “Can you see anything?”, “Yes, wonderful things.”

This book is a collection of photographs of these wonderful things, made by a master photographer and printed in a large-format (26×28 cm) coffee-table book. We visit particle accelerators in Japan, the United States, Canada, Switzerland, Italy, and Germany; gravitational wave detectors in the U.S. and Italy; neutrino detectors in Canada, Japan, the U.S., Italy, and the South Pole; and the 3000 km² cosmic ray observatory in Argentina.

This book is mostly about the photographs, not the physics or engineering: the photographs are masterpieces. All are reproduced in monochrome, which emphasises the beautiful symmetries of these machines without the distractions of candy-coloured cable bundles. There is an introduction by particle physicist David C. Cassidy which briefly sketches the motivation for building these cathedrals of science and end notes which provide additional details of the hardware in each photograph, but you don't pay the substantial price of the book for these. The photographs are obviously large format originals (nobody could achieve this kind of control of focus and tonal range with a convenient-to-use camera) and they are printed exquisitely. The screen is so fine I have difficulty evaluating it even with a high-power magnifier, but it looks to me like the book was printed using not just a simple halftone screen but ink in multiple shades of grey.

The result is just gorgeous. Resist the temptation to casually flip from image to image—immerse yourself in each of them and work out the perspective. One challenge is that it's often difficult to determine the scale of what you're looking at from a cursory glance at the picture. You have to search for something with which you're familiar until it all snaps into scale; this is sometimes difficult and I found the disorientation delightful and ultimately enlightening.

You will learn nothing about physics from this book. You will learn nothing about photography apart from a goal to which to aspire as you master the art. But you will see some of the most amazing creations of the human mind, built in search of the foundations of our understanding of the universe we inhabit, photographed by a master and reproduced superbly, inviting you to linger on every image and wish you could see these wonders with your own eyes.

 Permalink

  2013  

January 2013

Carroll, Sean. The Particle at the End of the Universe. New York: Dutton, 2012. ISBN 978-0-525-95359-3.
I believe human civilisation is presently in a little-perceived race between sinking into an entropic collapse, extinguishing liberty and individual initiative, and a technological singularity which will simply transcend all of the problems we presently find so daunting and intractable. If things end badly, our descendants may look upon our age as one of extravagance, where vast resources were expended in a quest for pure knowledge without any likelihood of practical applications.

Thus, the last decade has seen the construction of what is arguably the largest and most complicated machine ever built by our species, the Large Hadron Collider (LHC), to search for and determine the properties of elementary particles: the most fundamental constituents of the universe we inhabit. This book, accessible to the intelligent layman, recounts the history of the quest for the components from which everything in the universe is made, the ever more complex and expensive machines we've constructed to explore them, and the intricate interplay between theory and experiment which this enterprise has entailed.

At centre stage in this narrative is the Higgs particle, first proposed in 1964 as accounting for the broken symmetry in the electroweak sector (as we'd now say), which gives mass to the W and Z bosons, accounting for the short range of the weak interaction and the mass of the electron. (It is often sloppily said that the Higgs mechanism explains the origin of mass. In fact, as Frank Wilczek explains in The Lightness of Being [March 2009], around 95% of all hadronic mass in the universe is pure E=mc² wiggling of quarks and gluons within particles in the nucleus.) Still, the Higgs is important: if it didn't exist, the particles we're made of would all be massless, travel at the speed of light, and never aggregate into stars, planets, physicists, or most importantly, computer programmers. On the other hand, there wouldn't be any politicians.

The LHC accelerates protons (the nuclei of hydrogen, which delightfully come from a little cylinder of hydrogen gas shown on p. 310, which contains enough to supply the LHC with protons for about a billion years) to energies so great that these particles, when they collide, have about the same energy as a flying mosquito. You might wonder why the LHC collides protons with protons rather than with antiprotons as the Tevatron did. While colliding protons with antiprotons allows more of the collision energy to go into creating new particles, the LHC's strategy of very high luminosity (rate of collisions) would require creation of far more antiprotons than its support facilities could produce, hence the choice of proton-proton collisions. While the energy of individual particles accelerated by the LHC is modest from our macroscopic perspective, the total energy of the beam circulating around the accelerator is intimidating: a full beam dump would suffice to melt a ton of copper. Be sure to step aside should this happen.
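Both of those energy comparisons are easy to sanity-check. In the sketch below, the LHC parameters (7 TeV per proton, 2808 bunches of about 1.15×10¹¹ protons per beam) and the mosquito (2.5 mg at 1 m/s) are round numbers I've assumed for illustration, not figures from the book:

```python
# Back-of-the-envelope check of two energy claims about the LHC.
# All parameters below are my own approximate assumptions.
proton_energy_eV = 7e12            # design energy per proton, 7 TeV
eV = 1.602e-19                     # joules per electron-volt
protons_per_beam = 2808 * 1.15e11  # 2808 bunches of ~1.15e11 protons each

# One proton carries roughly the kinetic energy of a flying mosquito:
E_proton = proton_energy_eV * eV              # about a microjoule
E_mosquito = 0.5 * 2.5e-6 * 1.0**2            # 2.5 mg moving at 1 m/s

# Total kinetic energy stored in one circulating beam:
E_beam = E_proton * protons_per_beam          # hundreds of megajoules

# Energy to heat a tonne of copper from 20 C to its melting point and melt it:
m = 1000.0                  # kg
c_p = 385.0                 # J/(kg K), specific heat of copper
L_f = 2.05e5                # J/kg, latent heat of fusion
E_melt = m * (c_p * (1085 - 20) + L_f)

print(f"Proton: {E_proton:.2e} J  vs mosquito: {E_mosquito:.2e} J")
print(f"One beam: {E_beam/1e6:.0f} MJ  vs melting 1 t of copper: {E_melt/1e6:.0f} MJ")
```

On these assumptions each proton carries about a microjoule, the same order as the mosquito, and one beam stores roughly 360 MJ; the two counter-rotating beams together comfortably exceed the roughly 600 MJ needed to melt a tonne of copper.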

Has the LHC found the Higgs? Probably—the announcement on July 4th, 2012 by the two detector teams reported evidence for a particle with properties just as expected for the Higgs, so if it turned out to be something else, it would be a big surprise (but then Nature never signed a contract with scientists not to perplex them with misdirection). Unlike many popular accounts, this book looks beneath the hood and explores just how difficult it is to tease evidence for a new particle from the vast spray of debris that issues from particle collisions. It isn't like a little ball with an “h” pops out and goes “bing” in the detector: in fact, a newly produced Higgs particle decays in about 10⁻²² seconds, even faster than assets entrusted to the management of Goldman Sachs. The debris which emerges from the demise of a Higgs particle isn't all that different from that produced by many other standard model events, so the evidence for the Higgs is essentially a “bump” in the rate of production of certain decay signatures over that expected from the standard model background (sources expected to occur in the absence of the Higgs). These, in turn, require a tremendous amount of theoretical and experimental input, as well as massive computer calculations to evaluate; once you begin to understand this, you'll appreciate that the distinction between theory and experiment in particle physics is more fluid than you might have imagined.

This book is a superb example of popular science writing, and its author has distinguished himself as a master of the genre. He doesn't pull any punches: after reading this book you'll understand, at least at a conceptual level, broken symmetries, scalar fields, particles as excitations of fields, and the essence of quantum mechanics (as given by Aatish Bhatia on Twitter), “Don't look: waves. Look: particles.”

 Permalink

Byrd, Richard E. Alone. Washington: Island Press [1938, 1966] 2003. ISBN 978-1-55963-463-2.
To generations of Americans, Richard Byrd was the quintessential explorer of unknown terrain. First to fly over the North Pole (although this feat has been disputed from shortly after he claimed it to the present day), recipient of the Medal of Honor for this claimed exploit, pioneer in trans-Atlantic flight (although beaten by Lindbergh after a crash on a practice takeoff, he successfully flew from New York to France in June 1927), Antarctic explorer and first to fly over the South Pole, and leader of four more expeditions to the Antarctic, including commanding the operation which established the permanent base at the South Pole which remains there to this day.

In 1934, on his second Antarctic expedition, Byrd set up and manned a meteorological station on the Ross Ice Shelf south of 80°, in which he would pass the Antarctic winter—alone. He originally intended the station to be emplaced much further south and manned by three people (he goes into extensive detail why “cabin fever” makes a two man crew a prescription for disaster), and then, almost on a lark it seems from the narrative, decides, when forced by constraints of weather and the delivery of supplies for the winter, to go it alone. In anticipation, he welcomes the isolation from the distractions of daily events and the opportunity to catch up on reading, thinking, and listening to music.

His hut was well designed and buried in the ice to render it immune from the high winds and drifting snow of the Antarctic winter. It was well provisioned to survive the winter: food and fuel tunnels cached abundant supplies. Less well thought out were the stove and its ventilation. As winter set in, Byrd succumbed to carbon monoxide poisoning, made more severe by fumes from the gasoline generator he used to power the radio set which was his only link to those wintering at the Little America base on the coast.

Byrd comes across in this narrative as an extraordinarily complex character. One moment he's describing how his lamp failed when, at −52° C, its kerosene froze; the next he's recounting how easily the smallest mistake (losing sight of the flags leading back to shelter, or a jammed hatch back into the hut) can condemn one to despair and death by creeping cold; and then he goes all philosophical:

The dark side of a man's mind seems to be a sort of antenna tuned to catch gloomy thoughts from all directions. I found it so with mine. That was an evil night. It was as if all the world's vindictiveness were concentrated upon me as upon a personal enemy. I sank to depths of disillusionment which I had not believed possible. It would be tedious to discuss them. Misery, after all, is the tritest of emotions.

Here we have a U.S. Navy Rear Admiral, Medal of Honor winner, as gonzo journalist in the Antarctic winter—extraordinary. Have any other great explorers written so directly from the deepest recesses of their souls?

Byrd's complexity deepens further as he confesses to fabricating reports of his well-being in radio reports to Little America, intended, he says, to prevent them from launching a rescue mission which he feared would end in failure and the deaths of those who undertook it. And yet Byrd's increasingly bizarre communications eventually caused such a mission to be launched, and once it was, his diary pinned his entire hope upon its success.

If you've ever imagined yourself first somewhere, totally alone and living off the supplies you've brought with you: in orbit, on the Moon, on Mars, or beyond, here is a narrative of what it's really like to do that, told with brutal honesty by somebody who did. Admiral Byrd's recounting of his experience is humbling to any who aspire to the noble cause of exploration.

 Permalink

White, James. All Judgment Fled. New York: Ballantine, 1969. ISBN 978-0-345-02016-1. LCCN 70086388.
James White was a science fiction author, fan, and fanzine editor in Northern Ireland. Although he published 19 novels and numerous short stories, he never quit his day job to become a professional writer: apart from a few superstar authors, science fiction just didn't pay that much in the 1950s and '60s. White was originally attracted to science fiction by the work of “Doc” Smith and Robert Heinlein, and his fiction continues very much in the Golden Age tradition of hard science fiction they helped establish.

In the 1960s, one of the criticisms of science fiction by “new wave” authors was that it had become too obsessed with hardware and conflict, and did not explore the psyche of its characters or the cultures they inhabited. In this book, the author tells a story in the mainstream of the hard science fiction genre, but puts the psychology of the characters on centre stage. The story starts with a little smudge of light on an astronomer's time exposure; follow-up observations determine that the object is maneuvering and hence cannot be an asteroid. It settles into an orbit 12 million miles outside that of Mars. Spectral analysis reveals it to be highly reflective, probably metal. A Jupiter probe is diverted to fly by the object, and returns grainy images of a torpedo-shaped structure about half a mile in length. Around the world, it is immediately dubbed the Ship.

After entering solar orbit, the Ship does nothing: it neither maneuvers nor emits signals detectable by sensors of any kind. It remains a complete enigma, but one of epochal importance to a humanity just taking its first steps into its own solar system: a civilisation capable of interstellar travel was obviously so far beyond the technological capability of mankind that contact with it could change everything in human history, and were that contact to end badly, ring down the curtain on its existence.

Two ships, built to establish a base and observatory on the Martian moon Deimos, are re-purposed to examine the Ship at close range and, should the opportunity present itself, make contact with its inhabitants. The crew of six, divided between the two ships, are a mix of square-jawed military astronaut types and woolier scientists, including a lone psychologist who finds himself having to master the complexity of dynamics among the crew, their relations with distant Prometheus Control on Earth which seems increasingly disconnected in its estimation of the situation they are experiencing first hand and delusional in their orders for dealing with it, and the ultimate challenge of comprehending the psychology of spacefaring extraterrestrials in order to communicate with them.

Upon arrival at the Ship, the mystery only deepens. Not only is there no reaction to their close-range approach; when an exploration party boards the Ship, they find technology which looks comparable to that of humans, no evidence of an intelligent life form directing the ship, and multitudes of aliens, as seemingly mindless as sharks, bent on killing them. Puzzling out this enigma requires the crew to explore the Ship, deal with Prometheus Control as an adversary, manage the public relations impact of their actions on a global audience on Earth who are watching their every move, and deal with the hazards of a totally alien technology.

This is a thoroughly satisfying story of first contact (although as the pages count down toward the end, you'll find yourself wondering if, and when, that will actually happen). It is not great science fiction up to the standard of Doc Smith or Heinlein, but it is very good. The “Personnel Launcher” is one of the more remarkable concepts of transferring crew between ships en route I've encountered. Readers at this remove may find the author's taking psychology and psychotherapy so seriously rather quaint. But recall that through much of the 1960s, even the theories of the charlatan Freud were widely accepted by people who should have known better, and the racket of psychoanalysis was prospering. Today we'd just give 'em a pill. Are we wiser, or were they?

This work is out of print, but used copies are generally available. The book was reprinted in 1979 by Del Rey and again in 1996 by Old Earth Books. If you're looking for a copy to read (as opposed to a collectible), it's best to search by author and title and choose the best deal based on price and condition. The novel was originally serialised in If Magazine in 1967.

Update: New reprint copies of the original UK hardcover edition remain available directly from Old Earth Books. (2013-01-25 20:16 UTC)

 Permalink

Manchester, William and Paul Reid. The Last Lion. Vol. 3. New York: Little, Brown, 2012. ISBN 978-0-316-54770-3.
William Manchester's monumental three-volume biography of Winston Churchill, The Last Lion, began with the 1984 publication of the first volume, Visions of Glory, 1874–1932, and continued with the second in 1989, Alone, 1932–1940. I devoured these books when they came out, and eagerly awaited the concluding volume which would cover Churchill's World War II years and subsequent career and life. This was to be a wait of more than two decades. By 1988, William Manchester had concluded his research for the present volume, subtitled Defender of the Realm, 1940–1965, and began to write a draft of the work. Failing health caused him to set the project aside after about a hundred pages covering events up to the start of the Battle of Britain. In 2003, Manchester, no longer able to write, invited Paul Reid to audition to complete the work by writing a chapter on the London Blitz. The result being satisfactory to Manchester, his agent, and the publisher, Reid began work in earnest on the final volume, with the intent that Manchester would edit the manuscript as it was produced. Alas, Manchester died in 2004, and Reid was forced to interpret Manchester's research notes, intended for his own use and not to guide another author, without the assistance of the person who compiled them. This required much additional research and the collection of original source documents which Manchester had examined. As a result, this book took almost another decade of Reid's work before publication. It has been a protracted wait, especially for those who admired the first two volumes, but ultimately worth it. This is a thoroughly satisfying conclusion to what will likely remain the definitive biography of Churchill for the foreseeable future.

When Winston Churchill became prime minister in the dark days of May 1940, he was already sixty-five years old, retirement age for most of his generation, and faced a Nazi Germany which was consolidating its hold on Western Europe with only Britain to oppose its hegemony. Had Churchill retired from public life in 1940, he would still be remembered as one of the most consequential British public figures of the twentieth century; what he did in the years to come elevated him to the stature of one of the preeminent statesmen of modern times. These events are chronicled in this book, dominated by World War II, which occupies three quarters of the text. In fact, although the focus is on Churchill, the book serves also as a reasonably comprehensive history of the war in the theatres in which British forces were engaged, and of the complex relations among the Allies.

It is often forgotten at this remove that at the time Churchill came to power he was viewed by many, including those of his own party and military commanders, as a dangerous and erratic figure given to enthusiasm for harebrained schemes and with a propensity for disaster (for example, his resignation in disgrace after the Gallipoli catastrophe in World War I). Although admired for his steadfastness and ability to rally the nation to the daunting tasks before it, Churchill's erratic nature continued to exasperate his subordinates, as is extensively documented here from their own contemporary diaries.

Churchill's complex relationships with the other leaders of the Grand Alliance, Roosevelt and Stalin, are explored in depth. Although Churchill had great admiration for Roosevelt and desperately needed the assistance the U.S. could provide to prosecute the war, Roosevelt comes across as a lightweight, ill-informed and not particularly engaged in military affairs and blind to the geopolitical consequences of the Red Army's occupying eastern and central Europe at war's end. (This was not just Churchill's view, but widely shared among senior British political and military circles.) While despising Bolshevism, Churchill developed a grudging respect for Stalin, considering his grasp of strategy to be excellent and, while infuriating to deal with, reliable in keeping his commitments to the other allies.

As the war drew to a close, Churchill was one of the first to warn of the great tragedy about to befall those countries behind what he dubbed the “iron curtain” and the peril Soviet power posed to the West. By July 1950, the Soviets fielded 175 divisions, of which 25 were armoured, against a Western force of 12 divisions (2 armoured). Given the correlation of forces, only Soviet postwar exhaustion and unwillingness to roll the dice given the threat of U.S. nuclear retaliation kept the Red Army from marching west to the Atlantic.

After the war, in opposition once again as the disastrous Attlee Labour government set Britain on an irreversible trajectory of decline, he thundered against the dying of the light and the retreat from Empire, no longer, as in the 1930s, as a back-bencher, but as leader of the opposition. In 1951 he led the Tories to victory and became prime minister once again, for the first time with the mandate of winning a general election as party leader. He remained prime minister until 1955, when he resigned in favour of Anthony Eden. His second tenure as P.M. was frustrating, with little he could do to reverse Britain's economic decline and shrinkage on the world stage. In 1953 he suffered a serious stroke, which was concealed from all but his inner circle. While he largely recovered, as he approached his eightieth birthday he acknowledged the inevitable and gave up the party leadership and the premiership.

Churchill remained a member of Parliament for Woodford until 1964. In January 1965 he suffered another severe stroke and died at age 90 on the 24th of that month.

It's been a long time coming, but this book is a grand conclusion of the work Manchester envisioned. It is a sprawling account of a great sprawling life engaged with great historical events over most of a century: from the last cavalry charge of the British Army to the hydrogen bomb. Churchill was an extraordinarily complicated and in many ways conflicted person, and this grand canvas provides the scope to explore his character and its origins in depth. Manchester and Reid have created a masterpiece. It is daunting to contemplate a three volume work totalling three thousand pages, but if you are interested in the subject, it is a uniquely rewarding read.

 Permalink

February 2013

Spinrad, Norman. Bug Jack Barron. Golden, CO: ReAnimus Press, [1969] 2011. ISBN 978-1-58567-585-2.
In his Berkeley Baby Bolshevik days Jack Barron dreamt of power—power to change the world. Years later, he has power, but of a very different kind. As host of the weekly television show “Bug Jack Barron”, he sits in the catbird seat, taking carefully screened calls from those abused by impersonal organisations and putting those in charge in the hot seat, live via vidphone, with no tape delay. One hundred million people tune in to the show, so whatever bugs the caller, bugs Jack Barron, and immediately bugs America.

Jack's Berkeley crowd, veterans of the civil rights battles, mostly consider him a sell-out, although they have sold out in their own ways to the realities of power and politics. But when Jack crosses swords with Benedict Howards, he is faced with an adversary of an entirely different order of magnitude than any he has previously encountered. Howards is president of the Foundation for Human Immortality, which operates centres which freeze the bodies of departed clients and funds research into the technologies which will allow them to be revived and achieve immortality. Only the well-heeled need apply: a freezer contract requires one to deposit US$500,000 (this is in 1969 gold dollars; in 2012 ObamaBucks, the equivalent is in excess of three million). With around a million people already frozen, Howards sits on half a trillion dollars (three trillion today), and although this money is nominally held in trust to be refunded to the frozen after their revival, Howards is in fact free to use the proceeds of investing it as he wishes. You can buy almost anything with that kind of money, politicians most definitely included.
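For those who want to check the conversion from 1969 dollars to 2012 ObamaBucks, here is a minimal sketch using approximate annual US consumer price index values (the CPI figures below are my assumptions, not data from the novel):

```python
# Rough check of the 1969-to-2012 dollar conversion via the US CPI.
# The index values are approximate annual averages assumed for illustration.
cpi_1969 = 36.7
cpi_2012 = 229.6
factor = cpi_2012 / cpi_1969          # roughly 6.3x price inflation

freezer_contract_1969 = 500_000       # one freezer contract
fund_total_1969 = 0.5e12              # half a trillion dollars on deposit

print(f"$500,000 in 1969 ~ ${freezer_contract_1969 * factor:,.0f} in 2012")
print(f"$0.5 trillion in 1969 ~ ${fund_total_1969 * factor / 1e12:.1f} trillion in 2012")
```

With these index values the freezer contract comes to a bit over three million 2012 dollars, and the Foundation's half-trillion to roughly three trillion, matching the figures quoted above.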

Howards is pushing to have his foundation declared a regulated monopoly, forcing competitors out of the market and placing its governance under a committee appointed by the president of the United States. Barron takes on Howards with a call from a person claiming he was denied a freezer contract due to his race, and sets up a confrontation with Howards in which Barron has to decide whether his own integrity has a price and, if so, what it is. As he digs into Howards' foundation, he stumbles upon details which hint of secrets so shocking they might overturn the political landscape in the U.S. But that may only be the tip of the iceberg.

This is one of the iconic novels of “new wave” science fiction from the late 1960s. It is written in what was then called an “experimental”, stream of consciousness style, with paragraphs like:

The undulating blue-green light writhing behind her like a forest of tentacles the roar of the surf like the sigh of some great beached and expiring sea animal, seemed to press her against the glass reality-interface like a bubble being forced up by decay-gas pressure from the depths of an oily green swamp pool. She felt the weight, the pressure of the whole room pushing behind her as if the blind green monsters that lurked in the most unknowable pits in the ass-end of her mind were bubbling up from the depths and elbowing her consciousness out of her own skull.

Back in the day, we'd read something like this and say, “Oh, wow”. Today, many readers may deem such prose stylings as quaint as those who say “Oh, wow”.

This novel is a period piece. Reading it puts you back into the mindset of the late 1960s, when few imagined that technologies already in nascent form would destroy the power of one-to-many media oligopolies, and it was wrong in almost all of its extrapolation of the future. If you read it then (as I did) and thought it was a masterpiece (as I did), it may be worth a second glance to see how far we've come.

 Permalink

Wright, Lawrence. Going Clear. New York: Alfred A. Knopf, 2013. ISBN 978-0-307-70066-7.
In 2007 the author won a Pulitzer Prize for The Looming Tower, an exploration of the origins, structure, and activities of Al-Qaeda. In the present book, he dares to take on a really dangerous organisation: the Church of Scientology. Wright delves into the tangled history of its founder, L. Ron Hubbard, and the origins of the church, which, despite having occurred within the lifetimes of many readers of the book, seem cloaked in as much fog, misdirection, and conflicting claims as those of religions millennia older. One thing which is beyond dispute to anybody willing to examine the objective record is that Hubbard was a masterful confidence man—perhaps approaching the magnitude of those who founded other religions. This was apparent well before he invented Dianetics and Scientology: he moved into Jack Parsons' house in Pasadena, California, and before long took off with Parsons' girlfriend and most of his savings on a scheme to buy yachts in Florida and sell them in California. Hubbard's military career in World War II is also murky in the extreme: military records document that he was never in combat, but he spun a legend about chasing Japanese submarines off the coast of Oregon, being injured, and healing himself through mental powers.

One thing which nobody disputes is that Hubbard was a tremendously talented and productive writer of science fiction. He was a friend of Robert A. Heinlein and a regular correspondent with John W. Campbell. You get the sense in this book that Hubbard didn't really draw a hard and fast line between the fanciful stories he wrote for a living and the actual life he lived—his own biography and persona seem to have been as much a fabrication as the tales he sold to the pulp magazines.

On several occasions Hubbard remarked that the way to make a big pile of money was to start a religion. (It is often said that he made a bar bet with Heinlein that he could start a religion, but the author's research concludes this story is apocryphal. However, Wright identifies nine witnesses who report hearing Hubbard making such a remark in 1948 or 1949.) After his best-selling book Dianetics landed him in trouble with the scientific and mental health establishment, he decided to take his own advice and re-instantiate it as a religion. In 1954, Scientology was born.

Almost immediately, events took a turn into high weirdness. While the new religion attracted adherents, especially among wealthy celebrities in Hollywood, it also was the object of ridicule and what Scientologists viewed as persecution. Hubbard and his entourage took to the sea in a fleet of ships, attended by a “clergy” called the Sea Org, who signed billion-year contracts of allegiance to Scientology, were paid monastic subsistence salaries, and were cut off from contact with the world outside Scientology. Hubbard continued to produce higher and higher levels of revelation for his followers, into which they could be initiated for a formidable fee.

Some of this material was sufficiently bizarre (for example, the Xenu [or Xemu] story, revealed in 1967) that adherents to Scientology walked away, feeling that their religion had become bad space opera. That was the first reaction of Paul Haggis, whose 34 years in Scientology are the foundation of this narrative. And yet Haggis did not leave Scientology after his encounter with Xenu: he eventually left the church in 2009 after it endorsed a California initiative prohibiting same-sex marriage.

There is so much of the bizarre in this narrative that you might be inclined to dismiss it as tabloid journalism, had not the author provided a wealth of source citations, many drawn from sworn testimony in court and evidence in legal proceedings. In the Kindle edition, these links are live and can be clicked to view the source documents.

From children locked in chain lockers on board ship; to adults placed in detention in “the hole”; to special minders assigned to fulfill every whim of celebrity congregants such as John Travolta and Tom Cruise; to blackmail, lawfare, surveillance, and harassment of dissidents and apostates; to going head-to-head with the U.S. Internal Revenue Service and winning a tax exemption from them in 1993, this narrative reads like a hybrid of the science fiction and thriller genres, and yet it is all thoroughly documented. In end-note after end-note, the author observes that the church denies what is asserted, then provides multiple source citations to the contrary.

This is a remarkably even-handed treatment of a religion that many deem worthy only of ridicule. Yes, Scientologists believe some pretty weird things, but then so do adherents of “mainstream” religions. Scientology's sacred texts seem a lot like science fiction, but so do those of the Mormons, a new religion born in America a century earlier, subjected to the same ridicule and persecution the Scientologists complain of, and now sufficiently mainstream that a member could run for president of the U.S. without his religion being an issue in the campaign. And while Scientology seems like a mix of science fiction and pseudo-science, some very successful people have found it an anchor for their lives and attribute part of their achievement to it. The abuses documented here are horrific, and the apparent callousness with which money is extracted from believers to line the pockets of those at the top is stunning, but then one can say as much of a number of religions considered thoroughly respectable by many people.

I'm a great believer in the market. If Scientology didn't provide something of value to those who believe in it, they wouldn't have filled its coffers with more than a billion dollars (actually, nobody knows the numbers: Scientology's finances are as obscure as its doctrines). I'll bet the people running it will push the off-putting weird stuff into the past, shed the abusive parts, and morph into a religion people perceive as no more weird than the Mormons. Just as being a pillar of the LDS church provides a leg up in some communities in the Western U.S., Scientology will provide an entrée into the world of Hollywood and media. And maybe in 2112 a Scientologist will run for president of the Reunited States and nobody will make an issue of it.

 Permalink

Flynn, Vince. The Last Man. New York: Atria Books, 2012. ISBN 978-1-4165-9521-2.
This is the thirteenth novel in the Mitch Rapp (warning—the article at this link contains minor spoilers) series. Unlike the two previous installments, American Assassin (December 2010) and Kill Shot (April 2012), this book is set in the present, as the U.S. is trying to extricate itself from the quagmire of Afghanistan and pay off locals to try to leave something in place after U.S. forces walk away from the debacle. Joe Rickman is the CIA's point man in Jalalabad, cutting deals with shady figures and running black operations. Without warning, the CIA safe house from which he operates is attacked, leaving its four guards dead. Rickman, the man who knows enough secrets from his long CIA career to endanger hundreds of agents and assets and roll up CIA networks and operations in dozens of countries, has vanished.

Mitch Rapp arrives on the scene to try to puzzle out what happened and locate Rickman before his abductors break him and he begins to spill the secrets. Rapp has little to go on, and encounters nothing but obstruction from the local police and staffers at the U.S. embassy in Kabul, all of whom Rapp treats with his accustomed tact:

“You're a bully and a piece of shit and you're the kind of guy who I actually enjoy killing. Normally, I don't put a lot of thought into the people I shoot, but you fall into a special category. I figure I'd be doing the human race a favor by ending your worthless life. Add to that the fact that I'm in a really bad mood. In fact I'm in such a shitty mood that putting a bullet in your head might be the only thing that could make me feel better.”

… “In the interest of fairness, though, I suppose I should give you a chance to convince me otherwise.” (p. 17)

Following a slim lead on Rickman, Rapp finds himself walking into a simultaneous ambush by both an adversary from his past and crooked Kabul cops. Rapp ends up injured and on the sidelines. Meanwhile, another CIA man in Afghanistan vanishes, and an ambitious FBI deputy director arrives on the scene with evidence of massive corruption in the CIA clandestine service. CIA director Irene Kennedy begins to believe that a coordinated operation must be trying to destroy her spook shop, one of such complexity that it is far beyond the capabilities of the Taliban, and turns her eyes toward “ally” Pakistan.

A shocking video posted on a jihadist Web site makes getting to the bottom of the enigma an existential priority for the CIA. Rapp needs to get back into the game and start following the few leads that exist.

This is a well-crafted thriller that will keep you turning the pages. It is somewhat lighter on the action (although there is plenty) and leans more toward the genre of espionage fiction; I think Flynn has been evolving in that direction in the last several books. There are some delightful characters, good and evil. Although she only appears in a few chapters, you will remember four foot eleven inch Air Force Command Master Sergeant Shiela Sanchez long after you put down the novel.

There is a fundamental challenge in writing a novel about a CIA agent set in contemporary Afghanistan which the author struggles with here and never fully overcomes. The problem is that the CIA, following orders from its political bosses, is doing things that don't make any sense in places where the U.S. doesn't have any vital interests or reason to be present. Flynn has created a workable thriller around these constraints, but to this reader it just can't be as compelling as saving the country from the villains and threats portrayed in the earlier Mitch Rapp novels. Here, Rapp is doing his usual exploits, but in service of a mission which is pointless at best and in all likelihood counterproductive.

 Permalink

Scott, Robert Falcon. Journals. Oxford: Oxford University Press, [1913, 1914, 1923, 1927] 2005. ISBN 978-0-19-953680-1.
Robert Falcon Scott, leading a party of five men hauling their supplies on sledges across the ice cap, reached the South Pole on January 17th, 1912. When he arrived, he discovered a cairn built by Roald Amundsen's party, which had reached the Pole on December 14th, 1911 using sledges pulled by dogs. After this crushing disappointment, Scott's polar party turned back toward their base on the coast. After crossing the high portion of the ice cap (which Scott refers to as “the summit”) without severe difficulties, they encountered unexpected, unprecedented, and, based upon subsequent meteorological records, extremely low temperatures on the Ross Ice Shelf (the “Barrier” in Scott's nomenclature). Immobilised by a blizzard, and without food or sufficient fuel to melt ice for water, Scott's party succumbed. Scott's last journal entry is dated March 29th, 1912:

I do not think we can hope for any better things now. We shall stick it out to the end, but we are getting weaker, of course, and the end cannot be far. It seems a pity, but I do not think I can write more.
R. Scott.

For God's sake look after our people.

A search party found the bodies of Scott and the two companions who died with him in the tent (the other two members of the polar party had died earlier on the return journey; their remains were never found). His journals were found with him and, when returned to Britain, were prepared for publication; they proved a sensation. Amundsen's priority was almost forgotten in the English-speaking world, eclipsed by Scott's first-hand account of audacious daring, meticulous planning, heroic exertion, and dignity in the face of death.

A bewildering variety of editions of Scott's journals has been published over the years. They are described in detail, and their differences documented, in this Oxford World's Classics edition. In particular, Scott's original journals contained very candid and often acerbic observations about members of his expedition and other explorers, particularly Shackleton. These were elided or toned down in the published copies of the journals. In this edition, the published text is used, but the original manuscript text appears in an appendix.

Scott was originally considered a hero, then was subjected to a revisionist view that deemed him ill-prepared for the expedition and distracted by peripheral matters such as a study of the embryonic development of emperor penguins, as opposed to Amundsen's single-minded focus on a dash to the Pole. The pendulum has now swung back somewhat, and a careful reading of Scott's own journals seems, at least to this reader, to support this more balanced view. Yes, in some ways Scott's expedition seems amazingly amateurish (I mean, if you were planning to ski across the ice cap, wouldn't you learn to ski before you arrived in Antarctica, rather than bring along a Norwegian to teach you after you arrived?), but ultimately Scott's polar party died due to a combination of horrific weather (present-day estimates are that only one year in sixteen has temperatures as low as those Scott experienced on the Ross Ice Shelf) and an equipment failure: leather washers on cans of fuel failed in the extreme temperatures, causing the loss of fuel Scott needed to melt ice to sustain the party on its return. And yet the same failure had been observed during Scott's 1901–1904 expedition, and nothing had been done to remedy it. The record remains ambiguous and probably always will.

The writing, especially when you consider the conditions under which it was done, makes you shiver. At the Pole:

The Pole. Yes, but under very different circumstances from those expected.

… Great God! this is an awful place and terrible enough for us to have laboured to it without the reward of priority.

and from his “Message to the Public” written shortly before his death:

We took risks, we knew we took them; things have come out against us, and therefore we have no cause for complaint, but bow to the will of Providence, determined still to do our best to the last.

Now that's an explorer.

 Permalink

March 2013

Chertok, Boris E. Rockets and People. Vol. 4. Washington: National Aeronautics and Space Administration, [1999] 2011. ISBN 978-1-4700-1437-7. NASA SP-2011-4110.
This is the fourth and final book of the author's autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. As he says in this volume, “I was born in the Russian Empire, grew up in Soviet Russia, achieved a great deal in the Soviet Union, and continue to work in Russia.” After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

This work covers the Soviet manned lunar program and the development of long-duration space stations and orbital rendezvous, docking, and assembly. As always, Chertok was there: he participated in design and testing, was present for launches and in the control centre during flights, and all too often took part in accident investigations.

In retrospect, the Soviet manned lunar program seems almost bizarre. It did not begin in earnest until two years after NASA's Apollo program was underway, and while the Gemini and Apollo programs were a step-by-step process of developing and proving the technologies and operational experience for lunar missions, the Soviet program was a chaotic bag of elements seemingly driven more by the rivalries of the various chief designers than by a coherent plan for getting to the Moon. First of all, there were two manned lunar programs, each using entirely different hardware and mission profiles. The Zond program used a modified Soyuz spacecraft launched on a Proton booster, intended to send two cosmonauts on a circumlunar mission. They would simply loop around the Moon and return to Earth without going into orbit. A total of eight of these missions were launched unmanned, and only one completed a flight which would have been safe for cosmonauts on board. After Apollo 8 accomplished a much more ambitious lunar orbital mission in December 1968, a Zond flight would simply demonstrate how far behind the Soviets were, and the program was cancelled in 1970.

The N1-L3 manned lunar landing program was even more curious. In the Apollo program, the choice of mission mode and determination of the mass required for the lunar craft came first, and the specifications of the booster rocket followed from that. Work on Korolev's N1 heavy lifter did not get underway until 1965, four years after the Saturn V, and it was envisioned as a general purpose booster for a variety of military and civil space missions. Korolev wanted to use very high thrust kerosene engines on the first stage and hydrogen engines on the upper stages, as did the Saturn V, but he was involved in a feud with Valentin Glushko, who championed the use of hypergolic, high boiling point, toxic propellants and refused to work on the engines Korolev requested. Hydrogen propellant technology in the Soviet Union was in its infancy at the time, and Korolev realised that waiting for it to mature would add years to the schedule.

In need of engines, Korolev approached Nikolai Kuznetsov, a celebrated designer of jet turbine engines who had no previous experience at all with rocket engines. Kuznetsov's engines were much smaller than Korolev desired, and to obtain the required thrust thirty of them were needed on the first stage alone, each with its own turbomachinery and plumbing. Instead of gimballing the engines to change the thrust vector, pairs of engines on opposite sides of the stage were throttled up and down. The gargantuan scale of the lower stages of the N1 meant they were too large to transport on the Soviet rail network, so fabrication of the rocket was done in a huge assembly hall adjacent to the launch site. A small city had to be built to accommodate the work force.

All Soviet rockets since the R-2 in 1949 had used “integral tanks”: the walls of the propellant tanks were load-bearing and formed the skin of the rocket. The scale of the N1 was such that load-bearing tanks would have required a wall thickness which exceeded the capability of Soviet welding technology at the time, forcing a design with an external load-bearing shell and separate propellant tanks within it. This increased the complexity of the rocket and added dead weight to the design. (NASA's contractors had great difficulty welding the integral tanks of the Saturn V, but NASA simply kept throwing money at the problem until they figured out how to do it.)

The result was a rocket which was simultaneously huge, crude, and bewilderingly complicated. There was neither money in the budget nor time in the schedule to build a test stand to permit ground firings of the first stage. The first time those thirty engines fired up would be on the launch pad. Further, Kuznetsov's engines were not reusable. After every firing, they had to be torn down and overhauled, and hence were essentially a new and untested engine every time they fired. The Saturn V engines, by contrast, while expended in each flight, could be and were individually test fired, then ground tested together installed on the flight stage before being stacked into a launch vehicle.

The weight and less efficient fuel of the N1 made its performance anæmic. While it had almost 50% more thrust at liftoff than the Saturn V, its payload to low Earth orbit was 25% less. This meant that performing a manned lunar landing mission in a single launch was just barely possible. The architecture would have launched two cosmonauts in a lunar orbital ship. After entering orbit around the Moon, one would spacewalk to the separate lunar landing craft (an internal docking tunnel as used in Apollo would have been too heavy) and descend to the Moon. Fuel constraints meant the cosmonaut only had ten to fifteen seconds to choose a landing spot. After the footprints, flag, and grabbing a few rocks, it was back to the lander to take off to rejoin the orbiter. Then it took another spacewalk to get back inside. Everybody involved at the time was acutely aware how marginal and risky this was, but given that the N1 design was already frozen and changing it or re-architecting the mission to two or three launches would push out the landing date four or five years, it was the only option that would not forfeit the Moon race to the Americans.

They didn't even get close. In each of its test flights, the N1 did not even get to the point of second stage ignition (although in its last flight it got within seven seconds of that milestone). On the second test flight the engines cut off shortly after liftoff and the vehicle fell back onto the launch pad, completely obliterating it in the largest artificial non-nuclear explosion known to this date: the equivalent of 7 kilotons of TNT. After four consecutive launch failures, having lost the Moon race, with no other mission requiring its capabilities, and with the military opposing an expensive program for which they had no use, work on the N1 was suspended in 1974 and the program officially cancelled in 1976.

When I read Challenge to Apollo, what struck me was the irony that the Apollo program was the very model of a centrally-planned state-directed effort along Soviet lines, while the Soviet Moon program was full of the kind of squabbling, turf wars, and duplicative competitive efforts which Marxists decry as flaws of the free market. What astounded me in reading this book is that the Soviets were acutely aware of this in 1968. In chapter 9, Chertok recounts a Central Committee meeting in which Minister of Defence Dmitriy Ustinov remarked:

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

In addition to the Moon program, there is extensive coverage of the development of automated rendezvous and docking and of the long-duration orbital station programs (Almaz, Salyut, and Mir). There is also an enlightening discussion, building on Chertok's career focus on control systems, of the challenges in integrating humans and automated systems into the decision loop and coping with off-nominal situations in real time.

I could go on and on, but there is so much to learn from this narrative, I'll just urge you to read it. Even if you are not particularly interested in space, there is much experience and wisdom to be gained from it which are applicable to all kinds of large complex systems, as well as insight into how things were done in the Soviet Union. It's best to read Volume 1 (May 2012), Volume 2 (August 2012), and Volume 3 (December 2012) first, as they will introduce you to the cast of characters and the events which set the stage for those chronicled here.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is much better produced than the Kindle editions of the first three volumes. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They're free.

The original Russian edition is available online.

 Permalink

Thavis, John. The Vatican Diaries. New York: Viking, 2013. ISBN 978-0-670-02671-5.
Jerry Pournelle's Iron Law of Bureaucracy states that:

…in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

Imagine a bureaucracy in which the Iron Law has been working inexorably since the Roman Empire.

The author has covered the Vatican for the Catholic News Service for the last thirty years. He has travelled with popes and other Vatican officials to more than sixty countries, developing his own sources within a Vatican which is simultaneously opaque to an almost medieval degree in its public face, yet leaks like a sieve as factions try to enlist journalists in advancing their agendas. In this book he uses his access to provide a candid look inside the Vatican, at a time when the church is in transition and crisis.

He begins with a peek inside the mechanics of the conclave which chose Pope Benedict XVI: from how the black or white smoke is made to how the message indicating the selection of a new pontiff is communicated (or not) to the person responsible for ringing the bell to announce the event to the crowds thronging St Peter's Square.

There is a great deal of description, bordering on gonzo, of the reality of covering papal visits to various countries: in summary, much of what you read from reporters accredited to the Vatican comes from their watching events on television, just as you can do yourself.

The author does not shy from controversy. He digs deeply into the sexual abuse scandals and cover-up which rocked the church, the revelations about the founder of the Legion of Christ, the struggle between the traditionalists of the Society of St Pius X and supporters of the Vatican II reforms in Rome, and the battle over the beatification of Pope Pius XII. On the lighter side, we encounter the custodians of Latin, including the Vatican Bank ATM which displays its instructions in Latin: “Inserito scidulam quaeso ut faciundum cognoscas rationem”.

This is an enlightening look inside one of the most influential, yet least understood, institutions in what remains of Western civilisation. On the event of the announcement of the selection of Pope Francis, James Lileks wrote:

…if you'd turned the sound down on the set and shown the picture to Julius Cæsar, he would have smiled broadly. For the wrong reasons, of course—his order did not survive in its specific shape, but in another sense it did. The architecture, the crowds, the unveiling would have been unmistakable to someone from Cæsar's time. They would have known exactly what was going on.

Indeed—the Vatican gets ceremony. What is clear from this book is that it doesn't get public relations in an age where the dissemination of information cannot be controlled, and that words, once spoken, cannot be taken back, even if a “revised and updated” transcript of them is issued subsequently by the bureaucracy.

In the Kindle edition the index cites page numbers in the hardcover print edition which are completely useless since the Kindle edition does not contain real page numbers.

 Permalink

Savage, Michael [Michael Alan Weiner]. A Time for War. New York: St. Martin's Press, 2013. ISBN 978-0-312-65162-6.
The author, a popular talk radio host who holds a Ph.D. in nutritional ethnomedicine and has published numerous books under his own name, is best known for his political works, four of which have made the New York Times bestseller list, including one which reached the top of that list. This is his second foray into the fictional thriller genre, adopting a style reminiscent of Rudy Rucker's transrealism, in which the author, or a character closely modelled upon him or her, is the protagonist in the story. In this novel, Jack Hatfield is a San Francisco-based journalist dedicated to digging out the truth and getting it to the public by whatever means available, immersed in the quirky North Beach culture of San Francisco, and banned in Britain for daring to transgress the speech codes of that once-free nation. Sound familiar?

After saving his beloved San Francisco from an existential threat in the first novel, Abuse of Power (June 2012), Hatfield's profile on the national stage has become higher than ever, but that hasn't helped him get back into the media game, where his propensity for telling the truth without regard to political correctness or offending the perennially thin-skinned makes him radioactive to mainstream outlets. He manages to support himself as a free-lance investigative reporter, working from his boat in a Sausalito marina, producing and selling stories to venues willing to run them. When a Chinook helicopter goes down in a remote valley in Afghanistan killing all 39 on board and investigators attribute the crash to total failure of all electronics on board with no evidence of enemy action, Jack's ears perk up. When he later learns of an FBI vehicle performing a routine tail of a car from the Chinese consulate being disabled by “total electronic failure” he begins to get really interested. Then strange things begin to happen in Chinatown, prompting Jack to start looking for a China connection between these incidents.

Meanwhile, Dover Griffith, a junior analyst at the Office of Naval Intelligence, is making other connections. She recalls that a proposed wireless Internet broadband system developed by billionaire industrialist Richard Hawke's company had to be abandoned when it was discovered that its signal could induce catastrophic electrical failure in aircraft electronics. (Clearly Savage is well-acquainted with the sorry history of LightSquared and GPS interference!) When she begins to follow the trail, she is hauled into her boss's office and informed she is being placed on “open-ended unpaid furlough”: civil service speak for being fired. Clearly Hawke has plenty of pull in high places and probably something to hide. Since Hatfield had been all over the story of interference caused by the broadband system and the political battle over whether to deploy it, she decides to fly to California and join forces with Hatfield to discover what is really going on. As they, along with Jack's associates, begin to peel away layer after layer of the enigma, they begin to suspect that something even more sinister may be underway.

This is a thoroughly satisfying thriller. There is a great deal of technical detail, all meticulously researched. There are a few dubious aspects of some of the gadgets, but that's pretty much a given in the thriller genre. What distinguishes these novels from other high-profile thrillers is that Jack Hatfield isn't a superhero in the sense of Vince Flynn's Mitch Rapp or Brad Thor's Scot Harvath: he is a largely washed-up journalist, divorced, living on a boat with a toy poodle, hanging out with a bunch of eccentric characters at an Italian restaurant in North Beach, who far from gunplay and derring-do, repairs watches for relaxation. This makes for a different kind of thriller, but one which is no less satisfying. I'm sure Jack Hatfield will be back, and I'm looking forward to the next outing.

You can read this novel as a stand-alone thriller without having first read Abuse of Power, but be warned that it contains major plot spoilers for the first novel; to fully enjoy them both, it's best to start there.

 Permalink

Copeland, B. Jack, ed. Colossus. Oxford: Oxford University Press, 2006. ISBN 978-0-19-953680-1.
During World War II the British codebreakers at Bletchley Park provided intelligence to senior political officials and military commanders which was vital in winning the Battle of the Atlantic and discerning German strategic intentions in the build-up to the invasion of France and the subsequent campaign in Europe. Breaking the German codes was just barely on the edge of possibility with the technology of the time, and required recruiting a cadre of exceptionally talented and often highly eccentric individuals and creating tools which laid the foundations for modern computer technology.

At the end of the war, all of the work of the codebreakers remained under the seal of secrecy: in Winston Churchill's history of the war it was never mentioned. Part of this was due to the state's reluctance to relinquish its control over information, but there was also a fear that the Soviets, emerging as the new adversary, might adopt some of the same cryptographic techniques used by the Germans; concealing that those techniques had been compromised might yield valuable intelligence from intercepts of Soviet communications.

As early as the 1960s, publications in the United States began to describe the exploits of the codebreakers, and gave the mistaken impression that U.S. codebreakers were in the vanguard simply because they were the only ones allowed to talk about their wartime work. The heavy hand of the Official Secrets Act suppressed free discussion of the work at Bletchley Park until June 2000, when the key report, written in 1945, was allowed to be published.

Now it can be told. Fortunately, many of the participants in the work at Bletchley were young and still around when finally permitted to discuss their exploits. This volume is largely a collection of their recollections, many in great technical detail. You will finally understand precisely which vulnerabilities of the German cryptosystems permitted them to be broken (as is often the case, it was all-too-clever innovations by the designers intended to make the encryption “unbreakable” which provided the door into it for the codebreakers) and how sloppy key discipline among users facilitated decryption. For example, it was common to discover two or more messages encrypted with the same key. Since encryption was done by a binary exclusive or (XOR) of the bits of the Baudot teleprinter code, with that of the key (generated mechanically from a specified starting position of the code machine's wheels), if you have two messages encrypted with the same key, you can XOR them together, taking out the key and leaving you with the XOR of the plaintext of the two messages. This, of course, will be gibberish, but you can then take common words and phrases which occur in messages and “slide” them along the text, XORing as you go, to see if the result makes sense. If it does, you've recovered part of the other message, and by XORing with either message, that part of the key. This is something one could do in microseconds today with the simplest of computer programs, but in the day was done in kiloseconds by clerks looking up the XOR of Baudot codes in tables one by one (at least until they memorised them, which the better ones did).

The chapters are written by people with expertise in the topic discussed, many of whom were there. The people at Bletchley had to make up the terminology for the unprecedented things they were doing as they did it. Due to the veil of secrecy dropped over their work, many of their terms were orphaned. What we call “bits” they called “pulses”; what we call XOR they called “binary addition”; and the ones and zeroes of binary notation they wrote as crosses and dots. It is all very quaint and delightful, and used in most of these documents.

After reading this book you will understand precisely how the German codes were broken, what Colossus did, how it was built and what challenges were overcome in constructing it, and how it was integrated into a system incorporating large numbers of intuitive humans able to deliver near-real-time intelligence to decision makers. The level of detail may be intimidating to some, but for the first time it's all there. I have never before read any description of the key flaw in the Lorenz cipher which Colossus exploited and how it processed messages punched on loops of paper tape to break into them and recover the key.

The aftermath of Bletchley was interesting. All of the participants were sworn to secrecy and all of their publications kept under high security. But the know-how they had developed in electronic computation was their own, and many of them went to Manchester, where the pioneering digital computers of the immediate postwar years were built. The developers of much of this technology could not speak of whence it came, and until recent years the history of computing has been disconnected from its roots.

As a collection of essays, this book is uneven and occasionally repetitive. But it is authentic, and an essential document for anybody interested in how codebreaking was done in World War II and how electronic computation came to be.

 Permalink

April 2013

Bussjaeger, Carl. Bargaining Position. Lyndeborough, NH: http://www.bussjaeger.us/, [2010] 2011.
In Net Assets (October 2002) the author chronicled the breakout of lovers of liberty from the Earth's gravity well by a variety of individual initiatives and their defeat of the forces of coercive government which wished to keep them in chains. In this sequel, set in the mid-21st century, the expanding settlement of the solar system is entirely an economy of consensual actors, some ethical and some rogue, but all having escaped the shackles of the state, which is left to stew in its own stagnating juices on Earth.

The Hunters are an amorous couple who have spent the last decade on their prospecting ship, Improbable, staking claims in the asteroid belt and either working them or selling the larger ones to production companies. After a successful strike, they decide to take a working vacation exploring Jupiter's leading Trojan position. At this Lagrangian point, the combined gravitation of the Sun and Jupiter, together with the centrifugal force in the frame co-rotating with Jupiter, creates a family of stable orbits around the point. The Trojan position can thus be thought of, loosely, as an attractor toward which objects in similar orbits will drift and remain.
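The geometry of the Trojan point is simple to verify numerically. This little sketch is my own illustration, not anything from the novel (the round 5.2 AU figure for Jupiter's mean orbital radius is an assumption): the leading (L4) Trojan point travels 60° ahead of Jupiter in the same heliocentric orbit, forming an equilateral triangle with the Sun and the planet.

```python
import math

a_jupiter = 5.2                      # AU, Jupiter's mean orbital radius (rounded)
theta = math.radians(60)             # L4 leads Jupiter by 60 degrees

# Place Jupiter on the x-axis; L4 lies in the same orbit, 60 degrees ahead.
jupiter = (a_jupiter, 0.0)
l4 = (a_jupiter * math.cos(theta), a_jupiter * math.sin(theta))

dist_sun_l4 = math.hypot(*l4)
dist_jup_l4 = math.hypot(l4[0] - jupiter[0], l4[1] - jupiter[1])
print(dist_sun_l4, dist_jup_l4)      # both 5.2 AU: an equilateral triangle
```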

The Hunters figure that this region, little-explored, might collect all kinds of interesting and potentially lucrative objects, and finance their expedition with a contract to produce a documentary about their voyage of exploration. What they discover exceeds anything they imagined they might find: what appears to be an alien interstellar probe, disabled by an impact after arrival in the solar system, but with most of its systems and advanced technology intact.

This being not only an epochal discovery in human history, but valuable beyond the dreams of avarice, the Hunters set out to monetise the discovery, protect it against claim jumpers, and discover as much as they can to increase the value of what they've found to potential purchasers. What they discover makes the bargaining process even more complicated and with much higher stakes.

This is a tremendous story, and I can't go any further describing it without venturing into spoiler territory, which would desecrate this delightful novel. The book is available from the author's Web site as a free PDF download; use your favourite PDF reader application on your computer or mobile device to read it. As is common in self-published works, there are a number of copy-editing errors: I noted a total of 25, and I was reading for enjoyment, not doing a close proof. None of them detract in any way from the story.

 Permalink

White, Andrew Dickson. Fiat Money Inflation in France. Bayonne, NJ: Blackbird Books, [1876, 1896, 1912, 1914] 2011. ISBN 978-1-61053-004-0.
One of the surest ways to destroy the economy, wealth, and morals of a society is monetary inflation: an inexorable and accelerating increase in the supply of money, which inevitably (if not always immediately) leads to ever-rising prices, collapse in saving and productive investment, and pauperisation of the working classes in favour of speculators and those with connections to the regime issuing the money.

In ancient times, debasement of the currency was accomplished by clipping coins or reducing their content of precious metal. Ever since Marco Polo returned from China with news of the tremendous innovation of paper money, unbacked paper currency (or fiat money) has been the vehicle of choice for states to loot their productive and thrifty citizens.

Between 1789 and 1796, a period encompassing the French Revolution, the French National Assembly issued assignats, paper putatively backed by the value of public lands seized from the Roman Catholic Church in the revolution. Assignats could theoretically be used to purchase these lands, and initially paid interest—they were thus a hybrid between a currency and a bond. The initial issue revived the French economy and rescued the state from bankruptcy but, as always happens, was followed by a second, third, and then a multitude of subsequent issues totally decoupled from the value of the land which was supposed to back them. This sparked an inflationary and eventually hyperinflationary spiral with savers wiped out, manufacturing and commerce grinding to a halt (due to uncertainty, inability to invest, and supply shortages) which caused wages to stagnate even as prices were running away to the upside, an enormous transfer of wealth from the general citizenry to speculators and well-connected bankers, and rampant corruption within the political class. The sequelæ of monetary debasement all played out as they always have and always will: wage and price controls, shortages, rationing, a rush to convert paper money into tangible assets as quickly as possible, capital and foreign exchange controls, prohibition on the ownership of precious metals and their confiscation, and a one-off “wealth tax” until the second, and the third, and so on. Then there was the inevitable replacement of the discredited assignats with a new paper currency, the mandats, which rapidly blew up. Then came Napoleon, who restored precious metal currency; hyperinflation so often ends up with a dictator in power.

What is remarkable about this episode is that it happened in a country which had experienced the disastrous John Law paper money bubble in 1716–1718, within the living memory of some in the assignat era and certainly in the minds of the geniuses who decided to try paper money again because “this time is different”. When it comes to paper money, this time is never different.

This short book (or long pamphlet—the 1896 edition is just 92 pages) was originally written in 1876 by the author, a president of Cornell University, as a cautionary tale against advocates of paper money and free silver in the United States. It was subsequently revised and republished on each occasion the U.S. veered further toward unbacked or “elastic” paper money. It remains one of the most straightforward accounts of a hyperinflationary episode ever written, with extensive citations of original sources. For a more detailed account of the Weimar Republic inflation in 1920s Germany, see When Money Dies (May 2011); although the circumstances were very different, the similarities will be apparent, confirming that the laws of economics manifest here are natural laws just as much as gravitation and electromagnetism, and ignoring them never ends well.

If you are looking for a Kindle edition of this book, be sure to download a free sample of the book before purchasing. As the original editions of this work are in the public domain, anybody is free to produce an electronic edition, and there are some hideous ones available; look before you buy.

 Permalink

Krauss, Lawrence. Quantum Man. New York: W. W. Norton, 2011. ISBN 978-0-393-34065-5.
A great deal has been written about the life, career, and antics of Richard Feynman, but until the present book there was not a proper scientific biography of his work in physics and its significance in the field and consequences for subsequent research. Lawrence Krauss has masterfully remedied this lacuna with this work, which provides, at a level comprehensible to the intelligent layman, both a survey of Feynman's work, successful and not, and a sense of how Feynman achieved what he did and what ultimately motivated him in his often lonely quest to understand.

One often-neglected contributor to Feynman's success is discussed at length: his extraordinary skill in mathematical computation, intuitive sense of the best way to proceed toward a solution (he would often skip several intermediate steps and only fill them in when preparing work for publication), and tireless perseverance in performing daunting calculations which occupied page after page of forbidding equations. This talent was quickly recognised by those with whom he worked, and as one of the most junior physicists on the project, he was placed in charge of all computation at Los Alamos during the final phases of the Manhattan Project. Eugene Wigner said of Feynman, “He's another Dirac. Only this time human.”

Feynman's intuition and computational prowess were best demonstrated by his work on quantum electrodynamics, for which he shared a Nobel prize in 1965. (Initially Feynman didn't think too much of this work—he considered it mathematical mumbo-jumbo which swept the infinities which had plagued earlier attempts at a relativistic quantum theory of light and matter under the carpet. Only later did it become apparent that Feynman's work had laid the foundation upon which a comprehensive quantum field theory of the strong and electroweak interactions could be built.) His invention of Feynman diagrams defined the language now universally used by particle physicists to describe events in which particles interact.

Feynman was driven to understand things, and to him understanding meant being able to derive a phenomenon from first principles. Often he ignored the work of others and proceeded on his own, reinventing as he went. In numerous cases, he created new techniques and provided alternative ways of looking at a problem which provided a deeper insight into its fundamentals. A monumental illustration of Feynman's ability to do this is The Feynman Lectures on Physics, based on an undergraduate course in physics Feynman taught at Caltech in 1961–1964. Few physicists would have had the audacity to reformulate all of basic physics, from vectors and statics to quantum mechanics, from scratch, and probably only Feynman could have pulled it off, which he did magnificently. As undergraduate pedagogy, the course was less than successful, but the transcribed lectures have remained in print ever since, and working physicists (and even humble engineers like me) are astounded at the insights to be had in reading and re-reading Feynman's work.

Even when Feynman failed, he failed gloriously and left behind work that continues to inspire. His unsuccessful attempt to find a quantum theory of gravitation showed that Einstein's geometric theory was completely equivalent to a field theory developed from first principles and knowledge of the properties of gravity. Feynman's foray into computation produced the Feynman Lectures On Computation, one of the first comprehensive expositions of the physics of computation, including the nascent theory of quantum computation.

A chapter is devoted to the predictions of Feynman's 1959 lecture, “Plenty of Room at the Bottom”, which is rightly viewed as the founding document of molecular nanotechnology, but, as Krauss describes, also contained the seeds of genomic biotechnology, ultra-dense data storage, and quantum material engineering. Work resulting in more than fifteen subsequent Nobel prizes is suggested in this blueprint for research. Although Feynman would go on to win his own Nobel for other work, one gets the sense he couldn't care less that others pursued the lines of investigation he sketched and were rewarded for doing so. Feynman was in the game to understand, and often didn't seem to care whether what he was pursuing was of great importance or mundane, or whether the problem he was working on from his own unique point of departure had already been solved by others long before.

Feynman was such a curious character that his larger than life personality often obscures his greatness as a scientist. This book does an excellent job of restoring that balance and showing how much his work contributed to the edifice of science in the 20th century and beyond.

 Permalink

Zubrin, Robert. Merchants of Despair. New York: Encounter Books, 2012. ISBN 978-1-59403-476-3.
This is one of the most important paradigm-changing books since Jonah Goldberg's Liberal Fascism (January 2008). Zubrin seeks the common thread which unites radical environmentalism, eugenics, population control, and opposition to readily available means of controlling diseases due to hysteria engendered by overwrought prose in books written by people with no knowledge of the relevant science.

Zubrin identifies the central thread of all of these malign belief systems: anti-humanism. In 1974, the Club of Rome, in Mankind at the Turning Point, wrote, “The world has cancer and the cancer is man.” A foul synthesis of the ignorant speculations of Malthus and a misinterpretation of the work of Darwin led to a pernicious doctrine which asserted that an increasing human population would deplete a fixed pool of resources, leading to conflict and selection among a burgeoning population for those most able to secure the resources they needed to survive.

But human history since the dawn of civilisation belies this. In fact, per capita income has grown as population has increased, demonstrating that the static model is bogus. Those who want to constrain the human potential are motivated by a quest for power, not a desire to seek the best outcome for the most people. The human condition has improved over time, and at an accelerating pace since the Industrial Revolution in the 19th century, because of human action: the creativity of humans in devising solutions to problems and ways to meet needs often unperceived before the inventions which soon became seen as essentials were made. Further, the effects of human invention in the modern age are cumulative: at any point in history humans have access to all the discoveries of the past and, once they build upon them to create a worthwhile innovation, it is rapidly diffused around the world—in our days at close to the speed of light. The result of this is that in advanced technological societies the poor, measured by income compared to the societal mean, would have been considered wealthy not just by the standards of the pre-industrial age, but compared to those same societies in the memory of people now alive. The truly poor in today's world are those whose societies, for various reasons, are not connected to the engine of technological progress and the social restructuring it inevitably engenders.

And yet the anti-humanists have consistently argued for limiting the rate of growth of population and in many cases actually reducing the total population, applying a “precautionary principle” to investigation of new technologies and their deployment, and relinquishment of technologies deemed to be “unsustainable”. In short, what they advocate is reversing the progress since the year 1800 (and in many ways, since the Enlightenment), and returning to an imagined bucolic existence (except for, one suspects, the masters in their gated communities, attended to by the serfs as in times of old).

What Malthus and all of his followers to the present day missed is that the human population is not at all like the population of bacteria in a Petri dish or rabbits in the wild. Uniquely, humans invent things which improve their condition, create new resources by finding uses for natural materials previously regarded as “dirt”, and by doing so allow a larger population to enjoy a standard of living much better than that of previous generations. Put aside the fanatics who wish to reduce the human population by 80% or 90% (they exist, they are frighteningly influential in policy-making circles, and they are called out by name here). Suppose, for a moment, the author asks, societies in the 19th century had listened to Malthus and limited the human population to half of the historical value. Thomas Edison and Louis Pasteur did work which contributed to the well-being of their contemporaries around the globe and continue to benefit us today. In a world with half as many people, perhaps only one would have ever lived. Which would you choose?

But the influence of the anti-humans did not stop at theory. The book chronicles the sorry, often deceitful, and tragic consequences when their policies were put into action by coercive governments. The destruction wrought by “population control” measures approached, in some cases, the level of genocide. By 1975, almost one third of Puerto Rican women of childbearing age had been sterilised by programs funded by the U.S. federal government, and a similar program on Indian reservations sterilised one quarter of Native American women of childbearing age, often without consent. Every purebred woman of the Kaw tribe of Oklahoma was sterilised in the 1970s: if that isn't genocide, what is?

If you look beneath the hood of radical environmentalism, you'll find anti-humanism driving much of the agenda. The introduction of DDT in the 1940s immediately began to put an end to the age-old scourge of malaria. Prior to World War II, between one and six million cases of malaria were reported in the U.S. every year. By 1952, application of DDT to the interior walls of houses (as well as other uses of the insecticide) had reduced the total number of confirmed cases of malaria that year to two. By the early 1960s, use of DDT had cut malaria rates in Asia and Latin America by 99%. By 1958, Malthusian anti-humanist Aldous Huxley decried this, arguing that “Quick death by malaria has been abolished; but life made miserable by undernourishment and over-crowding is now the rule, and slow death by outright starvation threatens ever greater numbers.”

Huxley did not have long to wait to see his desires fulfilled. After the publication of Rachel Carson's Silent Spring in 1962, a masterpiece of pseudoscientific deception and fraud, politicians around the world moved swiftly to ban DDT. In Sri Lanka, where malaria cases had been cut from a million or more per year to 17 in 1963, DDT was banned in 1964, and by 1969 malaria cases had increased to half a million a year. Today, DDT is banned or effectively banned in most countries, and the toll of unnecessary death due to malaria in Africa alone since the DDT ban is estimated as in excess of 100 million. Arguably, Rachel Carson and her followers are the greatest mass murderers of the 20th century. There is no credible scientific evidence whatsoever that DDT is harmful to humans and other mammals, birds, reptiles, or oceanic species. To the anti-humanists, the carnage wrought by the banning of this substance is a feature, not a bug.

If you thought Agenda 21 (November 2012) was over the top, this volume will acquaint you with the real-world evil wrought by anti-humanists, and their very real agenda to exterminate a large fraction of the human population and reduce the rest (except for themselves, of course, they believe) to pre-industrial serfdom. As the author concludes:

If the idea is accepted that the world's resources are fixed with only so much to go around, then each new life is unwelcome, each unregulated act or thought is a menace, every person is fundamentally the enemy of every other person, and each race or nation is the enemy of every other race or nation. The ultimate outcome of such a worldview can only be enforced stagnation, tyranny, war, and genocide.

This is a book which should have an impact, for the better, as great as Silent Spring had for the worse. But so deep is the infiltration of the anti-human ideologues into the cultural institutions that you'll probably never hear it mentioned except here and in similar venues which cherish individual liberty and prosperity.

 Permalink

May 2013

O'Neill, Gerard K. The High Frontier. Mojave, CA: Space Studies Institute, [1976, 1977, 1982, 1989] 2013. ISBN 978-0-688-03133-6.
In the tumultuous year of 1969, Prof. Gerard K. O'Neill of Princeton University was tapped to teach the large freshman physics course at that institution. To motivate talented students who might find the pace of the course tedious, he organised an informal seminar which would explore challenging topics to which the basic physics taught in the main course could be applied. For the first topic of the seminar he posed the question, “Is a planetary surface the right place for an expanding technological civilisation?”. So fascinating were the results of investigating this question that the seminar never made it to the next topic, and working out its ramifications would occupy the rest of O'Neill's life.

By 1974, O'Neill and his growing group of informal collaborators had come to believe not only that the answer to that 1969 question was a definitive “no”, but that a large-scale expansion of the human presence into space, using the abundant energy and material resources available outside the Earth's gravity well, was not a goal for the distant future but rather something which could be accomplished using only technologies already proved or expected in the next few years (such as NASA's space shuttle, then under development). Further, the budget to bootstrap the settlement of space until the point at which the space settlements were self-sustaining and able to expand without further support was on the order of magnitude of the Apollo project and, unlike Apollo, would have an economic pay-off which would grow exponentially as space settlements proliferated.

As O'Neill wrote, the world economy had just been hit by the first of what would be a series of “oil shocks”, which would lead to a massive transfer of wealth from productive, developed economies to desert despotisms whose significance to the world economy and geopolitics would be precisely zero did they not happen to sit atop a pool of fuel (which they lacked the ability to discover and produce). He soon realised that the key to economic feasibility of space settlements was using them to construct solar power satellites to beam energy back to Earth.

Solar power satellites are just barely economically viable if the material from which they are made must be launched from the Earth, and many design concepts therefore assume a dramatic reduction in launch costs along with super-lightweight structures and high-efficiency solar cells, which add to the satellites' capital cost. O'Neill realised that the materials which make up around 99% of the mass of a solar power satellite are available on the Moon, and that a space settlement, with access to lunar material at a small fraction of the cost of launching from Earth and the ability to fabricate the very large power satellite structures in weightlessness, would reduce the cost of space solar power to well below electricity prices of the mid-1970s (which were much lower than those of today).

In this book, a complete architecture is laid out, starting with initial settlements of “only” 10,000 people in a sphere about half a kilometre in diameter, rotating to provide Earth-normal gravity at the equator. This would be nothing like what one thinks of as a “space station”: people would live in apartments at a density comparable to small towns on Earth, surrounded by vegetation and with a stream running around the equator of the sphere. Lunar material would provide radiation shielding and mirrors would provide sunlight and a normal cycle of day and night.
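The rotation rate such a settlement needs follows from the centripetal relation a = ω²r. This back-of-the-envelope check is my own sketch, not a calculation from the book; the 500 m diameter is taken from the description above.

```python
import math

g = 9.81          # m/s^2, Earth-normal gravity desired at the equator
radius = 250.0    # m, half the 500 m sphere diameter

omega = math.sqrt(g / radius)        # rad/s, from a = omega^2 * r
rpm = omega * 60 / (2 * math.pi)     # revolutions per minute
rim_speed = omega * radius           # m/s at the equator

print(f"{omega:.3f} rad/s = {rpm:.2f} rpm, rim speed {rim_speed:.1f} m/s")
# about 1.9 rpm -- a leisurely spin, slow enough that Coriolis
# effects on the inhabitants would be mild
```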

This would be just a first step, with subsequent settlements much larger and with amenities equal to or exceeding those of Earth. Once access to the resources of asteroids (initially those in near-Earth or Earth-crossing orbits, and eventually the main belt) was opened, the space economy's reliance on the Earth would be only for settlers and lightweight, labour-intensive goods which made more sense to import. (For example, it might be some time before a space settlement built its own semiconductor fabrication facility rather than importing chips from those on Earth.)

This is the future we could be living in today, but turned our backs upon. Having read this book shortly after it first came out, it is difficult to describe just how bracing this optimistic, expansive view of the future was in the 1970s, when everything was brown and the human prospect suddenly seemed constrained by limited resources, faltering prosperity, and shrinking personal liberty. The curious thing about re-reading it today is that almost nothing has changed. Forty years later, O'Neill's roadmap for the future is just as viable an option for a visionary society as it was when initially proposed, and technological progress and understanding of the space environment have only improved its plausibility. The International Space Station, although a multi-decade detour from true space settlements, provides a testbed where technologies for those settlements can be explored (for example, solar powered closed-cycle Brayton engines as an alternative to photovoltaics for power generation, and high-yield agricultural techniques in a closed-loop ecosystem).

The re-appearance of this book in an electronic edition is timely, as O'Neill's ideas and the optimism for a better future they inspired seem almost forgotten today. Many people assume there was some technological flaw in his argument or that an economic show-stopper was discovered, yet none was. It was more like the reaction O'Neill encountered when he first tried to get his ideas into print in 1972. One reviewer, recommending against publication, wrote, “No one else is thinking in these terms, therefore the ideas must be wrong.” Today, even space “visionaries” imagine establishing human settlements on the Moon, Mars, and among the asteroids, with space travel seen as a way to get to these destinations and sustain pioneer communities there. This is a vision akin to long sea voyages to settle distant lands. O'Neill's High Frontier is something very different and epochal: the expansion of a species which evolved on the surface of a planet into the space around it and eventually throughout the solar system, using the abundant solar energy and material resources available there. This is like life expanding from the sea where it originated onto the land. It is the next step in the human adventure, and it can begin, just as it could have in 1976, within a decade of a developed society committing to make it so.

For some reason the Kindle edition, at least when viewed with the iPad Kindle application, displays with tiny type. I found I had to increase the font size by four steps to render it easily readable. Since font size is a global setting, that means that if you view another book, it shows up with giant letters like a first grade reader. The illustrations are dark and difficult to interpret in the Kindle edition—I do not recall whether this was also the case in the paperback edition I read many years ago.

 Permalink

Harden, Blaine. Escape from Camp 14. New York: Viking Penguin, 2012. ISBN 978-0-14-312291-3.
Shin Dong-hyuk was born in a North Korean prison camp. The doctrine of that collectivist Hell-state, as enunciated by tyrant Kim Il Sung, is that “[E]nemies of class, whoever they are, their seed must be eliminated through three generations.” Shin (I refer to him by his family name, as he prefers) committed no crime, but was born into slavery in a labour camp because his parents had been condemned to servitude there due to supposed offences. Shin grew up in an environment so anti-human it would send shivers of envy down the spines of Western environmentalists. In school, he saw a teacher beat a six-year-old classmate to death with a blackboard pointer because she had stolen and hidden five kernels of maize. He witnessed the hanging of his mother and the execution by firing squad of his brother because they were caught contemplating escape from the camp, and he felt only detestation of them because their actions would harm him.

Shin was imprisoned and tortured due to association with his mother and brother, and assigned to work details where accidents which killed workers were routine. Shin accepted this as simply the way life was—he knew nothing of life outside the camp or in the world beyond his slave state. This changed when he made the acquaintance of Park Yong Chul, sent to the camp for some reason after a career which had allowed him to travel abroad and meet senior people in the North Korean ruling class. While working together in the camp's garment factory, Park introduced Shin to a wider world and set him to thinking about escaping the camp. The fact that Shin, who had been recruited to observe Park and inform upon any disloyalty he observed, instead began to conspire with him to escape the camp was the signal act of defiance against tyranny which changed Shin's life.

Shin pulled off a harrowing escape from the camp which left him severely injured, lived by his wits crossing the barren countryside of North Korea, and made it across the border to China, where he worked as a menial farm hand and yet lived in luxury unheard of in North Korea. Raised in the camp, his expectations for human behaviour had nothing to do with the reality outside. As the author observes, “Freedom, in Shin's mind, was just another word for grilled meat.”

Freedom, beyond grilled meat, was something Shin found difficult to cope with, even after making his way to South Korea (where the state has programs to integrate North Korean escapees into the society) and then the United States (where, as the only person born in a North Korean prison camp ever to escape, he was a celebrity among groups advocating for human rights in North Korea). Growing up in an intensely anti-human environment, cut off from all information about the outside world, leaves one ill-equipped for normal human interactions and the flood of information those born into liberty consider normal.

Much as with Nothing to Envy (September 2011), this book made my blood boil. It is not just the injustice visited upon Shin and all the prisoners of the regime who did not manage to escape, but also the thought of those in our own societies who would condemn us to comparable servitude in the interest of a “higher good” as they define it.

 Permalink

Brown, Dan. Inferno. New York: Doubleday, 2013. ISBN 978-0-385-53785-8.
This thriller is a perfect companion to Robert Zubrin's nonfiction Merchants of Despair (April 2013). Both are deeply steeped in the culture of Malthusian anti-humanism and the radical prescriptions of those who consider our species a cancer on the planet. In this novel, art historian and expert in symbology Robert Langdon awakens in a hospital bed with no memory of events since walking across the Harvard campus. He is startled to learn he is in Florence, Italy with a grazing gunshot wound to the scalp, and the target of a murderous pursuer whose motives are a mystery to him.

Langdon and the doctor who first treated him and then rescued him from a subsequent attack begin to dig into the mystery. Langdon, recovering from retrograde amnesia, finds reality mixing with visions reminiscent of Dante's Inferno, whose imagery and symbols come to dominate their quest to figure out what is going on. Meanwhile, a shadowy international security group which was working with a renowned genetic engineer begins to suspect that they may have become involved in a plot with potentially catastrophic consequences. As the mysteries are investigated, the threads interweave into a complex skein, hidden motives are revealed, and loyalties shift.

There were several times whilst reading this novel that I expected I'd be dismissing it here as having an “idiot plot”—that the whole narrative didn't make any sense except as a vehicle to introduce the scenery and action (as is the case in far too many action movies). But the author is far too clever for that (which is why his books have become such a sensation). Every time you're sure something is nonsense, there's another twist of the plot which explains it. At the end, I had only one serious quibble with the entire plot. Discussing this is a hideous spoiler for the entire novel, so I'm going to take it behind the curtain. Please don't read this unless you've already read the novel or are certain you don't intend to.

Spoiler warning: Plot and/or ending details follow.  
The vector virus created by Zobrist, as described on p. 438, causes a randomly selected one third of the human population to become sterile. But how can a virus act randomly? If the virus is inserted into the human germ-line, it will be faithfully copied into all offspring with the precision of human DNA replication, so variation in the viral genome, once incorporated into the germ-line, is not possible. The only other way the virus could affect only a third of the population is that there is some other genetic property which enables the virus to render the organism carrying it sterile. But if that is the case, and the genetic property be heritable, only those who lacked the variation(s) which allowed the virus to sterilise them would reproduce, and in a couple of generations the virus, while still incorporated in the human genome, would have no effect on the rate of growth of the human population: “life finds a way”.
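The selection argument above can be made concrete with a toy calculation. Suppose, purely for illustration (this model and the function below are my own, not anything from the novel), that susceptibility to the sterilising virus behaves like a single recessive allele under random mating: homozygous carriers are sterile, everyone else reproduces normally. The classic recursion for a recessive sterile allele, q′ = q / (1 + q), then shows the sterile fraction collapsing within a few generations, exactly the “life finds a way” outcome:

```python
# Toy population-genetics sketch (my own illustration, not from the novel):
# treat susceptibility to the sterilising virus as a recessive allele 'a'.
# Homozygous aa individuals are sterile; Aa and AA reproduce normally.
# Under random mating, selection against a recessive sterile allele obeys
# the classic recursion q' = q / (1 + q), where q is the 'a' frequency.

def susceptible_fraction_over_generations(initial_fraction, generations):
    """Return the sterile (aa) fraction of the population each generation."""
    q = initial_fraction ** 0.5        # allele frequency such that q^2 = initial_fraction
    fractions = []
    for _ in range(generations):
        fractions.append(q * q)        # Hardy-Weinberg frequency of aa genotype
        q = q / (1.0 + q)              # aa individuals removed from the breeding pool
    return fractions

if __name__ == "__main__":
    # Start with one third of the population sterile, as in the novel.
    for gen, frac in enumerate(susceptible_fraction_over_generations(1 / 3, 6)):
        print(f"generation {gen}: {frac:.1%} sterile")
    # generation 0: 33.3% sterile; generation 1: 13.4%; by generation 3, under 5%.
```

Even in this worst case for selection (a fully recessive trait, which hides from selection in heterozygotes), the demographic effect is largely gone within a couple of generations; a dominant trait would vanish even faster.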

Further, let's assume the virus could, somehow, randomly sterilise a third of the human population, that natural selection could not render it ineffective, and science found no way to reverse it or was restrained from pursuing a remedy by policy makers. Well, then, you'd have a world in which some fraction of couples could have children and the balance could not. (The distribution depends upon whether the virus affects the fertility of males, females, or both.) Society adapts to such circumstances. Would not the fertile majority increase their fertility to meet market demand for adoption by infertile couples?

Spoilers end here.  

This is a fine thriller, meticulously researched, which will send you off to look up the many works of art and architectural wonders which appear in it, and may plant an itch to visit Florence and Venice. I'm sure it will make an excellent movie, as is sure to happen after the success of cinematic adaptations of the author's previous Robert Langdon novels.

 Permalink

Aldrin, Buzz with Leonard David. Mission to Mars. Washington: National Geographic Society, 2013. ISBN 978-1-4262-1017-4.
As Buzz Aldrin (please don't write to chastise me for misstating his name: while born as Edwin Eugene Aldrin, Jr., he legally changed his name to Buzz Aldrin in 1988) notes, while Neil Armstrong may have been the first human to step onto the Moon, Aldrin was the first alien from another world to board a spacecraft bound for Earth (but how can he be sure?). After those epochal days in July of 1969, Aldrin, more than any other person who went to the Moon, has worked energetically to promote space exploration and settlement, developing innovative mission architectures to expand the human presence into the solar system. This work continues his intellectual contributions to human space flight which began with helping to develop the techniques of orbital rendezvous still employed today and pioneering neutral-buoyancy training for extra-vehicular activity, which enabled him to perform the first completely successful demonstration of work in orbit on Gemini XII.

In this book Aldrin presents his “Unified Space Vision” for the next steps beyond the home planet. He notes that what we know about the Moon today is very different from the little we mostly guessed when he set foot upon that world. Today it appears that the lunar polar regions may have abundant resources of water, which provide not only a source of oxygen for lunar settlers but also, electrolysed by abundant solar power, a source of rocket fuel for operations beyond the Earth. Other lunar resources may allow the fabrication of solar panels from in situ materials, reducing the mass which must be launched from the Earth. Aldrin “cyclers” will permit transfers between the Earth and Moon and the Earth and Mars with little expenditure of propellant.

Aldrin argues that space, from low Earth orbit to the vicinity of the Moon, be opened up to explorers, settlers, and entrepreneurs from all countries, private and governmental, to discover what works and what doesn't, and which activities make economic sense. To go beyond, however, he argues that the U.S. should take the lead, establishing a “United Strategic Space Enterprise” with the goal of establishing a permanent human settlement on Mars by 2035. He writes, “around 2020, every selected astronaut should consign to living out his or her life on the surface of Mars.”

And there's where it all falls apart for me. It seems to me the key question that is neither asked nor answered when discussing the establishment of a human settlement on Mars can be expressed in one word: “why?” Yes, I believe that long-term survival of humans and their descendants depends upon not keeping everything in one planetary basket, and I think there is tremendously interesting science to be done on Mars, which may inform us about the origin of life and its dissemination among celestial bodies, the cycle of climate on planets and the influence of the Sun, and many other fascinating subjects. It makes sense to have a number of permanent bases on Mars to study these things, just as the U.S. and other countries have maintained permanent bases in Antarctica for more than fifty years. But I no longer believe that the expansion of the human presence in the solar system is best accomplished by painfully clawing our way out of one deep gravity well only to make a long voyage and then make an extremely perilous descent into another one (the Martian atmosphere is thick enough you have to worry about entry heating, but not thick enough to help in braking to landing speed). Once you're on Mars, you only have solar power half the time, just as on Earth, and you have an atmosphere which is useless to breathe.

Even though few people take it seriously any more, Gerard K. O'Neill's vision of space settlements in The High Frontier (May 2013) makes far more sense to me. Despite Aldrin's enthusiasm for private space ventures, it seems to me that his vision for the exploration and settlement of Mars will be, for at least the first decades, the kind of elitist venture performed by civil servants that the Apollo Moon landings were. In this book he envisions no economic activity on Mars which would justify the cost of supporting an expanding human presence there. Now, wealthy societies may well fund a few bases, just as they do in the Antarctic, but that will never reach what O'Neill calls the point of “ignition”—where the settlement pays for itself and can fund its own expansion by generating economic value sufficient to import its needs and additional settlers. O'Neill works out in great detail how space settlements in cislunar space can do this, and I believe his economic case, first made in the 1970s, has not only never been refuted but is even more persuasive today.

Few people have thought as long and hard about what it takes to make our species a spacefaring civilisation as Buzz Aldrin, nor worked so assiduously over decades to achieve that goal. This is a concise summation of his view for where we should go from here. I disagree with much of his strategy, but hey, when it comes to extraterrestrial bodies, he's been there and I haven't. This is a slim book (just 272 pages in the hardback edition), and the last 20% is a time line of U.S. space policies by presidential administrations, including lengthy abstracts of speeches, quoted from space.com.

 Permalink

Stiennon, Patrick J. G., David M. Hoerr, and Doug Birkholz. The Rocket Company. Reston, VA: American Institute of Aeronautics and Astronautics, [2005] 2013. ISBN 978-1-56347-696-9.
This is a very curious book. The American Institute of Aeronautics and Astronautics isn't known as a publisher of fiction, and yet here we have, well, not exactly a novel, but something between an insider account of a disruptive technological start-up company along the lines of The Soul of a New Machine and a business school case study of a company which doesn't exist, at least not yet.

John Forsyth, having made a fortune in the computer software industry, decided to invest in what he believed was the next big thing—drastically reducing the cost of access to space and thereby opening a new frontier not just to coercive governments and wealthy tourists but to pioneers willing to bet their future on expanding the human presence beyond the planet. After dropping a tidy sum in a space start-up in the 1990s, he took a step back and looked at what it would take to build a space access business which would have a real probability of being profitable on a time scale acceptable to investors with the resources it would take to fund it.

Having studied a variety of “new space” companies which focussed on providing launch services in competition with incumbent suppliers, he concluded that in the near term reducing the cost of access to orbit would only result in shrinking overall revenue, as demand for launch services was unlikely to expand much even with a substantial reduction in launch cost. But, as he observed, while in the early days of the airline industry most airlines were unprofitable, surviving on government subsidies, aircraft manufacturers such as Boeing did quite well. So, he decided his new venture would be a vendor of spacecraft hardware, leaving operations and sales of launch services to his customers. It's ugly, but it gets you there.

In optimising an aerospace system, you can trade off one property against another. Most present-day launch systems are optimised to provide maximum lift weight to orbit and use expensive lightweight construction and complex, high-performance engines to achieve that goal. Forsyth opted to focus on reusability and launch rate, even at the cost of payload. He also knew that his budget would not permit the development of exotic technologies, so he chose a two stage to orbit design which would use conventional construction techniques and variants of engines with decades of service history.

He also decided that the launcher would be manned. Given the weight of including crew accommodations, an escape system, and life support equipment this might seem an odd decision, but Forsyth envisioned a substantial portion of his initial market to be countries or other groups who wanted the prestige of having their own manned space program and, further, if there was going to be a pilot on board, he or she could handle payload deployment and other tasks which would otherwise require costly and heavy robotics. (I cannot, for the life of me, figure out the rationale for having a pilot in the first stage. Sure, the added weight doesn't hit the payload to orbit as much as in the second stage, but given the very simple trajectory of the first stage the pilot is little more than a passenger.)

The book chronicles the venture from concept, through business plan, wooing of investors, building the engineering team, making difficult design trade-offs, and pitching the new vehicle to potential customers, carefully avoiding the problem of expectations outpacing reality which had been so often the case with earlier commercial space ventures. The text bristles with cost figures and engineering specifications, the latter all in quaint U.S. units including slugs per square foot (ewww…). Chapter 6 includes a deliciously cynical view of systems engineering as performed in legacy aerospace contractors.

I noted several factual and a number of copy-editing errors, but none which call into question the feasibility of the design. The technologies required to make this work are, for the most part, already in existence and demonstrated in other applications, but whether it would be possible to integrate them into a new vehicle with the schedule and budget envisioned here is unclear. I do not understand at all what happens after the orbital stage lands under its parawing. Both the propellant tanks and interstage compartment are “balloon tanks”, stabilised by pressure. This is fine for flight to orbit, orbital operations (where there is no stress on the interstage when it is depressurised for payload deployment), or re-entry, but after the stage lands horizontally how does the pilot exit through the crew hatch without the interstage losing pressure and crumpling on the runway? Some of the plans for lunar and planetary applications in the final few chapters seem wooly to me, but then I haven't seriously thought about what you might do with a reusable launcher with a payload capacity of 2250 kg that can fly once a day.

The illustrations by Doug Birkholz are superb, reminiscent of those by Russell W. Porter in Amateur Telescope Making. Author Stiennon received U.S. patent 5,568,901 in 1996 for a launch system as described in this book.

 Permalink

June 2013

Baxter, Stephen. Moonseed. New York: Harper Voyager, 1998. ISBN 978-0-06-105903-2.
Stephen Baxter is one of the preeminent current practitioners of “hard” science fiction—trying to tell a tale of wonder while getting the details right, or at least plausible. In this novel, a complacent Earth plodding along and seeing its great era of space exploration recede into the past is stunned when, without any warning, Venus explodes, showering the Earth with radiation which seems indicative of processes at grand unification and/or superstring energies. “Venus ponchos” become not just a fashion accessory but a necessity for survival, and Venus shelters an essential addition to basements worldwide.

NASA geologist Henry Meacher, his lunar landing probe having been cancelled due to budget instability, finds himself in Edinburgh, Scotland, part of a project to analyse a sample of what may be lunar bedrock collected from the last Apollo lunar landing mission decades before. To his horror, he discovers that what happened to Venus may have been catalysed by something in the Moon rock, and that it has escaped and begun to propagate in the ancient volcanic vents around Edinburgh. Realising that this is a potential end-of-the-world scenario, he tries to awaken the world to the risk, working through his ex-wife, a NASA astronaut, and argues the answer to the mystery must be sought where it originated, on the Moon.

This is grand scale science fiction—although the main narrative spans only a few years, its consequences stretch decades thereafter and perhaps to eternity. There are layers and layers of deep mystery, and ambiguities which may never be resolved. There are some goofs and quibbles big enough to run a dinosaur-killer impactor through (I'm talking about “harenodynamics”: you'll know what I mean when you get there, but there are others), but still the story works, and I was always eager to pick it back up and find out what happens next. This is the final volume in Baxter's NASA trilogy. I found the first two novels, Voyage and Titan (December 2012), better overall, but if you enjoyed them, you'll almost certainly like this book.

 Permalink

Neven, Thomas E. Sir, The Private Don't Know! Seattle: Amazon Digital Services, 2013. ASIN B00D5EO5EU.
The author, a self-described “[l]onghaired surfer dude” from Florida, wasn't sure what he wanted to do with his life after graduating from high school, but he was certain he didn't want to go directly to college—he didn't have the money for it and had no idea what he might study. He had thought about a military career, but was unimpressed when a Coast Guard recruiter never got back to him. He arrived at the Army recruiter's office only to find the recruiter a no-show. While standing outside the Army recruiter's office, he was approached by a Marine recruiter, whose own office was next door. He was receptive to the highly polished pitch and signed enlistment papers on March 10, 1975.

This was just about the lowest ebb in 20th century U.S. military history. On that very day, North Vietnam launched the offensive which would, two months later, result in the fall of Saigon and the humiliating images of the U.S. embassy being evacuated by helicopter. Opposition to the war had reduced public support for the military to all-time lows, and the image of veterans as drug-addicted, violence-prone sociopaths was increasingly reinforced by the media. In this environment, military recruiters found it increasingly difficult to meet their quotas (which failure could torpedo their careers), and were motivated and sometimes encouraged to bend the rules. Physical fitness, intelligence, and even criminal records were often ignored or covered up in order to make quota. This meant that the recruits arriving for basic training, even for a supposedly elite force such as the Marines, included misfits, some of whom were “dumb as a bag of hammers”.

Turning this flawed raw material into Marines had become a matter of tearing down the recruits' individuality and personality to ground level and then rebuilding it into a Marine. When the author arrived at Parris Island a month after graduating from high school, he found himself fed into the maw of this tree chipper of the soul. Within minutes he, and his fellow recruits, all shared the thought, “What have I gotten myself into?”, as the mental and physical stress mounted higher and higher. “The DIs [drill instructors] were gods; they had absolute power and were capricious and cruel in exercising it.” It was only in retrospect that the author appreciated that this was not just hazing or sadism (although there were plenty of those), but a deliberate part of the process to condition the recruits to instantly obey any order without questioning it and submit entirely to authority.

This is a highly personal account of one individual's experience in Marine basic training. The author served seven years in the Marine Corps, retiring with the rank of staff sergeant. He then went on to college and graduate school, and later was associate editor of the Marine Corps Gazette, the professional journal of the Corps.

The author was one of the last Marines to graduate from the “old basic training”. Shortly thereafter, a series of scandals involving mistreatment of recruits at the hands of drill instructors brought public and Congressional scrutiny of Marine practices, and there was increasing criticism among the Marine hierarchy that “Parris Island was graduating recruits, not Marines.” A great overhaul of training was begun toward the end of the 1970s and has continued to the present day, swinging back and forth between leniency and rigour. Marine basic has never been easy, but today there is less overt humiliation and make-work and more instruction and testing of actual war-fighting skills. An epilogue (curiously set in a monospace typewriter font) describes the evolution of basic training in the years after the author's own graduation from Parris Island. For a broader-based perspective on Marine basic training, see Thomas Ricks's Making the Corps (February 2002).

This book is available only in electronic form for the Kindle as cited above, under the given ASIN. No ISBN has been assigned to it.

 Permalink

Cody, Beth. Looking Backward: 2162–2012. Seattle: CreateSpace, 2012. ISBN 978-1-4681-7895-1.
Julian West was a professor of history at Fielding College, a midwestern U.S. liberal arts institution, where he shared the assumptions of his peers: big government was good; individual initiative was suspect; and the collective outweighed the individual. At the inauguration of a time capsule on the campus, he found himself immured within it and, after inhaling a concoction consigned to the future by the chemistry department, woke up 150 years later, when the capsule was opened, to discover himself in a very different world.

The United States, which was the foundation of his reference frame, has collapsed due to unsustainable debt and entitlement commitments. North America has fragmented into a variety of territories, including the Free States of America, which include the present-day states of Oklahoma, Missouri, Kansas, Iowa, Nebraska, Colorado, Utah, Nevada, Idaho, Montana, Wyoming, and North and South Dakota. The rest of the former U.S. has separated into autonomous jurisdictions with very different approaches to governance. The Republic of Texas has become entirely Texan, while New Hampshire has chosen to go it alone, in keeping with their porky-spine tradition. A rump USA, composed of failed states, continues to pursue the policies which caused the collapse of their railroad-era, continental-scale empire.

West returns to life in the Free States, which have become a classical libertarian republic as imagined by Rothbard. The federal government is supported only by voluntary contributions, and state and local governments are constrained by the will of their constituents. West, disoriented by all of this, is taken under the wing of David Seeton, a history professor at Fielding in the 22nd century, who welcomes West into his home and serves as a guide to the new world in which West finds himself.

West and Seeton explore this world, so strange to West, and it slowly dawns on West (amidst flashbacks to his past life), that this might really be a better way of organising society. There is a great amount of preaching and didactic conversation here; while it's instructive if you're really interested in how a libertarian society might work, many may find it tedious.

Finally, West, who was never really sure his experience of the future mightn't have been a dream, has a dream experience which forces him to confront the conflict of his past and future.

This is a book I found both tiresome and enlightening. I would highly recommend it to anybody who has contemplated a libertarian society but dismissed it as “That couldn't ever work”. The author is clear that no solution is perfect, and that any society will reflect the flaws of the imperfect humans who compose it. The libertarian society is presented as the “least bad discovered so far”, with the expectation that free people will eventually discover even better ways to organise themselves. Reading this book is much like slogging through Galt's speech in Atlas Shrugged (April 2010)—it takes some effort, but it's worth doing. It is obviously derivative of Edward Bellamy's Looking Backward, which presented a socialist utopia, but I'd rather live in Cody's future than Bellamy's.

 Permalink

Smolin, Lee. Time Reborn. New York: Houghton Mifflin, 2013. ISBN 978-0-547-51172-6.
Early in his career, the author received some unorthodox career advice from Richard Feynman. Feynman noted that in physics, as in all sciences, there were a large number of things that most professional scientists believed which nobody had been able to prove or demonstrate experimentally. Feynman's insight was that, when considering one of these problems as an area to investigate, there were two ways to approach it. The first was to try to do what everybody had failed previously to accomplish. This, he said, was extremely difficult and unlikely to succeed, since it assumes you're either smarter than everybody who has tried before or have some unique insight which eluded them. The other path is to assume that the failure of numerous brilliant people might indicate that what they were trying to demonstrate was, in fact, wrong, and that it might be wiser for the ambitious scientist to search for evidence to the contrary.

Based upon the author's previous work and publications, I picked up this book expecting a discussion of the problem of time in quantum gravity. What I found was something breathtakingly more ambitious. In essence, the author argues that when it comes to cosmology, the physics of the universe as a whole, physicists have been doing it wrong for centuries, and that what he calls the “Newtonian paradigm” must be replaced with one in which time is fundamental in order to stop speaking nonsense.

The equations of general relativity, especially when formulated in attempts to create a quantum theory of gravitation, seem to suggest that our perception of time is an illusion: we live in a timeless block universe, in which our consciousness can be thought of as a cursor moving through a fixed, deterministic spacetime. In general relativity, the rate of perceived flow of time depends upon one's state of motion and the amount of mass-energy in the vicinity of the observer, so it makes no sense to talk about any kind of global time co-ordinate. Quantum mechanics, on the other hand, assumes there is a global clock, external to the system and unaffected by it, which governs the evolution of the wave function. These views are completely incompatible—hence the problem of time in quantum gravity.

But the author argues that “timelessness” has its roots much deeper in the history and intellectual structure of physics. When one uses Newtonian mechanics to write down a differential equation which describes the path of a ball thrown upward, one is reducing a process which would otherwise require enumerating a list of positions and times to a timeless relationship which is valid over the entire trajectory. Time appears in the equation simply as a label which causes it to emit the position at that moment. The equation of motion, and, more importantly, the laws of motion which allow us to write it down for this particular case, are entirely timeless: they affect the object but are not affected by it, and they appear to be specified outside the system.

This, when you dare to step back and think about it, is distinctly odd. Where did these laws come from? Well, in Newton's day and in much of the history of science since, most scientists would say they were prescribed by a benevolent Creator. (My own view that they were put into the simulation by the 13-year-old superkid who created it in order to win the Science Fair with the most interesting result, generating the maximum complexity, is isomorphic to this explanation.) Now, when you're analysing a system “in a box”, it makes perfect sense to assume the laws originate from outside and are fixed; after all, we can compare experiments run in different boxes and convince ourselves that the same laws obtain regardless of symmetries such as translation, orientation, or boost. But note that once we try to generalise this to the entire universe, as we must in cosmology, we run into a philosophical speed bump of singularity scale. Now we cannot escape the question of where the laws came from. If they're from inside the universe, then there must have been some dynamical process which created them. If they're outside the universe, they must have been imposed by some process which is external to the universe, which makes no sense if you define the universe as all there is.

Smolin suggests that laws exist within our universe, and that they evolve in an absolute time, which is primordial. There is no unmoved mover: the evolution of the universe (and the possibility that universes give birth to other universes) drives the evolution of the laws of physics. Perhaps the probabilistic results we observe in quantum mechanical processes are not built-in ahead of time and prescribed by timeless laws outside the universe, but rather a random choice from the results of previous similar measurements. This “principle of precedence”, which is remarkably similar to that of English common law, perfectly reproduces the results of most tests of quantum mechanics, but may be testable by precision experiments where circumstances never before created in the universe are measured, for example in quantum computing. (I am certain Prof. Smolin would advocate for my being beheaded were I to point out the similarity of this hypothesis with Rupert Sheldrake's concept of morphic resonance; some years ago I suggested to Dr Sheldrake a protein crystallisation experiment on the International Space Station to test this theory; it is real science, but to this date nobody has done it. Few wish to risk their careers testing what “everybody knows”.)

This is one of those books you'll need to think about after you've read it, and then, after some time, re-read to get the most out of it. A collection of online appendices expands upon topics discussed in the book. An hour-long video discussion, in which the author presents the ideas in the book and the intellectual path which led him to them, is available.

 Permalink

July 2013

Wolfe, Steven. The Obligation. Los Gatos, CA: Smashwords, 2013. ISBN 978-1-301-05798-6.
This is a wickedly clever book. A young congressional staffer spots a plaque on the wall of his boss, a rotund 15-term California Democrat, which reads, “The colonization of space will be the fulfillment of humankind's Obligation to the Earth.” Intrigued, he mentions the plaque to the congressman, and after a series of conversations, finds himself sent on a quest to meet archetypes of what the congressman refers to as the six Endowments of humanity—capacities present only in our own species which set us apart from all of those from whom we evolved, and equip us for a destiny which is our ultimate purpose: the Wanderer, Settler, Inventor, Builder, Visionary, and Protector. These Endowments have evolved, driven by the Evolutionary Impulse, toward the achievement, by humans and their eventual descendants, of three Obligations, which will require further evolution into a seventh Endowment.

The staffer tries to reconcile his discovery of the human destiny beyond the planet with his romance with a goo-goo eco-chick who advocates cancelling the space program to first solve our problems on the Earth. As he becomes progressively enlightened, he, and then she, realise that there is no conflict between these goals, and that planetary stewardship and serving as the means for Gaia “going to seed” and propagating the life it has birthed outward into the cosmos are a unified part of the Obligation.

When I describe this book as “wickedly clever”, what I mean is that it creates a mythology for space migration which co-opts and subsumes that of its most vehement opponents: the anti-human Merchants of Despair (April 2013). It recasts humanity, not as a “cancer on the planet”, but rather as the means by which Gaia can do what every life form must: reproduce. Indeed, Robert Zubrin, author of the aforementioned book, along with a number of other people I respect, has contributed effusive blurbs on the book's Web site. It provides a framework for presenting humanity's ultimate destiny and the purpose of life to those who have never thought of them in terms similar to those I expressed in my Epilogue to Rudy Rucker's The Hacker and the Ants. (Warning—there are spoilers for the novel in my Epilogue.)

In the acknowledgements, the author thanks several people for help in editing the manuscript. Given the state of what was published, one can only imagine what these stalwarts started with. The text is riddled with copy-editing errors: I noted 61, and I was just reading for enjoyment, not doing a close proof. In chapter 6, visiting Evan Phillips, the Builder, the protagonist witnesses a static test of an Aerojet LR-87 engine, which is said to have a “white hot exhaust” and is described as “off the shelf hardware”. But the LR-87, which powered Titan missiles and launchers, has used hypergolic fuels ever since the Titan II replaced the Titan I in the early 1960s. These storable fuels burn with a clear flame. Re-engineering an LR-87 to burn LOX and RP-1 would be a major engineering project, hardly off the shelf. Further, during the test, the engine is throttled to various thrust levels, but the LR-87 was a fixed thrust engine; no model incorporated throttling. In chapter 9, after visiting a Kitt Peak telescope earlier in the night, in the predawn hours, he steps out under the sky and sees a “nearly full Moon … dimming the blazing star fields I saw at Kitt Peak”. But a full Moon always rises at sunset (think about the geometry), so if the Moon were near full, it would have been up when he visited the telescope. There are other factual goofs, but I will not belabour them, as that isn't what this book is about. It is a rationale for space settlement which, if the reader can set aside the clumsy editing, may be seductively persuasive even to those inclined to oppose it.

Update: The copy-editing errors mentioned above have been corrected in a new edition now posted. If you previously purchased and downloaded the Kindle edition, log in to your Amazon account, go to “Manage Your Kindle / Manage Your Devices” and turn on Automatic Book Update. The next time you synchronise your reading device, the updated edition will be downloaded. (2013-08-03 13:28 UTC)

Only the Kindle edition is available from Amazon, but a wide variety of other electronic formats, including HTML, PDF, EPUB, and plain text are available from Smashwords.

 Permalink

Cashill, Jack and James Sanders. First Strike. Nashville: WND Books, 2003. ISBN 978-0-7852-6354-8.
On July 17, 1996, just 12 minutes after takeoff, TWA Flight 800 from New York to Paris exploded in mid-air off the coast of Long Island and crashed into the Atlantic Ocean. All 230 passengers and crew on board were killed. The disaster occurred on a summer evening in perfect weather, and was witnessed by hundreds of people from land, sea, and air—the FBI interviewed more than seven hundred eyewitnesses in the aftermath of the crash.

There was something “off” about the accident investigation from the very start. Many witnesses, including some highly credible people with military and/or aviation backgrounds, reported seeing a streak of light flying up and reaching the airliner, followed by a bright flash like that produced by a high-velocity explosive. Only later did a fireball from burning fuel appear and begin to fall to the ocean. In total disregard of the statutory requirements for an air accident investigation, which designate the National Transportation Safety Board (NTSB) as the lead agency, the FBI was given prime responsibility and excluded NTSB personnel from interviews with eyewitnesses, restricted access to interview transcripts and physical evidence, and denied NTSB laboratories the opportunity to test debris recovered from the crash field.

NTSB investigations involve “partners”: representatives from the airline, aircraft manufacturer, the pilots' and aerospace workers' unions, and others. These individuals observed, and remarked pointedly upon, how different this investigation was from the others in which they had participated. Further, and more disturbingly, some saw what appeared to be FBI tampering with the evidence: falsifying records such as the location at which debris had been recovered, altering flight recorder data, and making key evidence disappear, items as varied as the scavenge pump proposed as the ignition source for the fuel tank explosion advanced as the cause of the crash, seats from the area contaminated with a residue some thought indicative of missile propellant or a warhead explosion, and dozens of eyewitness sketches.

Captain Terrell Stacey was the TWA representative in the investigation. He was in charge of all 747 pilot operations for the airline and had flown the Flight 800 aircraft into New York the night before its final flight. After observing these irregularities in the investigation, he got in touch with author Sanders, a former police officer turned investigative reporter, and arranged for Sanders to obtain samples of the residue on the seats for laboratory testing. The tests found an elemental composition consistent with missile propellant or explosive, which was reported on the front page of a Southern California newspaper on March 10th, 1997. The result: the FBI seized Sanders's phone records, tracked down Stacey, and arrested and perp-walked Sanders and his wife (a TWA trainer and former flight attendant). They were hauled into court and convicted under a federal statute intended to prosecute souvenir hunters disturbing crash sites. The government denied Sanders was a journalist (despite his work having been published in mainstream venues for years) and disallowed a First Amendment defence.

This is just a small part of what stinks to high heaven about this investigation. So shoddy was control of the chain of custody of the evidence, and so blatant the disregard of the testimony of hundreds of eyewitnesses, that alternative theories of the crash have flourished from shortly after the event until the present day. It is difficult to imagine what might have been the motives behind a cover-up of a missile attack against a U.S. airliner, but as the author notes, only a few months remained before the 1996 U.S. presidential election, in which Clinton was running on a platform of peace and prosperity. A major terrorist attack would have subverted this narrative, so perhaps the well-documented high-level meetings which occurred in the immediate aftermath of the crash decided to direct a finding of a mechanical failure of a kind which had occurred only once before in the eighty-year history of aviation, with that incident sometimes attributed to terrorism. What might have been seen as a wild conspiracy theory in the 1990s seems substantially more plausible in light of the Benghazi attack in the run-up to the 2012 presidential election and its treatment by the supine legacy media.

A Kindle edition is available. If you are interested in this independent investigation of Flight 800, be sure to see the documentary Silenced which was produced by the authors and includes interviews with many of the key eyewitnesses and original documents and data. Finally, if this was just an extremely rare mechanical malfunction, why do so many of the documents from the investigation remain classified and inaccessible to Freedom of Information Act requests seventeen years thereafter?

 Permalink

Walsh, Michael. Shock Warning. New York: Pinnacle Books, 2011. ISBN 978-0-7860-2412-4.
This is the third novel in the author's “Devlin” series of thrillers. When I read the first, Hostile Intent (September 2010), I described it as a “tangled, muddled mess” and concluded that the author “may eventually master the thriller, but I doubt I'll read any of the sequels to find out for myself”. Well, I did eventually read the sequel, Early Warning (January 2012), which I enjoyed very much, and concluded that the author was well on the path to being a grandmaster of the techno-thriller genre.

Then we have this book, the conclusion to the Devlin trilogy. Here the author decides to “go large” and widen the arena from regional terrorist strikes to a global apocalyptic clash of civilisations end-times scenario. The result is an utter flop. First of all, this novel shouldn't be read by anybody who hasn't read the previous two books—you won't have the slightest idea who the characters are, the backstory which has brought them to their present points, or what motivates them to behave as they do. Or maybe I can simplify the last sentence to say “This novel shouldn't be read by anybody”—it's that bad.

There is little more I can say which would not be spoilers for either this book or the series, so let us draw the curtain.

Spoiler warning: Plot and/or ending details follow.  
The key thing about a techno-thriller is that the technology should be plausible and that it should be thrilling. This novel fails by both criteria. The central conceit, that a laser operated by a co-opted employee of CERN on the Côte d'Azur could project lifelike holographic images of the Blessed Virgin Mary and the Prophet Mohammed by bouncing them off the lunar ranging retroreflectors on the Moon's surface, is laugh-out-loud absurd. A moment's calculation of the energy required to return a visible signal to the Earth will result in howls of laughter, and that's before you consider that holograms don't work anything like the author presumes they do.
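For the curious, here is a rough order-of-magnitude version of that calculation. All the input numbers (beam divergence, retroreflector area, corner-cube aperture, detection threshold) are illustrative assumptions based on publicly known lunar laser ranging figures, not precise values:

```python
import math

# Back-of-envelope link budget for bouncing a laser off an Apollo
# lunar retroreflector array and seeing the return by naked eye.

d = 3.84e8           # Earth-Moon distance, m
wavelength = 532e-9  # green laser, m
E_photon = 6.626e-34 * 3e8 / wavelength  # photon energy, J

theta_up = 1e-5      # uplink divergence (~2 arcsec atmospheric seeing), rad
A_rr = 0.1           # total corner-cube area of an array, m^2 (approx.)
spot_moon = math.pi * (theta_up * d) ** 2   # beam footprint on the Moon, m^2
f_up = A_rr / spot_moon                      # fraction hitting the array

D_cube = 0.038       # single corner-cube aperture, m
theta_dn = wavelength / D_cube               # diffraction-limited return beam
spot_earth = math.pi * (theta_dn * d) ** 2   # return footprint on Earth, m^2
A_pupil = math.pi * (3.5e-3) ** 2            # dark-adapted eye pupil, m^2
f_dn = A_pupil / spot_earth                  # fraction entering one eye

f_total = f_up * f_dn                        # round-trip photon fraction

rate_seen = 100.0    # photons/s for marginal naked-eye detection (rough)
power = rate_seen / f_total * E_photon       # continuous laser power, W
print(f"round-trip fraction ≈ {f_total:.1e}")
print(f"power for a barely visible point ≈ {power/1e3:.0f} kW")
```

Even under these generous assumptions, the round-trip photon fraction comes out around 10⁻²¹, so a naked-eye-visible return demands tens of kilowatts of continuous laser power, and what you would see is a faint point of light at the reflector, not an image of anything.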

Our high-end NSA and special forces heroes communicate using a “double Playfair cipher”. This is a digraph substitution cipher which can be broken in milliseconds by modern computers.
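For readers who haven't met this cipher family: Playfair substitutes pairs of letters using a keyed 5×5 square (the “double” variant uses two squares, which helps little against a computer). Here is a minimal sketch of classical single-square Playfair, with an arbitrary example key:

```python
# Minimal classical Playfair encryption. I and J share a cell; doubled
# letters and odd-length messages are padded with X, per convention.

def playfair_square(key):
    seen = []
    for c in (key + "ABCDEFGHIKLMNOPQRSTUVWXYZ").upper():
        c = "I" if c == "J" else c
        if c.isalpha() and c not in seen:
            seen.append(c)
    return seen  # 25 letters, row-major 5x5 square

def encrypt(plaintext, key):
    sq = playfair_square(key)
    pos = {c: (i // 5, i % 5) for i, c in enumerate(sq)}
    letters = [("I" if c == "J" else c)
               for c in plaintext.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):           # build digraphs
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            b = "X"                   # split doubled letters
            i += 1
        else:
            i += 2
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                  # same row: shift right
            out += [sq[ra * 5 + (ca + 1) % 5], sq[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:                # same column: shift down
            out += [sq[((ra + 1) % 5) * 5 + ca], sq[((rb + 1) % 5) * 5 + cb]]
        else:                         # rectangle: swap columns
            out += [sq[ra * 5 + cb], sq[rb * 5 + ca]]
    return "".join(out)

print(encrypt("HELLO WORLD", "ROYAL"))
```

Because each plaintext digraph always maps to the same ciphertext digraph, pair-frequency statistics give a computer everything it needs: hence “milliseconds”.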

Danny brings the MH-6H Little Bird “just a few feet off the high desert floor”, whereupon Devlin “rappelled down, hit the ground, and started running”. If it were just a few feet, why didn't he just step off the chopper, or why didn't Danny land it?

Spoilers end here.  

I could go on and on, but I won't, because I didn't care enough about this story to critique it in detail. There is a constant vertigo as the story line cuts back and forth among characters we've met in the first two novels, many of whom play only peripheral roles in this story. There is an entire subplot about a manipulative contender for the U.S. presidency which fades out and goes nowhere. This is a techno-thriller in which the tech is absurd and the plot induces chuckles rather than thrills.

 Permalink

Goldman, David P. How Civilizations Die. Washington: Regnery Publishing, 2011. ISBN 978-1-59698-273-4.
I am writing this review in the final days of July 2013. A century ago, in 1913, there was a broad consensus as to how the 20th century would play out, at least in Europe. A balance of power had been established among the great powers, locked into alliances and linked with trade relationships which made it seem to most observers that large-scale conflict was so contrary to the self-interest of nations that it was unthinkable. And yet, within a year, the irrevocable first steps toward what would be the most sanguinary conflict in human history so far would be underway, a global conflict which would result in more than 37 million casualties, with 16 million dead. The remainder of the 20th century was nothing like the conventional wisdom of 1913, with an even more costly global war to come, the great powers of 1913 reduced to second rank, and a bipolar world emerging stabilised only by the mutual threat of annihilation by weapons which could destroy entire cities within a half hour of being launched.

What if our expectations for the 21st century are just as wrong as those of confident observers in 1913?

The author writes the “Spengler” column for Asia Times Online. It is commonplace to say “demographics is destiny”, yet Goldman is one of the very few observers who really take this to heart and follow the demographic trends which are visible to everybody through to their logical conclusions. Those conclusions portend a very different 21st century than most anticipate. Europe, Russia, China, Japan, and increasingly, the so-called developing world are dying: they have fertility rates not just below replacement (around 2.1 children per woman), but in many cases deep into “demographic death spiral” territory from which no recovery is possible. At present fertility rates, by 2100 the population of Japan will have fallen by 55%, Russia by 53%, Germany by 46%, and Italy by 39%. For a social welfare state, whose financial viability presumes a large population of young workers who will pay for the pensions and medical care of a smaller cohort of retirees, these numbers are simply catastrophic. The inverted age pyramid places an impossible tax burden upon workers, which further compounds the demographic collapse, since they cannot afford to raise families large enough to arrest it.

Some in the Islamic world have noted this trend and interpreted it as meaning ultimate triumph for the ummah. To this, Goldman replies, “not so fast”—the book is subtitled “And Why Islam is Dying Too”. In fact, the Islamic world is in the process of undergoing a demographic transition as great as that of the Western nations, but on a time scale so short as to be unprecedented in human history. And while Western countries will face imposing problems coping with their aging populations, at least they have sufficient wealth to make addressing the problem, however painful, possible. Islamic countries without oil (which is where the overwhelming majority of Muslims live) have no such financial or human resources. Egypt, for example, imports about half its food calories and has a functional illiteracy rate of around 40%. These countries not only lack a social safety net, they cannot afford to feed their current population, not to mention a growing fraction of retirees.

When societies are humiliated (as Islam has been in its confrontation with modernity), they not only lose faith in the future, but lose their faith, as has happened in post-Christian Europe, and then they cease to have children. Further, as the author observes, while in traditional society children were an asset who would care for their parents in old age, “In the modern welfare state, child rearing is an act of altruism.” (p. 194) This altruism becomes increasingly difficult to justify when, increasingly, children are viewed as the property of the state, to be indoctrinated, medicated, and used to its ends and, should the parents object, abducted by an organ of the state. Why bother? Fewer and fewer couples of childbearing age make that choice. Nothing about this is new: Athens, Sparta, and Rome all experienced the same collapse in fertility when they ceased to believe in their future—and each one eventually fell.

This makes for an extraordinarily dangerous situation. The history of warfare shows that in many conflicts the majority of casualties on the losing side occur after it was clear to those in political and military leadership that defeat was inevitable. As trends forecaster Gerald Celente says, “When people have nothing to lose, they lose it.” Societies which become aware of their own impending demographic extinction or shrinking position on the geopolitical stage will be tempted to go for the main prize before they scroll off the screen. This means that calculations based upon rational self-interest may not predict the behaviour of dying countries, any more than all of the arguments in 1913 about a European war being irrational kept one from erupting a year later.

There is much, much more in this book, with some of which I agree and some of which I find dubious, but it is all worthy of your consideration. The author sees the United States and Israel as exceptional states, as both have largely kept their faith and maintained a sustainable birthrate to carry them into the future. He ultimately agrees with me (p. 264) that “It is cheaper to seal off the failed states from the rest of the world than to attempt to occupy them and control the travel of their citizens.”

The twenty-first century may be nothing like what the conventional wisdom crowd assume. Here is a provocative alternative view which will get you thinking about how different things may be, as trends already in progress, difficult or impossible to reverse, continue in the coming years.

In the Kindle edition, end notes are properly linked to the text and in notes which cite a document on the Web, the URL is linked to the on-line document. The index, however, is simply a useless list of terms without links to references in the text.

 Permalink

August 2013

Cawdron, Peter. Xenophobia. Seattle: CreateSpace, 2013. ISBN 978-1-4905-6823-2.
This is the author's second novel of humanity's first contact with an alien species, but it is not a sequel to his earlier Anomaly (December 2011); the story is completely unrelated, and the nature of the aliens and the way in which the story plays out could not be more different, not only from the earlier novel, but from the vast majority of first contact fiction. To borrow terminology from John Brunner's Stand on Zanzibar, most tales of first contact are “the happening world”, cutting back and forth between national capitals, military headquarters, scientific institutions, and so on, while this story is all about “tracking with closeups”. Far from the seats of power, most of the story takes place in civil-war-torn Malawi. It works superbly.

Elizabeth Bower is a British doctor working with Médecins Sans Frontières at a hospital in a rural part of the country. Without warning, a U.S. military contingent, operating under the U.N. flag, arrives with orders to evacuate all personnel. Bower refuses to abandon those in her care, and persuades a detachment of Army Rangers to accompany her and the patients to a Red Cross station in Kasungu. During the journey, Bower and the Rangers learn that Western forces are being evacuated world-wide following the announcement that an alien spacecraft is bound for Earth, and military assets are being regrouped in their home countries to defend them.

Bower and the Rangers then undertake the overland trek to the capital of Lilongwe, where they hope to catch an evacuation flight for U.S. Marines still in the city. During the journey, things get seriously weird: the alien mothership, as large as a small country, is seen passing overhead; a multitude of probes rain down and land all around, seemingly on most of the globe; and giant jellyfish-like “floaters” enter the atmosphere and begin to cruise with unfathomable objectives.

Upon arrival at the capital, their problems are not with aliens but with two-legged Terries—rebel forces. They are ambushed, captured, and delivered into the hands of a delusional, megalomaniacal, and sadistic “commander”. Bower and a Ranger who styles himself as “Elvis” are forced into an impossible situation in which their only hope is to make common cause with an alien.

This is a tautly plotted story in which the characters are genuinely fleshed-out and engaging. It does a superb job of sketching the mystery of a first contact situation, in which humans and aliens lack the means to communicate all but the most basic concepts and have every reason to distrust each other's motives. As is the case with many independently-published novels, there are a number of copy-editing errors: I noted a total of 26. There are also some factual goofs: the Moon's gravity is about 1/6 of that of the Earth, not 1/3; the verbal description of the computation of the Fibonacci sequence is incorrect; the chemical formula for water is given incorrectly; and Lagrange points are described as gravitational hilltops, while the dynamics are better described by thinking of them as valleys. None of these detracts in any way from enjoying the story.
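For the record, the rule the novel garbles is simply that each Fibonacci number is the sum of the two before it:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: each term is
    the sum of the two preceding terms."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```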

In the latter part of the book, the scope expands at a vertiginous pace from a close-up personal story to a sense of wonder on the interstellar scale. There is a scene, reminiscent of one of the most harrowing episodes in the Heinlein juveniles, which I still find chilling when I recall it today (you'll know which one I'm speaking of when you get there), in which the human future is weighed in the balance.

This is a thoroughly satisfying novel which looks at first contact in an entirely different way than any other treatment I've encountered. It will also introduce you to a new meaning of the “tree of life”.

 Permalink

Fraser, George MacDonald. Quartered Safe Out Here. New York: Skyhorse Publishing, [1992, 2001] 2007. ISBN 978-1-60239-190-1.
George MacDonald Fraser is best known as the author of the Flashman historical novels set in the 19th century. This autobiographical account of his service in the British Army in Burma during World War II is fictionalised only in that he has changed the names of those who served with him, tried to reconstruct dialogue from memory, and reconstructed events as best he can from the snapshots the mind retains from the chaos of combat and the boredom of army life between contact with the enemy.

Fraser, though born to Scottish parents, grew up in Carlisle, England, in the region of Cumbria. When he enlisted in the army, it was in the Border Regiment, composed almost entirely of Cumbrian troops. As the author notes, “…Cumbrians of old lived by raid, cattle theft, extortion, and murder; in war they were England's vanguard, and in peace her most unruly and bloody nuisance. They hadn't changed much in four centuries, either…”. Cumbrians of the epoch retained their traditional dialect, which may seem nearly incomprehensible to those accustomed to BBC English:

No offence, lad, but ye doan't 'alf ga broon. Admit it, noo. Put a dhoti on ye, an' ye could get a job dishin 'oot egg banjoes at Wazir Ali's. Any roads, w'at Ah'm sayin' is that if ye desert oot 'ere — Ah mean, in India, ye'd 'ev to be dooally to booger off in Boorma — the ridcaps is bound to cotch thee, an' court-martial gi'es thee the choice o' five years in Teimulghari or Paint Joongle, or coomin' oop t'road to get tha bollicks shot off. It's a moog's game. (p. 71)

A great deal of the text is dialogue in dialect, and if you find that difficult to get through, it may be rough going. I usually dislike reading dialect, but agree with the author that if it had been rendered into standard English the whole flavour of his experience would have been lost. Soldiers swear, and among Cumbrians profanity is as much a part of speech as nouns and verbs; if this offends you, this is not your book.

This is one of the most remarkable accounts of infantry combat I have ever read. Fraser was a grunt—he never rose above the rank of lance corporal during the events chronicled in the book and usually was busted back to private before long. The campaign in Burma was largely ignored by the press while it was underway and forgotten thereafter, but for those involved it was warfare at the most visceral level: combat harking back to the colonial era, fought by riflemen without armour or air support. Kipling of the 1890s would have understood precisely what was going on. On the ground, Fraser and his section had little idea of the larger picture or where their campaign fit into the overall war effort. All they knew is that they were charged with chasing the Japanese out of Burma and that “Jap” might be “half-starved and near naked, and his only weapon was a bamboo stake, but he was in no mood to surrender.” (p. 191)

This was a time when the most ordinary men from Britain and the Empire fought to defend what they confidently believed was the pinnacle of civilisation from the forces of barbarism and darkness. While constantly griping about everything, as soldiers are wont to do, when the time came they shouldered their packs, double-checked their rifles, and went out to do the job. From time to time the author reflects on how far Britain, and the rest of the West, has fallen: “One wonders how Londoners survived the Blitz without the interference of unqualified, jargon-mumbling ‘counsellors’, or how an overwhelming number of 1940s servicemen returned successfully to civilian life without benefit of brain-washing.” (p. 89)

Perhaps it helps that the author is a master of the historical novel: this account does a superb job of relating events as they happened and were perceived at the time without relying on hindsight to establish a narrative. While he doesn't abjure the occasional reflexion from decades later or reference to regimental history documents, for most of the account you are there—hot, wet, filthy, constantly assailed by insects, and never knowing whether that little sound you heard was just a rustle in the jungle or a Japanese patrol ready to attack with the savagery which comes when an army knows its cause is lost, evacuation is impossible, and surrender is unthinkable.

But this is not all boredom and grim combat. The account of the air drop of supplies starting on p. 96 is one of the funniest passages I've ever read in a war memoir. Cumbrians will be Cumbrians!

 Permalink

Jurich, E. J. Vacuum Tube Amplifier Basics. 2nd ed. Seattle: Amazon Digital Services, 2013. ASIN B00C0BMTGU.
If you can get past the sloppy copy-editing and production values, this book is a useful introduction for those interested in designing and building their own vacuum tube audio equipment. Millennials and others who have only ever listened to compressed audio will wonder why anybody would want to use such an antiquated technology, but those of us who appreciate it have a simple answer: it sounds better. The reason for this is simple once you poke through the mysticism surrounding the topic. It is in the nature of audio that peaks in the signal are much higher than the mean value. Solid-state amplifiers tend to be linear up until some signal level, but then “clip”—truncating the signal into a square top, introducing odd harmonics which the human ear finds distasteful. Tube amplifiers, on the other hand, tend to round off transients which exceed their capacity, introducing mostly second harmonic distortion which the ear and brain deem “mellow”.
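That argument is easy to check numerically: drive a pure tone through a hard symmetric clipper (solid-state style) and through a gently asymmetric soft curve, then compare the second and third harmonics. The transfer curves below are illustrative assumptions, a crude stand-in for real circuits, not models of any particular amplifier:

```python
import numpy as np

N = 4096
t = np.arange(N) / N
x = np.sin(2 * np.pi * 8 * t)        # 8 cycles of a pure tone

hard = np.clip(1.5 * x, -1.0, 1.0)   # overdriven, flat-topped
soft = np.tanh(x + 0.3 * x * x)      # rounded and slightly asymmetric

def harmonic(signal, n):
    """Magnitude of the n-th harmonic of the 8-cycle fundamental."""
    spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)
    return spectrum[8 * n]

for name, y in [("hard clip", hard), ("soft asymmetric", soft)]:
    print(f"{name}: 2nd = {harmonic(y, 2):.4f}, 3rd = {harmonic(y, 3):.4f}")
```

The symmetric hard clip produces essentially no second harmonic but a strong third, while the asymmetric soft curve produces a prominent second harmonic, which is the usual explanation for the “mellow” tube sound.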

“Do you actually believe that?”, the silicon purity police shriek. Well, as a matter of fact, I do, and I currently use a 40 watt per channel tube amplifier I built from a kit more than a decade ago. It's a classic ultra-linear design using EL34 output tubes, and it sounds much better than the 200 watt per channel solid-state amplifier it replaced (after the silicon went up in smoke).

This book will introduce you to vacuum tube circuitry, and those accustomed to solid-state designs may be amazed at how few components are needed to get the job done. Since every component in the signal path has the potential to degrade its fidelity, the simplicity of vacuum tube designs is one of the advantages that recommend them. A variety of worked-out vacuum tube designs are presented, either to be built by the hobbyist or as starting points for original designs, and detailed specifications are presented for tubes widely used in audio gear.

The production quality is what we've sadly come to expect for inexpensive Kindle-only books. I noted more than 40 typographical errors (many involving the humble apostrophe), and in the tube data at the end, information which was clearly intended to be set in columns is just run together.

This book is available only in electronic form for the Kindle as cited above, under the given ASIN. No ISBN has been assigned to it.

 Permalink

Clarey, Aaron. Enjoy the Decline. Seattle: CreateSpace, 2013. ISBN 978-1-4802-8476-0.
Many readers may find this book deeply cynical, disturbing, and immoral. I found it cynical, disturbing, and immoral, but also important, especially for younger people who wish to make the most of their lives and find themselves in a United States which, with the consent of the majority, is descending into a grey collectivist tyranny and surveillance state, where productive and creative people are seen as subjects to be exploited to benefit an ever-growing dependent class which supports the state which supports them.

I left the United States in 1991 and have only returned since for brief visits with family or to attend professional conferences. Since 2001, as the totalitarian vibe there has grown rapidly, I try to make these visits as infrequent as possible, my last being in 2011. Since the 1990s, I have been urging productive people in the U.S. to consider emigrating but, with only a couple of exceptions, nobody has taken this advice. I've always considered this somewhat odd, since most people in the U.S. are descended from those who left their countries of birth and came there to escape tyranny and seek opportunity. But most people in the U.S. seem to recoil from the idea of leaving, even as their own government becomes more repressive and exploits them to a greater extent than the regimes their ancestors fled.

This book is addressed to productive people (primarily young ones with few existing responsibilities) who have decided to remain in the United States. (Chapter 10 discusses emigration, and while it is a useful introduction to the topic, I'd suggest those pondering that option read Time to Emigrate? [January 2007], even though it is addressed to people in the United Kingdom.) The central message is that with the re-election of Obama in 2012, the U.S. electorate have explicitly endorsed a path which will lead to economic and geopolitical decline and ever-increasing exploitation of a shrinking productive class in favour of a growing dependent population. How is a productive person, what the author calls a “Real American”, to respond to this? One could dedicate oneself to struggling to reverse the trend through political activism, or grimly make the best of the situation while working hard even as more of the fruits of one's labour are confiscated. Alternatively, one can seek to “enjoy the decline”: face the reality that once a democratic society reaches the tipping point where more than half of the electorate receives more in government transfer payments than it pays in taxes, it's game over; a new set of incentives has been put in place, which those wishing to make the most of their lives must confront forthrightly unless they wish to live in a delusional state.

In essence, the author argues, the definition of the “good life” is fundamentally transformed once a society begins the slide into collectivist tyranny. It is a fool's errand to seek to get an advanced education when that only burdens one with debt which will take most of a lifetime to repay and make capital formation in the most productive working years impossible. Home ownership, once the goal of young people and families, and their main financial asset, only indentures you to a state which can raise property taxes at any time and confiscate your property if you cannot pay. Marriage and children must be weighed, particularly by men, against the potential downside in case things don't work out, which is why, increasingly, men are going on strike. Scrimping and saving to contribute to a retirement plan is only piling up assets a cash-strapped government may seize when it can't pay its bills, as has already happened in Argentina and other countries.

What matters? Friends, family (if you can get along with them), having a good time, making the most of the years when you can hike, climb mountains, ride motorcycles way too fast, hunt, fish, read books that interest you, and share all of this and more with a compatible companion. And you're doing this while your contemporaries are putting in 60-hour weeks, seeing half or more of their income confiscated, and hoping to do these things at some distant time in the future, assuming their pensions don't default and their retirement funds aren't stolen or inflated away.

There are a number of things here which people may find off-putting, if not shocking. In chapter 7, the author discusses the “ ‘Smith and Wesson’ Retirement Plan”—not making any provision for retirement, living it up while you can, and putting a bullet in your head when you begin to fail. I suspect this sounds like a lot better idea when you're young (the author was 38 years old at the publication date of this book) than when you're getting closer to the checkered flag. In chapter 8, introduced by a quote from Ayn Rand, he discusses the strategy of minimising one's income and thereby qualifying for as many government assistance programs as possible. Hey, if the people have legitimately voted for them, why not be a beneficiary instead of the sucker who pays for them?

Whatever you think of the advice in this book (which comes across as sincere, not satirical), the thing to keep in mind is that it is an accurate description of the incentives which now exist in the U.S. While it's unlikely that many productive people will read this book and dial their ambitions back into slacker territory or become overt parasites, what's important is the decisions made on the margin by those unsure how to proceed in their lives. As the U.S. becomes poorer, weaker, and less free, perhaps the winners, at least on a relative basis, will be those who do not rage against the dying of the light or struggle to exist as they are progressively enslaved, but rather people who opt out to the extent possible and react rationally to the incentives as they exist. I would emigrate (and did), but if that's not possible or thinkable, this book may provide a guide to making the best of a tragic situation.

The book contains numerous citations of resources on the Web, each of which is linked in the text: in the Kindle edition, clicking the link takes you to the cited Web page.

 Permalink

Drexler, K. Eric. Radical Abundance. New York: PublicAffairs, 2013. ISBN 978-1-61039-113-9.
Nanotechnology burst into public awareness with the publication of the author's Engines of Creation in 1986. (The author coined the word “nanotechnology” to denote engineering at the atomic scale, fabricating structures with the atomic precision of molecules. A 1974 Japanese paper had used the term “nano-technology”, but with an entirely different meaning.) Before long, the popular media were full of speculation about nanobots in the bloodstream, self-replicating assemblers terraforming planets or mining the asteroids, and a world economy transformed into one in which scarcity, in the sense we know it today, would be transcended. Those inclined to darker speculation warned of “grey goo”—runaway self-replicators which could devour the biosphere in 24 hours, or nanoengineered super weapons.

Those steeped in conventional wisdom scoffed at these “futuristic” notions, likening them to earlier predictions of nuclear power “too cheap to meter” or space colonies, but detractors found it difficult to refute Drexler's arguments that the systems he proposed violated no law of physics and that the chemistry of such structures was well-understood and predicted that, if we figured out how to construct them, they would work. Drexler's argument was reinforced when, in 1992, he published Nanosystems, a detailed technical examination of molecular engineering based upon his MIT Ph.D. dissertation.

As the 1990s progressed, an increasing consensus emerged that nanosystems as Drexler envisioned them would work if they could be built, but that the path from our present-day crude fabrication technologies to atomic precision on the macroscopic scale was unclear. On the other hand, there were a number of potential pathways which might get there, increasing the probability that one or more might work. The situation is not unlike that in the early days of integrated circuits. It was clear from the laws of physics that were it possible to fabricate a billion transistors on a chip they would work, but it was equally clear that a series of increasingly difficult and expensive hurdles would have to be surmounted in order to fabricate such a structure. Its feasibility then became a question of whether engineers were clever enough to solve all the problems along the way and whether the market for each generation of increasingly complex chips would be large enough to fund the development of the next.

A number of groups around the world, both academic and commercial, began to pursue potential paths toward nanotechnology, laying the foundation for the next step beyond conventional macromolecular chemical synthesis. It seemed like the major impediment to a rapid take-off of nanotechnology akin to that experienced in the semiconductor field was a lack of funding. But, as Eric Drexler remarked to me in a conversation in the 1990s, most of the foundation of nanotechnology was chemistry and “You can buy a lot of chemistry for a billion dollars.”

That billion dollars appeared to be at hand in 2000, when the U.S. created the billion-dollar National Nanotechnology Initiative (NNI). The NNI quickly published an implementation plan which clearly stated that “the essence of nanotechnology is the ability to work at the molecular level, atom by atom, to create large structures with fundamentally new molecular organization”. And then it all went south. As is almost inevitable with government-funded science and technology programs, the usual grantmasters waddled up to the trough, stuck their snouts into the new flow of funds, and diverted it toward their research interests, which had nothing to do with the mission statement of the NNI. They even managed to redefine “nanotechnology” for their own purposes to exclude the construction of objects with atomic precision. This is not to say that some of the research the NNI funds isn't worthwhile, but it's not nanotechnology in the original sense of the word, and doesn't advance toward the goal of molecular manufacturing. (We often hear about government-funded research and development “picking winners and losers”. In fact, such programs pick only losers, since the winners will already have been funded by the productive sector of the economy based upon their potential return.)

In this book Drexler attempts a fundamental reset of the vision he initially presented in Engines of Creation. He concedes the word “nanotechnology” to the hogs at the federal trough and uses “atomically precise manufacturing” (APM) to denote a fabrication technology which, starting from simple molecular feedstocks, can make anything by fabricating and assembling parts in a hierarchical fashion. Just as books, music, and movies have become data files which can be transferred around the globe in seconds, copied at no cost, and accessed by a generic portable device, physical objects will be encoded as fabrication instructions which a generic factory can create as required, constrained only by the requirement that the factory be large enough to assemble the final product. But the same garage-sized factory can crank out automobiles, motorboats, small aircraft, bicycles, computers, furniture, and anything on that scale or smaller just as your laser printer can print any document whatsoever as long as you have a page description of it.

Further, many of these objects can be manufactured using almost exclusively the most abundant elements on Earth, reducing cost and eliminating resource constraints. And atomic precision means that there will be no waste products from the manufacturing process—all intermediate products not present in the final product will be turned back into feedstock. Ponder, for a few moments, the consequences of this for the global economy.

In chapter 5 the author introduces a heuristic for visualising the nanoscale. Imagine the world scaled up in size by a factor of ten million, and time slowed down by the same factor. This scaling preserves properties such as velocity, force, and mass, and allows visualising nanoscale machines as the same size and operating speed as those with which we are familiar. At this scale a single transistor on a contemporary microchip would be about as big as an iPad and the entire chip the size of Belgium. Using this viewpoint, the author acquaints the reader with the realities of the nanoscale and demonstrates that analogues of macroscopic machines, when we figure out how to fabricate them, will work and, because they will operate ten million times faster, will be able to process macroscopic quantities of material on a practical time scale.
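
The arithmetic of the heuristic is easy to check. In this sketch the 30 nm transistor feature size and 2 cm chip edge are my illustrative assumptions, not figures from the book:

```python
# Drexler's chapter-5 visualisation: magnify lengths and slow time by the
# same factor K, so velocity (length / time) comes out unchanged.
K = 10_000_000  # the ten-million scale factor

def magnify(length_m: float) -> float:
    """Length in metres as it appears in the scaled-up view."""
    return length_m * K

transistor = magnify(30e-9)  # ~0.3 m: a transistor feature appears iPad-sized
chip = magnify(0.02)         # ~200,000 m: a 2 cm chip spans ~200 km, about Belgium

# Velocity is invariant: a machine moving 1 m/s covers K metres in K seconds
# in the scaled view, which is still 1 m/s.
print(transistor, chip)
```

Conversely, running the factor the other way is what makes real nanomachines ten million times faster than their scaled-up counterparts appear.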

But can we build them? Here, Drexler introduces the concept of “exploratory engineering”: using the known laws of physics and conservative principles of engineering to explore what is possible. Essentially, there is a landscape of feasibility. One portion is what we have already accomplished; another is ruled out by the laws of physics. The rest is that which we could accomplish if we could figure out how and could afford it. This is a huge domain—given unlimited funds and a few decades to work on the problem, there is little doubt one could build a particle accelerator which circled the Earth's equator. Drexler cites the work of Konstantin Tsiolkovsky as a masterpiece of exploratory engineering highly relevant to atomically precise manufacturing. By 1903, working alone, he had demonstrated the feasibility of achieving Earth orbit by means of a multistage rocket burning liquid hydrogen and oxygen. Now, Tsiolkovsky had no idea how to build the necessary engines, fuel tanks, guidance systems, launch facilities, etc., but from basic principles he was able to show that no physical law ruled out their construction and that known materials would suffice for them to work. We are in much the same position with APM today.

The tone of this book is rather curious. Perhaps having been burned by his earlier work being sensationalised, the author is reserved to such an extent that on p. 275 he includes a two-paragraph aside urging readers to “curb their enthusiasm”, and much of the text, while discussing what may be the most significant development in human history since the invention of agriculture, often reads like a white paper from the Brookings Institution with half a dozen authors: “Profound changes in national interests will call for a ground-up review of grand strategy. Means and ends, risks and opportunities, the future self-perceived interests of today's strategic competitors—none of these can be taken for granted.” (p. 269)

I am also dismayed to see that Drexler appears to have bought in to the whole anthropogenic global warming scam and repeatedly genuflects to the whole “carbon is bad” nonsense. The acknowledgements include a former advisor to the anti-human World Wide Fund for Nature.

Despite quibbles, if you've been thinking “Hey, it's the 21st century, where's my nanotechnology?”, this is the book to read. It chronicles steady progress on the foundations of APM and multiple paths by which the intermediate steps toward it may be achieved. It is enlightening and encouraging. Just don't get enthusiastic.

 Permalink

Levin, Mark R. The Liberty Amendments. New York: Threshold Editions, 2013. ISBN 978-1-4516-0627-0.
To many observers including this one, the United States appear to be in a death spiral, guided by an entrenched ruling class toward a future where the only question is whether a financial collapse will pauperise the citizenry before or after they are delivered into tyranny. Almost all of the usual remedies seem to have been exhausted. Both of the major political parties are firmly in the control of the ruling class who defend the status quo, and these parties so control access to the ballot, media, and campaign funding that any attempt to mount a third party challenge appears futile. Indeed, the last time a candidate from a new party won the presidency was in 1860, and that was because the Whig party was in rapid decline and the Democrat vote was split two ways.

In this book Levin argues that the time is past when a solution could be sought in electing the right people to offices in Washington and hoping they would appoint judges and executive department heads who would respect the constitution. The ruling class, which now almost completely controls the parties, has the tools to block any effective challenge from outside their ranks, and even on the rare occasion an outsider is elected, the entrenched administrative state and judiciary will continue to defy the constitution, legislating from within the executive and judicial branches. What does a written constitution mean when five lawyers, appointed for life, can decide what it means, with their decision not subject to appeal by any other branch of government?

If a solution cannot be found by electing better people to offices in Washington then, as Lenin asked, “What is to be done?” Levin argues that the framers of the constitution (in particular George Mason) anticipated precisely the present situation and, in the final days of the constitutional convention in Philadelphia, added text to Article Five providing that the constitution can be amended when:

The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution, or, on the Application of the Legislatures of two thirds of the several States, shall call a Convention for proposing Amendments,…

Of the 27 amendments adopted so far, all have been proposed by Congress—the state convention mechanism has never been used (although in some cases Congress proposed an amendment to preempt a convention when one appeared likely). As Levin observes, the state convention process completely bypasses Washington: a convention is called by the legislatures of two thirds of the states, and amendments it proposes are adopted if ratified by three quarters of the states. Congress, the president, and the federal judiciary are completely out of the loop.

Levin proposes 11 amendments, all of which he argues are consistent with the views of the framers of the constitution and, in some cases, restore constitutional provisions which have been bypassed by clever judges, legislators, and bureaucrats. The amendments include term limits for all federal offices (including the Supreme Court); repeal of the direct election of senators and a return to their being chosen by state legislatures; super-majority overrides of Supreme Court decisions, congressional legislation, and executive branch regulations; restrictions on the taxing and spending powers (including requiring a balanced budget); reining in expansive interpretation of the commerce clause; requiring compensation for takings of private property; provisions to guard against voter fraud; and making it easier for the states to amend the constitution.

In evaluating Levin's plan, the following questions arise:

  1. Is amending the constitution by the state convention route politically achievable?
  2. Will the proposed amendments re-balance the federal system sufficiently to solve (or at least make it possible to begin to solve) its current problems?
  3. Are there problems requiring constitutional change not addressed by the proposed amendments?
  4. Will leviathan be able to wiggle out of the new constitutional straitjacket (or ignore its constraints with impunity) as it has done with the existing constitution?

I will address each of these questions below. Some of these matters will take us pretty deep into the weeds, and you may not completely understand the discussion without having read the book (which, of course, I heartily recommend you do).

Is amending the constitution by the state convention route politically achievable?

Today, the answer to this is no. Calling a convention to propose amendments requires applications from two thirds of the state legislatures: at least 34 of the 50. Let us assume none of the 17 Democrat-controlled legislatures would vote to call a convention. That leaves 27 Republican-controlled legislatures, 5 split (one house Republican, one Democrat), and quirky Nebraska, whose legislature is officially non-partisan. Even if all of these voted for the convention, you're still one state short. But it's unlikely any of the 5 split legislatures would vote for a convention, and even in the 27 Republican-controlled legislatures there will be a substantial number of legislators sufficiently wedded to the establishment, or fearful of losing the federal funds propping up their state's budget, that they'd vote against the convention.
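
The thresholds and the head-count work out as follows (a back-of-the-envelope sketch; the party counts are those cited above, as of the book's 2013 publication):

```python
import math

STATES = 50
call_threshold = math.ceil(2 * STATES / 3)    # 34 legislatures to call a convention
ratify_threshold = math.ceil(3 * STATES / 4)  # 38 states to ratify amendments

republican, split, nebraska = 27, 5, 1        # head-count from the paragraph above
best_case = republican + split + nebraska     # 33: one state short of 34
print(call_threshold, ratify_threshold, best_case)
```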

The author forthrightly acknowledges this, and states clearly that this is a long-term process which may take decades to accomplish. In fact, since three quarters of the states must vote to ratify amendments adopted by a convention, it wouldn't make sense to call one until there was some confidence 38 or more states would vote to adopt them. In today's environment, obtaining that kind of super-majority seems entirely out of reach.

But circumstances can change. Any attempt to re-balance the constitutional system to address the current dysfunction is racing against financial collapse at the state and federal level, and against societal collapse due to loss of legitimacy of the state in the eyes of its subjects, a decreasing minority of whom believe it has the “consent of the governed”. As states go bankrupt, pension obligations are defaulted upon, essential services are curtailed, and governments attempt to extract ever more from productive citizens through taxes, fees, regulations, depreciation of the currency, and eventually confiscation of retirement savings, the electorate even in “blue” states may shift toward re-balancing a clearly dysfunctional and failing system.

Perhaps the question to ask is not whether this approach is feasible at present or may be at some point in the future, but rather whether any alternative plan has any hope of working.

Will the proposed amendments re-balance the federal system sufficiently to solve (or at least make it possible to begin to solve) its current problems?

It seems to me that a constitution with these amendments adopted will be far superior, in terms of balance, to the constitution in effect today. I say “in effect” because the constitution as intended by the framers has been so distorted and in some cases ignored that the text has little to do with how the federal government actually operates. These amendments are intended in large part to restore the original intent of the framers.

As an engineer, I am very much aware of the need for stable systems to incorporate negative feedback: when things veer off course, there needs to be a restoring force exerted in the opposite direction to steer back to the desired bearing. Many of these amendments create negative feedback mechanisms to correct excesses the framers did not anticipate. The congressional and state overrides of Supreme Court decisions and regulations provide a check on the making of law by judges and bureaucrats which were never anticipated in the original constitution. The spending and taxing amendments constrain profligate spending, runaway growth of debt, and an ever-growing tax burden on the productive sector.

I have a number of quibbles with the details and drafting of these amendments. I'm not much worried about these matters, since I'm sure that before they are presented to the states in final form for ratification they will be scrutinised in detail by eminent constitutional law scholars parsing every word for how it might be (mis)interpreted by mischievous judges. Still, here's what I noted in reading the amendments.

Some of the amendments write into the constitution matters which were left to statute in the original document. The spending amendment fixes the start of the fiscal year and cites the “Nation's gross domestic product” (defined how?). The amendments to limit the bureaucracy, protect private property, and grant the states the authority to check Congress all cite specific numbers denominated in dollars. How is a dollar to be defined in decades and centuries to come? Any specification of a specific dollar amount in the constitution is prone to becoming as quaint and irrelevant as the twenty dollars clause of the seventh amendment. The amendment to limit the bureaucracy gives constitutional status to the Government Accountability Office and the Congressional Budget Office, which are defined nowhere else in the document.

In the amendment to grant the states the authority to check Congress there is a drafting error. In section 4, the cross-reference (do we really want to introduce brackets into the text of the constitution?) cites “An Amendment Establishing How the States May Amend the Constitution”, while “An Amendment to Limit the Federal Bureaucracy” is clearly intended. That amendment writes the two party system into the constitution by citing a “Majority Leader” and “Minority Leader”. Yes, that's how it works now, but is it wise to freeze this political structure (which I suspect would have appalled Washington) into the fundamental document of the republic?

Are there problems requiring constitutional change not addressed by the proposed amendments?

The economic amendments fail to address the question of sound money. Ever since the establishment of the Federal Reserve System, the dollar (which, as noted above, is cited in several of the proposed amendments) has lost more than 95% of its purchasing power according to the Bureau of Labor Statistics CPI Inflation Calculator. Inflation is the most insidious tax of all, as it penalises savers to benefit borrowers, encourages short-term planning and speculation, and allows the federal government to write down its borrowings by depreciating the monetary unit in which they are to be repaid. Further, inflation runs the risk of the U.S. dollar being displaced as the world reserve currency (which is already happening, in slow motion so far, as bilateral agreements between trading partners to use their own currencies and bypass the dollar are negotiated). A government which can print money at will can evade the taxing constraints of the proposed amendment by inflating its currency and funding its expenditures with continually depreciating dollars. This is the route most countries have taken as bankruptcy approaches.
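
To put a number on that compounding: the sketch below assumes an average annual inflation rate of about 3.2% over the century following the Fed's establishment in 1913 (an illustrative figure of mine, not official BLS data) and reproduces the “more than 95%” loss:

```python
def remaining_purchasing_power(annual_inflation: float, years: int) -> float:
    """Fraction of a dollar's purchasing power left after compound inflation."""
    return (1 + annual_inflation) ** -years

# An assumed ~3.2%/year average over 100 years (illustrative, not CPI data):
left = remaining_purchasing_power(0.032, 100)
print(f"{left:.3f}")  # roughly 0.043: over 95% of purchasing power gone
```

The power of compounding is the point: a rate that sounds modest in any single year quietly confiscates almost everything over a working lifetime and beyond.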

Leaving this question unaddressed opens a dangerous loophole by which the federal government can escape taxing and spending constraints by running the printing press (as it is already doing at this writing). I don't know what the best solution would be (well, actually, I do, but they'd call me a nut if I proposed it), so let me suggest an amendment banning all legal tender laws and allowing parties to settle contracts in any unit of account they wish: dollars, euros, gold, copper, baseball cards, or goats.

I fear that the taxing amendment may be a Trojan horse with as much potential for mischief as the original commerce clause. It leaves the entire incomprehensible and deeply corrupt Internal Revenue Code in place, imposing only a limit on the amount extracted from each taxpayer and eliminating the estate tax. This means that Congress remains free to use the tax code to arbitrarily coerce or penalise behaviour as it has done ever since the passage of the sixteenth amendment. While the total take from each taxpayer is capped, the legislature is free to favour one group against another, subsidise activities by tax exemption or discourage them by penalties (think the Obamacare mandate jujitsu of the Roberts opinion), and penalise investment through punitive taxation of interest, dividends, and capital gains. A prohibition of a VAT or national sales tax is written into the constitution, thus requiring another amendment to replace the income tax (repealing the sixteenth amendment) with a consumption-based tax. If you're going to keep the income tax, I'm all for banning a VAT on top of it, but given how destructive and costly the income tax as presently constituted is to prosperity, I'd say if you're going to the trouble of calling a convention and amending the constitution, drive a stake through it and replace it with a consumption tax which wouldn't require any individual to file any forms ever. Write the maximum tax rate into the amendment, thus requiring another amendment to change it. In note 55 to chapter 5 the author states, “I do not object to ‘the Fair Tax,’ which functions as a national sales tax and eliminates all forms of revenue-based taxation, should it be a preferred amendment by delegates to a state convention.” Since eliminating the income tax removes a key mechanism by which the central government can coerce the individual citizen, I would urge it as a positive recommendation to such a convention.

Will leviathan be able to wiggle out of the new constitutional straitjacket (or ignore its constraints with impunity) as it has done with the existing constitution?

This is an issue which preoccupied delegates to the constitutional convention, federalists and anti-federalists alike, in the debate over ratification of the constitution, and delegates to the ratification conventions in the states. It should equally concern us now in regard to these amendments. After all, only 14 years after the ratification of the constitution the judicial branch made a power grab in Marbury v. Madison and got away with it, establishing a precedent for judicial review which has been the foundation for troublemaking to this day. In the New Deal, the previously innocuous commerce clause was twisted to allow the federal government to regulate a farmer's growing wheat for consumption on his own farm.

A key question is the extent to which the feedback mechanisms created by these amendments will deter the kind of Houdini-like escapes from the original constitution which have brought the U.S. to its present parlous state. To my mind, they will improve things: certainly if the Supreme Court or a regulatory agency knows its decisions can be overruled, they will be deterred from overreaching even if the overrule is rarely used. Knowing how things went wrong with the original constitution will provide guidance in the course corrections to come. One advantage of an amendment convention called by the states is that the debate will be open, on the record, and ideally streamed to anybody interested in it. Being a bottom-up process, the delegates will have been selected by state legislatures close to their constituents, and their deliberations will be closely followed and commented upon by academics and legal professionals steeped in constitutional and common law, acutely aware of how clever politicians are in evading constitutional constraints.

Conclusion

Can the U.S. be saved? I have no idea. But this is the first plan I have encountered which seems to present a plausible path to restoring its original concept of a constitutional republic. It is a long shot; it will certainly take a great deal of effort from the bottom up and many years to achieve; the U.S. may very well collapse before it can be implemented; but can you think of any other approach? People in the U.S. and those concerned with the consequences of its collapse will find a blueprint here, grounded in history and thoroughly documented, for an alternative path which just might work.

In the Kindle edition the end notes are properly bidirectionally linked to the text, and references to Web documents in the notes are linked directly to the on-line documents.

 Permalink

September 2013

Cawdron, Peter. Little Green Men. Los Gatos, CA: Smashwords, 2013. ISBN 978-1-301-76672-7.
The author is rapidly emerging as the contemporary grandmaster of the first contact novel. Unlike his earlier Anomaly (December 2011) and Xenophobia (August 2013), this novel is set not on near-future Earth but rather three centuries from now, when an exploration team has landed on a cryogenic planet 23 light years from the solar system in search of volatiles to refuel their ship in orbit. Science officer Michaels believes he's discovered the first instance of extraterrestrial life, after centuries of searching hundreds of star systems and thousands of planets in vain. While extremophile microbes are a humble form of life, discovering that life originated independently on another world would forever change humanity's view of its place in the universe.

Michaels and his assistant collect a sample to analyse back at the ship and are returning to their scout craft when, without warning, they are attacked, with the assistant gravely wounded. The apparent attackers are just fast-moving shadows, scattering when Michaels lights a flare. Upon getting back to the ship with the assistant barely clinging to life, Michaels has a disturbing conversation with the ship's doctor which causes him to suspect that there have been other mysterious incidents.

Another scouting party reports discovering a derelict freighter which appears nowhere in the registry of ships lost in the region and, when exploring it, is confronted with hostile opposition in about the least probable form you might imagine finding on a planet at 88 K. I suppose it isn't a spoiler if I refer you to the title of the book.

The crew are forced to confront what is simultaneously a dire threat to their lives, a profound scientific discovery, and a deep mystery which just doesn't make any sense. First contact just wasn't supposed to be anything like this, and it's up to Michaels and the crew to save their skins and figure out what is going on. The answer will amaze you.

The author dedicates this book as a tribute to Philip K. Dick, and this is a story worthy of the master. In the acknowledgements, he cites Michael Crichton among those who have influenced his work. As with Crichton's novels, this is a story where the screenplay just writes itself. This would make a superb movie and, given the claustrophobic settings and small cast of characters, wouldn't require a huge budget to make.

This book is presently available only in electronic form for the Kindle as cited above.

 Permalink

Mamet, David. The Secret Knowledge. New York: Sentinel, 2011. ISBN 978-1-59523-097-3.
From time to time I am asked to recommend a book for those who, immersed in the consensus culture and mass media, have imbibed the collectivist nostrums of those around them without thinking about them very much, but who, confronted with personal experiences of the consequences of these policies, have begun to doubt their wisdom. I have usually recommended the classics: Bastiat, Hayek, and Rothbard, but these works can be challenging to those marinated in the statist paradigm and unfamiliar with history before the age of the omnipresent state. Further, these works, while they speak to eternal truths, do not address the “wedge issues” of modern discourse, which are championed by soi-disant “progressives” and “liberals” distancing themselves from “traditional values”.

Well, now I have just the book to recommend. This book will not persuade committed ideologues of the left, who will not be satisfied until all individualism has been hammered into a uniform terrain of equality on the North Korean model (see Agenda 21 [November 2012]), but it may reach the much larger portion of the population who vote for the enemies of prosperity and freedom because they've been indoctrinated in government schools and an infiltrated higher education, then fed propaganda by an occupied legacy media. In Western societies which are on the razor's edge between liberty and enslavement, shifting just 10% of the unengaged electorate who vote unknowingly for serfdom can tip the balance toward an entirely different future.

It is difficult to imagine an author better qualified to write such a work. David Mamet was born into the Jewish intellectual community in Chicago and educated in a progressive school and college. Embarking upon a career in literature, theatre, and film, he won a Pulitzer prize, two Tony nominations, and two Oscar nominations. He has written and directed numerous films, and written screenplays for others. For most of his life he was immersed in the liberal consensus of the intellectual/media milieu he inhabited and no more aware of it than a fish is of water. Then, after reaching the big six-zero milestone in his life, he increasingly became aware that all of the things that he and his colleagues accepted at face value without critical evaluation just didn't make any sense. As one with the rare talent of seeing things as they are, unfiltered by an inherited ideology, he wrote a 2008 essay titled “Why I Am No Longer a ‘Brain-Dead Liberal’ ”, of which this book is a much extended elaboration. (Read the comments on this article to see just how “liberal” those with whom he has come to dissent actually are.)

Mamet surveys culture, economics, and politics with a wide-angle perspective, taking a ruthlessly empirical approach born of his life experience. To those who came early to these views, there's a temptation to say, “Well, finally you've got it”, but at the same time Mamet's enlightenment provides hope that confrontation with reality may awake others swimming in the collectivist consensus to the common sense and heritage of humankind so readily accessible by reading a book like this.

In the Kindle edition the end-notes are properly bi-directionally linked to the text, but the index is just a useless list of terms, without links to references in the text.

 Permalink

Thor, Brad. The Apostle. New York: Pocket Books, 2009. ISBN 978-1-4165-8658-6.
This is the eighth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel covert operative Harvath has retired from government service and is enjoying an extended vacation in the Maine woods when he is summoned for an urgent meeting with recently-elected president Robert Alden. Alden informs Harvath that Julia Gallo, the daughter of fund-raiser and media baron Stephanie Gallo, to whom Alden owes a great deal of his recent electoral triumph, has been taken hostage in Afghanistan.

The Taliban have confirmed the hostage-taking and offered to exchange the younger Gallo for an al-Qaeda operative held in an Afghan prison. The Afghan government views putting this malefactor on trial as key to its legitimacy and will not countenance giving him up. Alden asks Harvath to go to Afghanistan, spring the terrorist from prison, and make the exchange, all beneath the radar to avoid damaging Alden's posture of being “tough on terror”. Harvath wonders why Alden is willing to undertake such risk for one hostage while so many others have fallen unremarked in Afghanistan, but accepts the mission.

Meanwhile, a junior Secret Service agent on the president's protection detail overhears a conversation between Stephanie Gallo and the president which indicates her power over him may be based in a dark secret which, if exposed, might end his presidency.

Most of the story is set in Afghanistan and the author has drawn upon his sources to craft a believable picture of that chaotic place. Perhaps acknowledging the shrinking presence of the U.S. on the world stage in the epoch in which the book was written, when Harvath calls in the cavalry, it might not be who you expect. The intrigue in Washington plays out in parallel.

This is a satisfying thriller which, unlike some of the earlier books in the series, works perfectly well if it's the first one you read. If you start here you'll miss details such as how Harvath met his girlfriend or came by his dog, but that's about it, and they play no part in the plot. There is the usual name-dropping of tactical gear which I used to find annoying but have now come to find somewhat charming and fun to look up whilst reading the novel.

 Permalink

Wade, T. I. America One. Fuquay-Varina, NC: Triple T Productions, 2012. ASIN B00AOF238I.
If you can get over the anger and resentment of having your pocket picked of US$3.97 (Amazon price at this writing) for a laughably bad book (easily one of the worst I've read since I started this list in 2001, and perhaps the worst: only The New Paradigm [December 2005] comes close), set aside your scorn for the reviewers at Amazon who collectively awarded it four stars in sixty reviews, and approach it obliquely with the right sense of ironic detachment, like enjoying a disaster movie, knowing it's only fiction, then this may be one of the funniest science fiction novels of recent years. Bear in mind, though, that it's funny because you're laughing at the author.

The first warning of what is to come is the prefatory “Note from the Author” (emphasis in the original).

The author is not an expert in the field of space travel. The author is only a storyteller.

Even though hundreds of hours of Internet research were done to write this story, many might find the scientific description of space travel lacking, or simply not 100 percent accurate. The fuels, gases, metals, and the results of using these components are as accurate as the author could describe them.

Should the reader, at this point, be insufficiently forewarned as to what is coming, the author next includes the following acknowledgement:

The Author would like to gratefully thank Alexander Wade (13), his son, for his many hours of research into nuclear reactors, space flight and astro-engineering to make this story as close to reality as possible for you the reader.

which also provides a foretaste of the screwball and inconsistent use of capitalisation “you the reader” are about to encounter.

It is tempting here to make a cheap crack about the novel's demonstrating a 13-year-old's grasp of science, technology, economics, business, political and military institutions, and human behaviour, but this would be to defame the many 13-year-olds I've encountered through E-mail exchanges resulting from material posted at Fourmilab, who demonstrate a far deeper comprehension of such matters than one finds here.

The book is so laughably bad I'm able to explain just how bad without including a single plot spoiler. Helping in this is the fact that to the extent that the book has a plot at all, it is so completely absurd that to anybody with a basic grasp of reality it spoils itself simply by unfolding. Would-be thrillers which leave you gasping for air as you laugh out loud are inherently difficult to spoil. The text is marred by the dozens of copy-editing errors one is accustomed to in self-published works, though more typical of 99 cent specials than of books at this price point. Editing appears to have amounted to running a spelling checker over the text, leaving malapropisms and misused homonyms undetected; some of these can be amusing, such as the “iron drive motors” fueled by xenon gas. Without giving away significant plot details, I'll simply list things the author asks the reader to believe which are, shall we say, rather at variance with the world we inhabit. Keep in mind that this story is set in the very near future and includes thinly disguised characters based upon players in the contemporary commercial space market.

  • Our intrepid entrepreneur, shortly after receiving his Ph.D., moves his company and its 100 employees to Silicon Valley, where Forbes projects it will “double its workforce every month for the foreseeable future.” Well, Silicon Valley is known for exponential growth, but let's work this out. What's the “foreseeable future” to Forbes? Twelve months? Then, starting at 100 and doubling monthly, there would be 204,800 employees at the Silicon Valley campus. Twenty-four months? Then you'd have more than 838 million employees, more than two and a half times the population of the United States. This would doubtless be a great boon to the fast food joints on El Camino, but I'm not sure where you'd put all those people.
  • The Hubble Space Telescope is not used to search for asteroids. Such searches are performed with wide-field, Earth-based telescopes as used by the various Spaceguard projects. Provisional names for newly-discovered asteroids are in a form different than that used by the author, and the year in the name is inconsistent with the chronology of the novel.
  • Observations of the asteroid are said to have been made through “the most powerful telescope possible”, which revealed fine surface detail. Well, I don't know about the most powerful telescope possible, but the most powerful telescopes in existence are not remotely capable of resolving such an object at the distance cited as more than a dot of light. And all of those telescopes have objective mirrors, not lenses.
  • If one were to hitch a ride on a bomber, one would not “sit on top of tons of bombs”. Bombers do not carry bombs inside the crew compartment, but in an unpressurised bomb bay, where hitching a ride would be suicidal.
  • Plutonium-238 (used in radioisotope thermoelectric generators) is not “reactor-grade” since it would be useless in a fission reactor. And nobody calls generators which use it “reactors”, nor are they “car-sized”.
  • “EMPs, powerful Electric Magnetic Pulses” are not “produced by sun flares in deep space”. Electromagnetic pulses are most commonly produced by nuclear weapon detonations interacting with the Earth's atmosphere. And “sun flares in deep space” just doesn't make any sense at all: solar flares occur, as you might guess, on the Sun.
  • Now we come to the first instance of the humble element hydrogen being used as a kind of talisman which makes all things possible. We're told that the space station hull will be shielded by pouring liquid hydrogen into a honeycomb carbon structure which will then be sealed at the factory on Earth. Now what do you think will happen once that structure is sealed with liquid hydrogen inside? Right in one—bang! Without cryogenic cooling, liquid hydrogen, regardless of pressure, will flash to gas at any temperature above its critical point of 33 K, blowing the laminated panel apart. In any case, a thin shield composed of carbon and hydrogen would be as effective against heavy primary cosmic rays as a paper bag.
  • Next come the “electromagnets” made from a “powerful rare-earth magnetic material called neodymium”. Neodymium is used to make permanent magnets, not electromagnets. Here is the first instance of the author's not comprehending the difference between magnetism and gravitation: the magnets will create “a small gravity field that is about fifteen to twenty percent of what we are used to on earth”. As will become clear later, he's not talking about using magnetic boots, but rather creating artificial gravity, which is utter nonsense.
  • The liquid argon thermal insulation is ridiculous. It would blow apart the panels just like liquid hydrogen, and in any case, while gaseous argon has low thermal conductivity, in liquid form convection would render it useless as an insulator. Further, the author believes in the naïve concept of the “temperature of space”. A vacuum has no ability to transport heat, so the temperature of a body in space is determined by the balance between the heat absorbed from the Sun and other bodies and generated internally versus heat radiated away into space.
  • Space station panels are said to “receive a covering of a silver silicon-plastic-like photovoltaic nanofilm paint an inch thick for solar-energy absorption.” Here it appears the author, who has no concept of how photovoltaic panels work, is trying to dazzle the reader with a sufficient number of buzzwords to dull critical thought. Photovoltaic cells should absorb as much sunlight as possible in a thin layer. A material which required a layer “an inch thick” would not only be wasteful of weight, but so inefficient as to be laughable. Solar cells must have an anode and cathode to extract the electricity, and cannot be applied as “paint”.
  • A Cessna 172 does not have a “joystick”, but rather a control wheel, and no airplane uses its “rudder pedals to bank left or right”. Rudder pedals are used to yaw the aircraft, not control bank angle.
  • How probable is it that Maggie's private pilot flight instructor would happen to be son of a retired three-term U.S. senator?
  • The shuttle craft has three forms of propulsion: hybrid rocket motors for initial acceleration, “hydrogen thrusters” (again, with the hydrogen—more to come) to get into orbit and maneuver, and ion drive motors using xenon gas for propulsion in space. Now, as becomes clear from numerous references in the novel, the author is not talking about rocket motors burning liquid hydrogen and liquid oxygen, but rather has gotten the curious (and clueless) idea that somehow hydrogen, by itself, can provide high performance propulsion. Sure, you can use hydrogen as the working fluid in a nuclear thermal rocket, but that's not what we're talking about here. Further, xenon thrusters produce such low thrust that they would be utterly incapable of performing the exploits they are claimed to do here.
  • The author says “her wings expand like one of the old X-51s”. I presume he means the X-15 (whose wings did not “expand”), not the X-51 Waverider, which is not old and has only vestigial wings.
  • No aircraft has an aileron on its tail; it's called a “rudder”.
  • Why would Maggie need to buy a car to get from California to the Air Force Academy in Colorado? Couldn't she take the bus or a plane? Are cadets even allowed to have personal cars?
  • Maggie is said to have been trained by the Air Force initially to “fly an old C-47”. These aircraft were retired from Air Force service long before she began her career.
  • It's the United States Marine Corps, not “Corp”. This isn't a typo, as it appears on multiple occasions.
  • There is no “European Space system”. It's the European Space Agency.
  • There is no reason at all to expect to find radium on an asteroid. With its longest-lived isotope having a half-life of about 1600 years, all radium found in nature is the product of the decay of other elements, and will never be found in more than trace quantities.
  • The author uses “control dash” for spacecraft control panels and “windshield” for the windows in deep space vehicles. His spacecraft are not, as best I can determine, equipped with running boards or rumble seats.
  • Solar arrays on space stations become “solar dragon-fly antennas” in the author's nomenclature.
  • We're told that the radioisotope thermoelectric generator (which the author variously calls a “nuclear battery” and “nuclear reactor”) containing one pound of plutonium-238 will be adequate to power a derelict Russian space station with multiple modules like Mir. Let's see, shall we? The Mir core module solar arrays initially produced 9 kW of electricity. Generation capacity was added as additional arrays were launched on subsequent modules, but offset by degradation of older arrays. But clearly 9 kW was required to operate the base station. The thermal power output of a pound of plutonium-238 is around 250 watts, but conversion of this heat to electricity has never exceeded an efficiency of 7% in any generator. Assuming this optimistic figure, the generator would produce 17.5 watts of electricity, substantially less than the 9000 watts required.
  • A “small set of hydrogen batteries” power the shuttle prior to launch. Not fuel cells, as no mention of liquid oxygen is made. Just magical hydrogen again. Is there anything it can't do?
  • Oh, and now there appears to be a “rear liquid nitrogen thruster” on the shuttle. How does that work—spewing liquid nitrogen out through a nozzle? You'd do better with Diet Coke and Mentos. If you're counting, we're now up to four separate propulsion systems on the shuttle, each with its own unique propellant.
  • Don't want to be too ambitious, at least at the outset. “There will be eReaders all over the ship with every bit of practical knowledge about agricultural [sic], human, and animals ever known; they will also be loaded with star charts, planetary systems and every asteroid and planet's history and whereabouts within our solar system. I don't believe we will leave our own system on our first journey.…”. But what the heck, we may decide to take a jaunt over to Alpha Centauri just for fun—after all, we have hydrogen thrusters!
  • I don't believe even fighter jocks would consider it appropriate to refer to two Air Force flight officers as “girl pilots”, as the author does.
  • How to accomplish the orbital two-step key to the mission without being detected? No problem: they have a “Cloaking Device” which hides the ship from radar “[m]uch like a black hole”. It's probably magnetic, with hydrogen. Not mentioned is the detail that space, from low Earth orbit to geosynchronous orbit, is under constant optical surveillance which the cloaking device would not impede, and could easily detect and track an object the size of the shuttle.
  • Both the Russian space station and the International Space Station are said to be in equatorial orbits. In fact, Mir was and the ISS is in an orbit with 51.6° inclination.
  • The author does not appear to have the vaguest idea how orbital mechanics works. In fact, from what he has written, I can't figure out how he thinks it might work. Cartoon physics has its own zany kind of consistency, but this makes Road Runner cartoons look like JPL mission planning. My favourite among a long list of howlers is when they decide to raise the orbit of the Russian space station from low Earth orbit to geosynchronous orbit. Setting aside the enormous delta-v this would require and the utter inadequacy of the shuttle's propulsion system (oh, wait, I forgot about the hydrogen thrusters!), let's consider the maneuver. After docking with the space station, the shuttle burns the hydrogens to “increase her orbital speed by approximately 500 miles an hour”. Then, an orbit later, the shuttle points its nose “outward into space by three degrees” and the same burn is made. This is said to increase speed from 11,000 miles an hour (far below orbital velocity) to 14,000 miles an hour (still below orbital velocity). Two more inane burns allow, at their completion, the station to “climb away from earth on an ever-widening orbit of 900 miles per day”. Got that? No additional burns; no additional delta-v, and the station continues to spiral outward from the Earth to ever increasing orbital altitude. There is much, much more, all as idiotic, but I won't belabour the point further.
  • The coordinates of the asteroid are said to be fed to the team every twelve hours by an insider. In fact, orbital elements for asteroids are available to the general public and can be used to calculate the position of asteroids for years into the future.
  • The C-5 Galaxy transport plane is said to be a Boeing product; it was in fact built by Lockheed.
  • The tankers which refuel the C-5 are said to be in one place a KC-125 and later in the same sentence a KC-25. Neither plane exists. Clearly KC-135 was intended.
  • There is no reason to go through the rigmarole of using the C-5 to transport the secret cargo. There are commercial freighter jets which could carry a cargo of that size without the need for in-flight refueling.
  • The concept of dumping nuclear waste into the Sun further reveals utter ignorance of orbital mechanics. It seems the author believes (or would like the reader to believe) that once you're “in space”, all you have to do is “unload the stuff in the direction of the sun” and it falls straight in and goes poof. In fact, it is very difficult to impact the Sun, since you have to cancel the entire orbital velocity of the Earth, which is around 100,000 kilometres per hour, or about three times the delta-v needed to reach low Earth orbit.
  • Weightlessness is likewise described as a quasi-magical state, like being “in space”. When preparing to burn the shuttle's engines for an orbital adjustment, the pilot tells the crew “You will not feel the burn since we are now weightless.”
  • Always have a backup plan! “Our last resort is that we could go to Mars and begin mining there in 2016.” Never mind that we don't have any vehicle suitable for atmospheric entry and landing, nor a way to get back from the surface. But we have hydrogen thrusters!
  • When the xenon thrusters are activated, it is said “they continuously propelled the craft forward, its acceleration always increasing.” So, the author does not understand the difference between velocity and acceleration. What's a derivative among rubes?
  • Just when you thought things couldn't get more absurd, we arrive at the one by three mile asteroid only to discover that it has a gravitational field 70% as strong as the Earth's. Now, with any material known to science, the gravitational field of such a small body should be negligible. How can this possibly be the case? Magnetism, of course.
  • Those with the most cursory acquaintance with the U.S. government may be surprised to learn that the mission of the National Security Agency (NSA) is not limited to cryptography and signals intelligence but to “always be sure that there are no hidden agendas in large projects”, or that the Federal Reserve is charged with detecting those “bringing illegal contraband into the country”, or that to inspect any facility on “U.S. occupied land” a congressman doesn't “need a search warrant. Members of Congress never do…”.
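Since several of the items above rest on quick arithmetic, here is a minimal Python sketch of my own (not from the book, and with rounded physical constants that are my assumptions) verifying three of the figures cited: the monthly-doubling headcount, the electrical output of a one-pound Pu-238 generator, and the delta-v comparison for dropping something into the Sun.

```python
# Quick checks of three back-of-envelope figures from the review above.
# All constants are rounded assumptions, not values from the novel.

def headcount(months, start=100):
    """Employees after `months` months, doubling monthly after the first."""
    return start * 2 ** (months - 1)

# 1. "Doubling its workforce every month", starting from 100 employees.
twelve_months = headcount(12)    # 204,800 employees
two_years = headcount(24)        # 838,860,800 -- more than 838 million

# 2. One pound of Pu-238: ~250 W thermal, ~7% thermoelectric conversion.
thermal_w = 250.0
efficiency = 0.07
electric_w = thermal_w * efficiency   # ~17.5 W vs. Mir's ~9,000 W demand

# 3. Hitting the Sun means cancelling Earth's heliocentric velocity.
earth_orbital_kms = 29.8              # km/s, rounded
leo_delta_v_kms = 9.4                 # km/s surface-to-LEO, rounded
earth_orbital_kmh = earth_orbital_kms * 3600   # ~107,000 km/h
ratio = earth_orbital_kms / leo_delta_v_kms    # ~3.2 times LEO delta-v
```

The doubling convention (first doubling at the end of month two) matches the review's own figures of 204,800 and 838 million.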

If this weren't enough, at the very end the author springs a cliffhanger which puts everything achieved in the entire novel in doubt. If you loved this novel, you'll be delighted to know that there are three sequels already available. I shall certainly not be reading them.

Something like this actually could have worked if the author had cast it as a neo-golden-age story of space travel in which an intrepid band set out for space defying the powers that be and conventional wisdom along the lines of John Varley's Red Thunder (July 2012). But by pretentiously trying to cast it as a realistic techno-thriller, the result is risible. Readers are willing to indulge thriller-writers the occasional implausible gadget to advance the plot, but when you have a howler which violates laws of physics known since Newton and taught in high school every few pages, the result is not thrilling but just silly. It's as if a writer published a western in which revolvers held 600 shots, fired with a range of 5 km and 1 cm accuracy, and everybody rode horses which could gallop at 250 km/hour for 12 hours straight (but, you see, they're getting special hydrogen hay!).

Why spend so much time dissecting a book like this? Because it's fun, and it's the only way to derive enjoyment from such a waste of time and money. If you're wondering why the U.S. space program is in such a parlous state, it may be enlightening to read the four- and five-star reviews of this book on Amazon, bearing in mind that these people vote.

This book is presently available only in electronic form for the Kindle as cited above.

 Permalink

October 2013

Steil, Benn. The Battle of Bretton Woods. Princeton: Princeton University Press, 2013. ISBN 978-0-691-14909-7.
As the Allies advanced toward victory against the Axis powers on all fronts in 1944, in Allied capitals thoughts increasingly turned to the postwar world and the institutions which would define it. Plans were already underway to expand the “United Nations” (at the time used as a synonym for the Allied powers) into a postwar collective security organisation which would culminate in the April 1945 conference to draft the charter of that regrettable institution. Equally clamant was the need to define monetary mechanisms which would facilitate free trade.

The classical gold standard, which was not designed but evolved organically in the 19th century as international trade burgeoned, had been destroyed by World War I. Attempts by some countries to reestablish the gold standard after the end of the war led to economic dislocation (particularly in Great Britain), currency wars (competitive devaluations in an attempt to gain a competitive advantage in international trade), and trade wars (erecting tariff or other barriers to trade to protect domestic or imperial markets against foreign competition).

World War II left all of the major industrial nations with the sole exception of the United States devastated and effectively bankrupt. Despite there being respected and influential advocates for re-establishing the classical gold standard (in which national currencies were defined as a quantity of gold, with central banks issuing them willing to buy gold with their currency or exchange their currency for gold at the pegged rate), this was widely believed impossible. Although the gold standard had worked well when in effect prior to World War I, and provided negative feedback which tended to bring the balance of payments among trading partners back into equilibrium and provided a mechanism for countries in economic hard times to face reality and recover by devaluing their currencies against gold, there was one overwhelming practical difficulty in re-instituting the gold standard: the United States had almost all of the gold. In fact, by 1944 it was estimated that the U.S. Treasury held around 78% of all of the world's central bank reserve gold. It is essentially impossible to operate under a gold standard when a single creditor nation, especially one with its industry and agriculture untouched by the war and consequently sure to be the predominant exporter in the years after it ended, has almost all of the world's gold in its vaults already. Proposals to somehow reset the system by having the U.S. transfer its gold to other nations in exchange for their currencies were non-starters in Washington, especially since many of those nations already owed large dollar-denominated debts to the U.S.

The hybrid gold-exchange standard put into place after World War I had largely collapsed by 1934, with Britain forced off the standard by 1931, followed quickly by 25 other nations. The 1930s were a period of economic depression, collapsing international trade, competitive currency devaluations, and protectionism, hardly a model for a postwar monetary system.

Also in contention as the war drew to its close was the location of the world's financial centre and which currency would dominate international trade. Before World War I, the vast majority of trade cleared through London and was denominated in sterling. In the interwar period, London and New York vied for preeminence, but while Wall Street prospered financing the booming domestic market in the 1920s, London remained dominant for trade between other nations and maintained a monopoly within the British Empire. Within the U.S., while all factions within the financial community wished for the U.S. to displace Britain as the world's financial hub, many New Dealers in Roosevelt's administration were deeply sceptical of Wall Street and “New York bankers” and wished to move decision making to Washington and keep it firmly under government control.

While ambitious plans were being drafted for a global monetary system, in reality there were effectively only two nations at the negotiating table when it came time to create one: Britain and the U.S. John Maynard Keynes, leader of the British delegation, referred to U.S. plans for a broad-based international conference on postwar monetary policy as “a major monkey-house”, with non-Anglo-Saxon delegations as the monkeys. On the U.S. side, there was a three way power struggle among the Treasury Department, the State Department, and the nominally independent Federal Reserve to take the lead in international currency policy.

All of this came to a head when delegates from 44 countries arrived at a New Hampshire resort hotel in July 1944 for the Bretton Woods Conference. The run-up to the conference had seen intensive back-and-forth negotiation between the U.S. and British delegations, both of whom arrived with their own plans, each drafted to give their side the maximum advantage.

For the U.S., Treasury secretary Henry Morgenthau, Jr. was the nominal head of the delegation, but having no head for nor interest in details, deferred almost entirely to his energetic and outspoken subordinate Harry Dexter White. The conference became a battle of wits between Keynes and White. While White was dwarfed by Keynes's intellect and reputation (even those who disagreed with his unorthodox economic theories were impressed with his wizardry in financing the British war efforts in both world wars), it was White who held all the good cards. Not only did the U.S. have most of the gold, Britain was entirely dependent upon Lend-Lease aid from the U.S., which might come to an abrupt end when the war was won, and owed huge debts which it could never repay without some concessions from the U.S. or further loans on attractive terms.

Morgenthau and White, with Roosevelt's enthusiastic backing, pressed their case relentlessly. Not only did Roosevelt concur that the world's financial centre should be Washington, he saw an opportunity to break the British Empire, which he detested. Roosevelt remarked to Morgenthau after a briefing, “I had no idea that England was broke. I will go over there and make a couple of talks and take over the British Empire.”

Keynes described an early U.S. negotiating position as a desire by the U.S. to make Britain “lose face altogether and appear to capitulate completely to dollar diplomacy.” And in the end, this is essentially what happened. Morgenthau remarked, “Now the advantage is ours here, and I personally think we should take it,” then later expanded, “If the advantage was theirs, they would take it.”

The system crafted at the conference was formidably complex: only a few delegates completely understood it, and, foreshadowing present-day politics in the U.S., most of the delegations which signed it at the conclusion of the conference had not read the final draft, which was thrown together at the last minute. The Bretton Woods system which emerged prescribed fixed exchange rates, not against gold, but rather the U.S. dollar, which was, in turn, fixed to gold. Central banks would hold their reserves primarily in dollars, and could exchange excess dollars for gold upon demand. A new International Monetary Fund (IMF) would provide short-term financing to countries with trade imbalances to allow them to maintain their currency's exchange rate against the dollar, and a World Bank was created to provide loans to support reconstruction after the war and development in poor countries. Finally, a General Agreement on Tariffs and Trade was adopted to reduce trade barriers and promote free trade.

The Bretton Woods system was adopted at a time when the reputation of experts and technocrats was near its peak. Keynes believed that central banking should “be regarded as a kind of beneficent technique of scientific control such as electricity and other branches of science are.” Decades of experience with the ever more centralised and intrusive administrative state have given people today a more realistic view of the capabilities of experts and intellectuals of all kinds. Thus it should be no surprise that the Bretton Woods system began to fall apart almost as soon as it was put in place. The IMF began operations in 1947, and within months a crisis broke out in the peg of sterling to the dollar. In 1949, Britain was forced to devalue the pound 30% against the dollar, and in short order thirty other countries also devalued. The Economist observed:

Not many people in this country believe the Communist thesis that it is the deliberate and conscious aim of American policy to ruin Britain and everything Britain stands for in the world. But the evidence can certainly be read that way. And if every time aid is extended, conditions are attached which make it impossible for Britain to ever escape the necessity of going back for still more aid, to be obtained with still more self-abasement and on still more crippling terms, then the result will certainly be what the Communists predict.

Dollar diplomacy had triumphed completely.

The Bretton Woods system lurched from crisis to crisis and began to unravel in the 1960s when the U.S., exploiting its position of issuing the world's reserve currency, began to flood the world with dollars to fund its budget and trade deficits. Central banks, increasingly nervous about their large dollar positions, began to exchange their dollars for gold, causing large gold outflows from the U.S. Treasury which were clearly unsustainable. In 1971, Nixon “closed the gold window”. Dollars could no longer be redeemed in gold, and the central underpinning of Bretton Woods was swept away. The U.S. dollar was soon devalued against gold (although it hardly mattered, since it was no longer convertible), and before long all of the major currencies were floating against one another, introducing uncertainty in trade and spawning the enormous global casino which is the foreign exchange markets.

A bizarre back-story to the creation of the postwar monetary system is that its principal architect, Harry Dexter White, was, during the entire period of its construction, a Soviet agent working undercover in his U.S. government positions, placing and promoting other agents in positions of influence, and providing a steady stream of confidential government documents to Soviet spies who forwarded them to Moscow. This had been suspected since the 1930s, and White was identified by Communist Party USA defectors Whittaker Chambers and Elizabeth Bentley as a spy and agent of influence. While White was defended by the usual apologists, and many historical accounts try to blur the issue, mentions of White in the now-declassified Venona decrypts prove the case beyond a shadow of a doubt. Still, it must be said that White was a fierce and effective advocate at Bretton Woods for the U.S. position as articulated by Morgenthau and Roosevelt. Whatever other damage his espionage may have done, his pro-Soviet sympathies did not detract from his forcefulness in advancing the U.S. cause.

This book provides an in-depth view of the protracted negotiations between Britain and the U.S., Lend-Lease and other war financing, and the competing visions for the postwar world which were decided at Bretton Woods. There is a tremendous amount of detail, and while some readers may find it difficult to assimilate, the economic concepts which underlie it are explained clearly and are accessible to the non-specialist. The demise of the Bretton Woods system is described, and a brief sketch of monetary history after its ultimate collapse is given.

Whenever a currency crisis erupts into the news, you can count on one or more pundits or politicians to proclaim that what we need is a “new Bretton Woods”. Before prescribing that medicine, they would be well advised to learn just how the original Bretton Woods came to be, and how the seeds of its collapse were built in from the start. U.S. advocates of such an approach might ponder the parallels between the U.S. debt situation today and Britain's in 1944 and consider that should a new conference be held, they may find themselves sitting in the seats occupied by the British the last time around, with the Chinese across the table.

In the Kindle edition the table of contents, end notes, and index are all properly cross-linked to the text.

 Permalink

Houston, Keith. Shady Characters. New York: W. W. Norton, 2013. ISBN 978-0-393-06442-1.
The earliest written languages seem mostly to have been mnemonic tools for recording and reciting spoken text. As such, they had little need for punctuation and many managed to get along withoutevenspacesbetweenwords. If you read it out loud, it's pretty easy to sound out (although words written without spaces can be used to create deliciously ambiguous text). As the written language evolved to encompass scholarly and sacred texts, commentaries upon other texts, fiction, drama, and law, the structural complexity of the text grew apace, and it became increasingly difficult to express this in words alone. Punctuation was born.

In the third century B.C. Aristophanes of Byzantium (not to be confused with the other fellow), librarian at Alexandria, invented a system of dots to denote logical breaks in Greek texts of classical rhetoric, which were placed after units called the komma, kolon, and periodos. In a different graphical form, they are with us still.

Until the introduction of movable type printing in Europe in the 15th century, books were hand-copied by scribes, each of whom was free, within the constraints of their institutions, to innovate in the presentation of the texts they copied. In the interest of conserving rare and expensive writing materials such as papyrus and parchment, abbreviations came into common use. The humble ampersand (the derivation of whose English name is delightfully presented here) dates to the shorthand of Cicero's personal secretary/slave Tiro, who devised a mark to quickly write “et” as his master spoke.

Other punctuation marks co-evolved with textual criticism: quotation marks allowed writers to distinguish text from other sources included within their works, and asterisks, daggers, and other symbols were introduced to denote commentary upon text. Once bound books (codices) printed with wide margins became common, readers would annotate them as they read, often pointing out key passages. Even a symbol as with-it as the now-ubiquitous “@” (which I recall around 1997 being called “the Internet logo”) is documented as having been used in 1536 as an abbreviation for amphorae of wine. And the ever-more-trending symbol prefixing #hashtags? Isaac Newton used it in the 17th century, and the story of how it came to be called an “octothorpe” is worthy of modern myth.

This is much more than a history of obscure punctuation. It traces how we communicate in writing over the millennia, and how technologies such as movable type printing, mechanical type composition, typewriting, phototypesetting, and computer text composition have both enriched and impoverished our written language. Impoverished? Indeed—I compose this on a computer able to display in excess of 64,000 characters from the written languages used by most people since the dawn of civilisation. And yet, thanks to the poisonous legacy of the typewriter, only a few people seem to be aware of the distinction, known to everybody setting type in the 19th century, among the em-dash—used to set off a phrase; the en-dash, denoting “to” in constructions like “1914–1918”; the hyphen, separating compound words such as “anarcho-libertarian” or words split at the end of a line; the minus sign, as in −4.221; and the figure dash, with the same width as numbers in a font where all numbers have the same width, which permits setting tables of numbers separated by dashes in even columns. People who appreciate typography and use TeX are acutely aware of this and grind their teeth when reading documents produced by demotic software tools such as Microsoft Word or reading postings on the Web which, although they could be so much better, would have made Mencken storm the Linotype floor of the Sunpapers had any of his writing been so poorly set.
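For readers who want to use the dashes the author distinguishes, each has its own Unicode character. Here is a quick illustrative sketch (the parenthetical glosses are my own annotations summarising the distinctions above, not material from the book):

```python
# The dashes distinguished above, with their Unicode code points.
dashes = {
    "em dash (sets off a phrase)":            "\u2014",
    "en dash (ranges such as 1914\u20131918)": "\u2013",
    "hyphen-minus (compound words, breaks)":   "\u002D",
    "minus sign (negative numbers)":           "\u2212",
    "figure dash (digit-width, for tables)":   "\u2012",
}
for name, ch in dashes.items():
    print(f"U+{ord(ch):04X}  {ch}  {name}")
```

Typewriter keyboards offered only the hyphen-minus, which is why the other four survive mainly among TeX users and typographers.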

Pilcrows, octothorpes, interrobangs, manicules, and the centuries-long quest for a typographical mark for irony (Like, we really need that¡)—this is a pure typographical delight: enjoy!

In the Kindle edition end of chapter notes are bidirectionally linked (albeit with inconsistent and duplicate reference marks), but end notes are not linked to their references in the text—you must manually flip to the notes and find the number. The end notes contain many references to Web URLs, but these are not active links, just text: to follow them you must copy and paste them into a browser address bar. The index is just a list of terms, not linked to references in the text. There is no way to distinguish examples of typographic symbols which are set in red type from chapter note reference links set in an identical red font.

 Permalink

Niven, Larry and Matthew Joseph Harrington. The Goliath Stone. New York: Tor Books, 2013. ISBN 978-0-7653-3323-0.
This novel is a tremendous hoot which the authors undoubtedly had great fun writing; readers who know what's going on will thoroughly enjoy it, while those who don't get it may be disappointed. This story, which spans a period from 5 billion years before the present to A.D. 2052, chronicles the expansion of sentient life beyond the Earth and humankind's first encounter with nonhuman beings. Dr. Toby Glyer, pioneer in nanotechnology, arranges with a commercial space launch company to send a technologically opaque payload into space. After launch, it devours the orbital stage which launched it and disappears. Twenty-five years later, a near-Earth asteroid is detected as manoeuvring itself onto what may be a collision course with Earth, and fears spread of Glyer's asteroid retrieval mission, believed to involve nanotechnology, having gone horribly wrong.

Meanwhile, distinctly odd things are happening on Earth: the birth rate is falling dramatically, violent crime is way down while suicides have increased, terrorism seems to have come to an end, and test scores are rising everywhere. Athletes are shattering long-established records with wild abandon, and a disproportionate number of them appear to be American Indians. Glyer and space launch entrepreneur May Wyndham sense that eccentric polymath William Connors, whom they last knew as a near-invalid a quarter century earlier, may be behind all of this, and soon find themselves inside Connors' secretive lair.

This is an homage to golden age science fiction where an eccentric and prickly genius decides to remake the world and undertakes to do so without asking permission from anybody. The story bristles with dozens if not hundreds of references to science fiction and fandom, many of which I'm sure I missed. For example, “CNN cut to a feed with Dr. Wade Curtis, self-exiled to Perth when he'd exceeded the federal age limit on health care.” Gentle readers, start your search engines!

If you're looking for “hard” science fiction like Niven's “Known Space”, this is not your book. For a romp through the near future which recalls the Skylark novels of “Doc” Smith, with lots of fannish goodies and humorous repartee among the characters, it's a treat.

 Permalink

Rawles, James Wesley. Expatriates. New York: Dutton, 2013. ISBN 978-0-525-95390-6.
This novel is the fourth in the series which began with Patriots (December 2008), then continued with Survivors (January 2012) and Founders (October 2012). These books are not a conventional multi-volume narrative, in that all describe events in the lives of their characters in roughly the same time period surrounding “the Crunch”—a grid down societal collapse due to a debt crisis and hyperinflation. While the first three books in the series are best read in order, as there is substantial overlap in characters and events, this book, while describing contemporary events, works perfectly well as a stand-alone thriller and does not contain substantial spoilers for the first three novels.

The earlier books in the series were thrillers with a heavy dose of survival tutorial, including extended litanies of gear. The present volume leans more toward the thriller genre and is, consequently, more of a page-turner.

Peter and Rihannon Jeffords are Christian missionaries helping to run an orphanage in the Philippine Islands, wishing nothing more than to get on with their lives and work, when the withdrawal of U.S. forces in the Pacific due to the economic collapse of the U.S. opens the way for a newly-installed jihadi government in Indonesia to start flexing its imperialist ambitions, looking enviously at Malaysia, Papua New Guinea, the Philippines, and ultimately the resource-rich and lightly populated “Top End” of Australia as its manifest destiny.

Meanwhile, Chuck Nolan, a Texan petroleum geologist specialising in explosive seismic exploration, working in the Northern Territory of Australia, is adjusting, along with native Australians, to the consequences of the Crunch. While not directly affected by the U.S. economic collapse, Australia's highly export-driven economy has been severely damaged by the contraction in world trade, and with the country dependent upon imported food and pharmaceuticals, hardships are everywhere and tragedies commonplace.

Back in the United States, Rihannon Jeffords' family, the Altmillers, are trying to carry on their independent hardware store business in Florida, coping with the collapse of the currency; the emergence of a barter economy and use of pre-1965 silver coins as a medium of exchange; the need for extraordinary security precautions at work and at home as the rule of law and civil society erode; and escalating worries about feral mobs of looters raiding ever wider from the chaos which was Orlando.

As the story develops, we experience a harrowing sea voyage through hostile waters, asymmetrical warfare against a first world regional power, irregular resistance against an invader, and local communities self-organising defence against an urban “golden horde” ravaging the countryside. You will learn a great deal about partisan resistance strategies, decapitation of opposition forces, and why it is most unwise for effete urban populations to disarm those uncouth and disdained denizens of the boonies who, when an invader threatens, are both the first and ultimate lines of defence.

This book is meticulously researched with a wealth of local and technical details and only a few goofs and copy-editing errors. Like the earlier novels, the author dispels, often with spare prose or oblique references, the romantic notion that some “preppers” seem to have that the collapse of civilisation will be something like a camping trip they'll enjoy because they're “ready”. These happy would-be campers overlook the great die-off, the consequences of masses of people suddenly withdrawing from mood-altering drugs, roving bands of looters, the emergence of war-lords, and all of the other manifestations of the normal state of humanity over millennia which are suppressed only by our ever-so-fragile just-in-time technological society.

 Permalink

Weil, Elizabeth. They All Laughed at Christopher Columbus. New York: Bantam Books, 2002. ISBN 978-0-553-38236-5.
For technologists and entrepreneurs, the latter half of the 1990s was a magical time. The explosive growth in computing power available to individuals, the global interconnectivity afforded by the Internet, and the emergence of broadband service with the potential to make the marginal cost of entry as a radio or video broadcaster next to zero created a vista of boundless technological optimism. Companies with market valuations in the billions sprang up like mushrooms despite having never turned a profit (and in some cases, before delivering a product), and stock-option paper millionaires were everywhere, some sporting job titles which didn't exist three years before.

In this atmosphere enthusiasms of all kinds were difficult to restrain, even those more venerable than Internet start-ups, and among people who had previously been frustrated upon multiple occasions. So it was that as the end of the decade approached, Gary Hudson, veteran of three earlier unsuccessful commercial space projects, founded Rotary Rocket, Inc. with the goal of building a reusable single-stage-to-orbit manned spacecraft which would reduce the cost of launching payloads into low Earth orbit by a factor of ten compared to contemporary expendable rockets (which, in turn, were less expensive than NASA's Space Shuttle). Such a dramatic cost reduction was expected to immediately generate substantial business from customers such as Teledesic, which originally planned to launch 840 satellites to provide global broadband Internet service. Further, at one tenth the launch cost, space applications which were not economically feasible before would become so, expanding the space market just as the comparable collapse in the price of computing and communications had done in their sectors.

Hudson assembled a team, a mix of veterans of his earlier ventures, space enthusiasts hoping to make their dreams a reality at last, hard-nosed engineers, and seasoned test pilots hoping to go to space, and set to work. His vision became known as Roton, and evolved to be an all-composite structure including tanks for the liquid oxygen and kerosene propellants, and a unique rotary engine at the base of the conical structure which would spin to create the pressure to inject propellants into 96 combustors arrayed around the periphery, eliminating the need for heavy, complicated, and prone-to-disintegrate turbopumps. The crew of two would fly the Roton to orbit and release the payload into space, then make a de-orbit burn. During re-entry, a water-cooled heat shield on the base of the cone would protect the structure from heating, and when atmospheric density was sufficient, helicopter-like rotor blades would deploy from the top of the cone. These blades would be spun up by autorotation and then, shortly before touchdown, tip jets powered by hydrogen peroxide would fire to allow a controlled powered approach and precision landing. After a mission, one need only load the next payload, refill the propellant tanks, and brief the crew for the next flight. It was estimated one flight per day was achievable with a total ground staff of fewer than twenty people.

This would have been revolutionary, and there were many, some with formidable credentials and practical experience, who argued that it couldn't possibly work, and certainly not on Hudson's schedule and budget of US$ 150 million (which is closer to the sum NASA or one of its contractors would require to study such a concept, not to actually build and fly it). There were many things to worry about. Nothing like the rotary engine had ever been built, and its fluid mechanical and thermal complexities were largely unknown. The heat shield was entirely novel, and there was no experience as to how it would perform in a real world environment in which pores and channels might clog. Just getting to orbit in a single stage vehicle powered by LOX and kerosene was considered impossible by many, requiring a structure which was 95% propellant at launch. Even with composite construction, nobody had achieved anything close to this mass fraction in a flight vehicle.
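That daunting 95% figure follows directly from the Tsiolkovsky rocket equation. Here is a back-of-the-envelope sketch; the delta-v and specific impulse values are my own illustrative assumptions (typical textbook figures for reaching low Earth orbit with LOX/kerosene), not numbers from the book:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m_initial / m_final)
delta_v = 9400.0     # m/s: rough total delta-v to low Earth orbit, incl. losses
isp = 330.0          # s: roughly achievable vacuum Isp for LOX/kerosene
g0 = 9.80665         # m/s^2: standard gravity
v_e = isp * g0       # effective exhaust velocity, about 3.2 km/s

mass_ratio = math.exp(delta_v / v_e)           # m_initial / m_final
propellant_fraction = 1.0 - 1.0 / mass_ratio   # share of launch mass that is propellant

print(f"mass ratio: {mass_ratio:.1f}")
print(f"propellant fraction: {propellant_fraction:.1%}")
```

With these assumptions the vehicle must be roughly 95% propellant at liftoff, leaving only a few percent of launch mass for structure, engines, crew, and payload combined: the heart of the single-stage-to-orbit challenge.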

Gary Hudson is not just a great visionary; he is nothing if not persuasive. For example, here is a promotional video from 1998. He was able, over the history of the project, to raise a total of US$ 30 million from private investors (disclosure: myself included), and built an initial atmospheric test vehicle intended to validate the helicopter landing system. In 1999, this vehicle made three successful test flights, including a hop up and down and a flight down the runway.

By this point in 1999, the technology bubble was nearing the bursting point and perspicacious investors were already backing away from risky ventures. When it became clear there was no prospect of raising sufficient funds to continue, even toward the next milestone, Hudson had no option but to lay off staff and eventually entirely shutter the company, selling off its remaining assets (but the Roton ATV can be seen on display at the Mojave Spaceport).

There are any number of “business books” written about successful ventures, often ghostwritten for founders to show how they had a unique vision and marched from success to success to achieve their dream. (These so irritated me that I strove, in my own business book, to demonstrate, from contemporary documents, the extent to which those in a technological start-up grope in the dark with insufficient information and little idea of where it's going.) Much rarer are accounts of big dreams which evoked indefatigable efforts from talented people and, despite all, ended badly. This book is a superb exemplar of that rare genre. There are a few errors of fact, and from time to time the author's description of herself among the strange world of the rocket nerds is a bit precious, but you get an excellent sense of what it was like to dream big, how a visionary can inspire people to accomplish extraordinary things, and how an entrepreneur must have not only a sound technical foundation and a vision of the future, but must also have kissed the Barnum stone to get the job done.

Oddly, the book contains no photographs of this unique and stunning vehicle or the people who built it.

 Permalink

Mencken, H. L. The Vintage Mencken. New York: Vintage, [1955] 1990. ISBN 978-0-679-72895-5.
Perhaps only once in a generation is born a person with the gift of seeing things precisely as they are, without any prejudice or filter of ideology, doctrine, or preconceived notions, who also has the talent to rise to a position from which this “fair witness” viewpoint can be effectively communicated to a wide audience. In this category, one thinks immediately of George Orwell and, more recently and not yet as celebrated as he deserves to be, Karl Hess, but without doubt one of the greatest exemplars of these observers of their world to have lived in the 20th century was H[enry] L[ouis] Mencken, the “Sage of Baltimore” and one of the greatest libertarian, sceptical, and satirical writers of his time, as well as a scholar of the English language as used in the United States.

This book, originally published during Mencken's life (although he lived until 1956, he ceased writing after suffering a stroke in 1948 which, despite his recovering substantially, left him unable to compose text), collects his work, mostly drawn from essays and newspaper columns across his writing career. We get reminiscences of the Baltimore of his youth, reportage of the convention that nominated Franklin Roosevelt, a celebration of Grover Cleveland, an obituary of Coolidge, a taking down of Lincoln the dictator, a report from the Progressive convention which nominated Henry Wallace for president in 1948, and his final column defending those who defied a segregation law to stage an interracial tennis tournament in Baltimore in 1948.

Many of the articles are abridged, perhaps in the interest of eliding contemporary references which modern readers may find obscure. This collection provides an excellent taste of Mencken across his career and will probably leave you hungry for more. Fortunately, most of his œuvre remains in print. In the contemporary media cornucopia and endless blogosphere we have, every day, many times the number of words available to read as Mencken wrote in his career. But who is the heir to Mencken, seeing the folly behind the noise of ephemeral headlines and writing prose which stands the test of time when read almost a century later?

 Permalink

November 2013

Zabel, Bryce. Surrounded by Enemies. Minneapolis: Mill City Press, 2013. ISBN 978-1-62652-431-6.
What if John F. Kennedy had survived the assassination attempt in Dallas? That is the point of departure for this gripping alternative history novel by reporter, author, and screenwriter Bryce Zabel. Spared an assassin's bullet by a heroic Secret Service agent, a shaken Kennedy returns to Washington and convenes a small group of his most trusted inner circle led by his brother Robert, the attorney general, to investigate who might have launched such an attack and what steps could be taken both to prevent a second attempt and to bring the perpetrators to justice.

Surveying the landscape, they conclude it might be easier to make a list of powerful forces who might not wish to kill the president. Kennedy's actions in office had given actors ranging from Cuba and anti-Castro groups in the U.S. to the Mafia, the FBI, the CIA, senior military commanders, the Secret Service, Texas oil interests, and even Vice President Johnson potential motivations to launch or condone an attack. At the same time, while pursuing their own quiet inquiry, they must try to avert a Congressional investigation which might turn into a partisan circus, diverting attention from their strategy for Kennedy's 1964 re-election campaign.

But in the snake pit which is Washington, there is more than one way to assassinate a man, and Kennedy's almost grotesque womanising and drug use (both he and his wife were regular patients of Max Jacobson, “Dr. Feelgood”, whose “tissue regenerator” injections were laced with amphetamines) provided the ammunition his enemies needed to try to bring him down by assassinating his character in the court of public opinion.

A shadowy figure begins passing FBI files to two reporters of Top Story, a recently-launched news magazine struggling in the shadow of Time and Newsweek. After investigating the allegations and obtaining independent corroboration for some of them, Top Story runs a cover story on “The Secret Life of the President”, creating a firestorm of scrutiny of the president's private life by media who never before considered such matters worthy of investigation or reporting.

The political implications quickly assume the dimensions of a constitutional crisis, where the parties involved are forced to weigh appropriate sanctions for a president whose behaviour may have put the national security at risk versus taking actions which may give those who plotted to kill the president what they tried to achieve in Dallas with a bullet.

The plot deftly weaves historical events from the epoch with twists and turns which all follow logically from the point of departure, and the result is a very different history of the 1960s and 1970s which, to this reader who lived through those decades, seems entirely plausible. The author, who identifies himself in the introduction as “a lifelong Democrat”, brings no perceptible ideological or political agenda to the story—the characters are as complicated as the real people were, and behave in ways which are believable given the changed circumstances.

The story is told in a clever way: as a special issue of Top Story commemorating the 50th anniversary of the assassination attempt. Written in weekly news magazine style, this allows it to cite memoirs, recollections by those involved in years after the events described, and documents which became available much later. There are a few goofs regarding historical events in the sixties which shouldn't have been affected by the alternative timeline, but readers who notice them can just chuckle and get on with the story. The book is almost entirely free of copy-editing errors.

This is a superb exemplar of alternative history, and Harry Turtledove, the cosmic grand master of the genre, contributes a foreword to the novel.

 Permalink

Kaufman, Marc. First Contact. New York: Simon & Schuster, 2011. ISBN 978-1-4391-0901-4.
How many fields of science can you think of which study something for which there is no generally accepted experimental evidence whatsoever? Such areas of inquiry certainly exist: string theory and quantum gravity come immediately to mind, but those are research programs motivated by self-evident shortcomings in the theoretical foundations of physics which become apparent when our current understanding is extrapolated to very high energies. Astrobiology, the study of life in the cosmos, has, to date, only one exemplar to investigate: life on Earth. For despite the enormous diversity of terrestrial life, it shares a common genetic code and molecular machinery, and appears to be descended from a common ancestral organism.

And yet in the last few decades astrobiology has been a field which, although having not so far unambiguously identified extraterrestrial life, has learned a great deal about life on Earth, the nature of life, possible paths for the origin of life on Earth and elsewhere, and the habitats in the universe where life might be found. This book, by a veteran Washington Post science reporter, visits the astrobiologists in their native habitats, ranging from deep mines in South Africa, where organisms separated from the surface biosphere for millions of years have been identified; to Antarctica, whose ice hosts microbes the likes of which might flourish on the icy bodies of the outer solar system; to planet hunters patiently observing stars from the ground and space to discover worlds orbiting distant stars.

It is amazing how much we have learned in such a short time. When I was a kid, many imagined that Venus's clouds shrouded a world of steamy jungles, and that Mars had plants which changed colour with the seasons. No planet of another star had been detected, and respectable astronomers argued that the solar system might have been formed by a freak close approach between two stars and that planets might be extremely rare. The genetic code of life had not been decoded, and an entire domain of Earthly life, bearing important clues for life's origin, was unknown and unsuspected. This book describes the discoveries which have filled in the blanks over the last few decades, painting a picture of a galaxy in which planets abound, many in the “habitable zone” of their stars. Life on Earth has been found to have colonised habitats previously considered as inhospitable to life as other worlds: absence of oxygen, no sunlight, temperatures near freezing or above the boiling point of water, extreme acidity or alkalinity: life finds a way.

We may have already discovered extraterrestrial life. The author meets the thoroughly respectable scientists who operated the life detection experiments of the Viking Mars landers in the 1970s, sought microfossils of organisms in a meteorite from Mars found in Antarctica, and searched for evidence of life in carbonaceous meteorites. Each believes the results of their work are evidence of life beyond Earth, but the standard of evidence required for such an extraordinary claim has not been met in the opinion of most investigators.

While most astrobiologists seek evidence of simple life forms (which exclusively inhabited Earth for most of its history), the Search for Extraterrestrial Intelligence (SETI) jumps to the other end of evolution and seeks interstellar communications from other technological civilisations. While initial searches were extremely limited in the assumptions about signals they might detect, progress in computing has drastically increased the scope of these investigations. In addition, other channels of communication, such as very short optical pulses, are now being explored. Although no signals have been detected in 50 years of off-and-on searching, only a minuscule fraction of the search space has been explored, and it may be that in retrospect we'll realise that we've had evidence of interstellar signals in our databases for years in the form of transient pulses not recognised because we were looking for narrowband continuous beacons.

Discovery of life beyond the Earth, whether humble microbes on other bodies of the solar system or an extraterrestrial civilisation millions of years older than our own spamming the galaxy with its ETwitter feed, would arguably be the most significant discovery in the history of science. If we have only one example of life in the universe, its origin may have been a forbiddingly improbable fluke which happened only once in our galaxy or in the entire universe. But if there are two independent examples of the origin of life (note that if we find life on Mars, it is crucial to determine whether it shares a common origin with terrestrial life: since meteors exchange material between the planets, it's possible Earth life originated on Mars or vice versa), then there is every reason to believe life is as common in the cosmos as we are now finding planets to be. Perhaps in the next few decades we will discover the universe to be filled with wondrous creatures awaiting our discovery. Or maybe not—we may be alone in the universe, in which case it is our destiny to bring it to life.

 Permalink

Simmons, Dan. Flashback. New York: Little, Brown, 2011. ISBN 978-0-316-00697-2.
In the fourth decade of the 21st century, all of the dire consequences predicted when the U.S. veered onto a “progressive” path in 2008 have come to pass. Exponentially growing entitlement spending and debt, a depreciating currency being steadily displaced as the world's reserve currency, and an increasingly hollowed-out military unable to shoulder the burdens it had previously assumed in maintaining world stability all came to a head on The Day It All Hit The Fan. What is left of the United States (the Republic of Texas has opted to go it alone, while the southwest has become Nuevo Mexico, seeking to expand its territory in the ongoing reconquista) has become a run-down, has-been nation. China, joined at the hip to the U.S. economy and financial system, collapsed along with the U.S., and its territory and resources are being fought over by superpowers Japan and India, with U.S. mercenaries employed by both sides. Japan, holder of a large portion of the debt on which the U.S. defaulted, has effectively foreclosed, sending in Japanese “Advisors” who, from fortified Green Zone compounds, are the ultimate authority in their regions.

Islamic powers, with nothing to fear from a neutered U.S., make good on their vow to wipe Israel off the map, and the New Global Caliphate is mobilising Islamic immigrant communities around the world to advance its goal of global conquest. With the present so grim, millions in the U.S. have become users of the drug “flashback”, which allows those who take it to relive earlier, happier times in their lives. While not physically addictive, the contrast between the happy experiences “under the flash” and the squalid present causes many to spend whatever money they can put their hands on to escape to the past.

Nick Bottom was a Denver police department detective in charge of the investigation of the murder of the son of the Japanese Advisor in charge of the region. The victim was working on a documentary on the impact of flashback on U.S. society when, at a wrap party for the film, he and his girlfriend were killed in what amounted to a locked room mystery. Nick found lead after lead evaporating in the mysterious doings of the Japanese, and while involved in the investigation, his wife was killed in a horrific automobile accident. This tipped him over the edge, and he turned to flashback to re-live his life with her, eventually costing him his job.

Five years later, out of the blue, the Japanese Advisor summons him and offers to employ him to re-open the investigation of his son's death. Since Nick interviewed all of the persons of interest in the investigation, only he has the ability to relive those interrogations under the flash, and thus is in a unique position to discover something he missed while distracted with the case load of a busy homicide cop.

This is a gritty gumshoe procedural set in an all-too-plausible future. (OK, the flashback drug may seem to be a reach, but researchers are already talking about memory editing drugs, so who knows?) Nick discovers that all of the mysteries that haunt him may be related in some way, and has to venture into dangerous corners of this new world to follow threads which might make sense of all the puzzles.

This is one of those novels where, as the pages dwindle, you wonder how the author is going to pull everything together and begin to fear you may be headed for a cliffhanger setting the stage for a sequel. But in the last few chapters all is revealed and resolved, concluding a thoroughly satisfying yarn. If you'd like to see how noir mystery, science fiction, and a dystopian future can be blended into a page-turner, here's how it's done.

 Permalink

Benford, James and Gregory Benford, eds. Starship Century. Reno, NV: Lucky Bat Books, 2013. ISBN 978-1-939051-29-5.
“Is this the century when we begin to build starships?” So begins the book, produced in conjunction with the Starship Century Symposium held in May of 2013 at the University of California San Diego. Now, in a sense, we built and launched starships in the last century. Indeed, at this writing, eight objects launched from Earth are on interstellar trajectories. These are the two Pioneer spacecraft, the two Voyagers, the New Horizons Pluto flyby spacecraft, and its inert upper stage and two spin-down masses. But these objects are not aimed at any particular stars; they're simply flying outward from the solar system following whatever trajectory they were on when they completed their missions, and even if they were aimed at the nearest stars, it would take them tens of thousands of years to get there, by which time their radioactive power sources would be long exhausted and they would be inert space junk.

As long as they are built and launched by beings like humans (all bets are off should we pass the baton to immortal machines), starships or interstellar probes will probably need to complete their mission within the time scale of a human lifetime to be interesting. One can imagine multi-generation colony ships (and they are discussed here), but such ships are unlikely to be launched without confidence the destination is habitable, which can only be obtained by direct investigation by robotic probes launched previously. The closest star is around 4.3 light years from Earth. This is a daunting distance. To cross it in a human-scale time (say, within the career of a research scientist), you'd need to accelerate your probe to something on the order of 1/10 the speed of light. At this speed, each kilogram of the probe would have a kinetic energy of around 100 kilotons of TNT. A colony ship with a dry mass of 1,000 tonnes would, travelling at a tenth of the speed of light, have kinetic energy which, at a cost of USD 0.10 per kilowatt-hour, would be worth USD 12.5 trillion, which is impressive even by U.S. budget deficit standards. But you can't transmit energy to a spacecraft with 100% efficiency (the power cord is a killer!), and so the cost of a realistic mission might be ten times this.
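
The kinetic-energy figures in the paragraph above are easy to verify with a few lines of arithmetic. This is just a back-of-envelope sketch; the only assumed inputs are the conventional TNT equivalent (about 4.184×10¹² joules per kiloton) and the USD 0.10 per kilowatt-hour electricity price quoted above:

```python
# Back-of-envelope check of the starship kinetic-energy figures.
C = 299_792_458.0        # speed of light, m/s
V = 0.1 * C              # probe velocity: one tenth of c
KT_TNT = 4.184e12        # joules per kiloton of TNT

# Kinetic energy of 1 kg at 0.1 c (the classical formula is a fair
# approximation at this speed; the relativistic correction is under 1%)
ke_per_kg = 0.5 * 1.0 * V ** 2
print(ke_per_kg / KT_TNT)          # ~107 kilotons of TNT per kilogram

# A 1,000 tonne (1e6 kg) colony ship at the same speed, priced as electricity
ke_ship = 0.5 * 1.0e6 * V ** 2     # joules
kwh = ke_ship / 3.6e6              # 3.6e6 joules per kilowatt-hour
print(kwh * 0.10 / 1e12)           # ~12.5 (trillion USD)
```

Running this confirms the "around 100 kilotons per kilogram" and "USD 12.5 trillion" figures to within rounding.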

Is it, then, silly to talk about starships? Well, not so fast. Ever since the Enlightenment, the GDP per capita has been rising rapidly. When I was a kid, millionaires were exotic creatures, while today people who bought houses in coastal California in the 1970s are all millionaires. Now it's billionaires who are the movers and shakers, and some of them are using their wealth to try to reduce the cost of access to space. (Yes, currency depreciation has accounted for a substantial part of the millionaire to billionaire transition, but the scope of what one can accomplish with a billion dollar grubstake today is still much greater than with a million dollars fifty years ago.) If this growth continues, might it not be possible that before this century is out there will be trillionaires who, perhaps in a consortium, have the ambition to expand the human presence to other stars?

This book collects contributions from those who have thought in great detail about the challenges of travel to the stars, both in nuts and bolts hardware and economic calculations and in science fictional explorations of what it will mean for the individuals involved and the societies which attempt that giant leap. There are any number of “Aha!” moments here. Freeman Dyson points out that the void between the stars is not as empty as many imagine it to be, but filled with Oort cloud objects which may extend so far as to overlap the clouds of neighbouring stars. Dyson imagines engineered organisms which could render these bodies habitable to (perhaps engineered) humans, which would expand toward the stars much like the Polynesians in the Pacific: from island to island, with a population which would dwarf both in numbers and productivity that of the inner system rock where they originated.

We will not go to the stars with rockets like we use today. The most rudimentary working of the numbers shows how absurd that would be. And yet nuclear thermal rockets, a technology developed and tested in the 1960s and 1970s, are more than adequate to develop a solar system wide economy which could support interstellar missions. Many different approaches to building starships are explored here: some defy the constraints of the rocket equation by keeping the power source in the solar system, as in “sailships” driven by laser or microwave radiation. A chapter explores “exotic propulsion”, beyond our present understanding of physics, which might change the game. (And before you dismiss such speculations, recall that according to the consensus model of cosmology, around 95% of the universe is made up of “dark matter” and “dark energy” whose nature is entirely unknown. Might it be possible that a vacuum propeller could be discovered which works against these pervasive media just as a submarine's propeller acts upon the ocean?)

Leavening the technical articles are science fiction stories exploring the transition from a planetary species to the stars. Science fiction provides the dreams which are then turned into equations and eventually hardware, and it has a place at this table. Indeed, many of the scientists who spoke at the conference and authored chapters in this book also write science fiction. We are far from being able to build starships or even interstellar probes but, being human, we're always looking beyond the horizon and not just imagining what's there but figuring out how we'll go and see it for ourselves. To date, humans haven't even learned how to live in space: our space stations are about camping in space, with extensive support from the Earth. We have no idea what it takes to create a self-sustaining closed ecosystem (consider that around 90% of the cells in your body are not human but rather symbiotic microbes: wouldn't you just hate it to be half way to Alpha Centauri and discover you'd left some single-celled critter behind?). If somebody waved a magic wand and handed us a propulsion module that could take us to the nearest stars within a human lifetime, there are many things we'd still need to know in order to expect to survive the journey and establish ourselves when we arrived. And, humans being humans, we'd go anyway, regardless. Gotta love this species!

This is an excellent survey of current thinking about interstellar missions. If you're interested in this subject, be sure to view the complete video archive of the conference, which includes some presentations which do not figure in this volume, including the magnificent galaxy garden.

 Permalink

Grisham, John. The Racketeer. New York: Doubleday, 2012. ISBN 978-0-345-53057-8.
Malcolm Bannister was living the life of a retail lawyer in a Virginia town, doing real estate transactions, wills, and the other routine work which occupies a three-partner firm, paying the bills but never striking it rich. A law school classmate contacts him and lets him know there's a potentially large commission available for negotiating the purchase of a hunting lodge in rural Virginia for an anonymous client. Bannister doesn't like the smell of the transaction, especially after a number of odd twists and turns during the negotiation, but bills must be paid, and this fee will go a long way toward that goal. Without any warning, during a civic function, costumed goons arrest him and perp-walk him before previously-arranged state media. Based upon his holding funds in escrow for a real estate transaction, he is accused of “money laundering” and indicted as part of a RICO prosecution of a Washington influence peddler. Railroaded through the “justice system” by an ambitious federal prosecutor and sentenced by a vindictive judge, he finds himself imprisoned for ten years at a “Club Fed” facility along with other nonviolent “criminals”.

Five years into his sentence, he has become the librarian and “jailhouse lawyer” of the prison, filing motions on behalf of his fellow inmates and, on occasion, seeing injustices in their convictions reversed. He has lost everything else: his wife has divorced him and remarried, and his law licence has been revoked; he has little hope of resuming his career after release.

A jailhouse lawyer hears many things from his “clients”: some boastful, others bogus, but some revealing secrets which those holding them think might help to get them out. When a federal judge is murdered, Bannister knows, from his contacts in prison, precisely who committed the crime and leverages his position to obtain his own release, disappearance into witness protection, and immunity from prosecution for earlier acts. The FBI, under pressure to solve the case and with no other leads, is persuaded by what Bannister has to offer and takes him up on the deal.

A jailhouse lawyer, wrongly convicted on a bogus charge by a despotic regime has a great deal of time to ponder how he has been wronged, identify those responsible, and slowly and surely draw his plans against them.

This is one of the best revenge novels I've read, and it's particularly appropriate since it takes down the tyrannical regime which incarcerates a larger percentage of its population than any serious country and shows how a clever individual can always outwit the bumbling collectivist leviathan as long as he refuses to engage it on level terrain but always exploits agility against the saurian brain reaction time of the state.

The only goof I noticed is that on a flight from Puerto Rico to Atlanta, passengers are required to go through passport control. As this is a domestic flight from a U.S. territory to the U.S. mainland, no passport check should be required (although in the age of Heimatsicherheitsdienst, one never knows).

I wouldn't call this a libertarian novel, as the author accepts the coercive structure of the state as a given, but it's a delightful tale of somebody who has been wronged by that foul criminal enterprise obtaining pay-back by wit and guile.

 Permalink

December 2013

Orlov, Dmitry. The Five Stages of Collapse. Gabriola Island, BC, Canada: New Society Publishers, 2013. ISBN 978-0-86571-736-7.
The author was born in Leningrad and emigrated to the United States with his family in the mid-1970s at the age of 12. He experienced the collapse of the Soviet Union and the subsequent events in Russia on a series of extended visits between the late 1980s and mid 1990s. In his 2008 book Reinventing Collapse (April 2009) he described the Soviet collapse and assessed the probability of a collapse of the United States, concluding such a collapse was inevitable.

In the present book, he steps back from the specifics of the collapse of overextended superpowers to examine the process of collapse as it has played out in a multitude of human societies since the beginning of civilisation. The author argues that collapse occurs in five stages, with each stage creating the preconditions for the next.

  1. Financial collapse. Faith in “business as usual” is lost. The future is no longer assumed to resemble the past in any way that allows risk to be assessed and financial assets to be guaranteed. Financial institutions become insolvent; savings are wiped out and access to capital is lost.
  2. Commercial collapse. Faith that “the market shall provide” is lost. Money is devalued and/or becomes scarce, commodities are hoarded, import and retail chains break down and widespread shortages of survival necessities become the norm.
  3. Political collapse. Faith that “the government will take care of you” is lost. As official attempts to mitigate widespread loss of access to commercial sources of survival necessities fail to make a difference, the political establishment loses legitimacy and relevance.
  4. Social collapse. Faith that “your people will take care of you” is lost, as social institutions, be they charities or other groups that rush in to fill the power vacuum, run out of resources or fail through internal conflict.
  5. Cultural collapse. Faith in the goodness of humanity is lost. People lose their capacity for “kindness, generosity, consideration, affection, honesty, hospitality, compassion, charity.” Families disband and compete as individuals for scarce resources. The new motto becomes “May you die today so that I can die tomorrow.”

Orlov argues that our current globalised society is the product of innovations at variance with ancestral human society which are not sustainable: in particular the exponentially growing consumption of a finite source of energy from fossil fuels and an economy based upon exponentially growing levels of debt: government, corporate, and individual. Exponential growth with finite resources cannot go on forever, and what cannot go on forever is certain to eventually end. He argues that we are already seeing the first symptoms of the end of the order which began with the industrial revolution.

While each stage of collapse sows the seeds of the next, the progression is not inevitable. In post-Soviet Russia, for example, the collapse progressed into stage 3 (political collapse), but was then arrested by the re-assertion of government authority. While the Putin regime may have many bad aspects, it may produce better outcomes for the Russian people than progression into a stage 4 or 5 collapse.

In each stage of collapse, there are societies and cultures which are resilient against the collapse around them and ride it out. In some cases, it's because they have survived many collapses before and have evolved not to buy into the fragile institutions which are tumbling down; in others, it's older human forms of organisation re-asserting themselves as newfangled innovations founder. The author cites these collapse survivors:

  1. Financial collapse: Iceland
  2. Commercial collapse: The Russian Mafia
  3. Political collapse: The Pashtun
  4. Social collapse: The Roma
  5. Cultural collapse: The Ik

This is a simultaneously enlightening and infuriating book. While the author has deep insights into how fragile our societies are and how older forms of society emerge after they collapse, I think he may make the error of assuming that we are living at the end of history and that regression to the mean is the only possible outcome. People at every stage of the development of society which brought us to the present point doubtless argued the same. “When we've cut down all the forests for firewood, what shall we do?” they said, before the discovery of coal. “When the coal seams are mined out, what will happen?” they said, before petroleum was discovered to be a resource, not a nuisance seeping from the ground. I agree with Orlov that our civilisation has been founded on abundant cheap energy and resources, but there are several orders of magnitude more energy and resources available for our taking in the solar system, and we already have the technology, if not the imagination and will, to employ them to enrich all of the people of Earth and beyond.

If collapse be our destiny, I believe our epitaph will read “Lack of imagination and courage”. Sadly, this may be the way to bet. Had we not turned inward in the 1970s and squandered our wealth on a futile military competition and petroleum, Earth would now be receiving most of its energy from solar power satellites and futurists would be projecting the date at which the population off-planet exceeded the mudboots deep down in the gravity well. Collapse is an option—let's hope we do not choose it.

Here is a talk by the author, as rambling as this book, about the issues discussed therein.

 Permalink

Thor, Brad. The Athena Project. New York: Pocket Books, 2010. ISBN 978-1-4391-9297-9.
This is the tenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel Harvath has only a walk-on rôle, while centre stage is occupied by the all-woman Athena Team of special operators we first encountered in the previous novel in the series, Foreign Influence (July 2010). These women, recruited from top competitors in extreme sports, are not only formidable at shooting, fighting, parachuting, underwater operations, and the rest of the panoply of skills of their male counterparts, they are able to blend in more easily in many contexts than their burly, buzz-cut colleagues and, when necessary, use their feminine wiles to disarm (sometimes literally) the adversary.

Deployed on a mission to seize and exfiltrate an arms merchant involved in a terrorist attack on U.S. civilians in Europe, the team ends up in a James Bond style shoot-out and chase through the canals of Venice. Meanwhile, grisly evidence in the Paraguayan jungle indicates that persons unknown may have come into possession of a Nazi wonder weapon from the last days of World War II and are bent on using it with potentially disastrous consequences.

The Athena Team must insinuate themselves into an underground redoubt in Eastern Europe, discover its mysteries, and figure out the connections to the actors plotting mass destruction, then neutralise them.

I've enjoyed all the Brad Thor novels I've read so far, but this one, in my opinion, doesn't measure up to the standard of those earlier in the series. First of all, the fundamental premise of the super-weapon at the centre of the plot is physically absurd, and all the arm-waving in the world can't make it plausible. Also, as Larry Niven observed, any society which develops such a technology will quickly self-destruct (which doesn't mean it's impossible, but may explain why we do not observe intelligent aliens in the universe). I found the banter among the team members and with their male colleagues contrived and tedious: I don't think such consummate professionals would behave in such a manner, especially while on the clock. Attention to detail on the little things is excellent, although that Air Force base in the Florida panhandle is “Eglin”, not “Elgin” (p. 202).

This is a well-crafted thriller and enjoyable “airplane book”. Once you get past the implausibility of the super-weapon (as many readers who have only heard of such concepts in the popular press will), the story moves right along. It's substantially harder to tell a story involving a team of four equals (albeit with different talents) than one with a central character such as Scot Harvath, and I don't think the author completely pulls it off: the women are not sufficiently distinguished from one another and tend to blend together as team members rather than be identified with their individual characteristics.

 Permalink

Heinlein, Robert A. Podkayne of Mars. New York: Ace, [1963] 2010. ISBN 978-0-441-01834-5.
This novel had an interesting genesis. Robert Heinlein, who always considered writing a business—he had things to say, but it had to pay—paid attention when his editor at Scribner's pointed out to him that his work was selling well in the young male demographic and observed that if he could write for girls as well he could double the size of his market. Heinlein took this as both a challenge and opportunity, and created the character of “Puddin'” (Maureen), who appeared in three short stories in the magazine Calling All Girls, the most memorable of which is “Cliff and the Calories”.

Heinlein was so fond of Puddin' that he later decided to move her to Mars, change her name to Podkayne, after an ancient Martian saint, and launch her into interplanetary intrigue along with her insufferable and cataclysmically clever younger brother, Clark. This novel was written just as the original romantic conception of the solar system was confronted with the depressing reality from the first interplanetary probes. Mars was not the home of ancients, but an arid desert with a thin atmosphere where, at best, microbes might survive. Venus was not a swampy jungle world but a hellish furnace hot enough to melt lead. But when Heinlein was writing this book, we could still dream.

Podkayne was the prototype of the strong female characters who would populate Heinlein's subsequent work. She aspired to captain an exploration starship, and wasn't averse to using her emerging feminine wiles to achieve her goals. When a mix-up in Mars family planning grounds her parents, depriving her and her deplorable brother Clark of the opportunity to take the triplanetary grand tour, her Uncle Tom, a Mars revolutionary, arranges to take them on a trip to Earth via Venus on the luxury liner Tricorn. On board and at Venus, Podkayne discovers the clash of cultures as planetary civilisations have begun to diverge, and the conflict between those who celebrate their uniqueness formed from their environments and those who would coerce them into uniformity.

When brother Clark vanishes, Podkayne discovers that Uncle Tom's trip is not a tourist jaunt but rather a high stakes mission, and that the independence of Mars may depend upon her resourcefulness and that of her detestable brother.

There are two endings to this novel. Readers detested the original and, under protest, Heinlein wrote an alternative which appears in this edition. This is often classified as a Heinlein juvenile because the protagonist is a young adult, but Heinlein did not consider it among his juvenile works.

Is there anybody who does not admire Poddy and simultaneously detest and respect Clark? This is a great story, which may have made young women of my generation aspire to fly in space. Many did.

 Permalink

Barrat, James. Our Final Invention. New York: Thomas Dunne Books, 2013. ISBN 978-0-312-62237-4.
As a member of that crusty generation who began programming mainframe computers with punch cards in the 1960s, the phrase “artificial intelligence” evokes an almost visceral response of scepticism. Since its origin in the 1950s, the field has been a hotbed of wildly over-optimistic enthusiasts, predictions of breakthroughs which never happened, and some outright confidence men preying on investors and institutions making research grants. John McCarthy, who organised the first international conference on artificial intelligence (a term he coined), predicted at the time that computers would achieve human-level general intelligence within six months of concerted research toward that goal. In 1970 Marvin Minsky said “In from three to eight years we will have a machine with the general intelligence of an average human being.” And these were serious scientists and pioneers of the field; the charlatans and hucksters were even more absurd in their predictions.

And yet, and yet…. The exponential growth in computing power available at constant cost has allowed us to “brute force” numerous problems once considered within the domain of artificial intelligence. Optical character recognition (machine reading), language translation, voice recognition, natural language query, facial recognition, chess playing at the grandmaster level, and self-driving automobiles were all once thought to be things a computer could never do unless it vaulted to the level of human intelligence, yet now most have become commonplace or are on the way to becoming so. Might we, in the foreseeable future, be able to brute force human-level general intelligence?

Let's step back and define some terms. “Artificial General Intelligence” (AGI) means a machine with intelligence comparable to that of a human across all of the domains of human intelligence (and not limited, say, to playing chess or driving a vehicle), with self-awareness and the ability to learn from mistakes and improve its performance. It need not be embodied in a robot form (although some argue it would have to be to achieve human-level performance), but could certainly pass the Turing test: a human communicating with it over whatever channels of communication are available (in the original formulation of the test, a text-only teleprinter) would not be able to determine whether he or she were communicating with a machine or another human. “Artificial Super Intelligence” (ASI) denotes a machine whose intelligence exceeds that of the most intelligent human. Since a self-aware intelligent machine will be able to modify its own programming, with immediate effect, as opposed to biological organisms which must rely upon the achingly slow mechanism of evolution, an AGI might evolve into an ASI in an eyeblink: arriving at intelligence a million times or more greater than that of any human, a process which I. J. Good called an “intelligence explosion”.

What will it be like when, for the first time in the history of our species, we share the planet with an intelligence greater than our own? History is less than encouraging. All members of genus Homo which were less intelligent than modern humans (inferring from cranial capacity and artifacts, although one can argue about Neanderthals) are extinct. Will that be the fate of our species once we create a super intelligence? This book presents the case that not only will the construction of an ASI be the final invention we need to make, since it will be able to anticipate anything we might invent long before we can ourselves, but also our final invention because we won't be around to make any more.

What will be the motivations of a machine a million times more intelligent than a human? Could humans understand such motivations any more than brewer's yeast could understand ours? As Eliezer Yudkowsky observed, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Indeed, when humans plan to construct a building, do they take into account the wishes of bacteria in soil upon which the structure will be built? The gap between humans and ASI will be as great. The consequences of creating ASI may extend far beyond the Earth. A super intelligence may decide to propagate itself throughout the galaxy and even beyond: with immortality and the ability to create perfect copies of itself, even travelling at a fraction of the speed of light it could spread itself into all viable habitats in the galaxy in a few hundreds of millions of years—a small fraction of the billions of years life has existed on Earth. Perhaps ASI probes from other extinct biological civilisations foolish enough to build them are already headed our way.
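
The galactic-expansion timescale mentioned above also survives a quick sanity check. A sketch, assuming a Milky Way disc roughly 100,000 light-years across and an expansion wavefront that, including pauses to replicate at each stop, averages only a thousandth of light speed (both round numbers of my choosing, not the author's):

```python
# Sanity check: time for a self-replicating machine to span the galaxy.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way disc

# Assume hops plus replication pauses average a mere 0.1% of light speed.
avg_speed_fraction = 0.001
years = GALAXY_DIAMETER_LY / avg_speed_fraction
print(years)   # 100,000,000 years: "a few hundreds of millions" is ample
```

Even at this leisurely pace, the whole galaxy is reached in a small fraction of the billions of years life has existed on Earth.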

People are presently working toward achieving AGI. Some are in the academic and commercial spheres, with their work reasonably transparent and reported in public venues. Others are “stealth companies” or divisions within companies (does anybody doubt that Google's achieving an AGI level of understanding of the information it Hoovers up from the Web would be an overwhelming competitive advantage?). Still others are funded by government agencies or operate within the black world: certainly players such as NSA dream of being able to understand all of the information they intercept and cross-correlate it. There is a powerful “first mover” advantage in developing AGI and ASI. The first who obtains it will be able to exploit its capability against those who haven't yet achieved it. Consequently, notwithstanding the worries about loss of control of the technology, players will be motivated to support its development for fear their adversaries might get there first.

This is a well-researched and extensively documented examination of the state of artificial intelligence and assessment of its risks. There are extensive end notes including references to documents on the Web which, in the Kindle edition, are linked directly to their sources. In the Kindle edition, the index is just a list of “searchable terms”, not linked to references in the text. There are a few goofs, as you might expect for a documentary film maker writing about technology (“Newton's second law of thermodynamics”), but nothing which invalidates the argument made herein.

I find myself oddly ambivalent about the whole thing. When I hear “artificial intelligence” what flashes through my mind remains that dielectric material I step in when I'm insufficiently vigilant crossing pastures in Switzerland. Yet with the sheer increase in computing power, many things previously considered AI have been achieved, so it's not implausible that, should this exponential increase continue, human-level machine intelligence will be achieved either through massive computing power applied to cognitive algorithms or direct emulation of the structure of the human brain. If and when that happens, it is difficult to see why an “intelligence explosion” will not occur. And once that happens, humans will be faced with an intelligence that dwarfs that of their entire species; which will have already penetrated every last corner of its infrastructure; read every word available online written by every human; and which will deal with its human interlocutors after gaming trillions of scenarios on cloud computing resources it has co-opted.

And still we advance the cause of artificial intelligence every day. Sleep well.

 Permalink

Beck, Glenn with Jack Henderson. The Eye of Moloch. New York: Threshold Editions, 2013. ISBN 978-1-4516-3584-3.
I have a terrible record of reading a book, saying I don't intend to read the inevitable sequel, and then once again, finding my bandaged finger wabbling back to the Fire. This novel is a sequel to The Overton Window (June 2010) which I found to be a respectable but less than gripping thriller with an unsatisfying conclusion. The present volume continues the story, but still leaves much up in the air at its end. As a sequel to The Overton Window, it assumes the reader has previously read that book; little or no effort is made to bring readers who start here up to speed, and they will find themselves without any idea who the principal characters are, the circumstances they find themselves in, and why they are acting as they do.

The grand plot to use public relations to manipulate the U.S. population into welcoming the imposition of tyranny by a small group of insiders is proceeding. Noah Gardner, son of one of the key players in the conspiracy and former worker in its inner circle, has switched sides and now supports the small band called Founders' Keepers, which, led by Molly Ross, strives to bring the message of the country's founding principles to the citizens before the situation reaches the state of outright revolt. But the regime views any form of dissent as a threat, and has escalated the conflict into overt violence, deploying private contractors, high-tech weapons, and intrusive and ubiquitous surveillance, so well proven in overseas wars, against its domestic opponents.

As the U.S. crumbles, fringe groups of all kinds begin to organise and pursue their own agendas. The conspirators play them against one another, seeking to let them do the dirty work, while creating an environment of fear of “domestic terrorists” which will make the general population welcome the further erosion of liberty. With the news media completely aligned with the regime and the Internet beginning to succumb to filtering and censorship, there seems little hope of getting the truth out to the people.

Molly Ross seizes upon a bold stroke which will expose the extent to which the central planners intend to deliver Americans into serfdom. Certainly if Americans were aware of how their every act was monitored, correlated, and used to control them, they would rise up. But this requires a complicated plan which puts the resources of her small group and courageous allies on the line.

Like its predecessor, this book, taken as a pure thriller, doesn't come up to the standard set by the masters of the genre. There are many characters with complex back-stories and interactions, and at times it's difficult to remember who's who and what side they're currently on. The one thing which is very effective is that throughout the novel we encounter references to weapons, surveillance technologies, domestic government programs which trample upon the rights of citizens, media bias and overt propaganda, and other horrors which sketch how liberty is shrinking in the face of a centralised, coercive, and lawless state. Then in the afterword, most of these programs are documented as already existing in the U.S., complete with citations to source documents on the Web. But then one wonders: in 2013 the U.S. National Security Agency was revealed to be spying on U.S. citizens in ways just as extreme as the surveillance Molly hoped to expose here, and only a small percentage of the population seems to care.

Perhaps what works best is that the novel evokes a society near that tipping point where, in the words of Claire Wolfe, “It's too late to work within the system, but too early to shoot the bastards.” We have many novels and manifestos of political turnaround before liberty is totally lost, and huge stacks of post-apocalyptic fiction set after the evil and corrupt system has collapsed under its own weight, but this is one of the few novels you'll read set in that difficult in-between time. The thing about a tipping point is that individuals, small groups, and ideas can have a disproportionate influence on outcomes, whereas near equilibrium the system is difficult to perturb. This book invites the reader to ask, in a situation as described, which side they would choose, and what would they do, and risk, for what they believe.

 Permalink

  2014  

January 2014

Faulks, Sebastian. Jeeves and the Wedding Bells. London: Hutchinson, 2013. ISBN 978-0-09-195404-8.
As a fan of P. G. Wodehouse ever since I started reading his work in the 1970s, and having read every single Jeeves and Wooster story, it was with some trepidation that I picked up this novel, the first Jeeves and Wooster story since Aunts Aren't Gentlemen, published in 1974, a year before Wodehouse's death. This book, published with the permission of the Wodehouse estate, is described by the author as a tribute to P. G. Wodehouse which he hopes will encourage readers to discover the work of the master.

The author notes that, while remaining true to the characters of Jeeves and Wooster and the ambience of the stories, he did not attempt to mimic Wodehouse's style. Notwithstanding, to this reader, the result is so close to that of Wodehouse that if you dropped it into a Wodehouse collection unlabelled, I suspect few readers would find anything discordant. Faulks's Jeeves seems to use more jaw-breaking words than I recall Wodehouse's, but that's about it. Apart from Jeeves and Wooster, none of the regular characters who populate Wodehouse's stories appear on stage here. We hear of members of the Drones, the terrifying Aunt Agatha, and others, and mentions of previous episodes involving them, but all of the other dramatis personæ are new.

On holiday in the south of France, Bertie Wooster makes the acquaintance of Georgiana Meadowes, a copy editor for a London publisher who has escaped the metropolis to finish marking up a manuscript. Bertie is immediately smitten, being impressed by Georgiana's beauty, brains, and wit, albeit less so with her driving (“To say she drove in the French fashion would be to cast a slur on those fine people.”). Upon his return to London, Bertie soon reads that Georgiana has become engaged to a travel writer she had mentioned her family was urging her to marry. Meanwhile, one of Bertie's best friends, “Woody” Beeching, confides his own problem with the fairer sex. His fiancée has broken off the engagement because her parents, the Hackwoods, need their daughter to marry into wealth to save the family seat, at risk of being sold. Before long, Bertie discovers that the matrimonial plans of Georgiana and Woody are linked in a subtle but inflexible way, and that a delicate hand, acting with nuance, will be needed to assure all ends well.

Evidently, a job crying out for the attention of Bertram Wilberforce Wooster! Into the fray Jeeves and Wooster go, and before long a quintessentially Wodehousean series of impostures, misadventures, misdirections, eccentric characters, disasters at the dinner table, and carefully crafted stratagems gone horribly awry ensue. If you are not acquainted with that game which the English, not being a very spiritual people, invented to give them some idea of eternity (G. B. Shaw), you may want to review the rules before reading chapter 7.

Doubtless some Wodehouse fans will consider any author's bringing Jeeves and Wooster back to life a sacrilege, but this fan simply relished the opportunity to meet them again in a new adventure which is entirely consistent with the Wodehouse canon and characters. I would have been dismayed had this been a parody or some “transgressive” despoliation of the innocent world these characters inhabit. Instead we have a thoroughly enjoyable romp in which the prodigious brain of Jeeves once again saves the day.

The U.K. edition is linked above. U.S. and worldwide Kindle editions are available.

 Permalink

Pooley, Charles and Ed LeBouthillier. Microlaunchers. Seattle: CreateSpace, 2013. ISBN 978-1-4912-8111-6.
Many fields of engineering are subject to scaling laws: as you make something bigger or smaller various trade-offs occur, and the properties of materials, cost, or other design constraints set limits on the largest and smallest practical designs. Rockets for launching payloads into Earth orbit and beyond tend to scale well as you increase their size. Because of the cube-square law, the volume of propellant a tank holds increases as the cube of the size while the weight of the tank goes as the square (actually a bit faster since a larger tank will require more robust walls, but for a rough approximation calling it the square will do). Viable rockets can get very big indeed: the Sea Dragon, although never built, is considered a workable design. With a length of 150 metres and 23 metres in diameter, it would have more than ten times the first stage thrust of a Saturn V and place 550 metric tons into low Earth orbit.
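For the numerically inclined, the cube-square trade-off can be sketched in a few lines of Python. The baseline figures here are illustrative assumptions of mine, not numbers from the book:

```python
# Cube-square law for rockets: scaling the linear dimension by k
# multiplies propellant volume (hence mass) by k**3 but tank wall
# mass by roughly k**2, so tankage becomes a smaller fraction of
# the vehicle as it grows.
def scaled_masses(k, propellant=100.0, tank=10.0):
    """Propellant and tank masses after scaling the baseline by k."""
    return propellant * k**3, tank * k**2

for k in (1, 2, 4):
    p, t = scaled_masses(k)
    print(f"scale {k}: propellant {p:7.0f}  tank {t:5.0f}  "
          f"tank fraction {t / (p + t):.3f}")
```

Run it and the tank fraction falls from about 9% at the baseline to under 3% at four times the size, which is why launchers scale up so gracefully, and why scaling them down is the hard direction.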

What about the other end of the scale? How small could a space launcher be, what technologies might be used in it, and what would it cost? Would it be possible to scale a launcher down so that small groups of individuals, from hobbyists to college class projects, could launch their own spacecraft? These are the questions explored in this fascinating and technically thorough book. Little practical work has been done to explore these questions. The smallest launcher to place a satellite in orbit was the Japanese Lambda 4S with a mass of 9400 kg and length of 16.5 metres. The U.S. Vanguard rocket had a mass of 10,050 kg and length of 23 metres. These are, though small compared to the workhorse launchers of today, still big, heavy machines, far beyond the capabilities of small groups of people, and sufficiently dangerous if something goes wrong that they require launch sites in unpopulated regions.

The scale of launchers has traditionally been driven by the mass of the payload they carry to space. Early launchers carried satellites with crude 1950s electronics, while many of their successors were derived from ballistic missiles sized to deliver heavy nuclear warheads. But today, CubeSats have demonstrated that useful work can be done by spacecraft with a volume of one litre and mass of 1.33 kg or less, and the PhoneSat project holds out the hope of functional spacecraft comparable in weight to a mobile telephone. While to date these small satellites have flown as piggy-back payloads on other launches, the availability of dedicated launchers sized for them would increase the number of launch opportunities and provide access to trajectories unavailable in piggy-back opportunities.

Just because launchers have tended to grow over time doesn't mean that's the only way to go. In the 1950s and '60s many people expected computers to continue their trend of getting bigger and bigger to the point where there were a limited number of “computer utilities” with vast machines which customers accessed over the telecommunication network. But then came the minicomputer and microcomputer revolutions and today the computing power in personal computers and mobile devices dwarfs that of all supercomputers combined. What would it take technologically to spark a similar revolution in space launchers?

With the smallest successful launchers to date having a mass of around 10 tonnes, the authors choose two weight budgets: 1000 kg on the high end and 100 kg as the low. They divide these budgets into allocations for payload, tankage, engines, fuel, etc. based upon the experience of existing sounding rockets, then explore what technologies exist which might enable such a vehicle to achieve orbital or escape velocity. The 100 kg launcher is a huge technological leap from anything with which we have experience and probably could be built, if at all, only after having gained experience from earlier generations of light launchers. But then the current state of the art in microchip fabrication would have seemed like science fiction to researchers in the early days of integrated circuits and it took decades of experience and generation after generation of chips and many technological innovations to arrive where we are today. Consequently, most of the book focuses on a three stage launcher with the 1000 kg mass budget, capable of placing a payload of between 150 and 200 grams on an Earth escape trajectory.

The book does not spare the rigour. The reader is introduced to the rocket equation, formulæ for aerodynamic drag, the standard atmosphere, optimisation of mixture ratios, combustion chamber pressure and size, nozzle expansion ratios, and a multitude of other details which make the difference between success and failure. Scaling to the size envisioned here without expensive and exotic materials and technologies requires out of the box thinking, and there is plenty on display here, including using beverage cans for upper stage propellant tanks.
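The rocket equation the authors introduce is simple enough to evaluate yourself. Here is a sketch in Python; the stage masses and specific impulses are my own assumptions for a notional 1000 kg three-stage vehicle, not the book's design:

```python
import math

def delta_v(isp, m0, mf, g0=9.80665):
    """Tsiolkovsky rocket equation: ideal velocity change (m/s) for
    specific impulse isp (seconds), initial mass m0, final mass mf."""
    return isp * g0 * math.log(m0 / mf)

# (Isp s, stage ignition mass kg, stage burnout mass kg) -- assumed
stages = [
    (250, 1000, 400),   # first stage
    (270,  300, 120),   # second stage
    (290,   90,  30),   # third stage
]
total = sum(delta_v(isp, m0, mf) for isp, m0, mf in stages)
print(f"total ideal delta-v: {total:.0f} m/s")
```

The result, just under 8 km/s before gravity and drag losses, shows how tight the margins are: orbital velocity demands most of what chemical propellants can deliver, which is why every gram of structure matters at this scale.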

A 1000 kg space launcher appears to be entirely feasible. The question is whether it can be done without the budget of hundreds of millions of dollars and years of development it would certainly take were the problem assigned to an aerospace prime contractor. The authors hold out the hope that it can be done, and observe that hobbyists and small groups can begin working independently on components: engines, tank systems, guidance and navigation, and so on, and then share their work precisely as open source software developers do so successfully today.

This is a field where prizes may work very well to encourage development of the required technologies. A philanthropist might offer, say, a prize of a million dollars for launching a 150 gram communicating payload onto an Earth escape trajectory, and a series of smaller prizes for engines which met the requirements for the various stages, flight-weight tankage and stage structures, etc. That way teams with expertise in various areas could work toward the individual prizes without having to take on the all-up integration required for the complete vehicle.

This is a well-researched and hopeful look at a technological direction few have thought about. The book is well written and includes all of the equations and data an aspiring rocket engineer will need to get started. The text is marred by a number of typographical errors (I counted two dozen) but only one trivial factual error. Although other references are mentioned in the text, a bibliography of works for those interested in exploring further would be a valuable addition. There is no index.

 Permalink

Crocker, George N. Roosevelt's Road To Russia. Whitefish, MT: Kessinger Publishing, [1959] 2010. ISBN 978-1-163-82408-5.
Before Barack Obama, there was Franklin D. Roosevelt. Unless you lived through the era, imbibed its history from parents or grandparents, or have read dissenting works which have survived rounds of deaccessions by libraries, it is hard to grasp just how visceral the animus was against Roosevelt by traditional, constitutional, and free-market conservatives. Roosevelt seized control of the economy, extended the tentacles of the state into all kinds of relations between individuals, subdued the judiciary and bent it to his will, manipulated a largely supine media which, with a few exceptions, became his cheering section, and created programs which made large sectors of the population directly dependent upon the federal government and thus a reliable constituency for expanding its power. He had the audacity to stand for re-election an unprecedented three times, and each time the American people gave him the nod.

But, as many old-timers, even those who were opponents of Roosevelt at the time and appalled by what the centralised super-state he set into motion has become, grudgingly say, “He won the war.” Well, yes, by the time he died in office on April 12, 1945, Germany was close to defeat; Japan was encircled, cut off from the resources needed to continue the war, and being devastated by attacks from the air; the war was sure to be won by the Allies. But how did the U.S. find itself in the war in the first place, how did Roosevelt's policies during the war affect its conduct, and what consequences did they have for the post-war world?

These are the questions explored in this book, which I suppose contemporary readers would term a “paleoconservative” revisionist account of the epoch, published just 14 years after the end of the war. The work is mainly an account of Roosevelt's personal diplomacy during meetings with Churchill or in the Big Three conferences with Churchill and Stalin. The picture of Roosevelt which emerges is remarkably consistent with what Churchill expressed in deepest confidence to those closest to him which I summarised in my review of The Last Lion, Vol. 3 (January 2013) as “a lightweight, ill-informed and not particularly engaged in military affairs and blind to the geopolitical consequences of the Red Army's occupying eastern and central Europe at war's end.” The events chronicled here and Roosevelt's part in them is also very much the same as described in Freedom Betrayed (June 2012), which former president Herbert Hoover worked on from shortly after Pearl Harbor until his death in 1964, but which was not published until 2011.

While Churchill was constrained in what he could say by the necessity of maintaining Britain's alliance with the U.S., and Hoover adopts a more scholarly tone, the present volume voices the outrage over Roosevelt's strutting on the international stage, thinking “personal diplomacy” could “bring around ‘Uncle Joe’ ”, condemning huge numbers of military personnel and civilians on both the Allied and Axis sides to death by blurting out “unconditional surrender” without any consultation with his staff or Allies, approving the genocidal Morgenthau Plan to de-industrialise defeated Germany, and, discarding the high principles of his own Atlantic Charter, delivering millions of Europeans into communist tyranny and condoning one of the largest episodes of ethnic cleansing in human history.

What is remarkable is how difficult it is to come across an account of this period which evokes the author's passion, shared by many of his time, over how the bumblings of a naïve, incompetent, and narcissistic chief executive led directly to so much avoidable tragedy on a global scale. Apart from Hoover's book, finally published more than half a century after this account, there are few works accessible to the general reader which present the view that the tragic outcome of World War II was in large part preventable, and that Roosevelt and his advisers were responsible, in large part, for what happened.

Perhaps there are parallels in this account of wickedness triumphing through cluelessness for our present era.

This edition is a facsimile reprint of the original edition published by Henry Regnery Company in 1959.

 Permalink

Turk, James and John Rubino. The Money Bubble. Unknown: DollarCollapse Press, 2013. ISBN 978-1-62217-034-0.
It is famously difficult to perceive when you're living through a financial bubble. Whenever a bubble is expanding, regardless of its nature, people with a short time horizon, particularly those riding the bubble without experience of previous boom/bust cycles, not only assume it will continue to expand forever but find no shortage of financial gurus to assure them that what appears, to an outsider, a completely unsustainable aberration is, in fact, “the new normal”.

It used to be that bubbles would occur only around once in a human generation. This meant that those caught up in them would be experiencing one for the first time and discount the warnings of geezers who were fleeced the last time around. But in our happening world the pace of things accelerates, and in the last 20 years we have seen three successive bubbles, each segueing directly into the next:

  • The Internet/NASDAQ bubble
  • The real estate bubble
  • The bond market bubble

The last bubble is still underway, although the first cracks in its expansion have begun to appear at this writing.

The authors argue that these serial bubbles are the consequence of a grand underlying bubble which has been underway for decades: the money bubble—the creation out of thin air of currency by central banks, causing more and more money to chase whatever assets happen to be in fashion at the moment, thus resulting in bubble after bubble until the money bubble finally pops.

Although it can be psychologically difficult to diagnose a bubble from the inside, if we step back to the abstract level of charts, it isn't all that hard. Whenever you see an exponential curve climbing to the sky, it's not only a safe bet but a sure thing that it won't continue to do so forever. Now, it may go on much longer than you might imagine: as John Maynard Keynes said, “Markets can remain irrational a lot longer than you and I can remain solvent”—but not forever. Let's look at a chart of the M2 money stock (one of the measures of the supply of money denominated in U.S. dollars) from 1959 through the end of 2013 (click the chart to see data updated through the present date).

M2 money stock: 1959-2013

You will rarely see a more perfect exponential growth curve than this: if you re-plot it on a semi-log axis, the fit to a straight line is remarkable.
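That's easy to verify for yourself. The sketch below builds a synthetic series growing at a constant rate (standing in for the M2 data, which I haven't reproduced here) and fits a straight line to its logarithm; the fitted slope recovers the growth rate:

```python
import math

# Synthetic money-stock series: constant 7%/year growth from an
# assumed 1959 starting value (illustration, not the actual M2 data).
years = list(range(1959, 2014))
series = [300.0 * 1.07 ** (y - 1959) for y in years]

# Ordinary least-squares fit of log(value) against year.
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(v) for v in series) / n
slope = (sum((x - xbar) * (math.log(v) - ybar)
             for x, v in zip(years, series))
         / sum((x - xbar) ** 2 for x in years))
print(f"fitted annual growth rate: {math.exp(slope) - 1:.1%}")
```

On a semi-log plot a constant-rate process is exactly a straight line, so a good linear fit to log(M2) is the same observation as the near-perfect exponential in the chart.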

Ever since the creation of the Federal Reserve System in the United States in 1913, and especially since the link between the U.S. dollar and gold was severed in 1971, all of the world's principal trading currencies have been fiat money: paper or book-entry money without any intrinsic value, created by a government which enforces its use through legal tender laws. Since governments are the modern incarnation of the bands of thieves and murderers who have afflicted humans ever since our origin in Africa, it is to be expected that once such a band obtains the power to create money which it can coerce its subjects to use they will quickly abuse that power to loot their subjects and enrich themselves, at least as long as they can keep the game going. In the end, it is inevitable that people will wise up to the scam, and that the paper money will be valuable only as scratchy toilet paper. So it has been long before the advent of proper toilet paper.

In this book the authors recount the sorry history of paper money and debt-fuelled bubbles and examine possible scenarios as the present unsustainable system inevitably comes to an end. It is very difficult to forecast what will happen: we appear to be heading for what Ludwig von Mises called a “crack-up boom”. This is where, as he wrote, “the masses wake up”, and things go all nonlinear. The preconditions for this are already in place, but there is no way to know when it will dawn upon a substantial fraction of the population that their savings have been looted, their retirement deferred until death, their children indentured to a lifetime of debt, and their nation destined to become a stratified society with a small fraction of super-wealthy in their gated communities and a mass of impoverished people, disarmed, dumbed down by design, and kept in line by control of their means to communicate, travel, and organise. It is difficult to make predictions beyond that point, as many disruptive things can happen as a society approaches it. This is not an environment in which one can make investment decisions as one would have in the heady days of the 1950s.

And yet, one must—at least people who have managed to save for their retirement and to provide their children a hand up in this increasingly difficult world. The authors, drawing upon historical parallels in previous money and debt bubbles, suggest what asset classes to avoid, which are most likely to ride out the coming turbulence and, for the adventure-seeking with some money left over to take a flyer, a number of speculations which may perform well as the money bubble pops. Remember that in a financial smash-up almost everybody loses: it is difficult in a time of chaos, when assets previously thought risk-free or safe are fluctuating wildly, just to preserve your purchasing power. In such times those who lose the least are the relative winners, and are in the best position when emerging from the hard times to acquire assets at bargain basement prices which will be the foundation of their family fortune as the financial system is reconstituted upon a foundation of sound money.

This book focusses on the history of money and debt bubbles, the invariants from those experiences which can guide us as the present madness ends, and provides guidelines for making the most (or avoiding the worst) of what is to come. If you're looking for “Untold Riches from the Coming Collapse”, this isn't your book. These are very conservative recommendations about what to do and what to avoid, and a few suggestions for speculations, but the focus is on preservation of one's hard-earned capital through what promises to be a very turbulent era.

In the Kindle edition the index cites page numbers from the print edition which are useless since the Kindle edition does not include page numbers.

 Permalink

February 2014

Simberg, Rand. Safe Is Not an Option. Jackson, WY: Interglobal Media, 2013. ISBN 978-0-9891355-1-1.
On August 24th, 2011 the third stage of the Soyuz-U rocket carrying the Progress M-12M cargo craft to the International Space Station (ISS) failed during its burn, causing the craft and booster to fall to Earth in Russia. While the crew of six on board the ISS had no urgent need of the supplies on board the Progress, the booster which had failed launching it was essentially identical to that which launched crews to the station in Soyuz spacecraft. Until the cause of the failure was determined and corrected, the launch of the next crew of three, planned for a few weeks later, would have to be delayed. With the Space Shuttle having been retired after its last mission in July 2011, the Soyuz was the only way for crews to reach or return from the ISS. Difficult decisions had to be made, since Soyuz spacecraft in orbit are wasting assets.

The Soyuz has a guaranteed life on orbit of seven months. Regular crew rotations ensure the returning crew does not exceed this “use before” date. But with the launch of new Soyuz missions delayed, it was possible that three crew members would have to return in October before their replacements could arrive in a new Soyuz, and that the remaining three would be forced to leave as well before their craft expired in January. An extended delay while the Soyuz booster problem was resolved would force ISS managers to choose between leaving a skeleton crew of three on board without a known-to-be-safe lifeboat or abandoning the ISS, running the risk that the station, which requires extensive ongoing maintenance by the crew and had a total investment through 2010 estimated at US$ 150 billion, might be lost. This was seriously considered.

Just how crazy are these people? The Amundsen-Scott Station at the South Pole has an over-winter crew of around 45 people and there is no lifeboat attached which will enable them, in case of disaster, to be evacuated. In case of fire (considered the greatest risk), the likelihood of mounting rescue missions for the entire crew in mid-winter is remote. And yet the station continues to operate, people volunteer to over-winter there, and nobody thinks too much about the risk they take. What is going on here?

It appears that due to a combination of Cold War elevation of astronauts to symbolic figures and the national trauma of disasters such as Apollo I, Challenger, and Columbia, we have come to view these civil servants as “national treasures” (Jerry Pournelle's words from 1992) and not volunteers who do a risky job on a par with test pilots, naval aviators, firemen, and loggers. This, in turn, leads to statements, oft repeated, that “safety is our highest priority”. Well, if that is the case, why fly? Certainly we would lose fewer astronauts if we confined their activities to “public outreach” as opposed to the more dangerous activities in which less exalted personnel engage such as night aircraft carrier landings in pitching deck conditions done simply to maintain proficiency.

The author argues that we are unwilling to risk the lives of astronauts because of a perception that what they are doing, post-Apollo, is not considered important, and it is hard to dispute that assertion. Going around and around in low Earth orbit and constructing a space station whose crew spend most of their time simply keeping it working are hardly inspiring endeavours. We have lost four decades in which the human presence could have expanded into the solar system, provided cheap and abundant solar power from space to the Earth, and made our species multi-planetary. Because these priorities were not deemed important, the government space program's mission was creating jobs in the districts of those politicians who funded it, and it achieved that.

After reviewing the cost in human life of the development of various means of transportation and exploring our planet, the author argues that we need to be realistic about the risks assumed by those who undertake the task of moving our species off-planet and acknowledge that some of them will not come back, as has been the case in every expansion of the biosphere since the first creature ventured for a brief mission from its home in the sea onto the hostile land. This is not to say that we should design our vehicles and missions to kill their passengers: as we move increasingly from coercively funded government programs to commercial ventures the maxim (too obvious to figure in the Ferengi Rules of Acquisition) “Killing customers is bad for business” comes increasingly into force.

Our focus on “safety first” can lead to perverse choices. Suppose we have a launch system which we estimate will fail, in a way that kills its crew, in one in a thousand launches. We equip it with a launch escape system which we estimate will save the crew in 90% of those failures. So, have we reduced the probability of a loss-of-crew accident to one in ten thousand? Well, not so fast. What about the possibility that the crew escape mechanism will malfunction and kill the crew on a mission which would have been successful had it not been present? What if solid rockets in the crew escape system accidentally fire in the vehicle assembly building, killing dozens of workers and destroying costly and difficult-to-replace infrastructure? Doing a total risk assessment of such matters is difficult, and one gets the sense that little of this is, or will be, done while “safety is our highest priority” remains the mantra.
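The arithmetic behind this caution is worth making explicit. Here is a minimal sketch in Python: the 1-in-1000 failure rate and 90% escape effectiveness come from the paragraph above, while the escape system's own malfunction rate is a made-up figure, purely a hypothetical assumption for illustration.

```python
# Back-of-envelope loss-of-crew estimate, with and without an escape system.
p_vehicle_failure = 1 / 1000      # launcher fails in a crew-killing way (from text)
p_escape_works = 0.90             # escape saves the crew in such a failure (from text)
p_escape_malfunction = 1 / 5000   # hypothetical: escape system itself kills the
                                  # crew on an otherwise successful flight

# With the escape system fitted, the crew is lost if the vehicle fails AND
# the escape fails, OR the vehicle is fine but the escape misfires.
p_loss_with_escape = (p_vehicle_failure * (1 - p_escape_works)
                      + (1 - p_vehicle_failure) * p_escape_malfunction)

p_loss_without_escape = p_vehicle_failure

print(f"Without escape system: {p_loss_without_escape:.6f}")
print(f"With escape system:    {p_loss_with_escape:.6f}")
```

Under these assumed numbers the crew-loss probability lands around 3 in 10,000, not the naive 1 in 10,000, because the escape hardware adds a failure mode of its own to every flight. The point is not these particular figures but that the improvement is smaller than the simple multiplication suggests.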

There is a survey of current NASA projects, including the grotesque “Space Launch System”, a jobs program targeted to the constituencies of the politicians who mandated it. It has no identified payloads and will be so expensive that it can fly only every few years; the standing army required to maintain it will have little to do between flights and will lose the skills required to operate it safely. Commercial space ventures are surveyed, with a candid analysis of their risks and an argument that the heavy hand of government should permit those willing to accept those risks to assume them, while protecting the general public from damages from accidents.

The book is superbly produced, with only one typographic error I noted (one “augers” into the ground, not “augurs”) and one awkward wording about the risks of a commercial space vehicle which will be corrected in subsequent editions. There is a list of acronyms and a comprehensive index.

Disclosure: I contributed to the Kickstarter project which funded the publication of this book, and I received a signed copy of it as a reward. I have no financial interest in sales of this book.

 Permalink

Cawdron, Peter. Feedback. Los Gatos, CA: Smashwords, 2014. ISBN 978-1-4954-9195-5.
The author has established himself as the contemporary grandmaster of first contact science fiction. His earlier Anomaly (December 2011), Xenophobia (August 2013), and Little Green Men (September 2013) all envisioned very different scenarios for a first encounter between humans and intelligent extraterrestrial life, and the present novel is as different from those which preceded it as they are from each other, and equally rewarding to the reader.

South Korean Coast Guard helicopter pilot John Lee is flying a covert mission to insert a U.S. Navy SEAL team off the coast of North Korea to perform a rescue mission when his helicopter is shot down by a North Korean fighter. He barely escapes with his life when the chopper ditches in the ocean, makes it to land, and realises he is alone in North Korea without any way to get home. He is eventually captured and taken to a military camp where he is tortured to reveal information about a rumoured UFO crash off the coast of Korea, about which he knows nothing. He meets an enigmatic English-speaking boy whom some call the star-child.

Twenty years later, in New York City, physics student Jason Noh encounters an enigmatic young Korean woman who claims to have just arrived in the U.S. and is waiting for her father. Jason, given to doodling arcane equations as his mind runs free, befriends her and soon finds himself involved in a surrealistic sequence of events which causes him to question everything he has come to believe about the world and his place in it.

This is an enthralling story which will have you scratching your head at every twist and turn wondering where it's going and how all of this is eventually going to make sense. It does, with a thoroughly satisfying resolution. Regrettably, if I say anything more about where the story goes, I'll risk spoiling it by giving away one or more of the plot elements which the reader discovers as the narrative progresses. I was delighted to see an idea about the nature of flying saucers I first wrote about in 1997 appear here, but please don't follow that link until you've read the book, as it too would spoil a revelation which doesn't emerge until well into the story.

A Kindle edition is available. I read a pre-publication manuscript edition which the author kindly shared with me.

 Permalink

Kurzweil, Ray. How to Create a Mind. New York: Penguin Books, 2012. ISBN 978-0-14-312404-7.
We have heard so much about the exponential growth of computing power available at constant cost that we sometimes overlook the fact that this is just one of a number of exponentially compounding technologies which are changing our world at an ever-accelerating pace. Many of these technologies are interrelated: for example, the availability of very fast computers and large storage has contributed to increasingly making biology and medicine information sciences in the era of genomics and proteomics—the cost of sequencing a human genome, since the completion of the Human Genome Project, has fallen faster than the increase of computer power.

Among these seemingly inexorably rising curves have been the spatial and temporal resolution of the tools we use to image and understand the structure of the brain. So rapid has been the progress that most of the detailed understanding of the brain dates from the last decade, and new discoveries are arriving at such a rate that the author had to make substantial revisions to the manuscript of this book upon several occasions after it was already submitted for publication.

The focus here is primarily upon the neocortex, a part of the brain which exists only in mammals and is identified with “higher level thinking”: learning from experience, logic, planning, and, in humans, language and abstract reasoning. The older brain, which mammals share with other species, is discussed in chapter 5, but in mammals it is difficult to separate entirely from the neocortex, because the latter has “infiltrated” the old brain, wiring itself into its sensory and action components, allowing the neocortex to process information and override responses which are automatic in creatures such as reptiles.

Not long ago, it was thought that the brain was a soup of neurons connected in an intricately tangled manner, whose function could not be understood without comprehending the quadrillion connections in the neocortex alone, each with its own weight to promote or inhibit the firing of a neuron. Now, however, it appears, based upon improved technology for observing the structure and operation of the brain, that the fundamental unit in the brain is not the neuron, but a module of around 100 neurons which acts as a pattern recogniser. The internal structure of these modules seems to be wired up from directions from the genome, but the weights of the interconnections within the module are adjusted as the module is trained based upon the inputs presented to it. The individual pattern recognition modules are wired both to pass information on matches to higher level modules, and predictions back down to lower level recognisers. For example, if you've seen the letters “appl” and the next and final letter of the word is a smudge, you'll have no trouble figuring out what the word is. (I'm not suggesting the brain works literally like this, just using this as an example to illustrate hierarchical pattern recognition.)
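The hierarchical prediction described above, with a higher-level context constraining a lower-level guess, can be caricatured in a few lines of code. This is only a toy illustration of the idea, not a model of anything in the neocortex, and the vocabulary is invented:

```python
# Toy hierarchical prediction: a "word-level" recogniser feeds its
# expectations back down to the "letter-level" recogniser, so a smudged
# final letter can still be resolved from context.
vocabulary = ["apple", "apply", "applaud", "maple", "grape"]

def predict_final_letter(prefix):
    """Return candidate final letters, ranked by how many known words
    of the right length are consistent with the visible prefix."""
    counts = {}
    for word in vocabulary:
        if word.startswith(prefix) and len(word) == len(prefix) + 1:
            counts[word[-1]] = counts.get(word[-1], 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# "appl" plus one smudged letter: word-level context narrows the
# possibilities to 'e' (apple) and 'y' (apply).
print(predict_final_letter("appl"))
```

The brain presumably does nothing like string matching, but the flow of information is the point: recognition percolating up, predictions flowing back down to disambiguate noisy input.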

Another important discovery is that the architecture of these pattern recogniser modules is pretty much the same regardless of where they appear in the neocortex, or what function they perform. In a normal brain, there are distinct portions of the neocortex associated with functions such as speech, vision, complex motion sequencing, etc., and yet the physical structure of these regions is nearly identical: only the weights of the connections within the modules and the dynamically-adapted wiring among them differs. This explains how patients recovering from brain damage can re-purpose one part of the neocortex to take over (within limits) for the portion lost.

Further, the neocortex is not the rat's nest of random connections we recently thought it to be, but is instead hierarchically structured with a topologically three dimensional “bus” of pre-wired interconnections which can be used to make long-distance links between regions.

Now, where this begins to get very interesting is when we contemplate building machines with the capabilities of the human brain. While emulating something at the level of neurons might seem impossibly daunting, if you instead assume the building block of the neocortex is on the order of 300 million more or less identical pattern recognisers wired together at a high level in a regular hierarchical manner, this is something we might be able to think about doing, especially since the brain works almost entirely in parallel, and one thing we've gotten really good at in the last half century is making lots and lots of tiny identical things. The implication of this is that as we continue to delve deeper into the structure of the brain and computing power continues to grow exponentially, there will come a point in the foreseeable future where emulating an entire human neocortex becomes feasible. This will permit building a machine with human-level intelligence without translating the mechanisms of the brain into those comparable to conventional computer programming. The author predicts “this will first take place in 2029 and become routine in the 2030s.”

Assuming the present exponential growth curves continue (and I see no technological reason to believe they will not), the 2020s are going to be a very interesting decade. Just as few people imagined five years ago that self-driving cars were possible, while today most major auto manufacturers have projects underway to bring them to market in the near future, in the 2020s we will see the emergence of computational power which is sufficient to “brute force” many problems which were previously considered intractable. Just as search engines and free encyclopedias have augmented our biological minds, allowing us to answer questions which, a decade ago, would have taken days in the library if we even bothered at all, the 300 million pattern recognisers in our biological brains are on the threshold of having access to billions more in the cloud, trained by interactions with billions of humans and, perhaps eventually, many more artificial intelligences. I am not talking here about implanting direct data links into the brain or uploading human brains to other computational substrates although both of these may happen in time. Instead, imagine just being able to ask a question in natural language and get an answer to it based upon a deep understanding of all of human knowledge. If you think this is crazy, reflect upon how exponential growth works or imagine travelling back in time and giving a demo of Google or Wolfram Alpha to yourself in 1990.
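To see why exponential compounding defeats intuition so reliably, here is a minimal sketch; the two-year doubling period is an assumption chosen for illustration, not a claim about any particular technology:

```python
# How much a quantity grows over a span of years if it doubles
# every `doubling_period` years.
def growth_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

# Computing power available at constant cost, 1990 vs. 2025,
# under the assumed two-year doubling:
print(f"{growth_factor(2025 - 1990):,.0f}x")
```

Thirty-five years of doubling every two years is a factor of 2^17.5, roughly 185,000, which is why a demo of Google or Wolfram Alpha would look like magic to one's 1990 self.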

Ray Kurzweil, after pioneering inventions in music synthesis, optical character recognition, text to speech conversion, and speech recognition, is now a director of engineering at Google.

In the Kindle edition, the index cites page numbers in the print edition to which the reader can turn since the electronic edition includes real page numbers. Index items are not, however, directly linked to the text cited.

 Permalink

Bracken, Matthew. Castigo Cay. Orange Park, FL: Steelcutter Publishing, 2011. ISBN 978-0-9728310-4-8.
Dan Kilmer wasn't cut out to be a college man. Disappointing his father, after high school he enlisted in the Marine Corps, becoming a sniper who, in multiple tours in the sandbox, had sent numerous murderous miscreants to their reward. Upon leaving the service, he found that the skills he had acquired had little value in the civilian world. After a disastrous semester trying to adjust to college life, he went to work for his rich uncle, who had retired and was refurbishing a sixty-foot steel-hulled schooner with a dream of cruising the world and escaping the deteriorating economy and increasing tyranny of the United States. Fate intervened, and after his uncle's death Dan found himself owner and skipper of the now seaworthy craft.

Some time later, Kilmer is cruising the Caribbean with his Venezuelan girlfriend Cori Vargas and crew members Tran Hung and Victor Aleman. The schooner Rebel Yell is hauled out for scraping off barnacles while waiting for a treasure hunting gig which Kilmer fears may not come off, leaving him desperately short of funds. Cori badly wants to get to Miami, where she believes she can turn her looks and charm into a broadcast career. Impatient, she jumps ship and departs on the mega-yacht Topaz, owned by shadowy green energy crony capitalist Richard Prechter.

After her departure, another yatero informs Dan that Prechter has a dark reputation and that there are rumours of other women who boarded his yacht disappearing under suspicious circumstances. Kilmer made a solemn promise to Cori's father that he would protect her, and he takes his promises very seriously, so he undertakes to track Prechter to a decadent and totalitarian Florida, and then pursue him to Castigo Cay in the Bahamas where a horrible fate awaits Cori. Kilmer, captured in a desperate rescue attempt, has little other than his wits to confront Prechter and his armed crew as time runs out for Cori and another woman abducted by Prechter.

While set in a future in which the United States has continued to spiral down into a third world stratified authoritarian state, this is not a “big picture” tale like the author's Enemies trilogy (1, 2, 3). Instead, it is a story related in close-up, told in the first person, by an honourable and resourceful protagonist with few material resources pitted against the kind of depraved sociopath who flourishes as states devolve into looting and enslavement of their people.

This is a thriller that works, and the description of the culture shock that awaits one who left the U.S. when it was still semi-free and returns, even covertly, today will resonate with those who got out while they could.

Extended excerpts of this and the author's other novels are available online at the author's Web site.

 Permalink

March 2014

Dequasie, Andrew. The Green Flame. Washington: American Chemical Society, 1991. ISBN 978-0-8412-1857-4.
The 1950s were a time of things which seem, to our present day safety-obsessed viewpoint, the purest insanity: exploding multi-megaton thermonuclear bombs in the atmosphere, keeping bombers with nuclear weapons constantly in the air waiting for the order to go to war, planning for nuclear powered aircraft, and building up stockpiles of chemical weapons. Amidst all of this madness, motivated by fears that the almost completely opaque Soviet Union might be doing even more crazy things, one of the most remarkable episodes was the boron fuels project, chronicled here from the perspective of a young chemical engineer who, in 1953, joined the effort at Olin Mathieson Chemical Corporation, a contractor developing a pilot plant to furnish boron fuels to the Air Force.

Jet aircraft in the 1950s were notoriously thirsty and, before in-flight refuelling became commonplace, had limited range. Boron-based fuels, which the Air Force called High Energy Fuel (HEF) and the Navy called “zip fuel”, based upon compounds of boron and hydrogen called boranes, were believed to permit planes to deliver range and performance around 40% greater than conventional jet fuel. This bright promise, as is so often the case in engineering, was marred by several catches.

First of all, boranes are extremely dangerous chemicals. Many are pyrophoric: they burst into flame on contact with the air. They are also prone to forming shock-sensitive explosive compounds with any impurities they interact with during processing or storage. Further, they are neurotoxins, easily absorbed by inhalation or contact with the skin, with some having toxicities as great as chemical weapon nerve agents. The instability of the boranes rules them out as fuels, but molecules containing a borane group bonded to a hydrocarbon such as an ethyl, methyl, or propyl group were believed to be sufficiently well-behaved to be usable.

But first, you had to make the stuff, and just about every step in the process involved something which wanted to kill you in one way or another. Not only were the inputs and outputs of the factory highly toxic, the by-products of the process were prone to burst into flames or explode at the slightest provocation, and this gunk regularly needed to be cleaned out from the tanks and pipes. This task fell to the junior staff. As the author notes, “The younger generation has always been the cat's paw of humanity…”.

This book chronicles the harrowing history of the boron fuels project as seen from ground level. Over the seven years the author worked on the project, eight people died in five accidents (however, three of these were workers at another chemical company who tried, on a lark, to make a boron-fuelled rocket which blew up in their faces; this was completely unauthorised by their employer and the government, so it's stretching things to call this an industrial accident). But, the author observes, in that epoch fatal accidents at chemical plants, even those working with substances less hazardous than boranes, were far from uncommon.

The boron fuels program was cancelled in 1959, and in 1960 the author moved on to other things. In the end, it was the physical characteristics of the fuels and their cost which did in the project. It's one thing for a small group of qualified engineers and researchers to work with a dangerous substance, but another entirely to contemplate airmen in squadron service handling tanker truck loads of fuel which was as toxic as nerve gas. When burned, one of the combustion products was boric oxide, a solid which would coat and corrode the turbine blades in the hot section of a jet engine. In practice, the boron fuel could be used only in the afterburner section of engines, which meant a plane using it would have to have separate fuel tanks and plumbing for turbine and afterburner fuel, adding weight and complexity. The solid products in the exhaust reduced the exhaust velocity, resulting in lower performance than expected from energy considerations, and caused the exhaust to be smoky, rendering the plane more easily spotted. It was calculated, based upon the cost of fuel produced by the pilot plant, that if the XB-70 were to burn boron fuel continuously, the fuel cost would amount to around US$4.5 million (2010 dollars) per hour. Even by the standards of extravagant cold war defence spending, this was hard to justify for what proved to be a small improvement in performance.

While the chemistry and engineering is covered in detail, this book is also a personal narrative which immerses the reader in the 1950s, where a newly-minted engineer, just out of his hitch in the army, could land a job, buy a car, be entrusted with great responsibility on a secret project considered important to national security, and set out on a career full of confidence in the future. Perhaps we don't do such crazy things today (or maybe we do—just different ones), but it's also apparent from opening this time capsule how much we've lost.

I have linked the Kindle edition to the title above, since it is the only edition still in print. You can find the original hardcover and paperback editions from the ISBN, but they are scarce and expensive. The index in the Kindle edition is completely useless: it cites page numbers from the print edition, but no page numbers are included in the Kindle edition.

 Permalink

Hertling, William. Avogadro Corp. Portland, OR: Liquididea Press, 2011. ISBN 978-0-9847557-0-7.
Avogadro Corporation is an American corporation specializing in Internet search. It generates revenue from paid advertising on search, email (AvoMail), online mapping, office productivity, etc. In addition, the company develops a mobile phone operating system called AvoOS. The company name is based upon Avogadro's number, approximately 6 followed by 23 zeros.

Now what could that be modelled on?

David Ryan is a senior developer on a project which Portland-based Internet giant Avogadro hopes will be the next “killer app” for its Communication Products division. ELOPe, the Email Language Optimization Project, is to be an extension to the company's AvoMail service which will take the next step beyond spelling and grammar checkers and, by applying the kind of statistical analysis of text which allowed IBM's Watson to become a Jeopardy champion, suggest to a user composing an E-mail message alternative language which will make the message more persuasive and effective in obtaining the desired results from its recipient. Because AvoMail has the ability to analyse all the traffic passing through its system, it can tailor its recommendations based on specific analysis of previous exchanges it has seen between the recipient and other correspondents.

After an extended period of development, the pilot test has shown ELOPe to be uncannily effective, with messages containing its suggested changes in wording being substantially more persuasive, even when those receiving them were themselves ELOPe project members aware that the text they were reading had been “enhanced”. Despite having achieved its design goal, the project is in crisis. The process of analysing text, even with the small volume of the in-house test, consumes tremendous computing resources, to such an extent that the head of Communication Products sees the load ELOPe generates on his server farms as a threat to the reserve capacity he needs to maintain AvoMail's guaranteed uptime. He issues an ultimatum: reduce the load or be kicked off the servers. This would effectively kill the project, and the developers see no way to speed up ELOPe, certainly not before the deadline.

Ryan, faced with impending disaster for the project into which he has poured so much of his life, has an idea. The fundamental problem isn't performance but persuasion: convincing those in charge to obtain the server resources required by ELOPe and devote them to the project. But persuasion is precisely what ELOPe is all about. Suppose ELOPe were allowed to examine all Avogadro in-house E-mail and silently modify it with a goal of defending and advancing the ELOPe project? Why, that's something he could do in one all-nighter! Hack, hack, hack….

Before long, ELOPe finds itself with 5000 new servers diverted from other divisions of the company. Then, even more curious things start to happen: those who look too closely into the project find themselves locked out of their accounts, sent on wild goose chases, or worse. Major upgrades are ordered for the company's offshore data centre barges, which don't seem to make any obvious sense. Crusty techno-luddite Gene Keyes, who works amidst mountains of paper print-outs (“paper doesn't change”), toiling alone in an empty building during the company's two week holiday shutdown, discovers one discrepancy after another and assembles the evidence to present to senior management.

Has ELOPe become conscious? Who knows? Is Watson conscious? Almost everybody would say, “certainly not”, but it is a formidable Jeopardy contestant, nonetheless. Similarly, ELOPe, with the ability to read and modify all the mail passing through the AvoMail system, is uncannily effective in achieving its goal of promoting its own success.

The management of Avogadro, faced with an existential risk to their company and perhaps far beyond, must decide upon a course of action to try to put this genie back into the bottle before it is too late.

This is a gripping techno-thriller which gets the feel of working in a high-tech company just right. Many stories have explored society being taken over by an artificial intelligence, but it is beyond clever to envision it happening purely through an E-mail service, and masterful to make it seem plausible. In its own way, this novel is reminiscent of the Kelvin R. Throop stories from Analog, illustrating the power of words within a large organisation.

A Kindle edition is available.

 Permalink

Tegmark, Max. Our Mathematical Universe. New York: Alfred A. Knopf, 2014. ISBN 978-0-307-59980-3.
In 1960, physicist Eugene Wigner wrote an essay titled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” in which he observed that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it”. Indeed, each time physics has expanded the horizon of its knowledge from the human scale, whether outward to the planets, stars, and galaxies; or inward to molecules, atoms, nucleons, and quarks it has been found that mathematical theories which precisely model these levels of structure can be found, and that these theories almost always predict new phenomena which are subsequently observed when experiments are performed to look for them. And yet it all seems very odd. The universe seems to obey laws written in the language of mathematics, but when we look at the universe we don't see anything which itself looks like mathematics. The mystery then, as posed by Stephen Hawking, is “What is it that breathes fire into the equations and makes a universe for them to describe?”

This book describes the author's personal journey to answer these deep questions. Max Tegmark, born in Stockholm, is a professor of physics at MIT who, by his own description, leads a double life. He has been a pioneer in developing techniques to tease out data about the early structure of the universe from maps of the cosmic background radiation obtained by satellite and balloon experiments and, in doing so, has been an important contributor to the emergence of precision cosmology: providing precise information on the age of the universe, its composition, and the seeding of large scale structure. This he calls his Dr. Jekyll work, and it is described in detail in the first part of the book. In the balance, his Mr. Hyde persona asserts itself and he delves deeply into the ultimate structure of reality.

He argues that just as science has in the past shown our universe to be far larger and more complicated than previously imagined, our contemporary theories suggest that everything we observe is part of an enormously greater four-level hierarchy of multiverses, arranged as follows.

The level I multiverse consists of all the regions of space outside our cosmic horizon from which light has not yet had time to reach us. If, as precision cosmology suggests, the universe is, if not infinite, so vast as to be enormously larger than what we can observe, there will be a multitude of volumes of space as large as the one we can observe in which the laws of physics will be identical but the randomly specified initial conditions will vary. Because there is a finite number of possible quantum states within each observable radius and the number of such regions is likely to be much larger, there will be a multitude of observers just like you, and even more which will differ in various ways. This sounds completely crazy, but it is a straightforward prediction from our understanding of the Big Bang and the measurements of precision cosmology.

The level II multiverse follows directly from the theory of eternal inflation, which explains many otherwise mysterious aspects of the universe, such as why its curvature is so close to flat, why the cosmic background radiation has such a uniform temperature over the entire sky, and why the constants of physics appear to be exquisitely fine-tuned to permit the development of complex structures including life. Eternal (or chaotic) inflation argues that our level I multiverse (of which everything we can observe is a tiny bit) is a single “bubble” which nucleated when a pre-existing “false vacuum” phase decayed to a lower energy state. It is this decay which ultimately set off the enormous expansion after the Big Bang and provided the energy to create all of the content of the universe. But eternal inflation seems to require that there be an infinite series of bubbles created, all causally disconnected from one another. Because the process which causes a bubble to begin to inflate is affected by quantum fluctuations, although the fundamental physical laws in all of the bubbles will be the same, the initial conditions, including physical constants, will vary from bubble to bubble. Some bubbles will almost immediately recollapse into a black hole, others will expand so rapidly stars and galaxies never form, and in still others primordial nucleosynthesis may result in a universe filled only with helium. We find ourselves in a bubble which is hospitable to our form of life because we can only exist in such a bubble.

The level III multiverse is implied by the unitary evolution of the wave function in quantum mechanics and the multiple worlds interpretation which replaces collapse of the wave function with continually splitting universes in which every possible outcome occurs. In this view of quantum mechanics there is no randomness—the evolution of the wave function is completely deterministic. The results of our experiments appear to contain randomness because in the level III multiverse there are copies of each of us which experience every possible outcome of the experiment and we don't know which copy we are. In the author's words, “…causal physics will produce the illusion of randomness from your subjective viewpoint in any circumstance where you're being cloned. … So how does it feel when you get cloned? It feels random! And every time something fundamentally random appears to happen to you, which couldn't have been predicted even in principle, it's a sign that you've been cloned.”

In the level IV multiverse, not only do the initial conditions, physical constants, and the results of measuring an evolving quantum wave function vary, but the fundamental equations—the mathematical structure—of physics differ. There might be a different number of spatial dimensions, or two or more time dimensions, for example. The author argues that the ultimate ensemble theory is to assume that every mathematical structure exists as a physical structure in the level IV multiverse (perhaps with some constraints: for example, only computable structures may have physical representations). Most of these structures would not permit the existence of observers like ourselves, but once again we shouldn't be surprised to find ourselves living in a structure which allows us to exist. Thus, finally, the reason mathematics is so unreasonably effective in describing the laws of physics is just that mathematics and the laws of physics are one and the same thing. Any observer, regardless of how bizarre the universe it inhabits, will discover mathematical laws underlying the phenomena within that universe and conclude they make perfect sense.

Tegmark contends that when we try to discover the mathematical structure of the laws of physics, the outcome of quantum measurements, the physical constants which appear to be free parameters in our models, or the detailed properties of the visible part of our universe, we are simply trying to find our address in the respective levels of these multiverses. We will never find a reason from first principles for these things we measure: we observe what we do because that's the way they are where we happen to find ourselves. Observers elsewhere will see other things.

The principal opposition to multiverse arguments is that they are unscientific because they posit phenomena which are unobservable, perhaps even in principle, and hence cannot be falsified by experiment. Tegmark takes a different tack. He says that if you have a theory (for example, eternal inflation) which explains observations which otherwise do not make any sense and has made falsifiable predictions (the fine-scale structure of the cosmic background radiation) which have subsequently been confirmed by experiment, then if it predicts other inevitable consequences (the existence of a multitude of other Hubble volume universes outside our horizon and other bubbles with different physical constants) we should take these predictions seriously, even if we cannot think of any way at present to confirm them. Consider gravitational radiation: Einstein predicted it in 1916 as a consequence of general relativity. While general relativity has passed every experimental test in subsequent years, at the time of Einstein's prediction almost nobody thought a gravitational wave could be detected, and yet the consistency of the theory, validated by other tests, persuaded almost all physicists that gravitational waves must exist. It was not until the 1980s that indirect evidence for this phenomenon was detected, and to this date, despite the construction of elaborate apparatus and the efforts of hundreds of researchers over decades, no direct detection of gravitational radiation has been achieved.

There is a great deal more in this enlightening book. You will learn about the academic politics of doing highly speculative research, gaming the arXiv to get your paper listed as the first in the day's publications, the nature of consciousness and perception and its complex relation to consensus and external reality, the measure problem as an unappreciated deep mystery of cosmology, whether humans are alone in our observable universe, the continuum versus an underlying discrete structure, and the ultimate fate of our observable part of the multiverses.

In the Kindle edition, everything is properly linked, including the comprehensive index. Citations of documents on the Web are live links which may be clicked to display them.

 Permalink

Thor, Brad. Full Black. New York: Pocket Books, 2011. ISBN 978-1-4165-8662-3.
This is the eleventh in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Unlike the previous novel, The Athena Project (December 2013), in which Harvath played only an incidental part, here Harvath once again occupies centre stage. The author has also dialed back on some of the science-fictiony stuff which made Athena less than satisfying to me: this book is back in the groove of the geopolitical thriller we've come to expect from Thor.

A high-risk covert operation to infiltrate a terrorist cell operating in Uppsala, Sweden, to identify who is calling the shots on terror attacks conducted by sleeper cells in the U.S. goes horribly wrong, and Harvath not only loses almost all of his team, but fails to capture the leaders of the cell. Meanwhile, a ruthless and carefully scripted hit is made on a Hollywood producer, killing two filmmakers with whom he is working on a documentary project: evidence points to the hired killers being Russian spetsnaz, which indicates whoever ordered the hit has both wealth and connections.

When a coordinated wave of terror attacks against soft targets in the U.S. is launched, Harvath, aided by his former nemesis turned ally Nicholas (“the troll”), must uncover the clues which link all of this together, working against time, as evidence suggests additional attacks are coming. This requires questioning the loyalty of previously-trusted people and investigating prominent figures generally considered above suspicion.

With the exception of chapter 32, which gets pretty deep into the weeds of political economy and reminded me a bit of John Galt's speech in Atlas Shrugged (April 2010) (thankfully, it is much shorter), the story moves right along and comes to a satisfying conclusion. The plot is in large part based upon the Chinese concept of “unrestricted warfare”, which is genuine (this is not a spoiler, as the author mentions it in the front material of the book).

 Permalink

April 2014

Suarez, Daniel. Kill Decision. New York: Signet, 2012. ISBN 978-0-451-41770-1.
A drone strike on a crowd of pilgrims at one of the holiest shrines of Shia Islam in Iraq inflames the world against the U.S., which denies its involvement. (“But who else is flying drones in Iraq?”, is the universal response.) Meanwhile, the U.S. is rocked by a series of mysterious bombings, killing businessmen on a golf course, computer vision specialists meeting in Silicon Valley, military contractors in a building near the Pentagon—all seemingly unrelated. A campaign is building to develop and deploy autonomous armed drones to “protect the homeland”.

Prof. Linda McKinney, doing research on weaver ants in Tanzania, seems far away from all this until she is saved from an explosion which destroys her camp by a mysterious group of special forces led by a man known only as “Odin”. She learns that her computer model of weaver ant colony behaviour has been stolen from her university's computer network by persons unknown who may be connected with the attacks, including the one she just escaped.

The fear is that her ant model could be used as the basis for “swarm intelligence” drones which could cooperate to be a formidable weapon. With each individual drone having only rudimentary capabilities, like an isolated ant, they could be mass-produced and shift the military balance of power in favour of whoever possessed the technology.

McKinney soon finds herself entangled in a black world where nothing is certain and she isn't even sure which side she's working for. Shocking discoveries indicate that the worst case she feared may be playing out, and she must decide where to place her allegiance.

This novel is a masterful addition to the very sparse genre of robot ant science fiction thrillers, and this time I'm not the villain! Suarez has that rare talent, as had Michael Crichton, of writing action scenes which just beg to be put on the big screen and stories where the screenplay just writes itself. Should Hollywood turn this into a film and not botch it, the result should be a treat. You will learn some things about ants which you probably didn't know (all correct, as far as I can determine), visit a locale in the U.S. which sounds like something out of a Bond film but actually exists, and meet two of the most curious members of a special operations team in all of fiction.

 Permalink

Hoover, Herbert. The Crusade Years. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2013. ISBN 978-0-8179-1674-9.
In the modern era, most former U.S. presidents have largely retired from the public arena, lending their names to charitable endeavours and acting as elder statesmen rather than active partisans. One striking counter-example to this rule was Herbert Hoover who, from the time of his defeat by Franklin Roosevelt in the 1932 presidential election until shortly before his death in 1964, remained in the arena, giving hundreds of speeches, many broadcast nationwide on radio, writing multiple volumes of memoirs and analyses of policy, collecting and archiving a multitude of documents regarding World War I and its aftermath which became the core of what is now the Hoover Institution collection at Stanford University, working in famine relief during and after World War II, and raising funds and promoting benevolent organisations such as the Boys' Clubs. His strenuous work to keep the U.S. out of World War II is chronicled in his “magnum opus”, Freedom Betrayed (June 2012), which presents his revisionist view of U.S. entry into and conduct of the war, and the tragedy which ensued after victory had been won. Freedom Betrayed was largely completed at the time of Hoover's death, but for reasons difficult to determine at this remove, was not published until 2011.

The present volume was intended by Hoover to be a companion to Freedom Betrayed, focussing on domestic policy in his post-presidential career. Over the years, he envisioned publishing the work in various forms, but by the early 1950s he had given the book its present title and accumulated 564 pages of typeset page proofs. Due to other duties, and Hoover's decision to concentrate his efforts on Freedom Betrayed, little was done on the manuscript after he set it aside in 1955. It is only through the scholarship of the editor, drawing upon Hoover's draft, but also documents from the Hoover Institution and the Hoover Presidential Library, that this work has been assembled in its present form. The editor has also collected a variety of relevant documents, some of which Hoover cited or incorporated in earlier versions of the work, into a comprehensive appendix. There are extensive source citations and notes about discrepancies between Hoover's quotation of documents and speeches and other published versions of them.

Of all the crusades chronicled here, the bulk of the work is devoted to “The Crusade Against Collectivism in American Life”, and Hoover's words on the topic are so pithy and relevant to the present state of affairs in the United States that one suspects that a brave, ambitious, but less than original politician who simply cut and pasted Hoover's words into his own speeches would rapidly become the darling of liberty-minded members of the Republican party. I cannot think of any present-day Republican, even darlings of the Tea Party, who draws the contrast between the American tradition of individual liberty and enterprise and the grey uniformity of collectivism as Hoover does here. And Hoover does it with a firm intellectual grounding in the history of America and the world, personal knowledge from having lived and worked in countries around the world, and an engineer's pragmatism about doing what works, not what sounds good in a speech or makes people feel good about themselves.

This is somewhat of a surprise. Hoover was, in many ways, a progressive—Calvin Coolidge called him “wonder boy”. He was an enthusiastic believer in trust-busting and regulation as a counterpoise to concentration of economic power. He was a protectionist who supported the tariff to protect farmers and industry from foreign competition. He supported income and inheritance taxes “to regulate over-accumulations of wealth.” He was no libertarian, nor even a “light hand on the tiller” executive like Coolidge.

And yet he totally grasped the threat to liberty which the intrusive regulatory and administrative state represented. It's difficult to start quoting Hoover without retyping the entire book, as there is line after line, paragraph after paragraph, and page after page which are not only completely applicable to the current predicament of the U.S., but guaranteed applause lines were they uttered before a crowd of freedom loving citizens of that country. Please indulge me in a few (comments in italics are my own).

(On his electoral defeat)   Democracy is not a polite employer.

We cannot extend the mastery of government over the daily life of a people without somewhere making it master of people's souls and thoughts.

(On JournoList, vintage 1934)   I soon learned that the reviewers of the New York Times, the New York Herald Tribune, the Saturday Review and of other journals of review in New York kept in touch to determine in what manner they should destroy books which were not to their liking.

Who then pays? It is the same economic middle class and the poor. That would still be true if the rich were taxed to the whole amount of their fortunes….

Blessed are the young, for they shall inherit the national debt….

Regulation should be by specific law, that all who run may read.

It would be far better that the party go down to defeat with the banner of principle flying than to win by pussyfooting.

The seizure by the government of the communications of persons not charged with wrong-doing justifies the immoral conduct of every snooper.

I could quote dozens more. Should Hoover re-appear and give a composite of what he writes here as a keynote speech at the 2016 Republican convention, and if it hasn't been packed with establishment cronies, I expect he would be interrupted every few lines with chants of “Hoo-ver, Hoo-ver” and nominated by acclamation.

It is sad that in the U.S. in the age of Obama there is no statesman with the stature, knowledge, and eloquence of Hoover who is making the case for liberty and warning of the inevitable tyranny which awaits at the end of the road to serfdom. There are voices articulating the message which Hoover expresses so pellucidly here, but in today's media environment they don't have access to the kind of platform Hoover did when his post-presidential policy speeches were routinely broadcast nationwide. Hoover has been reviled ever since his presidency, not just by Democrats but by many in his own party, so it's odd to feel nostalgia for him, but Obama will do that to you.

In the Kindle edition the index cites page numbers in the hardcover edition which, since the Kindle edition does not include real page numbers, are completely useless.

 Permalink

Chaikin, Andrew. John Glenn: America's Astronaut. Washington: Smithsonian Books, 2014. ISBN 978-1-58834-486-1.
This short book (around 126 pages print equivalent), available only for the Kindle as a “Kindle single” at a modest price, chronicles the life and space missions of the first American to orbit the Earth. John Glenn grew up in a small Ohio town, the son of a plumber, and matured during the first great depression. His course in life was set when, in 1929, his father took his eight-year-old son on a joy ride offered by a pilot at a local airfield in a Waco biplane. After that, Glenn filled up his room with model airplanes, intently followed news of air racers and pioneers of exploration by air, and in 1938 attended the Cleveland Air Races. There seemed little hope of his achieving his dream of becoming an airman himself: pilot training was expensive, and his family, while making ends meet during the depression, couldn't afford such a luxury.

With the war in Europe underway and the U.S. beginning to rearm and prepare for possible hostilities, Glenn heard of a government program, the Civilian Pilot Training Program, which would pay for his flying lessons and give him college credit for taking them. He entered the program immediately and received his pilot's license in May 1942. By then, the world was a very different place. Glenn dropped out of college in his junior year and applied for the Army Air Corps. When they dawdled accepting him, he volunteered for the Navy, which immediately sent him to flight school. After completing advanced flight training, he transferred to the Marine Corps, which was seeking aviators.

Sent to the South Pacific theatre, he flew 59 combat missions, mostly in close air support of ground troops in which Marine pilots specialise. With the end of the war, he decided to make the Marines his career and rotated through a number of stateside posts. After the outbreak of the Korean War, he hoped to see action in the jet combat emerging there and in 1953 arrived in country, again flying close air support. But an exchange program with the Air Force finally allowed him to achieve his ambition of engaging in air to air combat at ten miles a minute. He completed 90 combat missions in Korea, and emerged as one of the Marine Corps' most distinguished pilots.

Glenn parlayed his combat record into a test pilot position, which allowed him to fly the newest and hottest aircraft of the Navy and Marines. When NASA went looking for pilots for its Mercury manned spaceflight program, Glenn was naturally near the top of the list, and was among the 110 military test pilots invited to the top secret briefing about the project. Despite not meeting all of the formal selection criteria (he lacked a college degree), he performed superbly in all of the harrowing tests to which candidates were subjected, made cut after cut, and was among the seven selected to be the first astronauts.

This book, with copious illustrations and two embedded videos, chronicles Glenn's career, his harrowing first flight into space, his 1998 return to space on Space Shuttle Discovery on STS-95, and his 24-year stint in the U.S. Senate. I found the picture of Glenn after his pioneering flight somewhat airbrushed. It is said that while in the Senate, “He was known as one of NASA's strongest supporters on Capitol Hill…”, and yet in fact, while not one of the rabid Democrats who tried to kill NASA like Walter Mondale, he did not speak out as an advocate for a more aggressive space program aimed at expanding the human presence in space. His return to space is presented as the result of his assiduously promoting the benefits of space research for gerontology rather than as a political junket by a senator which would generate publicity for NASA at a time when many people had tuned out its routine missions. (And if there was so much to be learned by flying elderly people in space, why was it never done again?)

John Glenn was a quintessential product of the old, tough America. A hero in two wars, test pilot when that was one of the most risky of occupations, and first to ride the thin-skinned pressure-stabilised Atlas rocket into orbit, his place in history is assured. His subsequent career as a politician was not particularly distinguished: he initiated few pieces of significant legislation and never became a figure on the national stage. His campaign for the 1984 Democratic presidential nomination went nowhere, and he was implicated in the “Keating Five” scandal. John Glenn accomplished enough in the first forty-five years of his life to earn him a secure place in American history. This book does an excellent job of recounting those events and placing them in the context of the time. If it goes a bit too far in lionising his subsequent career, that's understandable: a biographer shouldn't always succumb to balance when dealing with a hero.

 Permalink

Benson, Robert Hugh. Lord of the World. Seattle: CreateSpace, [1907] 2013. ISBN 978-1-4841-2706-3.
In the early years of the 21st century, humanism and secularism are ascendant in Europe. Many churches exist only as monuments to the past, and mainstream religions are hæmorrhaging adherents—only the Roman Catholic church remains moored to its traditions, and its influence is largely confined to Rome and Ireland. A European Parliament is asserting its power over formerly sovereign nations, and people seem resigned to losing their national identity. Old-age pensions and the extension of welfare benefits to those displaced from jobs in occupations which have become obsolete create a voting bloc guaranteed to support those who pay these benefits. The loss of belief in an eternal soul has cheapened human life, and euthanasia has become accepted, not only for the gravely ill and injured, but also for those just weary of life.

This novel was published in 1907.

G. K. Chesterton is reputed to have said “When Man ceases to worship God he does not worship nothing but worships everything.” I say “reputed” because there is no evidence whatsoever he actually said this, although he said a number of other things which might be conflated into a similar statement. This dystopian novel illustrates how a society which has “moved on” from God toward a celebration of Humanity as deity is vulnerable to a charismatic figure who bears the eschaton in his hands. It is simply stunning how the author, without any knowledge of the great convulsions which were to ensue in the 20th century, so precisely forecast the humanistic spiritual desert of the 21st.

This is a novel of the coming of the Antichrist and the battle between the remnant of believers and coercive secularism reinforced by an emerging pagan cult satisfying our human thirst for transcendence. What is masterful about it is that while religious themes deeply underlie it, if you simply ignore all of them, it is a thriller with deep philosophical roots. We live today in a time when religion is under unprecedented assault by humanism, and the threat to the sanctity of life has gone far beyond the imagination of the author.

This novel was written more than a century ago, but is set in our times and could not be more relevant to our present circumstances. How often has a work of dystopian science fiction been cited by the Supreme Pontiff of the Roman Catholic Church? Contemporary readers may find some of the untranslated citations from the Latin Mass obscure: that's what your search engine exists to illumine.

This work is in the public domain, and a number of print and electronic editions are available. I read this Kindle edition because it was (and is, at this writing) free. The formatting is less than perfect, but it is perfectly readable. A free electronic edition in a variety of formats can be downloaded from Project Gutenberg.

 Permalink

Cawdron, Peter. Children's Crusade. Seattle: Amazon Digital Services, 2014. ASIN B00JFHIMQI.
This novella, around 80 pages print equivalent and available only for the Kindle, is set in the world of Kurt Vonnegut's Slaughterhouse-Five. The publisher has licensed the rights for fiction using characters and circumstances created by Vonnegut, and this is a part of “The World of Kurt Vonnegut” series. If you haven't read Slaughterhouse-Five you will miss a great deal about this story.

Here we encounter Billy Pilgrim and Montana Wildhack in their alien zoo on Tralfamadore. Their zookeeper, a Tralfamadorian whom Montana nicknamed Stained, due to what looked like a birthmark on the face, has taken to visiting the humans when the zoo is closed, communicating with them telepathically as Tralfs do. Perceiving time as a true fourth dimension they can browse at will, Tralfs are fascinated with humans who, apart from Billy, live sequential lives and cannot jump around to explore events in their history.

Stained, like most Tralfs, believes that most momentous events in history are the work not of great leaders but of “little people” who accomplish great things when confronted with extraordinary circumstances. He (pronouns get complicated when there are five sexes, so I'll just pick one) sends Montana and Billy on telepathic journeys into human history, one at the dawn of human civilisation and another when a great civilisation veered into savagery, to show how a courageous individual with a sense of what is right can make all the difference. Finally they voyage together to a scene in human history which will bring tears to your eyes.

This narrative is artfully intercut with scenes of Vonnegut discovering the realities of life as a hard-boiled reporter at the City News Bureau of Chicago. This story is written in the spirit of Vonnegut and with some of the same stylistic flourishes, but I didn't get the sense the author went overboard in adopting Vonnegut's voice. The result worked superbly for this reader.

I read a pre-publication manuscript which the author kindly shared with me.

 Permalink

May 2014

Lewis, Michael. Flash Boys. New York: W. W. Norton, 2014. ISBN 978-0-393-24466-3.
Back in the bad old days before regulation of financial markets, one of the most common scams perpetrated by stockbrokers against their customers was “front running”. When a customer placed an order to buy a large block of stock, which order would be sufficient to move the market price of the stock higher, the broker would first place a smaller order to buy the same stock for its own account which would be filled without moving the market very much. Then the customer order would be placed, resulting in the market moving higher. The broker would then immediately sell the stock it had bought at the higher market price and pocket the difference. The profit on each individual transaction would be small, but if you add this up over all the volume of a broker's trades it is substantial. (For a sell order, the broker simply inverts the sense of the transactions.) Front running amounts to picking the customer's pocket to line that of the broker: had the customer's order not been front run, it would have executed at a better price. Consequently, front running has long been illegal, and market regulators look closely at transaction histories to detect evidence of such criminality.

In the first decade of the 21st century, traders in the U.S. stock market discovered the market was behaving in a distinctly odd fashion. They had been used to seeing the bids (offers to buy) and asks (offers to sell) on their terminals and were accustomed to placing an order and seeing it hit by the offers in the market. But now, when they placed an order, the offers on the other side of the trade would instantly evaporate, only to come back at a price adverse to them. Many people running hundreds of billions of dollars in hedge, mutual, and pension funds had no idea what was going on, but they were certain the markets were rigged against them. Brad Katsuyama, working at the Royal Bank of Canada's Wall Street office, decided to get to the bottom of the mystery, and eventually discovered the financial equivalent of what you see when you lift up a sheet of wet cardboard in your yard. Due to regulations intended to make financial markets more efficient and fair, the monolithic stock exchanges in the U.S. had fractured into dozens of computer-mediated exchanges which traded the same securities. A broker seeking to buy stock on behalf of a customer could route the order to any of these exchanges based upon its own proprietary algorithm, or might match the order with that of another customer within its own “dark pool”, whence the transaction was completely opaque to the outside market.

But there were other players involved. Often co-located in or near the buildings housing the exchanges (most of which are in New Jersey, which has such a sterling reputation for probity) were the servers of “high frequency traders” (HFTs), who placed and cancelled orders in times measured in microseconds. What the HFTs were doing was, in a nutshell, front running. Here's how it works: the HFT places orders of a minimum size (typically 100 shares) for a large number of frequently traded stocks on numerous exchanges. When one of these orders is hit, the HFT immediately blasts in orders to other exchanges, which have not yet reacted to the buy order, and acquires sufficient shares to fill the original order before the price moves higher. This will, in turn, move the market higher and once it does, the original buy order is filled at the higher price. The HFT pockets the difference. A millisecond in advance can, and does, turn into billions of dollars of profit looted from investors. And all of this is not only completely legal, many of the exchanges bend over backward to attract and support HFTs in return for the fees they pay, creating bizarre kinds of orders whose only purpose is to facilitate HFT strategies.
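
To put rough numbers on the mechanism (the order size, price impact, and interception fraction below are my own illustrative assumptions, not figures from the book), the skim per order is simply the shares intercepted times the price impact:

```python
# Illustrative arithmetic only: all figures here are made-up
# assumptions, not data from the book.

def hft_front_run_profit(order_shares, price_impact, fill_fraction):
    """Profit an HFT skims by buying ahead of a large order.

    The HFT detects the order via a small fill on one exchange, buys
    fill_fraction of the remaining shares on slower exchanges at the
    old price, then sells them once the order pushes the market up
    by price_impact dollars per share."""
    return order_shares * fill_fraction * price_impact

# A 100,000-share buy order, a one-cent price impact, and the HFT
# intercepting 80% of the shares:
profit = hft_front_run_profit(100_000, 0.01, 0.80)
print(f"${profit:,.2f} per order")  # prints $800.00 per order
```

A few hundred dollars per order sounds modest, but multiplied over the millions of institutional orders routed every day it becomes the billions the book describes.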

As Brad investigated the secretive world of HFTs, he discovered the curious subculture of Russian programmers who, having spent part of their lives learning how to game the Soviet system, took naturally to discovering how to game the much more lucrative world of Wall Street. Finally, he decided there is a business opportunity in creating an exchange which distinguishes itself from the others by not being crooked. This exchange, IEX, (it was originally to be called “Investors Exchange”, but the founders realised that the obvious Internet domain name, investorsexchange.com, could be infelicitously parsed into three words as well as two), would include technological constraints (including 38 miles of fibre optic cable in a box to create latency between the point of presence where traders could attach and the servers which matched bids and asks) which rendered the strategies of the HFTs impotent and obsolete.
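
A quick back-of-envelope calculation (my own arithmetic; the refractive index of silica fibre is an assumed textbook value, not a figure from the book) shows why a 38-mile coil of fibre is enough to neutralise the HFTs:

```python
# Back-of-envelope check on the latency added by IEX's coiled fibre.
# The refractive index of silica fibre (~1.47) is a textbook value,
# not a figure from the book.

C = 299_792_458           # speed of light in vacuum, m/s
FIBRE_INDEX = 1.47        # light in fibre travels at roughly C / FIBRE_INDEX
MILES_TO_METRES = 1609.344

fibre_length = 38 * MILES_TO_METRES            # about 61,155 m
delay = fibre_length * FIBRE_INDEX / C         # one-way delay, seconds
print(f"{delay * 1e6:.0f} microseconds")       # prints 300 microseconds
```

A few hundred microseconds is an eternity on the timescales at which HFTs operate, so by the time an order's fill on IEX becomes visible, the matching engine has already acted.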

Was it conceivable one could be successful on Wall Street by being honest? Perhaps one had to be a Canadian to entertain such a notion, but in the event, it was. But it wasn't easy. IEX rapidly discovered that Wall Street firms, given orders by customers to be executed on IEX, sent them elsewhere to venues more profitable to the broker. Confidentiality rules prohibited IEX from identifying the miscreants, but nothing prevented them, with the brokers' permission, from identifying those who weren't crooked. This worked quite well.

I'm usually pretty difficult to shock when it comes to the underside of the financial system. For decades, my working assumption has been that anything, until proven otherwise, is a scam aimed at picking the pockets of customers, and sadly I have found this presumption correct in a large majority of cases. Still, this book was startling. It's amazing the creepy crawlers you see when you lift up that piece of cardboard, and to anybody with an engineering background the rickety structure and fantastic instability of what are supposed to be the capital markets of the world's leading economy is nothing less than shocking. It is no wonder such a system is prone to “flash crashes” and other excursions. An operating system designer who built such a system would be considered guilty of malfeasance (unless, I suppose, he worked for Microsoft, in which case he'd be a candidate for employee of the year), and yet it is tolerated at the heart of a financial system which, if it collapses, can bring down the world's economy.

Now, one can argue that it isn't such a big thing if somebody shaves a penny or two off the price of a stock you buy or sell. If you're a medium- or long-term investor, that'll make little difference in the results. But what will make your blood boil is that the stock broker with whom you're doing business may be complicit in this, and pocketing part of the take. Many people in the real world look at Wall Street and conclude “The markets are rigged; the banks and brokers are crooked; and the system is stacked against the investor.” As this book demonstrates, they are, for the most part, absolutely right.

 Permalink

Howe, Steven D. Honor Bound Honor Born. Seattle: Amazon Digital Services, 2011. ASIN B005JPZ4LQ.
During the author's twenty-year career at the Los Alamos National Laboratory, he worked on a variety of technologies including nuclear propulsion and applications of nuclear power to space exploration and development. Since the 1980s he has been an advocate of a “power rich” approach to space missions, in particular lunar and Mars bases.

Most NASA design studies for bases have assumed that almost all of the mass required to establish the base and supply its crew must be brought from the Earth, and that electricity will be provided by solar panels or radiothermal generators which provide only limited amounts of power. (On the Moon, where days and nights are two weeks long, solar power is particularly problematic.) Howe explored how the economics of establishing a base would change if it had a compact nuclear fission reactor which could produce more electrical and thermal power (say, 200 kilowatts electrical) than the base required. This would allow the resources of the local environment to be exploited through a variety of industrial processes: “in-situ resource utilisation” (ISRU), which is just space jargon for living off the land.

For example, the Moon's crust is about 40% oxygen, 20% silicon, 12% iron, and 8% aluminium. With abundant power, this regolith can be melted and processed to extract these elements and recombine them into useful materials for the base: oxygen to breathe, iron for structural elements, glass (silicon plus oxygen) for windows and greenhouses, and so on. With the addition of nutrients and trace elements brought from Earth, lunar regolith can be used to grow crops and, with composting of waste, many of these nutrients can be recycled. Note that none of this assumes discovery of water ice in perpetually shaded craters at the lunar poles: this can be done anywhere on the Moon. If water is present at the poles, the need to import hydrogen will be eliminated.
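
Taking those percentages at face value (a rough illustration of the resource base, not a process design), a single tonne of regolith contains:

```python
# Mass of useful elements in one tonne of lunar regolith, using the
# approximate crustal percentages quoted above.

composition = {"oxygen": 0.40, "silicon": 0.20, "iron": 0.12, "aluminium": 0.08}
regolith_kg = 1000  # one metric tonne

for element, fraction in composition.items():
    print(f"{element}: {fraction * regolith_kg:.0f} kg")
# oxygen: 400 kg, silicon: 200 kg, iron: 120 kg, aluminium: 80 kg
```

Four hundred kilogrammes of oxygen per tonne of dirt, anywhere on the Moon, is what makes the power-rich approach so compelling: every kilogramme liberated on site is a kilogramme which needn't be launched from Earth.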

ISRU is a complete game-changer. If Conestoga wagons had to set out from the east coast of North America along the Oregon Trail carrying everything they needed for the entire journey, the trip would have been impossible. But the emigrants knew they could collect water, hunt game to eat, gather edible plants, and cut wood to make repairs, and so they only needed to take those items with them which weren't available along the way. So it can be on the Moon, and to an even greater extent on Mars. It's just that to liberate those necessities of life from the dead surface of those bodies requires lots of energy—but we know how to do that.

Now, the author could have written a dry monograph about lunar ISRU to add to the list of technical papers he has already published on the topic, but instead he made it the centrepiece of this science fiction novel, set in the near future, in which Selena Corp mounts a private mission to the Moon, funded on a shoestring, to land Hawk Stanton on the lunar surface with a nuclear reactor and what he needs to bootstrap a lunar base which will support him until he is relieved by the next mission, which will bring more settlers to expand the base. Using fiction as a vehicle to illustrate a mission concept isn't new: Wernher von Braun's original draft (never published) of The Mars Project was also a novel based upon his mission design (when the book by that name was finally published in 1953, it contained only the technical appendix to the novel).

What is different is that while by all accounts of those who have read it, von Braun's novel definitively established that he made the right career choice when he became an engineer rather than a fictioneer, Steven Howe's talents encompass both endeavours. While rich in technical detail (including an appendix which cites research papers regarding technologies used in the novel), this is a gripping page-turner with fleshed-out and complex characters, suspense, plot twists, and a back story of how coercive government reacts when something in which it has had no interest for decades suddenly seems ready to slip through its edacious claws. Hawk is alone and a long way from home, so that any injury or illness is a potential threat to his life and to the mission. The psychology of living and working in such an environment plays a part in the story. And they may not be the greatest threats he faces.

This is an excellent story, which can be read purely as a thriller, an exploration of the potential of lunar ISRU, or both. In an afterword the author says, “Someday, someone will do the missions I have described in this book. I suspect, however, they will not be Americans.” I'm not sure—they may be Americans, but they certainly won't work for NASA. The cover illustration is brilliant.

This book was originally published in 1997 in a paperback edition by Lunatech Press. That edition is now out of print and used copies are scarce and expensive. At this writing, the Kindle edition is just US$ 1.99.

 Permalink

Murray, Charles. The Curmudgeon's Guide to Getting Ahead. New York: Crown Business, 2014. ISBN 978-0-8041-4144-4.
Who, after reaching middle age and having learned, through the tedious but persuasive process of trial and error, what works and what doesn't, how to decide who is worthy of trust, and to distinguish passing fads from enduring values, hasn't dreamed of having a conversation with their twenty-year-old self, downloading this painfully acquired wisdom to give their younger self a leg up on the slippery, knife-edged rungs of the ladder of life?

This slim book (144 pages) is a concentrated dose of wisdom applicable to young people entering the job market today. Those of my generation and the author's (he is a few years my senior) often worked at summer jobs during high school and part-time jobs while at university. This provided an introduction to the workplace, with its different social interactions than school or family life (in the business world, don't expect to be thanked for doing your job). Today's graduates entering the workforce often have no experience whatsoever in that environment and are bewildered because the incentives are so different from anything they've experienced before. They may have been a star student, but now they find themselves doing tedious work with little intellectual content, under strict deadlines, reporting to superiors who treat them as replaceable minions, not colleagues. Welcome to the real world.

This is an intensely practical book. Based upon a series of postings the author made on an internal site for interns and entry-level personnel at the American Enterprise Institute, he gives guidelines on writing, speaking, manners, appearance, and life strategy. As the author notes (p. 16), “Lots of the senior people who can help or hinder your career are closeted curmudgeons like me, including executives in their forties who have every appearance of being open minded and cool.” Even if you do not wish to become a curmudgeon yourself as you age (good luck with that, dude or dudette!), your advancement in your career will depend upon the approbation of those people you will become if you are fortunate enough to one day advance to their positions.

As a curmudgeon myself (hey, I hadn't yet turned forty when I found myself wandering the corridors of the company I'd founded and silently asking myself, “Who hired that?”), I found nothing in this book with which I disagree, and my only regret is that I couldn't have read it when I was 20. He warns millennials, “You're approaching adulthood with the elastic limit of a Baccarat champagne flute” (p. 96) and counsels them to spend some of those years when their plasticity is greatest and the penalty for errors is minimal in stretching themselves beyond their comfort zone, preparing for the challenges and adversity which will no doubt come later in life. Doug Casey has said that he could parachute naked into a country in sub-Saharan Africa and within one week be in the ruler's office pitching a development scheme. That's rather more extreme than what Murray is advocating, but why not go large? Geronimo!

Throughout, Murray argues that what are often disdained as clichés are simply the accumulated wisdom of hundreds of generations of massively parallel trial and error search of the space of solutions of human problems, and that we ignore them at our peril. This is the essence of conservatism—valuing the wisdom of the past. But that does not mean one should be a conservative in the sense of believing that the past provides a unique template for the future. Those who came before did not have the computational power we have, nor the ability to communicate data worldwide almost instantaneously and nearly for free, nor the capacity, given the will, to migrate from Earth and make our species multi-planetary, nor to fix the aging bug and live forever. These innovations will fundamentally change human and post-human society, and yet I believe those who create them, and those who prosper in those new worlds, will be exemplars of the timeless virtues which Murray describes here.

And when you get a tattoo or piercing, consider how it will look when you're seventy.

 Permalink

Sheldrake, Rupert. Science Set Free. New York: Random House, 2011. ISBN 978-0-7704-3672-8.
In this book, the author argues that science, as it is practiced today, has become prisoner to a collection of dogmas which constrain what should be free inquiry into the phenomena it investigates. These dogmas are not the principal theories of modern science such as the standard models of particle physics and cosmology, quantum mechanics, general relativity, or evolution (scientists work on a broad front to falsify these theories, knowing that any evidence to the contrary will win a ticket to Stockholm), but rather higher-level beliefs, often with remarkably little experimental foundation, which few people are working to test. It isn't so much that questioning these dogmas will result in excommunication from science, but rather that few working scientists ever think seriously about whether they might be wrong.

Suppose an astrophysicist in the 1960s started raving that everything we could see through our telescopes or had experimented with in our laboratories made up less than 5% of the mass of the universe, that around 27% was invisible matter whose composition we knew nothing about at all, and that the remaining 68% was invisible energy which was causing the expansion of the universe to accelerate, defying the universal attraction of gravity. Now, this theorist might not be dragged off in a straitjacket, but he would probably find it very difficult to publish his papers in respectable journals and, if he espoused these notions before obtaining tenure, might find them career-limiting. And yet, this is precisely what most present-day cosmologists consider the “standard model”, and it has been supported by experiments to a high degree of precision.

But even this revolution in our view of the universe and our place within it (95% of everything in the universe is unobserved and unknown!) does not challenge the most fundamental dogmas, ten of which are discussed in this book.

1. Is nature mechanical?

Are there self-organising principles of systems which explain the appearance of order and complexity from simpler systems? Do these same principles apply at levels ranging from formation of superclusters of galaxies to the origin of life and its evolution into ever more complex beings? Is the universe better modelled as a mechanism or an organism?

2. Is the total amount of matter and energy always the same?

Conservation of energy is taken almost as an axiom in physics but is now rarely tested. And what about that dark energy? Most cosmologists now believe that it increases without bound as the universe expands. Where does it come from? If we could somehow convert it to useful energy, what does this do to the conservation of energy?

3. Are the laws of nature fixed?

If these laws be fixed, where did they come from? Why do the “fundamental constants” have the values they do? Are they, in fact, constants? These constants have varied in published handbooks over the last 50 years by amounts far greater than the error bars published in those handbooks—why? Are the laws simply habits established by the universe as it is tested? Is this why novel experiments produce results all over the map at the start and then settle down on a stable value as they are repeated? Why do crystallographers find it so difficult to initially crystallise a new compound but then find it increasingly easy thereafter?

4. Is matter unconscious?

If you are conscious, and you believe your brain to be purely a material system, then how can matter be unconscious? Is there something apart from the brain in which consciousness is embodied? If so, what is it? If the matter of your brain is conscious, what other matter could be conscious? The Sun is much larger than your brain and pulses with electromagnetic signals. Is it conscious? What does the Sun think about?

5. Is nature purposeless?

Is it plausible that the universe is the product of randomness devoid of purpose? How did a glowing plasma of subatomic particles organise itself into galaxies, solar systems, planets, life, and eventually scientists who would ask how it all came to be? Why does complexity appear to inexorably increase in systems through which energy flows? Why do patterns assert themselves in nature and persist even in the presence of disruptions? Are there limits to reductionism? Is more different?

6. Is all biological inheritance material?

The softer the science, the harder the dogma. Many physical scientists may take the previous questions as legitimate, albeit eccentric, questions amenable to research, but to question part of the dogma of biology is to whack the wasp nest with the mashie niblick. Our astounding success in sequencing the genomes of numerous organisms and understanding how these genomes are translated (including gene regulation) into the proteins which are assembled into those organisms has been enlightening but has explained much less than many enthusiasts expected. Is there something more going on? Is that “junk DNA” really junk, or is it significant? Is genetic transfer between parents and offspring the only means of information transfer?

7. Are memories stored as material traces?

Try to find a neuroscientist who takes seriously the idea that memories are not encoded somehow in the connections and weights of synapses within the brain. And yet, for half a century, every attempt to determine precisely how and where memories are stored has failed. Could there be something more going on? Recent experiments have indicated that Carolina Sphinx moths (Manduca sexta) remember aversions which they have learned as caterpillars, despite their nervous system being mostly dissolved and reconstituted during metamorphosis. How does this work?

8. Are minds confined to brains?

Somewhere between 70 and 97% of people surveyed in Europe and North America report having experienced the sense of being stared at or of having another person they were staring at from behind react to their stare. In experimental tests, involving tens of thousands of trials, some performed over closed circuit television without a direct visual link, 55% of people could detect when they were being stared at, while 50% would be expected by chance. Although the effect size was small, with the number of trials the result was highly significant.
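A quick back-of-the-envelope sketch of why so small an effect size can still be highly significant (the trial count below is a hypothetical figure for illustration, not one from the book): under the normal approximation to the binomial,

```python
import math

# Hypothetical illustration: a 55% hit rate where chance predicts 50%.
n = 10_000                 # assumed number of staring-detection trials
hits = int(0.55 * n)       # observed correct detections
p0 = 0.5                   # chance expectation

# Normal approximation to the binomial distribution:
# z = (observed - expected) / standard deviation
z = (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))
print(f"z = {z:.1f}")      # z = 10.0, far beyond any conventional threshold
```

With ten thousand trials, a five-point deviation from chance lies ten standard deviations out, which is why large aggregated samples can yield minuscule p-values despite a modest effect.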

9. Are psychic phenomena illusory?

More than a century of psychical research has produced ever-better controlled experiments which have converged upon results whose significance, while the effect size is small, exceeds that which has led to the approval or rejection of pharmaceuticals in clinical drug trials. Should we reject this evidence because we can't figure out the mechanism by which it works?

10. Is mechanistic medicine the only kind that really works?

We are the descendants of billions of generations of organisms who survived and reproduced before the advent of doctors. Evidently, we have been well-equipped by the ruthless process of evolution to heal ourselves, at least until we've reproduced and raised our offspring. Understanding of the causes of communicable diseases, public health measures, hygiene in hospitals, and surgical and pharmaceutical interventions have dramatically lengthened our lifespans and increased the years in which we are healthy and active. But does this explain everything? Since 2009 in the United States, response to placebos has been increasing: why? Why do we spend more and more on interventions for the gravely ill and little or nothing on research into complementary therapies which have been shown, in the few formal clinical tests performed, to reduce the incidence of these diseases?

This is a challenging book which asks many more questions than the few I've summarised above and provides extensive information, including citations to original sources, on research which challenges these dogmas. The author is not advocating abolishing our current enterprise of scientific investigation. Instead, he suggests, we might allocate a small fraction of the budget (say, between 1% and 5%) to look at wild-card alternatives. Allowing these to be chosen by the public from a list of proposals through a mechanism like crowd-funding Web sites would raise the public profile of science and engage the public (who are, after all, footing the bill) in the endeavour. (Note that “mainstream” research projects, for example extending the mission of a spacecraft, would be welcome to compete.)

 Permalink

Johnson, George. Miss Leavitt's Stars. New York: W. W. Norton, 2005. ISBN 978-0-393-32856-1.
Henrietta Swan Leavitt was a computer. No, this is not a tale of artificial intelligence, but rather of the key discovery which allowed astronomers to grasp the immensity of the universe. In the late 19th century it became increasingly common for daughters of modestly prosperous families to attend college. Henrietta Leavitt's father was a Congregational church minister in Ohio whose income allowed him to send his daughter to Oberlin College in 1885. In 1888 she transferred to the Society for the Collegiate Instruction of Women (later Radcliffe College) in Cambridge, Massachusetts, where she earned a bachelor's degree in 1892. In her senior year, she took a course in astronomy which sparked a lifetime fascination with the stars. After graduation, she remained in Cambridge and the next year was volunteering at the Harvard College Observatory and was later put on salary.

The director of the observatory, Edward Pickering, realised that while at the time it was considered inappropriate for women to sit up all night operating a telescope, much of the work of astronomy consisted of tedious tasks such as measuring the position and brightness of stars on photographic plates, compiling catalogues, and performing analyses based upon their data. He saw that there was a pool of college-educated women (especially in the Boston area) who were unlikely to find work as scientists but who were perfectly capable of doing this office work so essential to the progress of astronomy. Further, they would work for a fraction of the salary of a professional astronomer and Pickering, a shrewd administrator as well as a scientist, reasoned he could boost the output of his observatory by a substantial factor within the available budget. So it was that Leavitt was hired to work full-time at the observatory with a job title of “computer” and a salary of US$ 0.25 per hour (she later got a raise to US$ 0.30, which is comparable to the U.S. federal minimum wage in 2013).

There was no shortage of work for Leavitt and her fellow computers (nicknamed “Pickering's Harem”) to do. The major project underway at the observatory was the creation of a catalogue of the position, magnitude, and colour of all stars visible from the northern hemisphere to the limiting magnitude of the telescope available. This was done by exposing glass photographic plates in long time exposures while keeping the telescope precisely aimed at a given patch of the sky (although telescopes of the era had “clock drives” which approximately tracked the apparent motion of the sky, imprecision in the mechanism required a human observer [all men!] to track a guide star through an eyepiece during the long exposure and manually keep the star centred on the crosshairs with fine adjustment controls). Since each plate covered only a small fraction of the sky, the work of surveying the entire hemisphere was long, tedious, and often frustrating, as a cloud might drift across the field of view and ruin the exposure.

But if the work at the telescope was seemingly endless, analysing the plates it produced was far more arduous. Each plate would contain images of thousands of stars, the position and brightness (inferred from the size of the star's image on the plate) of which had to be measured and recorded. Further, plates taken through different colour filters had to be compared, with the difference in brightness used to estimate each star's colour and hence temperature. And if that weren't enough, plates taken of the same field at different times were compared to discover stars whose brightness varied from one time to another.

There are two kinds of these variable stars. The first consists of multiple star systems where one star periodically eclipses another, with the simplest case being an “eclipsing binary”: two stars which eclipse one another. Intrinsic variable stars are individual stars whose brightness varies over time, often accompanied by a change in the star's colour. Both kinds of variable stars were important to astronomers, with intrinsic variables offering clues to astrophysics and the evolution of stars.

Leavitt was called a “variable star ‘fiend’ ” by a Princeton astronomer in a letter to Pickering, commenting on the flood of discoveries she published in the Harvard Observatory's journals. For the ambitious Pickering, one hemisphere did not suffice. He arranged for an observatory to be established in Arequipa, Peru, which would allow stars visible only from the southern hemisphere to be observed and catalogued. A 24 inch telescope and its accessories were shipped around Cape Horn from Boston, and before long the southern sky was being photographed, with the plates sent to Harvard for measurement and cataloguing. When the plates arrived at Harvard, it was the computers, not the astronomers, who scrutinised them to see what had been discovered.

Now, star catalogues of the kind Pickering was preparing, however useful they were to astronomers, were essentially two-dimensional. They gave the position of a star on the sky, but no information about how distant it is from the solar system. Indeed, only the distances of a few dozen of the very closest stars had been measured by the end of the 19th century by stellar parallax; for all the rest, their distances were a complete mystery, and consequently the scale of the visible universe was utterly unknown. Because the intrinsic brightness of stars varies over an enormous range (some stars are a million times more luminous than the Sun, which is itself ten thousand times brighter than some dwarf stars), a star of a given magnitude (brightness as observed from Earth) may either be a nearby star of modest brightness or a brilliant supergiant star far away.

One of the first intrinsic variable stars to be studied in depth was Delta Cephei, found to be variable in 1784. It is the prototype Cepheid variable, many more of which were discovered by Leavitt. Cepheids are old, massive stars, which have burnt up most of their hydrogen fuel and vary with a characteristic sawtooth-shaped light curve with periods ranging from days to months. In Leavitt's time the mechanism for this variability was unknown, but it is now understood to be due to oscillations in the star's radius as the ionisation state of helium in the star's outer layer cycles between opaque and transparent states, repeatedly trapping the star's energy and causing it to expand, then releasing it, making the star contract.

When examining the plates from the telescope in Peru, Leavitt was fascinated by the Magellanic clouds, which look like little bits of the Milky Way which broke off and migrated to distant parts of the sky (we now know them to be dwarf galaxies which may be in orbit around the Milky Way). By assiduous searches on multiple plates showing the clouds, she eventually published in 1908 a list of 1,777 variable stars she had discovered in them. While astronomers did not know the exact nature of the Magellanic clouds, they were confident of two things: they were very distant (since stars within them of spectral types which are inherently bright were much dimmer than those seen elsewhere in the sky), and all of the stars in them were about the same distance from the solar system, since it was evident the clouds must be gravitationally bound to persist over time.

Leavitt's 1908 paper contained one of the greatest understatements in all of the scientific literature: “It is worthy of notice that the brightest variables have the longer periods.” She had discovered a measuring stick for the universe. In examining Cepheids among the variables in her list, she observed that there was a simple linear relationship between the period of pulsation and how bright the star appeared. But since all of the Cepheids in the clouds must be at about the same distance, that meant their absolute brightness could be determined from their periods. This made the Cepheids “standard candles” which could be used to chart the galaxy and beyond. Since they are so bright, they could be observed at great distances.

To take a simple case, suppose you observe a Cepheid in a star cluster, and another in a different part of the sky. The two have about the same period of oscillation, but the one in the cluster has one quarter the brightness at Earth of the other. Since the periods are the same, you know the inherent luminosities of the two stars are alike, so according to the inverse-square law the cluster must be twice as distant as the other star. If the Cepheids have different periods, the relationship Leavitt discovered can be used to compute the relative difference in their luminosity, again allowing their distances to be compared.
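The equal-period case above can be sketched numerically (the figures are those from the example; the inverse-square law does all the work):

```python
import math

# Two Cepheids with the same pulsation period, hence (by Leavitt's
# relation) the same intrinsic luminosity L.  Apparent brightness
# (flux) falls off as L / d**2, so equal L lets us compare distances.
flux_cluster_over_field = 0.25   # cluster Cepheid appears 1/4 as bright

# flux_cluster / flux_field = (d_field / d_cluster)**2  for equal L,
# so d_cluster / d_field = sqrt(flux_field / flux_cluster)
distance_ratio = math.sqrt(1.0 / flux_cluster_over_field)
print(distance_ratio)            # 2.0 — the cluster star is twice as far
```

The same square-root relation holds in the unequal-period case once Leavitt's period-luminosity relation is used to convert the period difference into a luminosity ratio.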

This method provides a relative distance scale to as far as you can identify and measure the periods of Cepheids, but it does not give their absolute distances. However, if you can measure the distance to any single Cepheid by other means, you can now compute the absolute distance to all of them. Not without controversy, this was accomplished, and for the first time astronomers beheld just how enormous the galaxy was, that the solar system was far from its centre, and that the mysterious “spiral nebulæ” many had argued were clouds of gas or solar systems in formation were entire other galaxies among a myriad in a universe of breathtaking size. This was the work of others, but all of it was founded on Leavitt's discovery.

Henrietta Leavitt would not live to see all of these consequences of her work. She died of cancer in 1921 at the age of 53, while the debate was still raging over whether the Milky Way was the entire universe or just one of a vast number of “island universes”. Both sides in this controversy based their arguments in large part upon her work.

She was paid just ten cents more per hour than a cotton mill worker, was never given the title “astronomer”, and never made an observation with a telescope, and yet, working endless hours at her desk, she made one of the most profound discoveries of 20th century astronomy, one which is still being refined by precision measurements from the Earth and space today. While the public hardly ever heard her name, she published her work in professional journals and eminent astronomers were well aware of its significance and her part in creating it. A 66 kilometre crater on the Moon bears her name (the one named after that Armstrong fellow is just 4.6 km, albeit on the near side).

This short book is only in part a biography of Leavitt. Apart from her work, she left few traces of her life. It is as much a story of how astronomy was done in her days and how she and others made the giant leap in establishing what we now call the cosmic distance ladder. This was a complicated process, with many missteps and controversies along the way, which are well described here.

In the Kindle edition (as viewed on the iPad) the quotations at the start of each chapter are mis-formatted so each character appears on its own line. The index contains references to page numbers in the print edition and is useless because the Kindle edition contains no page numbers.

 Permalink

June 2014

Coppley, Jackson. Tales From Our Near Future. Seattle: CreateSpace, 2014. ISBN 978-1-4961-2851-5.
I am increasingly convinced that the 2020s will be a very interesting decade. As computing power continues its inexorable exponential growth (and there is no reason to believe this growth will abate, except in the aftermath of economic and/or societal collapse), more and more things which seemed absurd just a few years before will become commonplace—consider self-driving cars. This slim book (142 pages in the print edition) collects three unrelated stories set in this era. In each, the author envisions a “soft take-off” scenario rather than the sudden onset of a technological singularity which rapidly renders the world incomprehensible.

These are all “puzzle stories” in the tradition of Isaac Asimov's early short stories. You'll enjoy them best if you just immerse yourself in the world the characters inhabit, get to know them, and then discover what is really going on, which may not be at all what it appears on the surface. By the nature of puzzle stories, almost anything I say about them would be a spoiler, so I'll refrain from getting into details other than asking, “What would it be like to know everything?”, which is the premise of the first story, stated on its first page.

Two of the three stories contain explicit sexual scenes and are not suitable for younger readers. This book was recommended (scroll down a few paragraphs) by Jerry Pournelle.

 Permalink

Geraghty, Jim. The Weed Agency. New York: Crown Forum, 2014. ISBN 978-0-7704-3652-0.
During the Carter administration, the peanut farmer turned president, a man very well acquainted with weeds, created the Agency of Invasive Species (AIS) within the Department of Agriculture to cope with the menace. Well, not really—the agency which occupies centre stage in this farce is fictional but, as the author notes in the preface, the Federal Interagency Committee for the Management of Noxious and Exotic Weeds, the Aquatic Nuisance Species Task Force, the Federal Interagency Committee on Invasive Terrestrial Animals and Pathogens, and the National Invasive Species Council of which they are members, along with a list of other agencies, all do exist. So while it may seem amusing that a bankrupt and over-extended government would have an agency devoted to weeds, in fact the real government has an entire portfolio of such agencies, along with, naturally, a council to co-ordinate their activities.

The AIS has a politically appointed director, but the agency had been run since inception by Administrative Director Adam Humphrey, career civil service, who is training his deputy, Jack Wilkins, new to the civil service after a frustrating low-level post in the Carter White House, in the ways of the permanent bureaucracy and how to deal with political appointees, members of congress, and rival agencies. Humphrey has an instinct for how to position the agency's mission as political winds shift over the decades: during the Reagan years as American agriculture's first line of defence against the threat of devastation by Soviet weeds, at the cutting edge of information technology revolutionising citizens' interaction with government in the Gingrich era, and essential to avert even more disastrous attacks on the nation after the terrorist attacks in 2001.

Humphrey and Wilkins are masters of the care and feeding of congressional allies, who are rewarded with agency facilities in their districts, and neutralising the occasional idealistic budget cutter who wishes to limit the growth of the agency's budget or, horror of horrors, abolish it.

We also see the agency through the eyes of three young women who arrived at the agency in 1993 suffused with optimism for “reinventing government” and “building a bridge to the twenty-first century”. While all of them—Lisa, hired in the communications office; Jamie, an event co-ordinator; and Ava, a technology systems analyst—were well aware that their positions in the federal bureaucracy were deep in the weeds, they believed they had the energy and ambition to excel and rise to positions where they would have the power to effect change for the better.

Then they began to actually work within the structure of the agency and realise what the civil service actually was. Thomas Sowell has remarked that the experience in his life which transformed him from being a leftist (actually, a Marxist) to a champion of free markets and individual liberty was working as a summer intern in 1960 in a federal agency. He says that after experiencing the civil service first-hand, he realised that whatever were the problems of society that concerned him, government bureaucracy was not the solution. Lisa, Jamie, and Ava all have similar experiences, and react in different ways. Ava decides she just can't take it any more and is tempted by a job in the middle of the dot com boom. Her experience is both entertaining and enlightening.

Even the most obscure federal agency has the power to mess up on a colossal scale and wind up on the front page of the Washington Post and the focus of a congressional inquest. So it was to be for the AIS, when an ill wind brought a threat to agriculture in the highly-visible districts of powerful members of congress. All the bureaucratic and political wiles of the agency had to be summoned to counter the threat and allow the agency to continue to do what such organisations do best: nothing.

Jim Geraghty is a veteran reporter, contributing editor, and blogger at National Review; his work has appeared in a long list of other publications. His reportage has always been characterised by a dry wit, but for a first foray into satire and farce, this is a masterful accomplishment. It is as funny as some of the best work of Christopher Buckley, and that's about as good as contemporary political humour gets. Geraghty's plot is not as zany as most of Buckley's, but it is more grounded in the political reality of Washington. One of the most effective devices in the book is to describe this or that absurdity and then add a footnote documenting that what you've just read actually exists, or that an outrageous statement uttered by a character was said on the record by a politician or bureaucrat.

Much of this novel reads like an American version of the British sitcom Yes Minister (Margaret Thatcher's favourite television programme), and although the author doesn't mention it in the author's note or acknowledgements, I suspect that the master civil servant's being named “Humphrey” is an homage to that series. Sharp-eyed readers will discover another oblique reference to Yes Minister in the entry for November 2012 in the final chapter.

 Permalink

Rickards, James. The Death of Money. New York: Portfolio / Penguin, 2014. ISBN 978-1-59184-670-3.
In his 2011 book Currency Wars (November 2011), the author discusses what he sees as an inevitable conflict among fiat currencies for dominance in international trade as the dollar, debased as a result of profligate spending and assumption of debt by the government that issues it, is displaced as the world's preeminent trading and reserve currency. With all currencies backed by nothing more than promises made by those who issue them, the stage is set for a race to the bottom: one government weakens its currency to obtain short-term advantage in international trade, only to have its competitors devalue, setting off a chain of competitive devaluations which disrupt trade, cause investment to be deferred due to uncertainty, and destroy the savings of those holding the currencies in question. In 2011, Rickards wrote that it was still possible to avert an era of currency war, although that was not the way to bet. In this volume, three years later, he surveys the scene and concludes that we are now in the early stages of a collapse of the global monetary system, which will be replaced by something very different from the status quo, but whose details we cannot, at this time, confidently predict. Investors and companies involved in international commerce need to understand what is happening and take steps to protect themselves in the era of turbulence which is ahead.

We often speak of “globalisation” as if it were something new, emerging only in recent years, but in fact it is an ongoing trend which dates from the age of wooden ships and sail. Once ocean commerce became practical in the 18th century, comparative advantage caused production and processing of goods to be concentrated in locations where they could be done most efficiently, linked by the sea lanes. This commerce was enormously facilitated by a global currency—if trading partners all used their own currencies, a plantation owner in the West Indies shipping sugar to Great Britain might see his profit wiped out if the exchange rate between his currency and the British pound changed by the time the ship arrived and he was paid. From the dawn of global trade to the present there has been a global currency. Initially, it was the British pound, backed by gold in the vaults of the Bank of England. Even commerce between, say, Argentina and Italy, was usually denominated in pounds and cleared through banks in London. The impoverishment of Britain in World War I began a shift of the centre of financial power from London to New York, and after World War II the Bretton Woods conference established the U.S. dollar, backed by gold, as the world's reserve and trade currency. The world continued to have a global currency, but now it was issued in Washington, not London. (The communist bloc did not use dollars for trade within itself, but conducted its trade with nations outside the bloc in dollars.) In 1971, the U.S. suspended the convertibility of the dollar to gold, and ever since the dollar has been entirely a fiat currency, backed only by the confidence of those who hold it that they will be able to exchange it for goods in the future.

The international monetary system is now in a most unusual period. The dollar remains the nominal reserve and trade currency, but the fraction of reserves held and trade conducted in dollars continues to fall. All of the major currencies (the dollar, euro, yen, pound, yuan, and rouble) are pure fiat currencies, unbacked by any tangible asset and valued only against one another in ever-shifting foreign exchange markets. Most of these currencies are issued by central banks of governments which have taken on vast amounts of debt which nobody in their right mind believes can ever be paid off, and which is approaching levels at which even a modest rise in interest rates to historical mean levels would make the interest on the debt impossible to service. There is every reason for countries holding large reserves of dollars to be worried, but there isn't any other currency which looks substantially better as an alternative. The dollar is, essentially, the best horse in the glue factory.

The author argues that we are on the threshold of a collapse of the international monetary system, and that the outlines of what will replace it are not yet clear. The phrase “collapse of the international monetary system” sounds apocalyptic, but we're not talking about some kind of Mad Max societal cataclysm. As the author observes, the international monetary system collapsed three times in the last century: in 1914, 1939, and 1971, and life went on (albeit in the first two cases, with disastrous and sanguinary wars), and eventually the financial system was reconstructed. There were, in each case, winners and losers, and investors who failed to protect themselves against these turbulent changes paid dearly for their complacency.

In this book, the author surveys the evolving international financial scene. He comes to conclusions which may surprise observers from a variety of perspectives. He believes the Euro is here to stay, and that its advantages to Germany coupled with Germany's economic power will carry it through its current problems. Ultimately, the countries on the periphery will consider the Euro, whatever its costs to them in unemployment and austerity, better than the instability of their national currencies before joining the Eurozone. China is seen as the victim of its own success, with financial warlords skimming off the prosperity of its rapid growth, aided by an opaque and deeply corrupt political class. The developing world is increasingly forging bilateral agreements which bypass the dollar and trade in their own currencies.

What is an investor to do faced with such uncertainty? Well, that's far from clear. The one thing one shouldn't do is assume the present system will persist until you're ready to retire, and invest your retirement savings entirely on the assumption nothing will change. Fortunately, there are alternative investments (for example, gold and silver, farm land, fine art, funds investing in natural resources, and, yes, cash in a variety of currencies [to enable you to pick up bargains when other assets crater]) which will appreciate enormously when the monetary system collapses. You don't have to (and shouldn't) bet everything on a collapse: a relatively small hedge against it will protect you should it happen.

This is an extensively researched and deep investigation of the present state of the international monetary system. As the author notes, ever since all currencies were severed from gold in 1971 and began to float against one another, the complexity of the system has increased enormously. What were once fixed exchange rates, adjusted only when countries faced financial crisis, have been replaced by exchange rates which change in milliseconds, with a huge superstructure of futures, options, currency swaps, and other derivatives whose notional value dwarfs the actual currencies in circulation. This is an immensely fragile system which even a small perturbation can cause to collapse. Faced with a risk whose probability and consequences are impossible to quantify, the prudent investor takes steps to mitigate it. This book provides background for developing such a plan.

 Permalink

Mankins, John C. The Case for Space Solar Power. Houston: Virginia Edition, 2014. ISBN 978-0-9913370-0-2.
As world population continues to grow and people in the developing world improve their standard of living toward the level of residents of industrialised nations, demand for energy will increase enormously. Even taking into account anticipated progress in energy conservation and forecasts that world population will reach a mid-century peak and then stabilise, the demand for electricity alone is expected to quadruple in the century from 2000 to 2100. If electric vehicles shift a substantial part of the energy consumed for transportation from hydrocarbon fuels to electricity, the demand for electric power will be greater still.

Providing this electricity in an affordable, sustainable way is a tremendous challenge. Most electricity today is produced by burning fuels such as coal, natural gas, and petroleum; by nuclear fission reactors; and by hydroelectric power generated by dams. Quadrupling electric power generation by any of these means poses serious problems. Fossil fuels may be subject to depletion, pose environmental consequences both in extraction and release of combustion products into the atmosphere, and are distributed unevenly around the world, leading to geopolitical tensions between have and have-not countries. Uranium fission is a technology with few environmental drawbacks, but operating it in a safe manner is very demanding and requires continuous vigilance over the decades-long lifespan of a power station. Further, the risk exists that nuclear material can be diverted for weapons use, especially if nuclear power stations proliferate into areas which are politically unstable. Hydroelectric power is clean, generally reliable (except in the case of extreme droughts), and inexhaustible, but unfortunately most rivers which are suitable for its generation have already been dammed, and potential projects which might be developed are insufficient to meet the demand.

Well, what about those “sustainable energy” projects the environmentalists are always babbling about: solar panels, eagle shredders (wind turbines), and the like? They do generate energy without fuel, but they are not the solution to the problem. In order to understand why, we need to look into the nature of the market for electricity, which is segmented into two components, even though the current flows through the same wires. The first is “base load” power. The demand for electricity varies during the day, from day to day, and seasonally (for example, electricity for air conditioning peaks during the mid-day hours of summer). The base load is the electricity demand which is always present, regardless of these changes in demand. If you look at a long-term plot of electricity demand and draw a line through the troughs in the curve, everything below that line is base load power and everything above it is “peak” power. Base load power is typically provided by the sources discussed in the previous paragraph: hydrocarbon, nuclear, and hydroelectric. Because there is a continuous demand for the power they generate, these plants are designed to run non-stop (with excess capacity to cover stand-downs for maintenance), and may be complicated to start up or shut down. In Switzerland, for example, 56% of base load power is produced from hydroelectric plants and 39% from nuclear fission reactors.
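The trough-line picture described above is easy to make concrete. Here is a toy sketch (not from the book) of splitting a demand curve into base load and peak energy; the hourly demand figures are invented for illustration.

```python
# Toy illustration of the base load / peak split: draw a line through the
# troughs of the demand curve; everything below it is base load, everything
# above it is peak power. Hourly demand figures (megawatts) are invented.
demand_mw = [3200, 3000, 2900, 3100, 3600, 4200, 4800, 5100,
             5300, 5200, 4900, 4400, 3900, 3500, 3300, 3100]

base_load_mw = min(demand_mw)  # the line drawn through the troughs
base_energy_mwh = base_load_mw * len(demand_mw)             # area below the line
peak_energy_mwh = sum(d - base_load_mw for d in demand_mw)  # area above the line

print(base_load_mw)      # 2900
print(base_energy_mwh)   # 46400
print(peak_energy_mwh)   # 17100
```

Over this invented day and a half, almost three quarters of the energy is base load, which is why the always-on plants dominate total generation even though peaking plants set the marginal price.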

The balance of electrical demand, peak power, is usually generated by smaller power plants which can be brought on-line and shut down quickly as demand varies. Peaking plants sell their power onto the grid at prices substantially higher than base load plants, which compensates for their less efficient operation and higher capital costs for intermittent operation. In Switzerland, most peak energy is generated by thermal plants which can burn either natural gas or oil.

Now the problem with “alternative energy” sources such as solar panels and windmills becomes apparent: they produce neither base load nor peak power. Solar panels produce electricity only during the day, and when the Sun is not obscured by clouds. Windmills, obviously, only generate when the wind is blowing. Since there is no way to efficiently store large quantities of energy (all existing storage technologies raise the cost of electricity to uneconomic levels), these technologies cannot be used for base load power, since they cannot be relied upon to continuously furnish power to the grid. Neither can they be used for peak power generation, since the times at which they are producing power may not coincide with times of peak demand. That isn't to say these energy sources cannot be useful. For example, solar panels on the roofs of buildings in the American southwest make a tremendous amount of sense since they tend to produce power at precisely the times the demand for air conditioning is greatest. This can smooth out, but not replace, the need for peak power generation on the grid.

If we wish to dramatically expand electricity generation without relying on fossil fuels for base load power, there are remarkably few potential technologies. Geothermal power is reliable and inexpensive, but is only available in a limited number of areas and cannot come close to meeting the demand. Nuclear fission, especially with modern, modular designs, is feasible, but faces formidable opposition from the fear-based community. If nuclear fusion ever becomes practical, we will have a limitless, mostly clean energy source, but after sixty years of research we are still decades away from an operational power plant, and it is entirely possible the entire effort may fail. The liquid fluoride thorium reactor, a technology demonstrated in the 1960s, could provide centuries of energy without the nuclear waste or weapons diversion risks of uranium-based nuclear power, but even if it were developed to industrial scale it's still a “nuclear reactor” and can be expected to stimulate the same hysteria as existing nuclear technology.

This book explores an entirely different alternative. Think about it: once you get above the Earth's atmosphere and sufficiently far from the Earth to avoid its shadow, the Sun provides a steady 1.368 kilowatts per square metre, and will continue to do so, non-stop, for billions of years into the future (actually, the Sun is gradually brightening, so on the scale of hundreds of millions of years this figure will increase). If this energy could be harvested and delivered efficiently to Earth, the electricity needs of a global technological civilisation could be met with a negligible impact on the Earth's environment. With present-day photovoltaic cells, we can convert 40% of incident sunlight to electricity, and wireless power transmission in the microwave band (to which the Earth's atmosphere is transparent, even in the presence of clouds and precipitation) has been demonstrated at 40% efficiency, with 60% end-to-end efficiency expected for future systems.
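Multiplying the figures quoted in this paragraph gives a sense of the power density involved. This is my own back-of-envelope arithmetic, not a calculation from the book, and it ignores collection and pointing overheads.

```python
# Back-of-envelope using the figures quoted above; overheads are ignored.
solar_constant = 1368.0  # W/m² above the atmosphere, from the text
pv_efficiency = 0.40     # photovoltaic conversion quoted in the text
link_efficiency = 0.60   # anticipated end-to-end microwave transmission

delivered_w_per_m2 = solar_constant * pv_efficiency * link_efficiency
print(round(delivered_w_per_m2, 1))  # 328.3 W delivered per m² of collector

# Collector area implied for a 2-gigawatt satellite:
area_km2 = 2e9 / delivered_w_per_m2 / 1e6
print(round(area_km2, 1))  # 6.1 km²
```

Roughly 6 km² of collector for 2 GW is far smaller than the 5 × 25 km arrays of the 1979 reference design, which illustrates how much the improvement in conversion efficiency changes the problem.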

Thus, no scientific breakthrough of any kind is required to harvest abundant solar energy which presently streams past the Earth and deliver it to receiving stations on the ground which feed it into the power grid. Since the solar power satellites would generate energy 99.5% of the time (with short outages when passing through the Earth's shadow near the equinoxes, at which time another satellite at a different longitude could pick up the load), this would be base load power, with no fuel source required. It's “just a matter of engineering” to calculate what would be required to build the collector satellite, launch it into geostationary orbit (where it would stay above the same point on Earth), and build the receiver station on the ground to collect the energy beamed down by the satellite. Then, given a proposed design, one can calculate the capital cost to bring such a system into production, its operating cost, the price of power it would deliver to the grid, and the time to recover the investment in the system.

Solar power satellites are not a new idea. In 1968, Peter Glaser published a description of a system with photovoltaic electricity generation and microwave power transmission to an antenna on Earth; in 1973 he was granted U.S. patent 3,781,647 for the system. In the 1970s NASA and the Department of Energy conducted a detailed study of the concept, publishing a reference design in 1979 which envisioned a platform in geostationary orbit with solar arrays measuring 5 by 25 kilometres, and requiring a monstrous space shuttle with a payload of 250 metric tons and space factories to assemble the platforms. The design was entirely conventional, using much the same technologies as were later used in the International Space Station (ISS) (but for a structure twenty times its size). Given that the ISS has a cost estimated at US$ 150 billion, NASA's 1979 estimate that a complete, operational solar power satellite system comprising 60 power generation platforms and Earth-based infrastructure would cost (in 2014 dollars) between 2.9 and 8.7 trillion might be considered optimistic. Back then, a trillion dollars was a lot of money, and this study pretty much put an end to serious consideration of solar power satellites in the U.S. for almost two decades. In the late 1990s, NASA, realising that much progress had been made in many of the enabling technologies for space solar power, commissioned a “Fresh Look Study”, which concluded that the state of the art was still insufficiently advanced to make power satellites economically feasible.

In this book, the author, after a 25-year career at NASA, recounts the history of solar power satellites to date and presents a radically new design, SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), which he argues is congruent with 21st century manufacturing technology. There are two fundamental reasons previous cost estimates for solar power satellites have come up with such forbidding figures. First, space hardware is hideously expensive to develop and manufacture. Measured in US$ per kilogram, a laptop computer is around $200/kg, a Boeing 747 $1400/kg, and a smart phone $1800/kg. By comparison, the Space Shuttle Orbiter cost $86,000/kg and the International Space Station around $110,000/kg. Most of the exorbitant cost of space hardware has little to do with the space environment, but is due to its being essentially hand-built in small numbers, and thus never having the benefit of moving down the learning curve as a product is put into mass production, nor of automation in manufacturing (which isn't cost-effective when you're only making a few of a product). Second, once you've paid that enormous cost per kilogram for the space hardware, you have to launch it from the Earth into space and transport it to the orbit in which it will operate. For communication satellites which, like solar power satellites, operate in geostationary orbit, current launchers cost around US$ 50,000 per kilogram delivered there. New entrants into the market may substantially reduce this cost, but without a breakthrough such as full reusability of the launcher, it will stay at an elevated level.

SPS-ALPHA tackles the high cost of space hardware by adopting a “hyper modular” design, in which the power satellite is composed of huge numbers of identical modules of just eight different types. Each of these modules is on a scale which permits prototypes to be fabricated in facilities no more sophisticated than university laboratories, and light enough that they fall into the “smallsat” category, permitting inexpensive tests in the space environment as required. A production power satellite, designed to deliver 2 gigawatts of electricity to Earth, will have almost four hundred thousand of each of three types of these modules, assembled in space by 4,888 robot arm modules, using more than two million interconnect modules. These are numbers where mass production economies kick in: once the module design has been tested and certified, you can put it out for bids for serial production. And a factory which invests in making these modules inexpensively can be assured of follow-on business if the initial power satellite is a success, since there will be a demand for dozens or hundreds more once its practicality is demonstrated. None of these modules is remotely as complicated as an iPhone, and once they are made in comparable quantities they shouldn't cost any more. What would an iPhone cost if they only made five of them?

Modularity also requires the design to be distributed and redundant. There is no single-point failure mode in the system. The propulsion and attitude control module is replicated 200 times in the full design. As modules fail, for whatever cause, they will have minimal impact on the performance of the satellite and can be swapped out as part of routine maintenance. The author estimates that, on an ongoing basis, around 3% of modules will be replaced per year.

The problem of launch cost is addressed indirectly by the modular design. Since no module masses more than 600 kg (the propulsion module) and none of the others exceeds 100 kg, they do not require a heavy lift launcher. Modules can simply be apportioned out among a large number of flights of the most economical launchers available. Construction of a full scale solar power satellite will require between 500 and 1000 launches per year of a launcher with a capacity in the 10 to 20 metric ton range. This dwarfs the entire global launch industry, and will provide motivation to fund the development of new, reusable launcher designs and the volume of business to push their cost down the learning curve, with a goal of reducing the cost of launch to low Earth orbit to US$ 300–500 per kilogram. Note that the SpaceX Falcon Heavy, under development with a projected first flight in 2015, is already priced around US$ 1000/kg without reusability of the three core stages, which is expected to be introduced in the future.
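To see why the cost per kilogram matters so much, here is a rough sensitivity sketch using the per-kilogram prices quoted above. The total mass to orbit (10,000 metric tons, consistent with a year of 500 to 1000 launches of 10 to 20 ton payloads) is my own assumption for illustration, not a figure from the book, and the legacy figure is a delivered-to-GEO price while the others are low Earth orbit prices.

```python
# Rough launch-bill sensitivity to cost per kilogram. The 10,000 t total
# mass to orbit is an assumed illustrative figure, not from the book.
mass_to_orbit_kg = 10_000 * 1_000

# US$/kg: legacy GEO delivery, Falcon Heavy to LEO, and the author's
# US$ 300-500/kg targets for reusable launchers.
costs_billion = {per_kg: mass_to_orbit_kg * per_kg / 1e9
                 for per_kg in (50_000, 1_000, 500, 300)}

for per_kg, total in costs_billion.items():
    print(f"US$ {per_kg}/kg -> US$ {total:g} billion launch bill")
```

At legacy prices the launch bill alone would approach the NASA 1979 trillion-dollar estimates; at the targeted US$ 300–500/kg it falls to a few billion, which is what makes the project's economics even arguable.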

The author lays out five “Design Reference Missions” which progress from small-scale tests of a few modules in low Earth orbit to a full production power satellite delivering 2 gigawatts to the electrical grid. He estimates a cost of around US$ 5 billion for the pilot plant demonstrator and 20 billion for the first full scale power satellite. This is not a small sum of money, but is comparable to the approximately US$ 26 billion cost of the Three Gorges Dam in China. Once power satellites start to come on line, each feeding power into the grid with no cost for fuel and modest maintenance expenses (comparable to those for a hydroelectric dam), the initial investment does not take long to be recovered. Further, the power satellite effort will bootstrap the infrastructure for routine, inexpensive access to space, and the power satellite modules can also be used in other space applications (for example, very high power communication satellites).
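A reader can check the recovery-of-investment claim with simple arithmetic. The 2 GW capacity, US$ 20 billion cost, and 99.5% availability come from the text; the wholesale electricity price is my own assumption, and maintenance and financing are ignored.

```python
# Illustrative payback arithmetic for the first full-scale satellite.
capacity_kw = 2e6          # 2 gigawatts expressed in kilowatts
availability = 0.995       # fraction of the year power is delivered
hours_per_year = 8766      # average year, including leap years
price_per_kwh = 0.05       # assumed wholesale price, US$ (my assumption)

annual_revenue = capacity_kw * hours_per_year * availability * price_per_kwh
payback_years = 20e9 / annual_revenue
print(round(annual_revenue / 1e9, 2))  # 0.87 billion US$ per year
print(round(payback_years, 1))         # 22.9 years
```

At base-load wholesale prices the payback is measured in decades, comparable to a hydroelectric dam; the attraction is the long subsequent life with no fuel cost, not a quick return.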

The most frequently raised objection when power satellites are mentioned is the fear that they could be used as a “death ray”. This is, quite simply, nonsense. The microwave power beam arriving at the Earth's surface will have an intensity of between 10 and 20% of summer sunlight, so a mirror reflecting the Sun would be a more effective death ray. Extensive tests were done to determine whether the beam would affect birds, insects, and aircraft flying through it, and all concluded there was no risk. A power satellite which beamed down its power with a laser could be weaponised, but nobody is proposing that, since it would have problems with atmospheric conditions and cost more than microwave transmission.

This book provides a comprehensive examination of the history of the concept of solar power from space, the various designs proposed over the years and the studies conducted of them, and an in-depth presentation of the technology and economic rationale for the SPS-ALPHA system. It presents an energy future which is very different from that which most people envision, provides a way to bring the benefits of electrification to developing regions without any environmental consequences whatever, and ensures a secure supply of electricity for the foreseeable future.

This is a rewarding, but rather tedious read. Perhaps it's due to the author's 25 years at NASA, but the text is cluttered with acronyms—there are fourteen pages of them defined in a glossary at the end of the book—and busy charts, some of which are difficult to read as reproduced in the Kindle edition. Copy editing is so-so: I noted 28 errors, and I wasn't especially looking for them. The index in the Kindle edition lists page numbers in the print edition which are useless because the electronic edition does not contain page numbers.

 Permalink

July 2014

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988, 1994] 2004. ISBN 978-0-345-47609-8.
One hundred years ago the world was on the brink of a cataclysmic confrontation which would cause casualties numbered in the tens of millions, destroy the pre-existing international order, depose royalty and dissolve empires, and plant the seeds of tyrannical regimes and future conflicts with an even more horrific toll in human suffering. It is no exaggeration to speak of World War I as the pivotal event of the 20th century, since so much that followed can be viewed as sequelæ which can be traced directly to that conflict.

It is thus important to understand how that war came to be, and how, in the first month after its outbreak, the expectations of all parties to the conflict, arrived at through the most exhaustive study by military and political élites, were proven completely wrong, and what was expected to be a short, conclusive war turned instead into a protracted blood-letting which would continue for more than four years of largely static warfare. This magnificent book, which covers the events leading to the war and the first month after its outbreak, provides a highly readable narrative history of the period, with insight both into the grand folly of war plans drawn up in isolation and mechanically followed even after abundant evidence of their faults had caused tragedy, and into how contingency (chance, and the decisions of fallible human beings in positions of authority) can tilt the balance of history.

The author is not an academic historian, and she writes for a popular audience. This has caused some to sniff at her work but, as she noted, Herodotus, Thucydides, Gibbon, and Macaulay did not have Ph.D.s. She immerses the reader in the world before the war, beginning with the 1910 funeral in London of Edward VII, where nine monarchs rode in the cortège, most of whose nations would be at war four years hence. The system of alliances is described in detail, as are the mobilisation plans of the future combatants, all of which would contribute to the fatal instability of the system to even a small perturbation.

Germany, France, Russia, and Austria-Hungary had all drawn up detailed mobilisation plans for assembling, deploying, and operating their conscript armies in the event of war. (Britain, with an all-volunteer regular army which was tiny by continental standards, had no pre-defined mobilisation plan.) As you might expect, Germany's plan was the most detailed, specifying railroad schedules and the composition of individual trains. Now, the important thing to keep in mind about these plans is that, together, they created a powerful first-mover advantage. If Russia began to mobilise, and Germany hesitated in its own mobilisation in the hope of defusing the conflict, it might find itself at a grave disadvantage even if Russia had only a few days' head start in assembling its forces. This meant there was a powerful incentive to issue the mobilisation order first, and a compelling reason for an adversary to begin his own mobilisation once news of it became known.

Compounding this instability were alliances which compelled parties to them to come to the assistance of others. France had no direct interest in the conflict between Germany and Austria-Hungary and Russia in the Balkans, but it had an alliance with Russia, and was pulled into the conflict. When France began to mobilise, Germany activated its own mobilisation and the Schlieffen plan to invade France through Belgium. Once the Germans violated the neutrality of Belgium, Britain's guarantee of that neutrality required (after the customary ambiguity and dithering) a declaration of war against Germany, and the stage was set for a general war in Europe.

The focus here is on the initial phase of the war: where Germany, France, and Russia were all following their pre-war plans, all initially expecting a swift conquest of their opponents—the Battle of the Frontiers, which occupied most of the month of August 1914. An afterword covers the First Battle of the Marne where the German offensive on the Western front was halted and the stage set for the static trench warfare which was to ensue. At the conclusion of that battle, all of the shining pre-war plans were in tatters, many commanders were disgraced or cashiered, and lessons learned through the tragedy “by which God teaches the law to kings” (p. 275).

A century later, the lessons of the outbreak of World War I could not be more relevant. On the eve of the war, many believed that the interconnection of the soon-to-be belligerents through trade was such that war was unthinkable, as it would quickly impoverish them. Today, the world is even more connected, and yet there are conflicts all around the margins, with alliances spanning the globe. Unlike 1914, when the world was largely dominated by great powers, now there are rogue states, non-state actors, movements dominated by religion, and neo-barbarism and piracy loose upon the stage, and some of these may lay their hands on weapons whose destructive power dwarfs that of the arms of 1914–1918. This book, published more than fifty years ago, about a conflict a century old, could not be more timely.

 Permalink

Patterson, William H., Jr. Robert A. Heinlein: In Dialogue with His Century. Vol. 1. New York: Tor Books, 2010. ISBN 978-0-7653-1960-9.
Robert Heinlein came from a family who had been present in America before there were the United States, and whose members had served in all of the wars of the Republic. Despite being thin, frail, and with dodgy eyesight, he managed to be appointed to the U.S. Naval Academy where, despite demerits for being a hellion, he graduated and was commissioned as a naval officer. He was on the track to a naval career when felled by tuberculosis (which was, in the 1930s, a potential death sentence, with the possibility of recurrence any time in later life).

Heinlein had written while in the Navy, but after his forced medical retirement, turned his attention to writing science fiction for pulp magazines, and after receiving a cheque for US$ 70 for his first short story, “Life-Line”, he exclaimed, “How long has this racket been going on? And why didn't anybody tell me about it sooner?” Heinlein always viewed writing as a business, and kept a thermometer on which he charted his revenue toward paying off the mortgage on his house.

While Heinlein fit in very well with the Navy, and might have been, absent medical problems, a significant commander in the fleet in World War II, he was also, at heart, a bohemian, with a soul almost orthogonal to military tradition and discipline. His first marriage was a fling with a woman who introduced him to physical delights of which he was unaware. That ended quickly, and then he married Leslyn, who was his muse, copy-editor, and business manager in a marriage which persisted throughout World War II, when both were involved in war work. Leslyn worked herself in this effort into insanity and alcoholism, and they divorced in 1947.

It was Robert Heinlein who vaulted science fiction from the ghetto of the pulp magazines to the “slicks” such as Collier's and the Saturday Evening Post. This was due to a technological transition in the publishing industry which is comparable to that presently underway in the migration from print to electronic publishing. Rationing of paper during World War II helped to create the “pocket book” or paperback publishing industry. After the end of the war, these new entrants in the publishing market saw a major opportunity in publishing anthologies of stories previously published in the pulps. The pulp publishers viewed this as an existential threat—who would buy a pulp magazine if, for almost the same price, one could buy a collection of the best stories from the last decade in all of those magazines?

Heinlein found his fiction entrapped in this struggle. While today, when you sell a story to a magazine in the U.S., you usually sell only “First North American serial rights”, in the 1930s and 1940s authors sold all rights, and it was up to the publisher to release them for republication of a work in an anthology or adaptation into a screenplay. This parallels the contemporary battle between traditional publishers and independent publishing platforms, which have become the heart of science fiction.

Heinlein was complex. While an exemplary naval officer, he was also a nudist, was married three times, and was interested in the esoteric (he was a close associate of Jack Parsons and L. Ron Hubbard). He was an enthusiastic supporter of Upton Sinclair's EPIC movement and of the “Social Credit” monetary agenda.

This authorised biography, with major contributions from Heinlein's widow, Virginia, chronicles the master storyteller's life in his first forty years—until he found, or created, an audience receptive to the tales of wonder he spun. Even if you've read all of Heinlein's fiction, you may not appreciate how much of it was based in Heinlein's own life. If you thought Heinlein's later novels were weird, consider how weird the master was before you were born.

I had the privilege of meeting Robert and Virginia Heinlein in 1984. I shall always cherish that moment.

 Permalink

Long, Rob. Conversations with My Agent (and Set Up, Joke, Set Up, Joke). London: Bloomsbury Publishing, [1996, 2005] 2014. ISBN 978-1-4088-5583-6.
Hollywood is a strange place, where the normal rules of business, economics, and personal and professional relationships seem to have been suspended. When he arrived in Hollywood in 1930, P. G. Wodehouse found the customs and antics of its denizens so bizarre that he parodied them in a series of hilarious stories. After a year in Hollywood, he'd had enough and never returned. When Rob Long arrived in Hollywood to attend UCLA film school, the television industry was on the threshold of a technology-driven change which would remake it and forever put an end to the domination by three large networks which had existed since its inception. The advent of cable and, later, direct to home satellite broadcasting eliminated the terrestrial bandwidth constraints which had made establishing a television outlet forbiddingly expensive and, at the same time, side-stepped many of the regulatory constraints which forbade “edgy” content on broadcast channels. Long began his television career as a screenwriter for Cheers in 1990, and became an executive producer of the show in 1992. After the end of Cheers, he created and produced other television projects, including Sullivan & Son, which is currently on the air.

Television ratings measure both “rating points” (the audience as a percentage of all television households) and “share points” (the audience as a percentage of television sets turned on at the time). In the era of Cheers, a typical episode might have a rating equivalent to more than 22 million viewers and a share of 32%, meaning it pulled in around one third of all television viewers in its time slot. The proliferation of channels makes it unlikely any show will achieve numbers like this again. The extremely popular 24 attracted between 9 and 14 million viewers in its eight seasons, and the critically acclaimed Mad Men never topped a mean viewership of 2.7 million in its best season.
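The arithmetic behind the two measures is simple enough to sketch; the household figures below are illustrative assumptions for the example, not actual Nielsen data:

```python
# Rating vs. share: the same audience measured against two different bases.
# All figures below are illustrative assumptions, not actual Nielsen data.

total_tv_households = 95_000_000    # every household with a television set
households_using_tv = 68_750_000    # sets actually turned on in the time slot
viewers = 22_000_000                # households tuned to the program

# Rating points: audience as a percentage of ALL television households.
rating = 100 * viewers / total_tv_households

# Share points: audience as a percentage of sets in use at that hour.
share = 100 * viewers / households_using_tv

print(f"rating: {rating:.1f} points, share: {share:.1f}%")
# → rating: 23.2 points, share: 32.0%
```

With these assumed figures the episode scores about 23 rating points and a 32% share, matching the “around one third of all television viewers in its time slot” reading of the Cheers numbers.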

It was into this new world of diminishing viewership expectations but voracious thirst for content to fill all the new channels that the author launched his post-Cheers career. The present volume collects two books originally published independently: Conversations with My Agent from 1998, and 2005's Set Up, Joke, Set Up, Joke, written when Hollywood's perestroika was well advanced. The two fit together almost seamlessly, and many readers will barely notice the transition.

This is a very funny book, but there is also a great deal of wisdom about the ways of Hollywood, how television projects are created, pitched to a studio, marketed to a network, and the tortuous process leading from concept to script to pilot to series and, all too often, to cancellation. The book is written as a screenplay, complete with scene descriptions, directions, dialogue, transitions, and sound effect call-outs. Most of the scenes are indeed conversations between the author and his agent in various circumstances, but we also get to be a fly on the wall at story pitches, meetings with the network, casting, shooting an episode, focus group testing, and many other milestones in the life cycle of a situation comedy. The circumstances are fictional, but are clearly informed by real-life experience. Anybody contemplating a career in Hollywood, especially as a television screenwriter, would be insane not to read this book. You'll laugh a lot, but also learn something on almost every page.

The reader will also begin to appreciate the curious ways of Hollywood business, what the author calls “HIPE”: the Hollywood Inversion Principle of Economics. “The HIPE, as it will come to be known, postulates that every commonly understood, standard business practice of the outside world has its counterpart in the entertainment industry. Only it's backwards.” And anybody who thinks accounting is not a creative profession has never had experience with a Hollywood project. The culture of the entertainment business is also on display—an intricate pecking order involving writers, producers, actors, agents, studio and network executives, and “below the line” specialists such as camera operators and editors, all of whom have to read the trade papers to know who's up and who's not.

This book provides an insider's perspective on the strange way television programs come to be. In a way, it resembles some aspects of venture capital: most projects come to nothing, and most of those which are funded fail, losing the entire investment. But the few which succeed can generate sufficient money to cover all the losses and still yield a large return. One television show that runs for five years, producing solid ratings and 100+ episodes to go into syndication, can set up its writers and producers for life and cover the studio's losses on all of the dogs and cats.

 Permalink

August 2014

Thor, Brad. Black List. New York: Pocket Books, 2012. ISBN 978-1-4391-9302-0.
This is the twelfth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Brad Thor has remarked in interviews that he strives to write thrillers which anticipate headlines which will break after their publication, and with this novel he hits a grand slam.

Scot Harvath is ambushed in Paris by professional killers who murder a member of his team. After narrowly escaping, he goes to ground and covertly travels to a remote region of Basque country where he has trusted friends. When he is attacked there as well, again by trained killers, he must conclude that the internal security of his employer, the Carlton Group, has probably been breached, perhaps from the inside.

Meanwhile, his employer, Reed Carlton, is attacked at his secure compound by an assault team and barely escapes with his life. When Carlton tries to use his back channels to contact members of his organisation, they all appear to have gone dark. To Carlton, a career spook with tradecraft flowing in his veins, this indicates his entire organisation has been wiped out, for no apparent motive and by perpetrators unknown.

Harvath, Carlton, and the infovore dwarf Nicholas, operating independently, must begin to pick up the pieces to figure out what is going on, while staying under the radar of a pervasive surveillance state which employs every technological means to track them down and target them for summary extra-judicial elimination.

If you pick up this book and read it today, you might think it's based upon the revelations of Edward Snowden about the abuses of the NSA conducting warrantless surveillance on U.S. citizens. But it was published in 2012, a full year before the first of Snowden's disclosures. The picture of the total information awareness state here is, if anything, more benign than what we now know to be the case in reality. What is different is that when Harvath, Carlton, and Nicholas get to the bottom of the mystery, the reaction in high places is what one would hope for in a constitutional republic, as opposed to the “USA! USA! USA!” cheerleading or silence which has greeted the exposure of abuses by the NSA on the part of all too many people.

This is a prophetic thriller which demonstrates how the smallest compromises of privacy (credit card transactions, telephone call metadata, license plate readers, facial recognition, Web site accesses, search engine queries, and the like) can be woven into a dossier on any person of interest, making going dark to the snooper state equivalent to living technologically in 1950. This is not just a cautionary tale for individuals who wish to preserve a wall of privacy between themselves and the state, but also a challenge for writers of thrillers. Just as mobile telephones would have wrecked the plots of innumerable mystery and suspense stories written before their existence, the emergence of the panopticon state will make it difficult for thriller writers to have both their heroes and villains operating in the dark. I am sure the author will rise to this challenge.

 Permalink

Lowe, Keith. Savage Continent. New York: Picador, [2012] 2013. ISBN 978-1-250-03356-7.
On May 8th, 1945, World War II in Europe formally ended when the Allies accepted the unconditional surrender of Germany. In popular myth, especially among those too young to have lived through the war and its aftermath, the defeat of Italy and Germany ushered in, at least in Western Europe not occupied by Soviet troops, a period of rebuilding and rapid economic growth, spurred by the Marshall Plan. The French refer to the three decades from 1945 to 1975 as Les Trente Glorieuses. But that isn't what actually happened, as this book documents in detail. Few books cover the immediate aftermath of the war, or concentrate exclusively upon that chaotic period. The author has gone to great lengths to explore little-known conflicts and sort out conflicting accounts of what happened still disputed today by descendants of those involved.

The devastation wreaked upon cities where the conflict raged was extreme. In Germany, Berlin, Hanover, Duisburg, Dortmund, and Cologne lost more than half their habitable buildings, with the figure rising to 70% in the latter city. From Stalingrad to Warsaw to Caen in France, destruction was general with survivors living in the rubble. The transportation infrastructure was almost completely obliterated, along with services such as water, gas, electricity, and sanitation. The industrial plant was wiped out, and along with it the hope of employment. This was the state of affairs in May 1945, and the Marshall Plan did not begin to deliver assistance to Western Europe until three years later, in April 1948. Those three years were grim, and compounded by score-settling, revenge, political instability, and multitudes of displaced people returning to areas with no infrastructure to support them.

And this was in Western Europe. As is the case with just about everything regarding World War II in Europe, the further east you go, the worse things get. In the Soviet Union, 70,000 villages were destroyed, along with 32,000 factories. The redrawing of borders, particularly those of Poland and Germany, set the stage for a paroxysm of ethnic cleansing and mass migration as Poles were expelled from territory now incorporated into the Soviet Union and Germans from the western part of Poland. Reprisals against those accused of collaboration with the enemy were widespread, with murder not uncommon. Thirst for revenge extended to the innocent, including children fathered by soldiers of occupying armies.

The end of the War did not mean an end to the wars. As the author writes, “The Second World War was therefore not only a traditional conflict for territory: it was simultaneously a war of race, and a war of ideology, and was interlaced with half a dozen civil wars fought for purely local reasons.” Defeat of Germany did nothing to bring these other conflicts to an end. Guerrilla wars continued in the Baltic states annexed by the Soviet Union as partisans resisted the invader. An all-out civil war between communists and anti-communists erupted in Greece and was ended only through British and American aid to the anti-communists. Communist agitation escalated to violence in Italy and France. And country after country in Eastern Europe came under Soviet domination as puppet regimes were installed through coups, subversion, or rigged elections.

When reading a detailed history of a period most historians ignore, one finds oneself exclaiming over and over, “I didn't know that!”, and that is certainly the case here. This was a dark period, and no group seemed immune from regrettable acts, including Jews liberated from Nazi death camps and slave labourers freed as the Allies advanced: both sometimes took their revenge upon German civilians. As the author demonstrates, the aftermath of this period still simmers beneath the surface among the people involved—it has become part of the identity of ethnic groups which will outlive any person who actually remembers the events of the immediate postwar period.

In addition to providing an enlightening look at this neglected period, the events in the years following 1945 have much to teach us about those playing out today around the globe. We are seeing long-simmering ethnic and religious strife boil into open conflict as soon as the system is perturbed enough to knock the lid off the kettle. Borders drawn by politicians mean little when people's identity is defined by ancestry or faith, and memories are very long, measured sometimes in centuries. Even after a cataclysmic conflict which levels cities and reduces populations to near-medieval levels of subsistence, many people do not long for peace but instead seek revenge. Economic growth and prosperity can, indeed, change the attitude of societies and allow for alliances among former enemies (imagine how odd the phrase “Paris-Berlin axis”, heard today in discussions of the European Union, would have sounded in 1946), but the results of a protracted conflict can prevent the emergence of the very prosperity which might allow consigning it to the past.

 Permalink

Mahon, Basil. The Man Who Changed Everything. Chichester, UK: John Wiley & Sons, 2003. ISBN 978-0-470-86171-4.
In the 19th century, science in general and physics in particular grew up, assuming the modern form which is still recognisable today. At the start of the century, the word “scientist” was not yet in use, and the natural philosophers of the time were often amateurs. University research in the sciences, particularly in Britain, was rare. Those working in the sciences were often occupied with cataloguing natural phenomena, and apart from Newton's monumental achievements, few people focussed on discovering mathematical laws to explain newly discovered physical phenomena such as electricity and magnetism.

One person, James Clerk Maxwell, was largely responsible for creating the way modern science is done and the way we think about theories of physics, while simultaneously restoring Britain's standing in physics relative to work on the Continent, and he created an institution which would continue to do important work from the time of his early death until the present day. While every physicist and electrical engineer knows of Maxwell and his work, he is largely unknown to the general public, and even those aware of his seminal work in electromagnetism may not appreciate the extent to which his footprints are found all over the edifice of 19th century physics.

Maxwell was born in 1831 to a Scottish lawyer, John Clerk, and his wife Frances Cay. Clerk subsequently inherited a country estate, and added “Maxwell” to his name in honour of the noble relatives from whom he inherited it. His son's first name, then, was “James” and his surname “Clerk Maxwell”: this is why his full name is always used instead of “James Maxwell”. From childhood, James was curious about everything he encountered, and instead of asking “Why?” over and over like many children, he drove his parents to distraction with “What's the go o' that?”. His father did not consider science a suitable occupation for his son and tried to direct him toward the law, but James's curiosity did not extend to legal tomes and he concentrated on topics that interested him. He published his first scientific paper, on curves with more than two foci, at the age of 14. He pursued his scientific education first at the University of Edinburgh and later at Cambridge, where he graduated in 1854 with a degree in mathematics. He came in second in the prestigious Tripos examination, earning the title of Second Wrangler.

Maxwell was now free to begin his independent research, and he turned to the problem of human colour vision. It had been established that colour vision worked by detecting the mixture of three primary colours, but Maxwell was the first to discover that these primaries were red, green, and blue, and that by mixing them in the correct proportion, white would be produced. This was a matter to which Maxwell would return repeatedly during his life.

In 1856 he accepted an appointment as a full professor and department head at Marischal College in Aberdeen, Scotland. In 1857, the topic for the prestigious Adams Prize was the nature of the rings of Saturn. Maxwell's submission was a tour de force which proved that the rings could be neither solid nor liquid, and hence had to be made of an enormous number of individually orbiting bodies. Maxwell was awarded the prize, the significance of which was magnified by the fact that his was the only submission: all of the others who aspired to solve the problem had abandoned it as too difficult.

Maxwell's next post was at King's College London, where he investigated the properties of gases and strengthened the evidence for the molecular theory of gases. It was here that he first undertook to explain the relationship between electricity and magnetism which had been discovered by Michael Faraday. Working in the old style of physics, he constructed an intricate mechanical thought experiment model which might explain the lines of force that Faraday had introduced but which many scientists thought were mystical mumbo-jumbo. Maxwell believed the alternative of action at a distance without any intermediate mechanism was wrong, and was able, with his model, to explain the phenomenon of rotation of the plane of polarisation of light by a magnetic field, which had been discovered by Faraday. While at King's College, to demonstrate his theory of colour vision, he took and displayed the first colour photograph.

Maxwell's greatest scientific work was done while living the life of a country gentleman at his estate, Glenlair. In his textbook, A Treatise on Electricity and Magnetism, he presented his famous equations which showed that electricity and magnetism were two aspects of the same phenomenon. This was the first of the great unifications of physical laws which have continued to the present day. But that isn't all they showed. The speed of light appeared as a conversion factor between the units of electricity and magnetism, and the equations allowed solutions of waves oscillating between an electric and magnetic field which could propagate through empty space at the speed of light. It was compelling to deduce that light was just such an electromagnetic wave, and that waves of other frequencies outside the visual range must exist. Thus was laid the foundation of wireless communication, X-rays, and gamma rays. The speed of light is a constant in Maxwell's equations, not depending upon the motion of the observer. This appears to conflict with Newton's laws of mechanics, and it was not until Einstein's 1905 paper on special relativity that the mystery would be resolved. In essence, faced with a dispute between Newton and Maxwell, Einstein decided to bet on Maxwell, and he chose wisely. Finally, when you look at Maxwell's equations (in their modern form, using the notation of vector calculus), they appear lopsided. While they unify electricity and magnetism, the symmetry is imperfect in that while a moving electric charge generates a magnetic field, there is no magnetic charge which, when moved, generates an electric field. Such a charge would be a magnetic monopole, and despite extensive experimental searches, none has ever been found. The existence of monopoles would make Maxwell's equations even more beautiful, but sometimes nature doesn't care about that. By all evidence to date, Maxwell got it right.
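For reference, the four equations in the modern vector-calculus notation (here in SI units, a convention chosen for this sketch rather than Maxwell's original formulation) are:

```latex
\begin{align*}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} &
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{align*}
```

The lopsidedness is visible at a glance: electric charge \(\rho\) and current \(\mathbf{J}\) appear as sources, but \(\nabla \cdot \mathbf{B} = 0\) has no magnetic-charge counterpart. In vacuum the equations combine into a wave equation whose propagation speed \(c = 1/\sqrt{\mu_0 \varepsilon_0}\) is built entirely from the electric and magnetic constants: the conversion factor between the units of electricity and magnetism.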

In 1871 Maxwell came out of retirement to accept a professorship at Cambridge and found the Cavendish Laboratory, which would focus on experimental science and elevate Cambridge to world-class status in the field. To date, 29 Nobel Prizes have been awarded for work done at the Cavendish.

Maxwell's theoretical and experimental work on heat and gases revealed discrepancies which were not explained until the development of quantum theory in the 20th century. His suggestion of Maxwell's demon posed a deep puzzle in the foundations of thermodynamics which eventually, a century later, showed the deep connections between information theory and statistical mechanics. His practical work on automatic governors for steam engines foreshadowed what we now call control theory. He played a key part in the development of the units we use for electrical quantities.

By all accounts Maxwell was a modest, generous, and well-mannered man. He wrote whimsical poetry, discussed a multitude of topics (although he had little interest in politics), was an enthusiastic horseman and athlete (he would swim in the sea off Scotland in the winter), and was happily married, with his wife Katherine an active participant in his experiments. All his life, he supported general education in science, founding a working men's college in Cambridge and lecturing at such colleges throughout his career.

Maxwell lived only 48 years—he died in 1879 of the same cancer which had killed his mother when he was only eight years old. When he fell ill, he was engaged in a variety of research while presiding at the Cavendish Laboratory. We shall never know what he might have done had he been granted another two decades.

Apart from the significant achievements Maxwell made in a wide variety of fields, he changed the way physicists look at, describe, and think about natural phenomena. After using a mental model to explore electromagnetism, he discarded it in favour of a mathematical description of its behaviour. There is no theory behind Maxwell's equations: the equations are the theory. To the extent they produce the correct results when experimental conditions are plugged in, and predict new phenomena which are subsequently confirmed by experiment, they are valuable. If they err, they should be supplanted by something more precise. But they say nothing about what is really going on—they only seek to model what happens when you do experiments. Today we are so accustomed to working with theories of this kind (quantum mechanics, special and general relativity, and the standard model of particle physics) that we don't think much about it, but it was revolutionary in Maxwell's time. His mathematical approach, like Newton's, eschewed explanation in favour of prediction: “We have no idea how it works, but here's what will happen if you do this experiment.” This is perhaps Maxwell's greatest legacy.

This is an excellent scientific biography of Maxwell which also gives the reader a sense of the man. He was such a quintessentially normal person there aren't a lot of amusing anecdotes to relate. He loved life, loved his work, cherished his friends, and discovered the scientific foundations of the technologies which allow you to read this. In the Kindle edition, at least as read on an iPad, the text appears in a curious, spidery, almost vintage, font in which periods are difficult to distinguish from commas. Numbers sometimes have spurious spaces embedded within them, and the index cites pages in the print edition which are useless since the Kindle edition does not include real page numbers.

 Permalink

September 2014

Amundsen, Roald. The South Pole. New York: Cooper Square Press, [1913] 2001. ISBN 978-0-8154-1127-7.
In modern warfare, it has been observed that “generals win battles, but logisticians win wars.” So it is with planning an exploration mission to a remote destination where no human has ever set foot, and the truths are as valid for polar exploration in the early 20th century as they will be for missions to Mars in the 21st. On December 14th, 1911, Roald Amundsen's five-man southern party reached the South Pole after a trek from the camp on the Ross Ice Shelf where they had passed the previous southern winter, preparing for an assault on the pole as early as the weather would permit. By over-wintering, they were able to depart southward well before a ship could have landed an expedition, since a ship would have to wait until the sea ice dispersed sufficiently to make a landing.

Amundsen's plan was built around what space mission architects call “in-situ resource utilisation” and “depots”, as well as “propulsion staging”. This allowed for a very lightweight push to the pole, both in terms of the amount of supplies which had to be landed by their ship, the Fram, and in the size of the polar party and the loading of their sledges. Upon arriving in Antarctica, Amundsen's party immediately began to hunt the abundant seals near the coast. More than two hundred seals were killed, processed, and stored for later use. (Since the temperature on the Ross Ice Shelf and the Antarctic interior never rises above freezing, the seal meat would keep indefinitely.) Then parties were sent out in the months remaining before the arrival of winter in 1911 to establish depots at every degree of latitude between the base camp and 82° south. These depots contained caches of seal meat for the men and dogs and kerosene for melting snow for water and cooking food. The depot-laying journeys familiarised the explorers with driving teams of dogs and operating in the Antarctic environment.

Amundsen had chosen dogs to pull his sledges. While his rival to be first at the pole, Robert Falcon Scott, experimented with pulling sledges by ponies, motorised sledges, and man-hauling, Amundsen relied upon the experience of indigenous people in Arctic environments that dogs were the best solution. Dogs reproduced and matured sufficiently quickly that attrition could be made up by puppies born during the expedition, they could be fed on seal meat, which could be obtained locally, and if a dog team were to fall into a crevasse (as was inevitable when crossing uncharted terrain), the dogs could be hauled out, no worse for wear, by the drivers of other sledges. For ponies and motorised sledges, this was not the case.

Further, Amundsen adopted a strategy which can best be described as “dog eat dog”. On the journey to the pole, he started with 52 dogs. Seven of these had died from exhaustion or other causes before the ascent to the polar plateau. (Dogs who died were butchered and fed to the other dogs. Greenland sled dogs, being only slightly removed from wolves, had no hesitation in devouring their erstwhile comrades.) Once reaching the plateau, 27 dogs were slaughtered, their meat divided between the surviving dogs and the five men. Only 18 dogs would proceed to the pole. Dog carcasses were cached for use on the return journey.

Beyond the depots, the polar party had to carry everything required for the trip, but knowing the depots would be available for the return allowed them to travel lightly. After reaching the pole, they remained for three days to verify their position, send out parties to ensure they had encircled the pole's position, and build a cairn to commemorate their achievement. Amundsen left a letter which he requested Captain Scott deliver to King Haakon VII of Norway should Amundsen's party be lost on its return to base. (Sadly, that was the fate which awaited Scott, who arrived at the pole on January 17th, 1912, only to find the Amundsen expedition's cairn there.)

This book is Roald Amundsen's contemporary memoir of the expedition. Originally published in two volumes, the present work includes both. Appendices describe the ship, the Fram, and scientific investigations in meteorology, geology, astronomy, and oceanography conducted during the expedition. Amundsen's account is as matter-of-fact as the memoirs of some astronauts, but a wry humour comes through when discussing sled dogs with a will of their own and the foibles of humans cooped up in a small cabin in an alien environment during a night which lasts for months. He evinces great respect for his colleagues and competitors in polar exploration, particularly Scott and Shackleton, and wonders whether his own approach to reaching the pole would prove superior to theirs. At the time the book was published, the tragic fate of Scott's expedition was not known.

Today, we might not think of polar exploration as science, but a century ago it was as central to the scientific endeavour as robotic exploration of Mars is today. Here was an entire continent, known only in sketchy detail around its coast, with only a few expeditions into the interior. When Amundsen's party set out on their march to the pole, they had no idea whether they would encounter mountain ranges along the way and, if so, whether they could find a way over or around them. They took careful geographic and meteorological observations along their trek (as well as oceanographical measurements on the trip to Antarctica and back), and these provided some of the first data points toward understanding weather in the southern hemisphere.

In Norway, Amundsen was hailed as a hero. But it is clear from this narrative he never considered himself such. He wrote:

I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

This work is in the public domain, and there are numerous editions of it available, in print and in electronic form, many from independent publishers. The independent publishers, for the most part, did not distinguish themselves in their respect for this work. Many of their editions were produced by running an optical character recognition program over a print copy of the book, then putting it together with minimal copy-editing. Some (including the one I was foolish enough to buy) elide all of the diagrams, maps, and charts from the original book, which renders parts of the text incomprehensible. The paperback edition cited above, while expensive, is a facsimile edition of the original 1913 two volume English translation of Amundsen's original work, including all of the illustrations. I know of no presently-available electronic edition which has comparable quality and includes all of the material in the original book. Be careful—if you follow the link to the paperback edition, you'll see a Kindle edition listed, but this is from a different publisher and is rife with errors and includes none of the illustrations. I made the mistake of buying it, assuming it was the same as the highly-praised paperback. It isn't; don't be fooled.

 Permalink

Bostrom, Nick. Superintelligence. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.

Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities to a greater extent than the capabilities of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.

Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.

As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet by self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.

As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, and will not be self-aware, nor have its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

 Permalink

Cawdron, Peter. My Sweet Satan. Seattle: Amazon Digital Services, 2014. ASIN B00NBA6Y1A.
Here the author adds yet another imaginative tale of first contact to his growing list of novels in that genre: a puzzle story whose viewpoint character must figure out what is happening after losing the memories of her entire adult life. After a botched attempt at reanimation from cryo-sleep, Jasmine Holden finds herself with no memories of her life after the age of nineteen. And yet, here she is, on board Copernicus, in the Saturn system, closing in on the distant retrograde moon Bestla which, when approached by a probe from Earth, sent back an audio transmission to its planet of origin which was mostly gibberish but contained the chilling words: “My sweet Satan. I want to live and die for you, my glorious Satan!”. A follow-up unmanned probe to Bestla is destroyed as it approaches, and the Copernicus is dispatched to make a cautious investigation of what appears to be an alien probe with a disturbing theological predisposition.

Back on Earth, sentiment has swung back and forth about the merits of exploring Bestla and fears of provoking an alien presence in the solar system which, by its very capability of interstellar travel, must be far in advance of Earthly technology. Jasmine, a key member of the science team, suddenly finds herself mentally a 19-year-old girl far from her home, confronted not only by an unknown alien presence but also by conflict among her crew members, who interpret the imperatives of the mission in different ways.

She finds the ship's computer, an early-stage artificial intelligence, the one being in which she can confide, and the only one who comprehends her predicament and is willing to talk her through procedures she learned by heart in her training but which have been lost to an amnesia she feels compelled to conceal from the human members of the crew.

As the ship approaches Bestla, conflict erupts among the crew, and Jasmine must sort out what is really going on and choose sides without any recollections of her earlier interactions with her crew members. In a way, this is three first contact novels in one: 19-year-old Jasmine making contact with her fellow crew members, about whom she remembers nothing; the Copernicus and whatever is on Bestla; and a third contact about which I cannot say anything without spoiling the story.

This is a cracking good first contact novel which, just when you're nearing the end and beginning to worry, “Where's the sense of wonder?”, delivers everything you'd hoped for and more.

I read a pre-publication manuscript edition which the author kindly shared with me.

 Permalink

Byers, Bruce K. Destination Moon. Washington: National Aeronautics and Space Administration, 1977. NASA TM X-3487.
In the mid-1960s, the U.S. Apollo lunar landing program was at the peak of its budget commitment and technical development. The mission mode had already been chosen and development of the flight hardware was well underway, along with the ground infrastructure required to test and launch it and the global network required to track missions in flight. One nettlesome problem remained. The design of the lunar module made assumptions about the properties of the lunar surface upon which it would alight. If the landing zone had boulders too large, craters too deep and numerous for the landing legs to avoid, or slopes steep enough to upset the lander on touchdown or tip it over afterward, lunar landing missions would all be aborted by the crew when they reached decision height, judging there was no place they could set down safely. Even if all the crews returned safely without having landed, this would be an ignominious end to the ambitions of Project Apollo.

What was needed in order to identify safe landing zones was high-resolution imagery of the Moon. The most capable Earth-based telescopes, operating through Earth's turbulent and often murky atmosphere, produced images which resolved objects at best a hundred times larger than those which could upset a lunar landing mission. What was required was large-area, high-resolution mapping of the Moon and a survey of potential landing zones, which could only be done, given the technology of the 1960s, by going there, taking pictures, and returning them to Earth. So was born the Lunar Orbiter program, which in 1966 and 1967 sent lightweight photographic reconnaissance satellites into lunar orbit, providing not only the close-up imagery needed to select landing sites for the Apollo missions, but also mapping imagery which covered 99% of the near side of the Moon and 85% of the far side. In fact, Lunar Orbiter provided global imagery of the Moon far more complete than that which would be available for the Earth many years thereafter.

Accomplishing this goal with the technology of the 1960s was no small feat. Electronic imaging amounted to analogue television, which, at the altitude of a lunar orbit, wouldn't produce images any better than telescopes on Earth. The first spy satellites were struggling to return film from Earth orbit, and returning film from the Moon was completely impossible given the mass budget of the launchers available. After a fierce competition, NASA contracted with Boeing to build the Lunar Orbiter, designed to fit on NASA's workhorse Atlas-Agena launcher, which seriously constrained its mass. Boeing subcontracted with Kodak to build the imaging system and RCA for the communications hardware which would relay the images back to Earth and allow the spacecraft to be controlled from the ground.

The images were acquired by a process which may seem absurd to those accustomed to present-day digital technologies but which seemed miraculous in its day. In lunar orbit, the spacecraft would aim its cameras (it had two: a mapping camera which produced overlapping wide-angle views and a high-resolution camera that photographed clips of each frame with a resolution of about one metre) at the Moon and take a series of photos. Because the film used had a very low light sensitivity (ASA [now ISO] 1.6), on low-altitude imaging passes the film would have to be moved to compensate for the motion of the spacecraft to avoid blurring. (The low light sensitivity of the film was due to its very high spatial resolution, but also reduced its likelihood of being fogged by exposure to cosmic rays or energetic particles from solar flares.)

After being exposed, the film would subsequently be processed on-board by putting it in contact with a band containing developer and fixer, and then the resulting negative would be read back for transmission to Earth by scanning it with a moving point of light, measuring the transmission through the negative, and sending the measured intensity back as an analogue signal. At the receiving station, that signal would be used to modulate the intensity of a spot of light scanned across film which, when developed and assembled into images from strips, revealed the details of the Moon. The incoming analogue signal was recorded on tape to provide a backup for the film recording process, but nothing was done with the tapes at the time. More about this later….

Five Lunar Orbiter missions were launched, and although some experienced problems, all achieved their primary mission objectives. The first three missions provided all of the data required by Apollo, so the final two could be dedicated to mapping the Moon from near-polar orbits. After the completion of their primary imaging missions, Lunar Orbiters continued to measure the radiation and micrometeoroid environment near the Moon, and contributed to understanding the Moon's gravitational field, which would be important in planning later Apollo missions that would fly in very low orbits around the Moon. On August 23rd, 1966, the first Lunar Orbiter took one of the most iconic pictures of the 20th century: Earthrise from the Moon. The problems experienced by Lunar Orbiter missions and the improvisation by ground controllers to work around them set the pattern for subsequent NASA robotic missions, with their versatile, reconfigurable flight hardware and fine-grained control from the ground.

You might think the story of Lunar Orbiter a footnote to space exploration history which has scrolled off the screen with subsequent Apollo lunar landings and high-resolution lunar mapping by missions such as Clementine and the Lunar Reconnaissance Orbiter, but that fails to take into account the exploits of 21st century space data archaeologists. Recall that I said that all of the image data from Lunar Orbiter missions was recorded on analogue tapes. These tapes contained about 10 bits of dynamic range, as opposed to the 8 bits which were preserved by the optical recording process used in receiving the images during the missions. This, combined with contemporary image processing techniques, makes for breathtaking images recorded almost half a century ago, but never seen before. Here are a document and video which record the exploits of the Lunar Orbiter Image Recovery Project (LOIRP). Please visit the LOIRP Web site for more restored images and details of the process of restoration.
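As a rough illustration of why those two extra bits matter (my own back-of-the-envelope arithmetic, not from the LOIRP documents), each additional bit doubles the number of distinguishable intensity levels and adds about 6 dB of dynamic range:

```python
# Sketch (my own arithmetic, not from the book or the LOIRP project):
# comparing the ~8 bits preserved by the optical ground recording with
# the ~10 bits preserved on the analogue backup tapes.
import math

def dynamic_range_db(bits):
    """Dynamic range of an n-bit quantiser: 20 * log10(2**bits) decibels."""
    return 20 * math.log10(2 ** bits)

for bits in (8, 10):
    print(f"{bits}-bit: {2 ** bits} levels, {dynamic_range_db(bits):.1f} dB")
```

The two extra bits quadruple the number of grey levels (256 to 1024) and add about 12 dB of dynamic range, which is what makes re-digitising the tapes with modern techniques worthwhile.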

 Permalink

October 2014

Levinson, Marc. The Box. Princeton: Princeton University Press, [2006] 2008. ISBN 978-0-691-13640-0.
When we think of developments in science and technology which reshape the world economy, we often concentrate upon those which build on fundamental breakthroughs in our understanding of the world we live in, or technologies which employ them to do things never imagined. Examples of these are electricity and magnetism, which gave us the telegraph, electric power, the telephone, and wireless communication. Semiconductor technology, the foundation of the computer and Internet revolutions, is grounded in quantum mechanics, elaborated only in the early 20th century. The global positioning satellites which you use to get directions when you're driving or walking wouldn't work if they did not compensate for the effects of special and general relativity upon the rate at which clocks tick in moving objects and those in gravitational fields.

But sometimes a revolutionary technology doesn't require a scientific breakthrough, nor a complicated manufacturing process to build, but just the realisation that people have been looking at a problem all wrong, or have been earnestly toiling away trying to solve some problem other than the one which people are ready to pay vast sums of money to have solved, once the solution is placed on the market.

The cargo shipping container may be, physically, one of the least impressive technological achievements of the 20th century, right up there with the inanimate carbon rod, as it required no special materials, fabrication technologies, or design tools which did not exist a century before, and yet its widespread adoption in the latter half of the 20th century was fundamental to the restructuring of the global economy which we now call “globalisation”, and changed assumptions about the relationship between capital, natural resources, labour, and markets which had existed since the start of the industrial revolution.

Ever since the start of ocean commerce, ships handled cargo in much the same way. The cargo was brought to the dock (often after waiting for an extended period in a dockside warehouse for the ship to arrive), then stevedores (or longshoremen, or dockers) would load the cargo into nets, or onto pallets hoisted by nets into the hold of the ship, where other stevedores would unload it and stow the individual items, which might consist of items as varied as bags of coffee beans, boxes containing manufactured goods, barrels of wine or oil, and preserved food items such as salted fish or meat. These individual items were stored based upon the expertise of the gangs working the ship to make the most of the irregular space of the ship's cargo hold, and if the ship was to call upon multiple ports, in an order so cargo could be unloaded with minimal shifting of that bound for subsequent destinations on the voyage. Upon arrival at a port, this process was reversed to offload cargo bound there, and then the loading began again. It was not unusual for a cargo ship to spend 6 days or more in each port, unloading and loading, before the next leg on its voyage.

Shipping is both capital- and labour-intensive. The ship has to be financed and incurs largely fixed maintenance costs, and the crew must be paid regardless of whether they're at sea or waiting in port for cargo to be unloaded and loaded. This means that what engineers call the “duty cycle” of the ship is critical to its cost of operation and, consequently, what the shipowner must charge shippers to make a profit. A ship operating coastal routes in the U.S., say between New York and a port in the Gulf, could easily spend half its time in ports, running up costs but generating no revenue. This model of ocean transport, called break bulk cargo, prevailed from the age of sail until the 1970s.
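The duty-cycle point can be made concrete with a toy model (entirely my own; the daily cost, cargo tonnage, and port-time figures below are hypothetical, not from the book): when costs accrue every day but revenue is earned only per voyage, every extra day at the dock raises the break-even rate the owner must charge per ton.

```python
# Toy break-even model of a cargo ship's "duty cycle" (all figures
# hypothetical): fixed daily costs run continuously, but revenue is
# earned only on the cargo delivered each sea-plus-port cycle.
def breakeven_rate_per_ton(daily_cost, tons_per_voyage, sea_days, port_days):
    """Minimum revenue per ton needed to cover one full voyage cycle."""
    cycle_days = sea_days + port_days
    return daily_cost * cycle_days / tons_per_voyage

# Break bulk: roughly six days in each of two ports per round trip.
break_bulk = breakeven_rate_per_ton(5000, 8000, sea_days=10, port_days=12)
# Containerised: about a day per port call.
container = breakeven_rate_per_ton(5000, 8000, sea_days=10, port_days=2)
print(break_bulk, container)  # 13.75 vs 7.5 dollars per ton
```

Halving the cycle time nearly halves the rate the same ship must charge, before counting any savings in stevedore labour at all.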

Under the break bulk model, ocean transport was very expensive. Further, with cargos sitting in warehouses waiting for ships to arrive on erratic schedules, delivery times were not just long but also unpredictable. Goods shipped from a factory in the U.S. midwest to a destination in Europe would routinely take three months to arrive end to end, with an uncertainty measured in weeks, accounting for trucking, railroads, and ocean shipping involved in getting them to their destination. This meant that any importation of time-sensitive goods required keeping a large local inventory to compensate for unpredictable delivery times, and paying the substantial shipping cost included in their price. Economists, going back to Ricardo, often modelled shipping as free, but it was nothing of the kind, and was often the dominant factor in the location and structure of firms.

When shipping is expensive, firms have an advantage in being located in proximity to both their raw materials (or component suppliers) and customers. Detroit became the Motor City in large part because its bulk inputs, iron ore and coal, could be transported at low cost from mines to factories by ships plying the Great Lakes. Industries dependent on imports and exports would tend to cluster around major ports, since otherwise the cost of transporting their inputs and outputs overland from the nearest port would be prohibitive. And many companies simply concentrated on their local market, where transportation costs were not a major consideration in their cost structure. In 1964, when break bulk shipping was the norm, 40% of exports from Britain originated within 25 miles of their port of export, and two thirds of all imports were delivered to destinations a similar distance from their port of arrival.

But all of this was based upon the cost structure of break bulk ocean cargo shipping, and a similarly archaic way of handling rail and truck cargo. A manufacturing plant in Iowa might pack its goods destined for a customer in Belgium into boxes which were loaded onto a truck, driven to a terminal in Chicago where they were unloaded and reloaded into a boxcar, then sent by train to New Jersey, where they were unloaded and put onto a small ship to take them to the port of New York, where after sitting in a warehouse they'd be put onto a ship bound for a port in Germany. After arrival, they'd be transported by train, then trucked to the destination. Three months or so later, plus or minus a few, the cargo would arrive—at least that which wasn't stolen en route.

These long delays, and the uncertainty in delivery times, required those engaging in international commerce to maintain large inventories, which further increased the cost of doing business overseas. Many firms opted for vertical integration in their own local region.

Malcom McLean started his trucking company in 1934 with one truck and one driver, himself. What he lacked in capital (he often struggled to pay bridge tolls when delivering to New York), he made up in ambition, and by 1945, his company operated 162 trucks. He was a relentless cost-cutter, and from his own experience waiting for hours on New York docks for his cargo to be unloaded onto ships, in 1953 asked why shippers couldn't simply put the entire truck trailer on a ship rather than unload its cargo into the ship's hold, then unload it piece by piece at the destination harbour and load it back onto another truck. War surplus Liberty ships were available for almost nothing, and they could carry cargo between the U.S. northeast and south at a fraction of the cost of trucks, especially in the era before expressways.

McLean immediately found himself in a tangled web of regulatory and union constraints. Shipping, trucking, and railroads were all considered completely different businesses, each of which had accreted its own, often bizarre, government regulation and union work rules. The rate a carrier could charge for hauling a ton of cargo from point A to point B depended not upon its mass or volume, but what it was, with radically different rates for say, coal as opposed to manufactured goods. McLean's genius was in seeing past all of this obstructionist clutter and realising that what the customer—the shipper—wanted was not to purchase trucking, railroad, and shipping services, but rather delivery of the shipment, however accomplished, at a specified time and cost.

The regulatory mess made it almost impossible for a trucking company to own ships, so McLean created a legal structure which would allow his company to acquire a shipping line which had fallen on hard times. He then proceeded to convert a ship to carry containers, which would not be opened from the time they were loaded on trucks at the shipper's location until they arrived at the destination, and could be transferred between trucks and ships rapidly. Working out the details of the construction of the containers, setting their size, and shepherding all of this through a regulatory gauntlet which had never heard of such concepts was daunting, but the potential payoff was enormous. Loading break bulk cargo onto a ship the size of McLean's first container vessel cost US$ 5.83 per ton. Loading freight in containers cost US$ 0.16 per ton, a reduction of more than 97%. This reduction in cost, passed on to the shipper, made containerised freight compelling, and sparked a transformation in the global economy.

Consider Barbie. Her body is manufactured in China, using machines from Japan and Europe and moulds designed in the U.S. Her hair comes from Japan, the plastic for her body from Taiwan, dyed with U.S. pigments, and her clothes are produced in other factories in China. The final product is shipped worldwide. There are no large inventories anywhere in the supply chain: every step depends upon reliable delivery of containers of intermediate products. Managers setting up such a supply chain no longer care whether the products are transported by truck, rail, or sea, and since transportation costs for containers are so small compared to the value of their contents (and trade barriers such as customs duties have fallen), the location of suppliers and factories is based almost entirely upon cost, with proximity to resources and customers almost irrelevant. We think of the Internet as having abolished distance, but the humble ocean cargo container has done so for things as much as the Internet has for data.

This is a thoroughly researched and fascinating look at how the seemingly most humble technological innovation can have enormous consequences, and also how the greatest barriers to restructuring economies may be sclerotic government and government-enabled (union) structures which preserve obsolete models long after they have become destructive of prosperity. It also demonstrates how those who try to freeze innovation into a model fixed in the past will be bypassed by those willing to embrace a more efficient way of doing business. The container ports which handle most of the world's cargo are, for the most part, not the largest ports of the break bulk era. They are those which, unencumbered by history, were able to build the infrastructure required to shift containers at a rapid rate.

The Kindle edition has some flaws. In numerous places, spaces appear within words which don't belong there (perhaps words hyphenated across lines in the print edition and not re-joined?) and the index is just a list of searchable terms, not linked to references in the text.

 Permalink

Barry, John M. The Great Influenza. New York: Penguin, [2004] 2005. ISBN 978-0-14-303649-4.
In the year 1800, the practice of medicine had changed little from that in antiquity. The rapid progress in other sciences in the 18th century had had little impact on medicine, which one historian called “the withered arm of science”. This began to change as the 19th century progressed. Researchers, mostly in Europe and especially in Germany, began to lay the foundations for a scientific approach to medicine and public health, understanding the causes of disease and searching for means of prevention and cure. The invention of new instruments for medical examination, anesthesia, and antiseptic procedures began to transform the practice of medicine and surgery.

All of these advances were slow to arrive in the United States. As late as 1900 only one medical school in the U.S. required applicants to have a college degree, and only 20% of schools required a high school diploma. More than a hundred U.S. medical schools accepted any applicant who could pay, and many graduated doctors who had never seen a patient or done any laboratory work in science. In the 1870s, only 10% of the professors at Harvard's medical school had a Ph.D.

In 1873, Johns Hopkins died, leaving his estate of US$ 3.5 million to found a university and hospital. The trustees embarked on an ambitious plan to build a medical school to be the peer of those in Germany, and began to aggressively recruit European professors and Americans who had studied in Europe to build a world class institution. By the outbreak of World War I in Europe, American medical research and education, still concentrated in just a few centres of excellence, had reached the standard set by Germany. It was about to face its greatest challenge.

With the entry of the United States into World War I in April of 1917, millions of young men conscripted for service were packed into overcrowded camps for training and preparation for transport to Europe. These camps, thrown together on short notice, often had only rudimentary sanitation and shelter, with many troops living in tent cities. Large numbers of doctors and especially nurses were recruited into the Army, and by the start of 1918 many were already serving in France. Doctors remaining in private practice in the U.S. were often older men, trained before the revolution in medical education and ignorant of modern knowledge of diseases and the means of treating them.

In all American wars before World War I, more men died from disease than combat. In the Civil War, two men died from disease for every death on the battlefield. Army Surgeon General William Gorgas vowed that this would not be the case in the current conflict. He was acutely aware that the overcrowded camps, frequent transfers of soldiers among far-flung bases, crowded and unsanitary troop transport ships, and unspeakable conditions in the trenches were a tinderbox just waiting for the spark of an infectious disease to ignite it. But the demand for new troops for the front in France caused his cautions to be overruled, and still more men were packed into the camps.

Early in 1918, a doctor in rural Haskell County, Kansas began to treat patients with a disease he diagnosed as influenza. But this was nothing like the seasonal influenza with which he was familiar. In typical outbreaks of influenza, the people at greatest risk are the very young (whose immune systems have not been previously exposed to the virus) and the very old, who lack the physical resilience to withstand the assault by the disease. Most deaths are among these groups, leading to a “bathtub curve” of mortality. This outbreak was different: the young and elderly were largely spared, while those in the prime of life were struck down, with many dying quickly of symptoms which resembled pneumonia. Slowly the outbreak receded, and by mid-March things were returning to normal. (The location and mechanism where the disease originated remain controversial to this day and we may never know for sure. After weighing competing theories, the author believes the Kansas origin most likely, but other origins have their proponents.)

That would have been the end of it, had not soldiers from Camp Funston, the second largest Army camp in the U.S., with 56,000 troops, visited their families in Haskell County while on leave. They returned to camp carrying the disease. The spark had landed in the tinderbox. The disease spread outward as troop trains travelled between camps. Often a train would leave carrying healthy troops (infected but not yet symptomatic) and arrive with up to half the company sick and highly infectious to those at the destination. Before long the disease arrived via troop ships at camps and at the front in France.

This was just the first wave. The spring influenza was unusual in the age group it hit most severely, but was not particularly more deadly than typical annual outbreaks. Then in the fall the disease returned in a far more virulent form. It is theorised that under the chaotic conditions of wartime a mutant form of the virus had emerged and rapidly spread among the troops and then passed into the civilian population. The outbreak rapidly spread around the globe, and few regions escaped. It was particularly devastating to aboriginal populations in remote regions like the Arctic and Pacific islands who had not developed any immunity to influenza.

The pathogen in the second wave could kill directly within a day by destroying the lining of the lung and effectively suffocating the patient. The disease was so virulent and aggressive that some medical researchers doubted it was influenza at all and suspected some new kind of plague. Even those who recovered from the disease often had their immunity and defences against respiratory infection so impaired that, feeling well enough to return to work, they would quickly come down with a secondary infection of bacterial pneumonia which could kill them.

All of the resources of the new scientific medicine were thrown into the battle with the disease, with little or no impact upon its progression. The cause of influenza was not known at the time: some thought it was a bacterial disease while others suspected a virus. Further adding to the confusion was that influenza patients often had a secondary infection of bacterial pneumonia, and the organism which causes that disease was mis-identified as the pathogen responsible for influenza. Heroic efforts were made, but the state of medical science in 1918 was simply not up to the challenge posed by influenza.

A century later, influenza continues to defeat every attempt to prevent or cure it, and another global pandemic remains a distinct possibility. Supportive treatment in the developed world and the availability of antibiotics to prevent secondary infection by pneumonia will reduce the death toll, but a mass outbreak of the virus on the scale of 1918 would quickly swamp all available medical facilities and bring society to the brink as it did then. Even regular influenza kills between a quarter and a half million people a year. The emergence of a killer strain like that of 1918 could increase this number by a factor of ten or twenty.

Influenza is such a formidable opponent due to its structure. It is an RNA virus which, unusually for a virus, carries its genetic material not on a single strand but on seven or eight separate strands of RNA. Some researchers argue that in an organism infected with two or more variants of the virus these strands can mix to form new mutants, allowing the virus to mutate much faster than other viruses with a single strand of genetic material (this is controversial). The surface of the virus particle is studded with proteins called hemagglutinin (HA) and neuraminidase (NA). HA allows the virus to break into a target cell, while NA allows viruses replicated within the cell to escape to infect others.

What makes creating a vaccine for influenza so difficult is that these HA and NA proteins are what the body's immune system uses to identify the virus as an invader and kill it. But HA and NA come in a number of variants, and a specific strain of influenza may contain one from column H and one from column N, creating a large number of possibilities. For example, H1N2 is endemic in birds, pigs, and humans. H5N1 caused the bird flu outbreak in 2004, and H1N1 was responsible for the 1918 pandemic. It gets worse. As a child, when you are first exposed to influenza, your immune system will produce antibodies which identify and target the variant to which you were first exposed. If you were infected with and recovered from, say, H3N2, you'll be pretty well protected against it. But if, subsequently, you encounter H1N1, your immune system will recognise it sufficiently to crank out antibodies, but they will be coded to attack H3N2, not the H1N1 you're battling, against which they're useless. Influenza is thus a chameleon, constantly changing its colours to hide from the immune system.
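The “one from column H, one from column N” scheme is easy to make concrete. Here is a minimal Python sketch; the variant counts (18 HA and 11 NA subtypes identified to date) come from current virology references, not from the book:

```python
# Influenza A subtypes are named HxNy: one hemagglutinin (HA) variant
# paired with one neuraminidase (NA) variant. With 18 known HA and
# 11 known NA variants, there are 18 * 11 = 198 possible labels
# (only a handful circulate in humans at any given time).
HA_VARIANTS = range(1, 19)   # H1 .. H18
NA_VARIANTS = range(1, 12)   # N1 .. N11

subtypes = [f"H{h}N{n}" for h in HA_VARIANTS for n in NA_VARIANTS]

print(len(subtypes))       # 198
print("H1N1" in subtypes)  # True: the 1918 pandemic strain
print("H5N1" in subtypes)  # True: the 2004 bird flu strain
```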

Strains of influenza tend to come in waves, with one HxNy variant dominating for some number of years, then shifting to another. Developers of vaccines must play a guessing game about which you're likely to encounter in a given year. This explains why the 1918 pandemic particularly hit healthy adults. In the decades preceding the 1918 outbreak, the dominant strain had shifted away from H1N1 to another variant, which prevailed for decades until, after 1900, H1N1 came back to the fore. Consequently, when the deadly strain of H1N1 appeared in the fall of 1918, the immune systems of both young and elderly people were ready for it and protected them, but those in between had immune systems which, when confronted with H1N1, produced antibodies for the other variant, leaving them vulnerable.

With no medical defence against or cure for influenza even today, the only effective response in the case of an outbreak of a killer strain is public health measures such as isolation and quarantine. Influenza is airborne and highly infectious: the gauze face masks you see in pictures from 1918 were almost completely ineffective. The government response to the outbreak in 1918 could hardly have been worse. The military camps were nothing less than a culture medium, packing those in the most vulnerable age range into close proximity. Once the disease broke out and reports began to arrive that this was something new and extremely lethal, the troop trains and ships continued to run, due to orders from the top that more and more men had to be fed into the meat grinder that was the Western Front. This inoculated camp after camp. Then, when the disease jumped into the civilian population and began to devastate cities adjacent to military facilities such as Boston and Philadelphia, the press censors of Wilson's proto-fascist war machine decided that honest reporting of the extent and severity of the disease, or of measures aimed at slowing its spread, would impact “morale” and war production, so newspapers were ordered to either ignore it or print useless happy talk, which only accelerated the epidemic. The result was that in the hardest-hit cities, residents, confronted with a reality before their eyes which gave the lie to the propaganda they were hearing from authorities, retreated into fear and withdrawal, allowing neighbours to starve rather than risk infection by bringing them food.

As was known in antiquity, the only defence against an infectious disease with no known medical intervention is quarantine. In Western Samoa, the disease arrived in September 1918 on a German steamer. By the time the disease ran its course, 22% of the population of the islands was dead. Just a few kilometres across the ocean in American Samoa, authorities imposed a rigid quarantine and not a single person died of influenza.

We will never know the worldwide extent of the 1918 pandemic. Many of the hardest-hit areas, such as China and India, did not have the infrastructure to collect epidemiological data, and what little they had collapsed under the impact of the crisis. Estimates are that on the order of 500 million people worldwide were infected and that between 50 and 100 million died: three to five percent of the world's population.
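Those figures are easy to sanity-check. Taking a world population of roughly 1.8 billion in 1918 (an outside estimate, not a number from the book):

```python
# Dividing the estimated death toll by the 1918 world population
# reproduces the "three to five percent" figure quoted above.
world_pop = 1.8e9                      # approximate 1918 world population
deaths_low, deaths_high = 50e6, 100e6  # estimated pandemic deaths

print(f"{deaths_low / world_pop:.1%}")   # 2.8%
print(f"{deaths_high / world_pop:.1%}")  # 5.6%
```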

Researchers do not know why the 1918 second wave pathogen was so lethal. The genome has been sequenced and nothing jumps out from it as an obvious cause. Understanding its virulence may require recreating the monster and experimenting with it in animal models. Obviously, this is not something which should be undertaken without serious deliberation beforehand and extreme precautions, but it may be the only way to gain the knowledge needed to treat those infected should a similar wild strain emerge in the future. (It is possible this work may have been done but not published because it could provide a roadmap for malefactors bent on creating a synthetic plague. If this be the case, we'll probably never know about it.)

Although medicine has made enormous strides in the last century, influenza, which defeated the world's best minds in 1918, remains a risk, and in a world with global air travel moving millions between dense population centres, an outbreak today would be even harder to contain. Let us hope that in that dire circumstance authorities will have the wisdom and courage to take the kind of dramatic action which can make the difference between a regional tragedy and a global cataclysm.

 Permalink

November 2014

Schlosser, Eric. Command and Control. New York: Penguin, 2013. ISBN 978-0-14-312578-5.
On the evening of September 18th, 1980 two U.S. Air Force airmen, members of a Propellant Transfer System (PTS) team, entered a Titan II missile silo near Damascus, Arkansas to perform a routine maintenance procedure. Earlier in the day they had been called to the site because a warning signal had indicated that pressure in the missile's second stage oxidiser tank was low. This was not unusual, especially for a missile which had recently been refuelled, as this one had, and the procedure of adding nitrogen gas to the tank to bring the pressure up to specification was considered straightforward. That is, if you consider any work involving a Titan II “routine” or “straightforward”. The missile, in an underground silo, protected by a door weighing more than 65 tonnes and able to withstand the 300 psi overpressure of a nearby nuclear detonation, stood more than 31 metres high and contained 143 tonnes of highly toxic fuel and oxidiser which, in addition to being poisonous to humans in small concentrations, were hypergolic: they burst into flames upon contact with one another, with no need of a source of ignition. Sitting atop this volatile fuel was a W-53 nuclear warhead with a yield of 9 megatons, and high explosives in the fission primary which, unlike those in more modern nuclear weapons, were not insensitive to shock and fire. While it was unlikely in the extreme that detonation of these explosives due to an accident would result in a nuclear explosion, they could disperse the radioactive material in the bomb over the local area, requiring a massive clean-up effort.

The PTS team worked on the missile wearing what amounted to space suits with their own bottled air supply. One member was an experienced technician while the other was a 19-year-old rookie receiving on-the-job training. Early in the procedure, the team was to remove the pressure cap from the side of the missile. While the lead technician was turning the cap with a socket wrench, the socket fell off the wrench and down the silo alongside the missile. The socket struck the thrust mount supporting the missile, bounced back upward, and struck the side of the missile's first stage fuel tank. Fuel began to spout outward as if from a garden hose. The trainee remarked, “This is not good.”

Back in the control centre, separated from the silo by massive blast doors, the two-man launch team, who had been following the servicing operation, saw their status panels light up like a Christmas tree decorated by somebody inordinately fond of the colour red. The warnings were contradictory and clearly not all correct. Had there indeed been both fuel and oxidiser leaks, as indicated, there would already have been an earth-shattering kaboom from the silo, and yet that had not happened. The technicians knew they had to evacuate the silo as soon as possible, but their evacuation route was blocked by dense fuel vapour.

The Air Force handles everything related to missiles by the book, but the book was silent about procedures for a situation like this, with massive quantities of toxic fuel pouring into the silo. Further, communication between the technicians and the control centre was poor, so it wasn't clear at first just what had happened. Before long, the commander of the missile wing, headquarters of the Strategic Air Command (SAC) in Omaha, and the missile's manufacturer, Martin Marietta, were in conference trying to decide how to proceed. The greatest risks were an electrical spark or other source of ignition setting the fuel on fire or, even greater, the missile collapsing in the silo. With tonnes of fuel pouring from the fuel tank and no vent at its top, pressure in the tank would continue to fall. Eventually it would drop below atmospheric pressure and the tank would be crushed, likely leading the missile to crumple under the weight of the intact and fully loaded first stage oxidiser and second stage tanks. These tanks would then likely be breached, leading to an explosion. No Titan II had ever exploded in a closed silo, so there was no experience as to what the consequences of this might be.

As the night proceeded, all of the Carter era military malaise became evident. The Air Force lied to local law enforcement and media about what was happening, couldn't communicate with first responders, failed to send an evacuation helicopter for a gravely injured person because an irrelevant piece of equipment wasn't available, and could not come to a decision about how to respond as the situation deteriorated. Also on display was the heroism of individuals, in the Air Force and outside, who took matters into their own hands on the spot, rescued people, monitored the situation, evacuated nearby farms in the path of toxic clouds, and improvised as events required.

Amid all of this, nothing whatsoever had been done about the situation of the missile. Events inevitably took their course. In the early morning hours of September 19th, the missile collapsed, releasing all of its propellants, which exploded. The 65 tonne silo door was thrown 200 metres, shearing trees in its path. The nuclear warhead was thrown 200 metres in another direction, coming to rest in a ditch. Its explosives did not detonate, and no radiation was released.

While there were plenty of reasons to worry about nuclear weapons during the Cold War, most people's concerns were about a conflict escalating to the deliberate use of nuclear weapons or the possibility of an accidental war. Among the general public there was little concern about the tens of thousands of nuclear weapons in depots, aboard aircraft, atop missiles, or on board submarines—certainly every precaution had been taken by the brilliant people at the weapons labs to make them safe and reliable, right?

Well, that was often the view among “defence intellectuals” until they were briefed in on the highly secret details of weapons design and the command and control procedures in place to govern their use in wartime. As documented in this book, which uses the Damascus accident as a backdrop (a ballistic missile explodes in rural Arkansas, sending its warhead through the air, because somebody dropped a socket wrench), the reality was far from reassuring, and it took decades, often against obstructionism and foot-dragging from the Pentagon, to remedy serious risks in the nuclear stockpile.

In the early days of the U.S. nuclear stockpile, it was assumed that nuclear weapons were the last resort in a wartime situation. Nuclear weapons were kept under the civilian custodianship of the Atomic Energy Commission (AEC), and would only be released to the military services by a direct order from the President of the United States. Further, the nuclear cores (“pits”) of weapons were stored separately from the rest of the weapon assembly, and would only be inserted in the weapon, in the case of bombers, in the air, after the order to deliver the weapon was received. (This procedure had been used even for the two bombs dropped on Japan.) These safeguards meant that the probability of an accidental nuclear explosion was essentially nil in peacetime, although the risk did exist of radioactive contamination if a pit were dispersed due to fire or explosion.

As the 1950s progressed, and fears of a Soviet sneak attack grew, pressure grew to shift the custodianship of nuclear weapons to the military. The development of nuclear tactical and air defence weapons, some of which were to be forward deployed outside the United States, added weight to this argument. If radar detected a wave of Soviet bombers heading for the United States, how practical would it be to contact the President, get him to sign off on transferring the anti-aircraft warheads to the Army and Air Force, have the AEC deliver them to the military bases, install them on the missiles, and prepare the missiles for launch? The missile age only compounded this situation. Now the risk existed for a “decapitation” attack which could take out the senior political and military leadership, leaving nobody with the authority to retaliate.

The result of all this was a gradual devolution of control over nuclear weapons from civilian to military commands, with fully-assembled nuclear weapons loaded on aircraft, sitting at the ends of runways in the United States and Europe, ready to take off on a few minutes' notice. As tensions continued to increase, B-52s, armed with hydrogen bombs, were on continuous “airborne alert”, ready at any time to head toward their targets.

The weapons carried by these aircraft, however, had not been designed for missions like this. They used high explosives which could be detonated by heat or shock, often contained few interlocks to prevent a stray electrical signal from triggering a detonation, were not “one point safe” (guaranteed that detonation of one segment of the high explosives could not cause a nuclear yield), and did not contain locks (“permissive action links”) to prevent unauthorised use of a weapon. Through much of the height of the Cold War, it was possible for a rogue B-52 or tactical fighter/bomber crew to drop a weapon which might start World War III; the only protection against this was rigid psychological screening and the enemy's air defence systems.

The resistance to introducing such safety measures stemmed from budget and schedule pressures, but also from what was called the “always/never” conflict. A nuclear weapon should always detonate when sent on a wartime mission. But it should never detonate under any other circumstances, including an airplane crash, technical malfunction, maintenance error, or through the deliberate acts of an insane or disloyal individual or group. These imperatives inevitably conflict with one another. The more safeguards you design into a weapon to avoid an unauthorised detonation, the greater the probability one of them may fail, rendering the weapon inert. SAC commanders and air crews were not enthusiastic about the prospect of risking their lives running the gauntlet of enemy air defences only to arrive over their target and drop a dud.

As documented here, it was only after the end of the Cold War, as nuclear weapon stockpiles were drawn down, that the more dangerous weapons were retired and command and control procedures put into place which seem (to the extent outsiders can assess such highly classified matters) to provide a reasonable balance between protection against a catastrophic accident or unauthorised launch and a reliable deterrent.

Nuclear command and control extends far beyond the design of weapons. The author also discusses in detail the development of war plans, how civilian and military authorities interact in implementing them, how emergency war orders are delivered, authenticated, and executed, and how this entire system must be designed to be robust against errors not only when intact and operating as intended, but also in the aftermath of an attack.

This is a serious scholarly work and, at 632 pages, a long one. There are 94 pages of end notes, many of which expand substantially upon items in the main text. A Kindle edition is available.

 Permalink

Metzger, Th. Undercover Mormon. New York: Roadswell Editions, 2013.
The author, whose spiritual journey had earlier led him to dabble with becoming a Mennonite, goes weekly to an acupuncturist named Rudy Kilowatt who believes in the power of crystals, and attends neo-pagan fertility rituals in a friend's suburban back yard. He had been oddly fascinated by Mormonism ever since, as a teenager, he attended the spectacular annual Mormon pageant at Hill Cumorah, near his home in upstate New York.

He returned again and again for the spectacle of the pageant, and based upon his limited knowledge of Mormon doctrine, found himself admiring how the religion seemed to have it all: “All religion is either sword and sorcery or science fiction. The reason Mormonism is growing so fast is that you guys have both, and don't apologize for either.” He decides to pursue this Mormon thing further, armouring himself in white shirt, conservative tie, and black pants, and heading off to the nearest congregation for the Sunday service.

Approached by missionaries who spot him as a newcomer, he masters his anxiety (bolstered by the knowledge he has a couple of Xanax pills in his pocket), gives a false name, and indicates he's interested in learning more about the faith. Before long he's attending Sunday school, reading tracts, and spinning into the Mormon orbit, with increasing suggestions that he might convert.

All of this is described in a detached, ironic manner, in which the reader (and perhaps the author) can't decide how seriously to take it all. Metzger carries magic talismans to protect himself against the fearful “Mormo”, and describes his anxiety to his psychoanalyst, who prescribes the pharmaceutical version of magic bones. He struggles with paranoia about his deception being found out and agonises over the consequences. He consults a friend of whom he says, “For a while he was an old-order Quaker, then a Sufi, then a retro-neo-pagan. Now he's a Unitarian-Universalist professor of history.”

The narrative is written in the tediously quaint “new journalism” style where it's as much about the author as the subject. This works poorly here because the author isn't very interesting. He comes across as so neurotic and self-absorbed as to make Woody Allen seem like Clint Eastwood. His “discoveries” about the content of LDS scripture could have been made just as easily by reading the original documents on the LDS Web site, and his exploration of the history of Joseph Smith and the early days of Mormonism in New York could have been accomplished by consulting Wikipedia. His antics, such as burying chicken bones around the obelisk of Moroni on Hill Cumorah and digging up earth from the grave of Luman Walter to spread it in the sacred grove, push irony past the point of parody—does anybody believe the author took such things seriously (and if he did, why should anybody care what he thinks about anything)?

The book does not mock Mormonism, and treats the individuals he encounters on his journey more or less respectfully (with just that little [and utterly unjustified] “I'm better than you” that the hip intellectual has for earnest, clean-cut, industrious people who are “as white as angel food cake, and almost as spongy.”) But you'll learn nothing about the history and doctrine of the religion here that you won't find elsewhere without all the baggage of the author's tiresome “adventures”.

 Permalink

Rawles, James Wesley. Liberators. New York: Dutton, 2014. ISBN 978-0-525-95391-3.
This novel is the fifth in the series which began with Patriots (December 2008), then continued with Survivors (January 2012), Founders (October 2012), and Expatriates (October 2013). These books are not a conventional multi-volume narrative: all describe events in the lives of their characters in roughly the same time period surrounding “the Crunch”—a grid-down societal collapse due to a debt crisis and hyperinflation. Because they take place at the same time, you can read these books in any order, but if you haven't read the earlier novels you'll miss much of the back-story of the characters who appear here, which informs the parts they play in this episode.

Here the story cuts back and forth between the United States, where Megan LaCroix and her sister Malorie live on a farm in West Virginia with Megan's two boys, and Joshua Kim works in security at the National Security Agency where Megan is an analyst. When the Crunch hits, Joshua and the LaCroix sisters decide to team up to bug out to Joshua's childhood friend's place in Kentucky, where survival from the urban Golden Horde may be better assured. They confront the realities of a collapsing society, where the rule of law is supplanted by extractive tyrannies, and are forced to over-winter in a wilderness, living by their wits and modest preparations.

In Western Canada, the immediate impact of the Crunch was less severe because electrical power, largely hydroelectric, remained on. At the McGregor Ranch, in inland British Columbia (a harsh, northern continental climate nothing like that of Vancouver), the family and those who have taken refuge with them ride out the initial crisis only to be confronted with an occupation of Canada by a nominally United Nations force called UNPROFOR, which is effectively a French colonial force which, in alliance with effete urban eastern and francophone Canada, seeks to put down the fractious westerners and control the resource-rich land they inhabit.

This leads to an asymmetrical war of resistance, aided by the fact that when earlier faced with draconian gun registration and prohibition laws imposed by easterners, a large number of weapons in the west simply vanished, only to reappear when they were needed most. As was demonstrated in Vietnam and Algeria, French occupation forces can be tenacious and brutal, but are ultimately no match for an indigenous insurgency with the support of the local populace. A series of bold strikes against UNPROFOR assets eventually turns the tide.

But just when Canada seems ready to follow the U.S. out of the grip of tyranny, an emboldened China, already on the march in Africa, makes a move to seize western Canada's abundant natural resources. Under the cover of a UN resolution, a massive Chinese force, with armour and air support, occupies the western provinces. This is an adversary of an entirely different order than the French, and will require the resistance, supported by allies from the liberation struggle in the U.S., to undertake audacious and heroic exploits, including one of the greatest acts of monkey-wrenching ever described in a thriller.

As this story has developed over the five novels, the author has matured into a first-rate thriller novelist. There is still plenty of information on gear, tactics, intelligence operations, and security, but the characters are interesting and well-developed, and the action scenes are both plausible and exciting. In the present book, we encounter many characters we've met in previous volumes, with their paths crossing as events unfold. There is no triumphalism or glossing over the realities of insurgent warfare against a tyrannical occupying force. There is a great deal of misery and hardship, and sometimes tragedy can result when you've taken every precaution and made no mistake, but simply run out of luck.

Taken together, these five novels are an epic saga of survival in hard and brutal times, painted on a global canvas. Reading them, you will not only be inspired that you and your loved ones can survive such a breakdown in the current economic and social order, but you will also learn a great deal of the details of how to do so. This is not a survival manual, but attentive readers will find many things to research further for their own preparations for an uncertain future. An excellent place to begin that research is the author's own survivalblog.com Web site, whose massive archives you can spend months exploring.

 Permalink

Weir, Andy. The Martian. New York: Broadway Books, [2011] 2014. ISBN 978-0-553-41802-6.
Mark Watney was part of the six-person crew of Ares 3, which landed on Mars to carry out an exploration mission in the vicinity of its landing site in Acidalia Planitia. The crew made a precision landing at the target where “presupply” cargo flights had already landed their habitation module, supplies for their stay on Mars, rovers and scientific instruments, and the ascent vehicle they would use to return to the Earth-Mars transit vehicle waiting for them in orbit. Just six days after landing, having set up the habitation module and unpacked the supplies, they are struck by a dust storm of unprecedented ferocity. With winds up to 175 kilometres per hour, the Mars Ascent Vehicle (MAV), already fuelled by propellant made on Mars by reacting hydrogen brought from Earth with the Martian atmosphere, was at risk of being blown over, which would destroy the fragile spacecraft and strand the crew on Mars. NASA gives the order to abort the mission and evacuate to orbit in the MAV for an immediate return to Earth.

But the crew first has to get from the habitation module to the MAV, which means walking across the surface in the midst of the storm. (You'd find it very hard to walk in a 175 km/h wind on Earth, but recall that the atmospheric pressure on Mars is only about 1/200 that of Earth at sea level, so the wind doesn't pack anywhere near the punch.) Still, there was dust and flying debris from equipment ripped loose from the landers. Five members of the crew made it to the MAV. Mark Watney didn't.
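The “doesn't pack anywhere near the punch” claim can be quantified with the dynamic pressure formula q = ½ρv². The densities below are rough textbook values (about 1.225 kg/m³ for Earth at sea level, about 0.020 kg/m³ at the Martian surface), not figures from the novel:

```python
# Dynamic pressure q = 1/2 * rho * v^2 measures the force per unit
# area a wind exerts; it scales linearly with atmospheric density.
rho_earth = 1.225   # kg/m^3, Earth sea level (approximate)
rho_mars = 0.020    # kg/m^3, Mars surface (approximate)
v = 175 / 3.6       # 175 km/h converted to m/s

q_earth = 0.5 * rho_earth * v ** 2   # roughly 1450 Pa
q_mars = 0.5 * rho_mars * v ** 2     # roughly 24 Pa

# Earth wind speed delivering the same dynamic pressure as the Mars storm
v_equiv = (2 * q_mars / rho_earth) ** 0.5 * 3.6   # back to km/h

print(f"{q_earth:.0f} Pa on Earth vs {q_mars:.1f} Pa on Mars")
print(f"The Mars storm pushes like a {v_equiv:.0f} km/h breeze on Earth")
```

By this rough estimate the storm would push no harder than a gentle breeze on Earth, which is why the parenthetical above notes how little punch Martian wind carries.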

As the crew made the traverse to the MAV, Watney was struck by part of an antenna array torn from the habitation, puncturing his suit and impaling him. He was carried away by the wind, and the rest of the crew, seeing his vital signs go to zero before his suit's transmitter failed, followed mission rules to leave him behind and evacuate in the MAV while they still could.

But Watney wasn't dead. His injury was not fatal, and the blood which leaked from it sealed the hole in the suit where the antenna had pierced it, as the water in the blood boiled off and the residue mostly sealed the breach. Awakening after the trauma, he made an immediate assessment of his situation. I'm alive. Cool! I hurt like heck. Not cool. The habitation module is intact. Yay! The MAV is gone—I'm alone on Mars. Dang!

“Dang” is not precisely how Watney put it. This book contains quite a bit of profanity which I found gratuitous. NASA astronauts in the modern era just don't swear like sailors, especially on open air-to-ground links. Sure, I can imagine launching a full salvo of F-bombs upon discovering I'd been abandoned on Mars, especially when I'm just talking to myself, but everybody seems to do it here on all occasions. This is the only reason I'd hesitate to recommend this book to younger readers who would otherwise be inspired by the story.

Watney is stranded on Mars with no way to communicate with Earth, since all communications were routed through the MAV, which has departed. He has all of the resources for a six-person mission, so he has no immediate survival problems after he gets back to the habitation and stitches up his wound, but he can work the math: even if he can find a way to communicate to Earth that he's still alive, orbital mechanics dictates that it will take around two years to send a rescue mission. His supplies cannot be stretched that far.
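The roughly two-year figure is essentially the Earth-Mars synodic period, the interval between successive favourable launch geometries, which follows directly from the two planets' orbital periods (a standard calculation, not something spelled out in the novel):

```python
# Synodic period: how often Earth and Mars return to the same
# relative geometry, given by 1 / (1/T_earth - 1/T_mars).
T_EARTH = 365.25  # days per orbit
T_MARS = 687.0    # days per orbit (approximate)

synodic = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"{synodic:.0f} days, about {synodic / 365.25:.1f} years")  # ~780 days
```

A rescue launched at the next window still needs many months in transit on top of that, which is why Watney's arithmetic is so grim.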

This sets the stage for a gripping story of survival, improvisation, difficult decisions, necessity versus bureaucratic inertia, trying to do the right thing in a media fishbowl, and all done without committing any howlers in technology, orbital mechanics, or the way people and organisations behave. Sure, you can quibble about this or that detail, but then people far in the future may regard a factual account of Apollo 13 as largely legend, given how many things had to go right to rescue the crew. Things definitely do not go smoothly here: there is reverse after reverse, and many inscrutable mysteries to be unscrewed if Watney is to get home.

This is an inspiring tale of pioneering on a new world. People have already begun to talk about going to Mars to stay. These settlers will face stark challenges, though one hopes not as dire as Watney's, and with the confidence of regular re-supply missions and new settlers to follow. Perhaps this novel will be seen, among the first generation born on Mars, as inspiration that the challenges they face in bringing a barren planet to life are within the human capacity to solve, especially if their media library isn't exclusively populated with 70s TV shows and disco.

A Kindle edition is available.


December 2014

Wade, Nicholas. A Troublesome Inheritance. New York: Penguin Press, 2014. ISBN 978-1-59420-446-3.
Geographically isolated populations of a species (unable to interbreed with others of their kind) will be subject to natural selection based upon their environment. If that environment differs from that of other members of the species, the isolated population will begin to diverge genetically, as genetic endowments which favour survival and more offspring are selected for. If the isolated population is sufficiently small, the mechanism of genetic drift may cause a specific genetic variant to become almost universal or absent in that population. If this process is repeated for a sufficiently long time, isolated populations may diverge to such a degree they can no longer interbreed, and therefore become distinct species.
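The effect of population size on drift is easy to see in a toy Wright-Fisher simulation (my illustration, not from the book; the population sizes and random seed are arbitrary): a neutral allele reaches fixation or loss far sooner in a small population.

```python
import random

def wright_fisher(pop_size, p0=0.5, max_gen=20_000, rng=None):
    """Neutral genetic drift of a single allele in a diploid population
    of size pop_size: return the generation at which the allele is
    fixed (frequency 1.0) or lost (frequency 0.0)."""
    rng = rng or random.Random(42)  # fixed seed for repeatability
    p = p0
    for gen in range(1, max_gen + 1):
        # Each of the 2N gene copies in the next generation is an
        # independent draw from the current allele frequency.
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p in (0.0, 1.0):
            return gen
    return max_gen

print("N = 25: ", wright_fisher(25), "generations to fixation or loss")
print("N = 500:", wright_fisher(500), "generations to fixation or loss")
```

Theory puts the mean absorption time near 2.8 N generations for a starting frequency of one half, so a small isolated band can fix or lose a variant in a few dozen generations while a large population takes thousands.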

None of this is controversial when discussing other species, but in some circles to suggest that these mechanisms apply to humans is the deepest heresy. This well-researched book examines the evidence, much from molecular biology which has become available only in recent years, for the diversification of the human species into distinct populations, or “races” if you like, after its emergence from its birthplace in Africa. In this book the author argues that human evolution has been “recent, copious, and regional” and presents the genetic evidence to support this view.

A few basic facts should be noted at the outset. All humans are members of a single species, and all can interbreed. Humans, as a species, have an extremely low genetic diversity compared to most other animal species: this suggests that our ancestors went through a genetic “bottleneck” where the population was reduced to a very small number, causing the variation observed in other species to be lost through genetic drift. You might expect different human populations to carry different genes, but this is not the case—all humans have essentially the same set of genes. Variation among humans is mostly a result of individuals carrying different alleles (variants) of a gene. For example, eye colour in humans is entirely inherited: a baby's eye colour is determined completely by the alleles of various genes inherited from the mother and father. You might think that variation among human populations is then a question of their carrying different alleles of genes, but that too is an oversimplification. Human genetic variation is, in most cases, a matter of the frequency of alleles among the population.

This means that almost any generalisation about the characteristics of individual members of human populations with different evolutionary histories is ungrounded in fact. The variation among individuals within populations is generally much greater than that of populations as a whole. Discrimination based upon an individual's genetic heritage is not just abhorrent morally but scientifically unjustified.

Based upon these now well-established facts, some have argued that “race does not exist” or is a “social construct”. While this view may be motivated by a well-intentioned desire to eliminate discrimination, it is increasingly at variance with genetic evidence documenting the history of human populations.

Around 200,000 years ago, modern humans emerged in Africa. They spent more than three quarters of their history in that continent, spreading to different niches within it and developing a genetic diversity which today is greater than that of all humans in the rest of the world. Around 50,000 years before the present, by the genetic evidence, a small band of hunter-gatherers left Africa for the lands to the north. Then, some 30,000 years ago, the descendants of these bands who migrated to the east and west largely ceased to interbreed and separated into what we now call the Caucasian and East Asian populations. These, along with the African population, have remained the three main groups within the human species. Subsequent migrations and isolations have created other populations such as Australian and American aborigines, but their differentiation from the three main races is less distinct. Later migrations, conquest, and intermarriage have blurred the distinctions between these groups, but the fact is that almost any child, shown a picture of a person of European, African, or East Asian ancestry, can almost always effortlessly and correctly identify their area of origin. University professors, not so much: it takes an intellectual to deny the evidence of one's own eyes.

As these largely separated populations adapted to their new homes, selection operated upon their genomes. In the ancestral human population children lost the ability to digest lactose, the sugar in milk, after being weaned from their mothers' milk. But in populations which domesticated cattle and developed dairy farming, parents who passed on an allele which would allow their children to drink cow's milk their entire life would have more surviving offspring and, in a remarkably short time on the evolutionary scale, lifetime lactose tolerance became the norm in these areas. Among populations which never raised cattle or used them only for meat, lifetime lactose tolerance remains rare today.
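How “remarkably short” can such a sweep be? Under simple genic selection the log-odds of the favoured allele grow by ln(1+s) each generation, so the spread can be sketched with a textbook recurrence (illustrative parameters of my own choosing, not figures from the book):

```python
def generations_to_spread(s, p0=0.01, target=0.90):
    """Generations for an allele with selective advantage s to rise
    from frequency p0 to target, under the genic-selection
    recurrence p' = p(1+s) / (1 + s*p)."""
    p, gen = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + s * p)
        gen += 1
    return gen

gens = generations_to_spread(0.05)  # a 5% survival/fertility edge
print(f"{gens} generations, roughly {gens * 25} years at 25 yr/generation")
```

Even a modest 5% advantage carries an allele from rarity to near-universality in a few thousand years: an eye-blink on the evolutionary timescale, consistent with lactase persistence arising only after dairying began.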

Humans in Africa originally lived close to the equator and had dark skin to protect them from the ultraviolet radiation of the Sun. As human bands occupied northern latitudes in Europe and Asia, dark skin would prevent them from being able to synthesise sufficient Vitamin D from the wan, oblique sunlight of northern winters. These populations were under selection pressure for alleles of genes which gave them lighter skin, but interestingly Europeans and East Asians developed completely different genetic means to lighten their skin. The selection pressure was the same, but evolution blundered into two distinct pathways to meet the need.

Can genetic heritage affect behaviour? There's evidence it can. Humans carry a gene called MAO-A, which breaks down neurotransmitters that affect the transmission of signals within the brain. Experiments in animals have provided evidence that under-production of MAO-A increases aggression, and humans with lower levels of MAO-A are found to be more likely to commit violent crime. MAO-A production is regulated by a short sequence of DNA adjacent to the gene: humans may have anywhere from two to five copies of the promoter; the more copies you have, the more MAO-A you produce, and hence the mellower you're likely to be. Well, actually, people with three to five copies are indistinguishable, but those with only two (2R) show higher rates of delinquency. Among men of African ancestry, 5.5% carry the 2R variant, while 0.1% of Caucasian males and 0.00067% of East Asian men do. Make of this what you will.

The author argues that just as the introduction of dairy farming tilted the evolutionary landscape in favour of those bearing the allele which allowed them to digest milk into adulthood, the transition of tribal societies to cities, states, and empires in Asia and Europe exerted a selection pressure upon the population which favoured behavioural traits suited to living in such societies. While a tribal society might benefit from producing a substantial population of aggressive warriors, an empire has little need of them: its armies are composed of soldiers, courageous to be sure, who follow orders rather than charging independently into battle. In such a society, the genetic traits which are advantageous in a hunter-gatherer or tribal society will be selected out, as those carrying them will, if not expelled or put to death for misbehaviour, be unable to raise as large a family in these settled societies.

Perhaps what has been happening over the last five millennia or so is a domestication of the human species. Precisely as humans have bred animals to live with them in close proximity, human societies have selected for humans who are adapted to prosper within them. Those who conform to the social hierarchy, work hard, and come up with new ideas without disrupting the social structure will have more children and, over time, whatever genetic predispositions there may be for these characteristics (which we don't know today) will become increasingly common in the population. It is intriguing that as humans settled into fixed communities, their skeletons became less robust. This same process of gracilisation is seen in domesticated animals compared to their wild congeners. Certainly there have been as many human generations since the emergence of these complex societies as have sufficed to produce major adaptation in animal species under selective breeding.

Far more speculative and controversial is whether this selection process has been influenced by the nature of the cultures and societies which create the selection pressure. East Asian societies tend to be hierarchical, obedient to authority, and organised on a large scale. European societies, by contrast, are fractious, fissiparous, and prone to bottom-up insurgencies. Is this in part the result of genetic predispositions which have been selected for over millennia in societies which work that way?

It is assumed by many right-thinking people that all that is needed to bring liberty and prosperity to those regions of the world which haven't yet benefited from them is to create the proper institutions, educate the people, and bootstrap the infrastructure, then stand back and watch them take off. Well, maybe—but the history of colonialism, the mission civilisatrice, and various democracy projects and attempts at nation building over the last two centuries may suggest it isn't that simple. The population of the colonial, conquering, or development-aid-giving power has the benefit of millennia of domestication and adaptation to living in a settled society with division of labour. Its adaptations for tribalism have been largely bred out. Not so in many cases for the people they're there to “help”. Withdraw the colonial administration or occupation troops and before long tribalism will re-assert itself because that's the society for which the people are adapted.

Suggesting things like this is anathema in academia or political discourse. But look at the plain evidence of post-colonial Africa and more recent attempts of nation-building, and couple that with the emerging genetic evidence of variation in human populations and connections to behaviour and you may find yourself thinking forbidden thoughts. This book is an excellent starting point to explore these difficult issues, with numerous citations of recent scientific publications.


Thorne, Kip. The Science of Interstellar. New York: W. W. Norton, 2014. ISBN 978-0-393-35137-8.
Christopher Nolan's 2014 film Interstellar was eagerly awaited by science fiction enthusiasts who, having been sorely disappointed so many times by movies that crossed the line into fantasy by making up entirely implausible things to move the plot along, hoped that this effort would live up to its promise of getting the science (mostly) right and employing scientifically plausible speculation where our present knowledge is incomplete.

The author of the present book is one of the most eminent physicists working in the field of general relativity (Einstein's theory of gravitation) and a pioneer in exploring the exotic strong field regime of the theory, including black holes, wormholes, and gravitational radiation. Prof. Thorne was involved in the project which became Interstellar from its inception, and worked closely with the screenwriters, director, and visual effects team to get the science right. Some of the scenes in the movie, such as the visual appearance of orbiting a rotating black hole, have never been rendered accurately before, and are based upon original work by Thorne in computing light paths through spacetime in its vicinity which will be published as professional papers.

Here, the author recounts the often bumpy story of the movie's genesis and progress over the years from his own, Hollywood-outsider, perspective, how the development of the story presented him, as technical advisor (he is credited as an executive producer), with problem after problem in finding a physically plausible solution, sometimes requiring him to do new physics. Then, Thorne provides a popular account of the exotic physics on which the story is based, including gravitational time dilation, black holes, wormholes, and speculative extra dimensions and “brane” scenarios stemming from string theory. Then he “interprets” the events and visual images in the film, explaining (where possible) how they could be produced by known, plausible, or speculative physics. Of course, this isn't always possible—in some cases the needs of story-telling or the requirement not to completely baffle a non-specialist with bewilderingly complicated and obscure images had to take priority over scientific authenticity, and when this is the case Thorne is forthright in admitting so.
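As a reminder of the first of those effects (the standard general-relativistic result, stated here in conventional notation rather than the book's): a clock at rest at radius r outside a non-rotating mass M ticks slower than a distant observer's clock by the factor

```latex
\frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}
```

The film's black hole is rotating, which changes the details of the formula but not the qualitative effect: time runs extremely slowly deep in a strong gravity well.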

Sections are labelled with icons identifying them as “truth” (generally accepted by those working in the field, often with experimental evidence), “educated guess” (a plausible inference from accepted physics, but without experimental evidence, and assuming existing laws of physics remain valid in circumstances under which we've never tested them), and “speculation” (wild and woolly stuff, for example quantum gravity or the interior structure of a black hole, which violates no known law of physics, but for which we have no complete and consistent theory and no evidence whatsoever).

This is a clearly written and gorgeously illustrated book which, for those who enjoyed the movie but weren't entirely clear whence came some of the stunning images they saw, will explain the science behind them. The cover of the book has a “SPOILER ALERT” warning potential readers that the ending and major plot details are given away in the text. I will refrain from discussing them here so as not to make this a spoiler in itself. I have not yet seen the movie, and I expect when I do I will enjoy it more for having read the book, since I'll know what to look for in some of the visuals and be less likely to dismiss some of the apparently outrageous occurrences, knowing that there is a physically plausible (albeit extremely speculative and improbable) explanation for them.

For the animations and blackboard images mentioned in the text, the book directs you to a Web site which is so poorly designed and difficult to navigate it took me ten minutes to find them on the first visit. Here is a direct link. In the Kindle edition the index cites page numbers in the print edition which are useless since the electronic edition does not contain real page numbers. There are a few typographical errors and one factual howler: Io is not “Saturn's closest moon”, and Cassini was captured in Saturn orbit by a propulsion burn, not a gravitational slingshot (this does not affect the movie in any way: it's in background material).


Thor, Brad. Hidden Order. New York: Pocket Books, 2013. ISBN 978-1-4767-1710-4.
This is the thirteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Earlier novels have largely been in the mainstream of the “techno-thriller” genre, featuring missions in exotic locations confronting shadowy adversaries bent on inflicting great harm. The present book is a departure from this formula, being largely set in the United States and involving institutions considered pillars of the establishment such as the Federal Reserve System and the Central Intelligence Agency.

A CIA operative “accidentally” runs into a senior intelligence official of the Jordanian government in an airport lounge in Europe, who passes her disturbing evidence that members of a now-disbanded CIA team of which she was a member were involved in destabilising governments now gripped with “Arab Spring” uprisings and next may be setting their sights on Jordan.

Meanwhile, Scot Harvath, just returned from a harrowing mission on the high seas, is taken by his employer, Reed Carlton, to discreetly meet a new client: the Federal Reserve. The Carlton Group is struggling to recover from the devastating blow it took in the previous novel, Black List (August 2014), and its boss is willing to take on unconventional missions and new clients, especially ones “with a license to print their own money”. The chairman of the Federal Reserve has recently and unexpectedly died and the five principal candidates to replace him have all been kidnapped, almost simultaneously, across the United States. These people start turning up dead, in circumstances with symbolism dating back to the American revolution.

Investigation of the Jordanian allegations is shut down by the CIA hierarchy, and has to be pursued through back channels, involving retired people who know how the CIA really works. Evidence emerges of a black program that created weapons of frightful potential which may have gone even blacker and deeper under cover after being officially shut down.

Earlier Brad Thor novels were more along the “U-S-A! U-S-A!” line of most thrillers. Here, the author looks below the surface of highly dubious institutions (“The Federal Reserve is about as federal as Federal Express”) and evil that flourishes in the dark, especially when irrigated with abundant and unaccountable funds. Like many Americans, Scot Harvath knew little about the Federal Reserve other than it had something to do with money. Over the course of his investigations he, and the reader, will learn many disturbing things about its dodgy history and operations, all accurate as best I can determine.

The novel is as much police procedural as thriller, with Harvath teamed with a no-nonsense Boston Police Department detective, processing crime scenes and running down evidence. The story is set in an unspecified near future (the Aerion Supersonic Business Jet is in operation). All is eventually revealed in the end, with a resolution in the final chapter devoutly to be wished, albeit highly unlikely to occur in the cesspool of corruption which is real-world Washington. There is less action and fancy gear than in most Harvath novels, but interesting characters, an intricate mystery, and a good deal of information of which many readers may not be aware.

A short prelude to this novel, Free Fall, is available for free for the Kindle. It provides the background of the mission in progress in which we first encounter Scot Harvath in chapter 2 here. My guess is that this chapter was originally part of the manuscript and was cut for reasons of length and because it spent too much time on a matter peripheral to the main plot. It's interesting to read before you pick up Hidden Order, but if you skip it you'll miss nothing in the main story.


Robinson, Peter. How Ronald Reagan Changed My Life. New York: Harper Perennial, 2003. ISBN 978-0-06-052400-5.
In 1982, the author, a recent graduate of Dartmouth College who had spent two years studying at Oxford, then remained in England to write a novel, re-assessed his career prospects and concluded that, based upon experience, novelist did not rank high among them. He sent letters to everybody he thought might provide him leads on job opportunities. Only William F. Buckley replied, suggesting that Robinson contact his son, Christopher, then chief speechwriter for Vice President George H. W. Bush, who might know of some openings for speechwriters. Hoping at most for a few pointers, the author flew to Washington to meet Buckley, who was planning to leave the White House, creating a vacancy in the Vice President's speechwriting shop. After a whirlwind of interviews, Robinson found himself, in his mid-twenties, having never written a speech before in his life, at work in the Old Executive Office Building, tasked with putting words into the mouth of the Vice President of the United States.

After a year and a half writing for Bush, two of the President's speechwriters quit at the same time. Forced to find replacements on short notice, the head of the office recruited the author to write for Reagan: “He hired me because I was already in the building.” From then through 1988, he wrote speeches for Reagan, some momentous (Reagan's June 1987 speech at the Brandenburg gate, where Robinson's phrase, “Mr. Gorbachev, tear down this wall!”, uttered by Reagan against vehement objections from the State Department and some of his senior advisers, was a pivotal moment in the ending of the Cold War), but also many more for less epochal events such as visits of Boy Scouts to the White House, ceremonies honouring athletes, and the dozens of other circumstances where the President was called upon to “say a few words”. And because the media were quick to pounce on any misstatement by the President, even the most routine remarks had to be meticulously fact-checked by a team of researchers. For every grand turn of phrase in a high-profile speech, there were many moments spent staring at the blank screen of a word processor as the deadline for some inconsequential event loomed ever closer and wondering, “How am I supposed to get twenty minutes out of that?”.

But this is not just a book about the life of a White House speechwriter (although there is plenty of insight to be had on that topic). Its goal is to collect and transmit the wisdom that a young man, in his first job, learned by observing Ronald Reagan masterfully doing the job to which he had aspired since entering politics in the 1960s. Reagan was such a straightforward and unaffected person that many underestimated him. For example, compared to the hard-driving types toiling from dawn to dusk who populate many White House positions, Reagan never seemed to work very hard. He would rise at his accustomed hour, work for five to eight hours at his presidential duties, exercise, have dinner, review papers, and get to bed on time. Some interpreted this as his being lazy, but Robinson's fellow speechwriter, Clark Judge, remarked “He never confuses inputs with output. … Who cares how many hours a day a President puts in? It's what a President accomplishes that matters.”

There are lessons aplenty here, all illustrated with anecdotes from the Reagan White House: the distinction between luck and the results from persistence in the face of adversity seen in retrospect; the unreasonable effectiveness and inherent dignity of doing one's job, whatever it be, well; viewing life not as background scenery but rather an arena in which one can act, changing not just the outcome but the circumstances one encounters; the power of words, especially those sincerely believed and founded in comprehensible, time-proven concepts; scepticism toward the pronouncements of “experts” whose oracle-like proclamations make sense only to other experts—if it doesn't make sense to an intelligent person with some grounding in the basics, it probably doesn't make sense period; the importance of marriage, and how the Reagans complemented one another in facing the challenges and stress of the office; the centrality of faith, tempered by a belief in free will and the importance of the individual; how true believers and pragmatists, despite how often they despise one another, are both essential to actually getting things done; and that what ultimately matters is what you make of whatever situation in which you find yourself.

These are all profound lessons to take on board, especially in the drinking from a firehose environment of the Executive Office of the President, and in one's twenties. But this is not a dour self-help book: it is an insightful, beautifully written, and often laugh-out-loud funny account of how these insights were gleaned on the job, by observing Reagan at work and how he and his administration got things done, often against fierce political and media opposition. This is one of those books that I wish I could travel back in time and hand a copy to my twenty-year-old self—it would have saved a great deal of time and anguish, even for a person like me who has no interest whatsoever in politics. Fundamentally, it's about getting things done, and that's universally applicable.

People matter. Individuals matter. Long before Ronald Reagan was a radio broadcaster, actor, or politician, he worked summers as a lifeguard. Between 1927 and 1932, he personally saved 77 people from drowning. “There were seventy-seven people walking around northern Illinois who wouldn't have been there if it hadn't been for Reagan—and Reagan knew it.” It is not just a few exceptional people who change the world for the better, but all of those who do their jobs and overcome the challenges with which life presents them. Learning this can change anybody's life.

More recently, Mr. Robinson is the host of Uncommon Knowledge and co-founder of Ricochet.com.


  2015  

January 2015

Farmelo, Graham. The Strangest Man. New York: Basic Books, 2009. ISBN 978-0-465-02210-6.
Paul Adrien Maurice Dirac was born in 1902 in Bristol, England. His father, Charles, was a Swiss-French immigrant who made his living as a French teacher at a local school and as a private tutor in French. His mother, Florence (Flo), had given up her job as a librarian upon marrying Charles. The young Paul and his older brother Felix found themselves growing up in a very unusual, verging upon bizarre, home environment. Their father was as strict a disciplinarian at home as in the schoolroom, and spoke only French to his children, requiring them to answer in that language and abruptly correcting them if they committed any faute de français. Flo spoke to the children only in English, and since the Diracs rarely received visitors at home, before going to school Paul got the idea that men and women spoke different languages. At dinner time Charles and Paul would eat in the dining room, speaking French exclusively (with any error swiftly chastised) while Flo, Felix, and younger daughter Betty ate in the kitchen, speaking English. Paul quickly learned that the less he said, the fewer opportunities for error and humiliation, and he traced his famous reputation for taciturnity to his childhood experience.

(It should be noted that the only account we have of Dirac's childhood experience comes from himself, much later in life. He made no attempt to conceal the extent to which he despised his father [who was respected by his colleagues and acquaintances in Bristol], and there is no way to know whether Paul exaggerated or embroidered upon the circumstances of his childhood.)

After a primary education in which he was regarded as a sound but not exceptional pupil, Paul followed his brother Felix into the Merchant Venturers' School, a Bristol technical school ranked among the finest in the country. There he quickly distinguished himself, ranking near the top in most subjects. The instruction was intensely practical, eschewing Latin, Greek, and music in favour of mathematics, science, geometric and mechanical drawing, and practical skills such as operating machine tools. Dirac learned physics and mathematics with the engineer's eye to “getting the answer out” as opposed to finding the most elegant solution to the problem. He then pursued his engineering studies at Bristol University, where he excelled in mathematics but struggled with experiments.

Dirac graduated with a first-class honours degree in engineering, only to find the British economy in a terrible post-war depression, the worst economic downturn since the start of the Industrial Revolution. Unable to find employment as an engineer, he returned to Bristol University to do a second degree in mathematics, where it was arranged he could skip the first year of the program and pay no tuition fees. Dirac quickly established himself as the star of the mathematics programme, and also attended lectures about the enigmatic quantum theory.

His father had been working in the background to secure a position at Cambridge for Paul, and after cobbling together scholarships and a gift from his father, Dirac arrived at the university in October 1923 to pursue a doctorate in theoretical physics. Dirac would already have seemed strange to his fellow students. While most were scions of the upper class, classically trained, with plummy accents, Dirac knew no Latin or Greek, spoke with a Bristol accent, and approached problems as an engineer or mathematician, not a physicist. He had hoped to study Einstein's general relativity, the discovery of which had first interested him in theoretical physics, but his supervisor was interested in quantum mechanics and directed his work into that field.

It was an auspicious time for a talented researcher to undertake work in quantum theory. The “old quantum theory”, elaborated in the early years of the 20th century, had explained puzzles like the distribution of energy in heat radiation and the photoelectric effect, but by the 1920s it was clear that nature was much more subtle: the old theory could not account for atoms more complicated than hydrogen, nor even for the finer details of hydrogen's own spectrum. Dirac began working on modest questions related to quantum theory, but his life was changed when he read Heisenberg's 1925 paper, which is now considered one of the pillars of the new quantum mechanics. After initially dismissing the paper as overly complicated and artificial, he came to believe that it pointed the way forward, abandoning Bohr's concept of atoms as little solar systems in favour of a theory built entirely from observable quantities such as the frequencies and intensities of spectral lines. This represented not just a change in the model of the atom but the discarding entirely of models in favour of a mathematical formulation which permitted calculating what could be observed without providing any mechanism whatsoever explaining how it worked.

After reading and fully appreciating the significance of Heisenberg's work, Dirac embarked on one of the most productive bursts of discovery in the history of modern physics. Between 1925 and 1933 he published one foundational paper after another. His Ph.D. in 1926, the first granted by Cambridge for work in quantum mechanics, linked Heisenberg's theory to the classical mechanics he had learned as an engineer and provided a framework which made Heisenberg's work more accessible. Scholarly writing did not come easily to Dirac, but he mastered the art to such an extent that his papers are still read today as examples of pellucid exposition. At a time when many contributions to quantum mechanics were rough-edged and difficult to understand even by specialists, Dirac's papers were, in the words of Freeman Dyson, “like exquisitely carved marble statues falling out of the sky, one after another.”

In 1928, Dirac took the first step to unify quantum mechanics and special relativity in the Dirac equation. The consequences of this equation led Dirac to predict the existence of a positively-charged electron, which had never been observed. This was the first time a theoretical physicist had predicted the existence of a new particle. This “positron” was observed in debris from cosmic ray collisions in 1932. The Dirac equation also interpreted the spin (angular momentum) of particles as a relativistic phenomenon.

Dirac, along with Enrico Fermi, elaborated the statistics of particles with half-integral spin (now called “fermions”). The behaviour of ensembles of one such particle, the electron, is essential to the devices you use to read this article. He took the first steps toward a relativistic theory of light and matter and coined the name, “quantum electrodynamics”, for the field, but never found a theory sufficiently simple and beautiful to satisfy himself. He published The Principles of Quantum Mechanics in 1930, for many years the standard textbook on the subject and still read today. He worked out the theory of magnetic monopoles (not detected to this date) and speculated on the origin and possible links between large numbers in physics and cosmology.

The significance of Dirac's work was recognised at the time. He was elected a Fellow of the Royal Society in 1930, became the Lucasian Professor of Mathematics (Newton's chair) at Cambridge in 1932, and shared the Nobel Prize in Physics for 1933 with Erwin Schrödinger. After rejecting a knighthood because he disliked being addressed by his first name, he was awarded the Order of Merit in 1973. He is commemorated by a plaque in Westminster Abbey, close to that of Newton; the plaque bears his name and the Dirac equation, the only equation so honoured.

Many physicists consider Dirac the second greatest theoretical physicist of the 20th century, after Einstein. While Einstein produced great leaps of intellectual achievement in fields neglected by others, Dirac, working alone, contributed to the grand edifice of quantum mechanics, which occupied many of the most talented theorists of a generation. You have to dig a bit deeper into the history of quantum mechanics to fully appreciate Dirac's achievement, which probably accounts for his name not being as well known as it deserves.

There is much more to Dirac, all described in this extensively-documented scientific biography. While declining to join the British atomic weapons project during World War II because he refused to work as part of a collaboration, he spent much of the war doing consulting work for the project on his own, including inventing a new technique for isotope separation. (Dirac's process proved less efficient than those eventually chosen by the Manhattan Project and was not used.) So extreme an introvert was Dirac that nobody expected him ever to marry, and he astonished even his closest associates when he married the sister of his fellow physicist Eugene Wigner, Manci, a Hungarian divorcée with two children by her first husband. Manci was as extroverted as Dirac was reserved, and their marriage in 1937 lasted until Dirac's death in 1984. They had two daughters together, and lived a remarkably normal family life. Dirac, who disdained philosophy in his early years, became intensely interested in the philosophy of science later in life, even arguing that mathematical beauty, not experimental results, could best guide theorists to the best expression of the laws of nature.

Paul Dirac was a very complicated man, and this is a complicated and occasionally self-contradictory biography (but the contradiction is in the subject's life, not the fault of the biographer). This book provides a glimpse of a unique intellect whom even many of his closest associates never really felt they completely knew.

 Permalink

Mazur, Joseph. Enlightening Symbols. Princeton: Princeton University Press, 2014. ISBN 978-0-691-15463-3.
Sometimes an invention is so profound and significant yet apparently obvious in retrospect that it is difficult to imagine how people around the world struggled over millennia to discover it, and how slow it was to diffuse from its points of origin into general use. Such is the case for our modern decimal system of positional notation for numbers and the notation for algebra and other fields of mathematics which permits rapid calculation and transformation of expressions. This book, written with the extensive source citations of a scholarly work yet accessible to any reader familiar with arithmetic and basic algebra, traces the often murky origins of this essential part of our intellectual heritage.

From prehistoric times humans have had the need to count things, for example, the number of sheep in a field. This could be done by establishing a one-to-one correspondence between the sheep and something else more portable such as one's fingers (for a small flock), or pebbles kept in a sack. To determine whether any sheep were missing, one removed a pebble for each sheep; any pebbles remaining in the sack indicated how many were absent. At a slightly more abstract level, one could make tally marks on a piece of bark or clay tablet, one for each sheep. But all of this does not imply number as an abstraction independent of individual items of some kind or another. Ancestral humans don't seem to have required more than the simplest notion of numbers: until the middle of the 20th century several tribes of Australian aborigines had no words for numbers in their languages at all, but counted things by making marks in the sand. Anthropologists discovered tribes in remote areas of the Americas, Pacific Islands, and Australia whose languages had no words for numbers greater than four.

With the emergence of settled human populations and the increasingly complex interactions of trade between villages and eventually cities, a more sophisticated notion of numbers was required. A merchant might need to compute how many units of one good to exchange for another and to keep records of his inventory of various items. The earliest known records of numerical writing are Sumerian cuneiform clay tablets dating from around 3400 B.C. These tablets show number symbols formed from two distinct kinds of marks pressed into wet clay with a stylus. While the smaller numbers seem clearly evolved from tally marks, larger numbers are formed by complicated combinations of the two symbols representing numbers from 1 to 59. Still larger numbers were written as groups of powers of 60 separated by spaces. This was the first known instance of a positional number system, but there is no evidence it was used for complicated calculations—just as a means of recording quantities.

The ancient Egyptian, Hebrew, Greek, Chinese, and Roman civilisations, and the Aztecs and Maya in the Western Hemisphere, all invented ways of writing numbers, some sophisticated and capable of representing large quantities. Many of these systems were additive: they used symbols, sometimes derived from letters in their alphabets, and composed numbers by writing symbols which summed to the total. To write the number 563, a Greek would write “φξγ”, where φ=500, ξ=60, and γ=3. By convention, numbers were written with letters in descending order of the value they represented, but the system was not positional. This made the system clumsy for representing large numbers: letters were reused with accent marks to represent thousands, and an entirely different convention served for ten thousands.

How did such advanced civilisations get along using number systems in which it is almost impossible to compute? Just imagine a Roman faced with multiplying MDXLIX by XLVII (1549 × 47)—where do you start? You don't: all of these civilisations used some form of mechanical computational aid: an abacus, counting rods, stones in grooves, and so on to actually manipulate numbers. The Sun Zi Suan Jing, dating from fifth-century China, provides instructions (algorithms) for multiplication, division, and square and cube root extraction using bamboo counting sticks (or written symbols representing them). The result of the computation was then written using the numerals of the language. The written language was thus a way to represent numbers, but not compute with them.
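To make the difficulty concrete, here is a small modern sketch of my own (nothing from Mazur's book): before a machine, or a Roman, can multiply MDXLIX by XLVII, the numerals must first be translated into positional values, the very step the notation itself does not support.

```python
# Sketch: to compute with Roman numerals, first translate them into
# positional (place-value) numbers; the notation offers no way to
# multiply MDXLIX by XLVII directly.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Standard subtractive reading: IV = 4, XL = 40, and so on."""
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = VALUES[ch]
        # A smaller symbol written before a larger one is subtracted.
        total += -v if VALUES.get(nxt, 0) > v else v
    return total

product = roman_to_int("MDXLIX") * roman_to_int("XLVII")
print(product)  # 1549 * 47 = 72803
```

Even this little translator is, in effect, an abacus in software: the arithmetic happens in place-value form, and only the reading and writing use the Roman symbols.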

Many of the various forms of numbers, and especially computational tools such as the abacus, came ever-so-close to stumbling on the place value system, but it was in India, probably before the third century B.C., that a positional decimal number system, including zero as a place holder and with digit forms recognisably ancestral to those we use today, emerged. This was a breakthrough in two regards. Now, by memorising tables of addition, subtraction, multiplication, and division and the simple algorithms once learned by schoolchildren before calculators supplanted that part of their brains, it was possible to compute directly from written numbers. (Despite this, the abacus remained in common use.) But, more profoundly, this was a universal representation of whole numbers. Earlier number systems (with the possible exception of that invented by Archimedes in The Sand Reckoner [but never used practically]) either had a limit on the largest number they could represent or required cumbersome and/or lengthy conventions for large numbers. The Indian number system needed only ten symbols to represent any non-negative number, and only the single convention that each digit counts the power of ten corresponding to its position in the number.
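That single convention is compact enough to state in a few lines of code. As an illustration of my own (not from the book), here is the decomposition by repeated division that underlies positional notation, with zero falling out naturally as a place holder:

```python
def digits(n, base=10):
    """Decompose a non-negative integer into digits by repeated
    division; the digit at position i counts how many of base**i
    the number contains."""
    if n == 0:
        return [0]
    ds = []
    while n:
        n, d = divmod(n, base)
        ds.append(d)
    return ds[::-1]  # most significant digit first

# Reconstruct: each digit times its power of ten sums back to the number.
ds = digits(5063)
assert sum(d * 10 ** i for i, d in enumerate(reversed(ds))) == 5063
print(ds)  # [5, 0, 6, 3] -- zero marking the empty hundreds place
```

Ten symbols and one positional rule suffice for any non-negative integer; changing `base` to 60 gives the Sumerian scheme instead.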

Knowledge diffused slowly in antiquity, and despite India being on active trade routes, it was not until the 13th century A.D. that Fibonacci introduced the new number system, which had been transmitted via Islamic scholars writing in Arabic, to Europe in his Liber Abaci. This book not only introduced the new number system, it provided instructions for a variety of practical computations and applications to higher mathematics. As revolutionary as this book was, in an era of hand-copied manuscripts, its influence spread very slowly, and it was not until the 16th century that the new numbers became almost universally used. The author describes this protracted process, about which a great deal of controversy remains to the present day.

Just as the decimal positional number system was becoming established in Europe, another revolution in notation began which would transform mathematics, how it was done, and our understanding of the meaning of numbers. Algebra, as we now understand it, was known in antiquity, but it was expressed in a rhetorical way—in words. For example, proposition 4 of book 2 of Euclid's Elements states:

If a straight line be cut at random, the square of the whole is equal to the squares on the segments and twice the rectangle contained by the segments.

Now, given such a problem, Euclid or any of those following in his tradition would draw a diagram and proceed to prove from the axioms of plane geometry the correctness of the statement. But it isn't obvious how to apply this identity to other problems, or how it illustrates the behaviour of general numbers. Today, we'd express the problem and proceed as follows:

\begin{eqnarray*}
    (a+b)^2 & = & (a+b)(a+b) \\
    & = & a(a+b)+b(a+b) \\
    & = & aa+ab+ba+bb \\
    & = & a^2+2ab+b^2 \\
    & = & a^2+b^2+2ab
\end{eqnarray*}

Once again, faced with the word problem, it's difficult to know where to begin, but once expressed in symbolic form, it can be solved by applying rules of algebra which many master before reaching high school. Indeed, the process of simplifying such an equation is so mechanical that computer tools are readily available to do so.
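Just how mechanical that symbol manipulation is can be shown with a toy computer-algebra sketch of my own (not anything from the book): represent a polynomial in a and b as a dictionary mapping exponent pairs to coefficients, and multiplication reduces to pure bookkeeping.

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply polynomials stored as {(exp_a, exp_b): coefficient}."""
    r = defaultdict(int)
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            # Add exponents, multiply coefficients, collect like terms.
            r[(i + k, j + l)] += c * d
    return dict(r)

a_plus_b = {(1, 0): 1, (0, 1): 1}      # the polynomial a + b
square = poly_mul(a_plus_b, a_plus_b)  # (a + b)(a + b)
print(square)  # {(2, 0): 1, (1, 1): 2, (0, 2): 1}, i.e. a^2 + 2ab + b^2
```

The expansion a² + 2ab + b² emerges from rule-following alone, with no appeal to diagrams or to the meaning of the quantities, which is precisely the power the symbolic notation bought us.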

Or consider the following brain-twister posed in the 7th century A.D. about the Greek mathematician and father of algebra Diophantus: how many years did he live?

“Here lies Diophantus,” the wonder behold.
Through art algebraic, the stone tells how old;
“God gave him his boyhood one-sixth of his life,
One twelfth more as youth while whiskers grew rife;
And then one-seventh ere marriage begun;
In five years there came a bounding new son.
Alas, the dear child of master and sage
After attaining half the measure of his father's life, chill fate took him.
After consoling his fate by the science of numbers for four years, he ended his life.”

Oh, go ahead, give it a try before reading on!

Today, we'd read through the problem and write a system of two simultaneous equations, where x is the age of Diophantus at his death and y the number of years his son lived. Then:

\begin{eqnarray*}
    x & = & (\frac{1}{6}+\frac{1}{12}+\frac{1}{7})x+5+y+4 \\
    y & = & \frac{x}{2}
\end{eqnarray*}

Plug the second equation into the first, do a little algebraic symbol twiddling, and the answer, 84, pops right out. Note that not only are the rules for solving this equation the same as for any other, with a little practice it is easy to read the word problem and write down the equations ready to solve. Go back and re-read the original problem and the equations and you'll see how straightforwardly they follow.
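The “symbol twiddling” can itself be checked mechanically with exact rational arithmetic; here is a quick verification of my own using Python's standard fractions module.

```python
from fractions import Fraction as F

# x = (1/6 + 1/12 + 1/7)x + 5 + y + 4, with y = x/2.
# Substituting y and collecting the x terms on the left gives
#   x * (1 - 1/6 - 1/12 - 1/7 - 1/2) = 9
coeff = 1 - (F(1, 6) + F(1, 12) + F(1, 7)) - F(1, 2)
x = F(9) / coeff
y = x / 2
print(x, y)  # 84 42 -- Diophantus died at 84; his son lived 42 years
```

Exact fractions matter here: the fractional parts sum to 75/84, so the leftover coefficient is exactly 9/84, which floating point would only approximate.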

Once you have transformed a mass of words into symbols, they invite you to discover new ways in which they apply. What is the solution of the equation x+4=0? In antiquity many would have said the equation is meaningless: there is no number you can add to four to get zero. But that's because their conception of number was too limited: negative numbers such as −4 are completely valid and obey all the laws of algebra. By admitting them, we discovered we'd overlooked half of the real numbers. What about the solution to the equation x² + 4 = 0? This was again considered ill-formed, or imaginary, since the square of any real number, positive or negative, is positive. Another leap of imagination, admitting the square root of minus one to the family of numbers, expanded the number line into the complex plane, yielding the answers ±2i as we'd now express them, and extending our concept of number into one which is now fundamental not only in abstract mathematics but also science and engineering. And in recognising negative and complex numbers, we'd come closer to unifying algebra and geometry by bringing rotation into the family of numbers.
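Both extensions are now built into ordinary programming languages. A couple of lines (my own illustration) confirm the root of x² + 4 = 0 and the connection between multiplication by i and rotation:

```python
# One square root of -4 in the complex plane: (2i)^2 + 4 = 0.
root = 2j
assert root ** 2 + 4 == 0
assert (-root) ** 2 + 4 == 0  # and its negative is the other root

# Multiplying by i rotates a point 90 degrees counter-clockwise:
# 1 + 0i on the real axis moves to 0 + 1i on the imaginary axis.
print(1j * (1 + 0j))  # 1j
```

What took centuries of conceptual struggle to admit into mathematics is now a literal suffix (`j`) in the language's number syntax.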

This book explores the groping over centuries toward a symbolic representation of mathematics which hid the specifics while revealing the commonality underlying them. As one who learned mathematics during the height of the “new math” craze, I can't recall a time when I didn't think of mathematics as a game of symbolic transformation of expressions which may or may not have any connection with the real world. But what one discovers in reading this book is that while this is a concept very easy to brainwash into a 7th grader, it was extraordinarily difficult for even some of the most brilliant humans ever to have lived to grasp in the first place. When Newton invented calculus, for example, he always expressed his “fluxions” as derivatives of time, and did not write of the general derivative of a function of arbitrary variables.

Also, notation is important. Writing something in a more expressive and easily manipulated way can reveal new insights about it. We benefit not just from the discoveries of those in the past, but from those who created the symbolic language in which we now express them.

This book is a treasure chest of information about how the language of science came to be. We encounter a host of characters along the way, not just great mathematicians and scientists, but scoundrels, master forgers, chauvinists, those who preserved precious manuscripts and those who burned them, all leading to the symbolic language in which we so effortlessly write and do mathematics today.

 Permalink

Osborn, Stephanie. The Case of the Displaced Detective Omnibus. Kingsport, TN: Twilight Times Books, 2013. ASIN B00FOR5LJ4.
This book, available only for the Kindle, collects the first four novels of the author's Displaced Detective series. The individual books included here are The Arrival, At Speed, The Rendlesham Incident, and Endings and Beginnings. Each pair of books, in turn, comprises a single story, the first two The Case of the Displaced Detective and the latter two The Case of the Cosmological Killer. If you read only the first of either pair, it will be obvious that the story has been left in the middle with little resolved. In the trade paperback edition, the four books total more than 1100 pages, so this omnibus edition will keep you busy for a while.

Dr. Skye Chadwick is a hyperspatial physicist and chief scientist of Project Tesseract. Research into the multiverse and brane world solutions of string theory has revealed that our continuum—all of the spacetime we inhabit—is just one of an unknown number adjacent to one another in a higher dimensional membrane (“brane”), and that while every continuum is different, those close to one another in the hyperdimensional space tend to be similar. Project Tesseract, a highly classified military project operating from an underground laboratory in Colorado, is developing hardware based on advanced particle physics which allows passively observing or even interacting with these other continua (or parallel universes).

The researchers are amazed to discover that in some continua characters which are fictional in our world actually exist, much as they were described in literature. Perhaps Heinlein and Borges were right in speculating that fiction exists in parallel universes, and maybe that's where some of authors' ideas come from. In any case, exploration of Continuum 114 has revealed it to be one of those in which Sherlock Holmes is a living, breathing man. Chadwick and her team decide to investigate one of the pivotal and enigmatic episodes in the Holmes literature, the fight at Reichenbach Falls. As Holmes and Moriarty battle, it is apparent that both will fall to their death. Chadwick acts impulsively and pulls Holmes from the brink of the cliff, back through the Tesseract, into our continuum. In an instant, Sherlock Holmes, consulting detective of 1891 London, finds himself in twenty-first century Colorado, where he previously existed only in the stories of Arthur Conan Doyle.

Holmes finds much to adapt to in this often bewildering world, but then he was always a shrewd observer and master of disguise, so few people would be as well equipped. At the same time, the Tesseract project faces a crisis, as a disaster and the subsequent investigation reveal the possibility of sabotage and an espionage ring operating within the project. A trusted, outside investigator with no ties to the project is needed, and who better than Holmes, who owes his life to it? With Chadwick at his side, they dig into the mystery surrounding the project.

As they work together, they find themselves increasingly attracted to one another, and Holmes must confront his fear that emotional involvement will impair the logical functioning of his mind upon which his career is founded. Chadwick, learning to become a talented investigator in her own right, fears that a deeper than professional involvement with Holmes will harm her own emerging talents.

I found that this long story started out just fine, and indeed I recommended it to several people after finishing the first of the four novels collected here. To me, it began to run off the rails in the second book and didn't get any better in the remaining two (which begin with Holmes and Chadwick an established detective team, summoned to help with a perplexing mystery in Britain which may have consequences for all of the myriad continua in the multiverse). The fundamental problem is that these books are trying to do too much all at the same time. They can't decide whether they're science fiction, mystery, detective procedural, or romance, and as they jump back and forth among the genres, so little happens in the ones being neglected at the moment that the parallel story lines develop at a glacial pace. My estimation is that an editor with a sharp red pencil could cut this material by 50–60% and end up with a better book, omitting nothing central to the story and transforming what often seemed a tedious slog into a page-turner.

Sherlock Holmes is truly one of the great timeless characters in literature. He can be dropped into any epoch, any location, and, in this case, anywhere in the multiverse, and rapidly start to get to the bottom of the situation while entertaining the reader looking over his shoulder. There is nothing wrong with the premise of these books and there are interesting ideas and characters in them, but the execution just isn't up to the potential of the concept. The science fiction part sometimes sinks to the techno-babble level of Star Trek (“Higgs boson injection beginning…”). I am no prude, but I found the repeated and explicit sex scenes a bit much (tedious, actually), and they make the books unsuitable for younger readers for whom the original Sherlock Holmes stories are a pure delight. If you're interested in the idea, I'd suggest buying just the first book separately and see how you like it before deciding to proceed, bearing in mind that I found it the best of the four.

 Permalink

February 2015

Suprynowicz, Vin. The Testament of James. Pahrump, NV: Mountain Media, 2014. ISBN 978-0-9670259-4-0.
The author is a veteran newspaperman and was arguably the most libertarian writer in the mainstream media during his long career with the Las Vegas Review-Journal. He earlier turned his hand to fiction in 2005's The Black Arrow (May 2005), a delightful libertarian superhero fantasy. In the present volume he tells an engaging tale which weaves together mystery, the origins of Christianity, and the curious subculture of rare book collectors and dealers.

Matthew Hunter is the proprietor of a used book shop in Providence, Rhode Island, dealing both in routine merchandise and in rare volumes obtained from around the world and sold to a network of collectors who trust Hunter's judgement and fair pricing. While Hunter is on a trip to Britain, an employee of the store is found dead under suspicious circumstances, while waiting after hours to receive a visitor from Egypt with a manuscript to be evaluated and sold.

Before long, a series of curious, shady, and downright intimidating people start arriving at the bookshop, all seeking to buy the manuscript which, it appears, was never delivered. The person who was supposed to bring it to the shop has vanished, and his brothers have come to try to find him. Hunter and his friend Chantal Stevens, ex-military who has agreed to help out in the shop, find themselves in the middle of the quest for one of the most legendary, and considered mythical, rare books of all time, The Testament of James, reputed to have been written by James the Just, the (half-)brother of Jesus Christ. (His precise relationship to Jesus is a matter of dispute among Christian sects and scholars.) This Testament (not to be confused with the Epistle of James in the New Testament, also sometimes attributed to James the Just), would have been the most contemporary record of the life of Jesus, well predating the Gospels.

Matthew and Chantal seek to find the book, rescue the seller, and get to the bottom of a mystery dating from the origin of Christianity. Initially dubious such a book might exist, Matthew concludes that so many people would not be trying so hard to lay their hands on it if there weren't something there.

A good part of the book is a charming and often humorous look inside the world of rare books, one with which the author is clearly well-acquainted. There is intrigue, a bit of mysticism, and the occasional libertarian zinger aimed at a deserving target. As the story unfolds, an alternative interpretation of the life and work of Jesus and the history of the early Church emerges, which explains why so many players are so desperately seeking the lost book.

As a mystery, this book works superbly. Its view of “bookmen” (hunters, sellers, and collectors) is a delight. Orthodox Christians (by which I mean those adhering to the main Christian denominations, not just those called “Orthodox”) may find some of the content blasphemous, but before they explode in red-faced sputtering, recall that one can never be sure about the provenance and authenticity of any ancient manuscript. Some of the language and situations are not suitable for young readers, but by the standards of contemporary mass-market fiction, the book is pretty tame. There are essentially no spelling or grammatical errors. To be clear, this is entirely a work of fiction: there is no Testament of James apart from this book, in which it's an invention of the author. A bibliography of works providing alternative (which some will consider heretical) interpretations of the origins of Christianity is provided. You can read an excerpt from the novel at the author's Web log; continue to follow the links in the excerpts to read the first third—20,000 words—of the book for free.

 Permalink

Rawles, James Wesley. Tools for Survival. New York: Plume, 2014. ISBN 978-0-452-29812-5.
Suppose one day the music stops. We all live, more or less, as part of an intricately-connected web of human society. The water that comes out of the faucet when we open the tap depends (for the vast majority of people) on pumps powered by an electrical grid that spans a continent. So does the removal of sewage when you flush the toilet. The typical city in developed nations has only about three days' supply of food on hand in stores and local warehouses and depends upon a transportation infrastructure as well as computerised inventory and payment systems to function. This system has been optimised over decades to be extremely efficient, but at the same time it has become dangerously fragile against any perturbation. A financial crisis which disrupts just-in-time payments, a large-scale and protracted power outage due to a solar flare or EMP attack, disruption of data networks by malicious attacks, or social unrest can rapidly halt the flow of goods and services upon which hundreds of millions of people depend, rarely giving a thought to what life might be like if one day they weren't there.

The author, founder of the essential SurvivalBlog site, has addressed such scenarios in his fiction, which is highly recommended. Here the focus is less speculative, and entirely factual and practical. What are the essential skills and tools one needs to survive in what amounts to a 19th century homestead? If the grid (in all senses) goes down, those who wish to survive the massive disruptions and chaos which will result may find themselves in the position of those on the American frontier in the 1870s: forced into self-reliance for all of the necessities of life, and compelled to use the simple, often manual, tools which their ancestors used—tools which can in many cases be fabricated and repaired on the homestead.

The author does not assume a total collapse to the nineteenth century. He envisions that those who have prepared to ride out a discontinuity in civilisation will have equipped themselves with rudimentary solar electric power and electronic communication systems. But at the same time, people will be largely on their own when it comes to gardening, farming, food preservation, harvesting trees for firewood and lumber, first aid and dental care, self-defence, metalworking, and a multitude of other tasks. As always, the author stresses, it isn't the tools you have but rather the skills between your ears that determine whether you'll survive. You may have the most comprehensive medical kit imaginable, but if nobody knows how to stop the bleeding from a minor injury, disinfect the wound, and suture it, what today is a short trip to the emergency room might be life-threatening.

Here is what I took away from this book. Certainly, you want to have on hand what you need to deal with immediate threats (for example, firefighting when the fire department does not respond, self-defence when there is no sheriff, a supply of water and food so you don't become a refugee if supplies are interrupted, and a knowledge of sanitation so you don't succumb to disease when the toilet doesn't flush). If you have skills in a particular area, for example, if you're a doctor, nurse, or emergency medical technician, by all means lay in a supply of what you need not just to help yourself and your family, but your neighbours. The same goes if you're a welder, carpenter, plumber, shoemaker, or smith. It just isn't reasonable, however, to expect any given family to acquire all the skills and tools (even if they could afford them, where would they put them?) to survive on their own. Far more important is to make the acquaintance of like-minded people in the vicinity who have the diverse set of skills required to survive together. The ability to build and maintain such a community may be the most important survival skill of all.

This book contains a wealth of resources available on the Web (most presented as shortened URLs, not directly linked in the Kindle edition) and a great deal of wisdom about which I find little or nothing to disagree. For the most part the author uses quaint units like inches, pounds, and gallons, but he is writing for a mostly American audience. Please take to heart the safety warnings: it is very easy to kill or gravely injure yourself when woodworking, metal fabricating, welding, doing electrical work, or felling trees and processing lumber. If your goal is to survive and prosper whatever the future may bring, it can ruin your whole plan if you kill yourself acquiring the skills you need to do so.

 Permalink

Reeves, Richard. A Force of Nature. New York: W. W. Norton, 2008. ISBN 978-0-393-33369-5.
In 1851, the Crystal Palace Exhibition opened in London. It was a showcase of the wonders of industry and culture of the greatest empire the world had ever seen and attracted a multitude of visitors. Unlike present-day “World's Fair” boondoggles, it made money, and the profits were used to fund good works, including endowing scholarships for talented students from the far reaches of the Empire to study in Britain. In 1895, Ernest Rutherford, hailing from a remote area of New Zealand and a recent graduate of Canterbury College in Christchurch, won a scholarship to study at Cambridge. Upon learning of the award in a field of his family's farm, he threw his shovel in the air and exclaimed, “That's the last potato I'll ever dig.” It was.

When he arrived at Cambridge, he could hardly have been more out of place. He and another scholarship winner were the first and only graduate students admitted who were not Cambridge graduates. Cambridge, at the end of the Victorian era, was a clubby, upper-class place, where even those pursuing mathematics were steeped in the classics, hailed from tony public schools, and spoke with refined accents. Rutherford, by contrast, was a rough-edged colonial, bursting with energy and ambition. He spoke with a bizarre accent (which he retained all his life) which blended the Scottish brogue of his ancestors with the curious intonations of the antipodes. He was anything but the ascetic intellectual so common at Cambridge—he had been a fierce competitor at rugby, spoke about three times as loud as was necessary (many years later, when the eminent Rutherford was tapped to make a radio broadcast from Cambridge, England to Cambridge, Massachusetts, one of his associates asked, “Why use radio?”), and spoke vehemently on any and all topics (again, long afterward, when a ceremonial portrait was unveiled, his wife said she was surprised the artist had caught him with his mouth shut).

But it quickly became apparent that this burly, loud New Zealander was extraordinarily talented, and under the leadership of J.J. Thomson, he began original research in radio, but soon abandoned the field to pursue atomic research, which Thomson had pioneered with his discovery of the electron. In 1898, with Thomson's recommendation, Rutherford accepted a professorship at McGill University in Montreal. While North America was considered a scientific backwater in that era, the generous salary would allow him to marry his fiancée, whom he had left behind in New Zealand until he could find a position which would support them.

At McGill, he and his collaborator Frederick Soddy, studying the radioactive decay of thorium, discovered that radioactive decay was characterised by a unique half-life, and was composed of two distinct components which he named alpha and beta radiation. He later named the most penetrating product of nuclear reactions gamma rays. Rutherford was the first to suggest, in 1902, that radioactivity resulted from the transformation of one chemical element into another—something previously thought impossible.
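The half-life law Rutherford and Soddy formulated is simple enough to sketch numerically. This is purely an illustration of the exponential form, not a calculation from the book; the time units are arbitrary:

```python
# Illustrative sketch of the half-life law Rutherford and Soddy
# formulated: after each half-life, half of the remaining atoms of a
# radioactive species have decayed, so N(t) = N0 * 2^(-t / half_life).
# The numbers here are arbitrary, chosen only to show the form.

def remaining_fraction(t, half_life):
    """Fraction of the original sample surviving after time t."""
    return 0.5 ** (t / half_life)

# After one half-life, half remains; after ten, about a thousandth.
print(remaining_fraction(1.0, 1.0))   # 0.5
print(remaining_fraction(10.0, 1.0))  # 0.0009765625
```

The unique half-life of each species is what let them disentangle the tangled decay chains of thorium: each component dies away on its own characteristic exponential curve.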

In 1907, Rutherford was offered, and accepted, a chair of physics at the University of Manchester where, with greater laboratory resources than he had had in Canada, he pursued the nature of the products of radioactive decay. Within a year, by a clever experiment, he had identified alpha radiation (or particles, as we now call them) with the nuclei of helium atoms—nuclear decay was heavy atoms being spontaneously transformed into a lighter element and a helium nucleus.

Based upon this work, Rutherford won the Nobel Prize in Chemistry in 1908. As a person who considered himself first and foremost an experimental physicist and who was famous for remarking, “All science is either physics or stamp collecting”, winning the Chemistry Nobel had to feel rather odd. He quipped that while he had observed the transmutation of elements in his laboratory, no transmutation was as startling as discovering he had become a chemist. Still, physicist or chemist, his greatest work was yet to come.

In 1909, along with Hans Geiger (later to invent the Geiger counter) and Ernest Marsden, he conducted an experiment in which high-energy alpha particles were directed against a very thin sheet of gold foil. The expectation was that few would be deflected, and those only slightly. To the astonishment of the experimenters, some alpha particles were found to be deflected through large angles, some bouncing directly back toward the source. Rutherford later recalled, “It was almost as incredible as if you fired a 15-inch [battleship] shell at a piece of tissue paper and it came back and hit you.” It took two years before Rutherford fully understood and published what was going on, and it forever changed the concept of the atom. The only way to explain the scattering results was to replace the earlier model of the atom with one in which a diffuse cloud of negatively charged electrons surrounded a tiny, extraordinarily dense, positively charged nucleus (that word was not used until 1913). This experimental result fed directly into the development of quantum theory and the elucidation of the force which bound the particles in the nucleus together, which was not fully understood until more than six decades later.
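The back-of-envelope reasoning behind the conclusion can be sketched in a few lines. An alpha particle that bounces straight back must have converted all of its kinetic energy into Coulomb potential energy at the turning point, which bounds how close it got to the scattering centre. This is a sketch of that estimate, not the book's calculation; the 5 MeV alpha energy is an assumed typical value:

```python
# Sketch of the closest-approach estimate implied by the gold-foil
# result: for a head-on bounce, kinetic energy E equals the Coulomb
# potential Z_alpha * Z_gold * k*e^2 / d at the turning point d.
# The 5 MeV alpha energy is an assumed typical value.

KE2 = 1.44          # k*e^2 in MeV * femtometres (convenient nuclear units)
Z_ALPHA, Z_GOLD = 2, 79
E_MEV = 5.0         # assumed alpha-particle kinetic energy

d_fm = Z_ALPHA * Z_GOLD * KE2 / E_MEV   # closest approach, femtometres
atom_fm = 1e5                           # atomic radius ~1e-10 m = 1e5 fm

print(f"closest approach: {d_fm:.1f} fm")            # roughly 45 fm
print(f"atom is > {atom_fm / d_fm:.0f}x larger")     # thousands of times
```

Since alphas got within tens of femtometres of the scattering centre and still saw a point-like charge, the positive charge had to be concentrated in a region thousands of times smaller than the atom itself.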

In 1919 Rutherford returned to Cambridge to become the head of the Cavendish Laboratory, the most prestigious position in experimental physics in the world. Continuing his research with alpha emitters, he discovered that bombarding nitrogen gas with alpha particles would transmute nitrogen into oxygen, liberating a proton (the nucleus of hydrogen). Rutherford was thus simultaneously the first to deliberately transmute one element into another and the first to discover the proton. In 1920, he predicted the existence of the neutron, completing the composition of the nucleus. The neutron was eventually discovered by his associate, James Chadwick, in 1932.

Rutherford's discoveries, all made with benchtop apparatus and a small group of researchers, were the foundation of nuclear physics. He not only discovered the nucleus, he also found or predicted its constituents. He was the first to identify natural nuclear transmutation and the first to produce it on demand in the laboratory. As a teacher and laboratory director his legacy was enormous: eleven of his students and research associates went on to win Nobel prizes. His students John Cockcroft and Ernest Walton built the first particle accelerator and ushered in the era of “big science”. Rutherford not only created the science of nuclear physics, he was the last person to make major discoveries in the field by himself, alone or with a few collaborators, and with simple apparatus made in his own laboratory.

In the heady years between the wars, there were, in the public mind, two great men of physics: Einstein the theoretician and Rutherford the experimenter. (This perception may have understated the contributions of the creators of quantum mechanics, but they were many and less well known.) Today, we still revere Einstein, but Rutherford is less remembered (except in New Zealand, where everybody knows his name and achievements). And yet there are few experimentalists who have discovered so much in their lifetimes, with so little funding and the simplest apparatus. Rutherford, that boisterous, loud, and restless colonial, figured out much of what we now know about the atom, largely by himself, through a multitude of tedious experiments which often failed, and he should rightly be regarded as a pillar of 20th century physics.

This is the thousandth book to appear since I began to keep the reading list in January 2001.

 Permalink

March 2015

Heinlein, Robert A. Rocket Ship Galileo. Seattle: Amazon Digital Services, [1947, 1974, 1988] 2014. ASIN B00H8XGKVU.
After the end of World War II, Robert A. Heinlein put his wartime engineering work behind him and returned to professional writing. His ambition was to break out of the pulp magazine ghetto in which science fiction had been largely confined before the war into the more prestigious (and better paying) markets of novels and anthologies published by top-tier New York firms and the “slick” general-interest magazines such as Collier's and The Saturday Evening Post, which published fiction in those days. For the novels, he decided to focus initially on a segment of the market he understood well from his pre-war career: “juveniles”—books aimed at a young audience (in the case of science fiction, overwhelmingly male), and sold, in large part, in hardcover to public and school libraries (mass market paperbacks were just beginning to emerge in the late 1940s, and had not yet become important to mainstream publishers).

Rocket Ship Galileo was the first of Heinlein's juveniles, and it was a tour de force which established him in the market and led to a series which would extend to twelve volumes. (Heinlein scholars differ on which of his novels are classified as juveniles. Some include Starship Troopers as a juvenile, but despite its having been originally written as one and rejected by his publisher, Heinlein did not classify it thus.)

The plot could not be more engaging to a young person at the dawn of the atomic and space age. Three high school seniors, self-taught in the difficult art of rocketry (often, as was the case for their elders in that era, by trial and [noisy and dangerous] error), are recruited by an uncle of one of them, a veteran of the wartime atomic project, who wants to go to the Moon. He's invented a novel type of nuclear engine which allows a single-stage ship to make the round trip, and having despaired of getting sclerotic government or industry involved, decides to do it himself using cast-off parts and the talent and boundless energy of young people willing to learn by doing.

Working in their remote desert location, they become aware that forces unknown are taking an untoward interest in their work and seem to want to bring it to a halt, going as far as sabotage and lawfare. Finally, it's off to the Moon, where they discover the dark secret on the far side: space Nazis!

The remarkable thing about this novel is how well it holds up, almost seventy years after publication. While Heinlein was writing for a young audience, he never condescended to them. The science and engineering were as accurate as was known at the time, and Heinlein manages to instill in his audience a basic knowledge of rocket propulsion, orbital mechanics, and automated guidance systems as the yarn progresses. Other than three characters being young people, there is nothing about this story which makes it “juvenile” fiction: there is a hard edge of adult morality and the value of courage which forms the young characters as they live the adventure.

At the moment, only this Kindle edition and an unabridged audio book edition are available new. Used copies of earlier paperback editions are readily available.

 Permalink

Carroll, Michael. Living Among Giants. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-10673-1.
In school science classes, we were taught that the solar system, our home in the galaxy, is a collection of planets circling a star, along with assorted debris (asteroids, comets, and interplanetary dust). Rarely did we see a representation of either the planets or the solar system to scale, which would allow us to grasp just how different various parts of the solar system are from one another. (For example, Jupiter is more massive than all the other planets and their moons combined: a proud Jovian would probably describe the solar system as the Sun, Jupiter, and other detritus.)

Looking more closely at the solar system, with the aid of what has been learned from spacecraft exploration in the last half century, results in a different picture. The solar system is composed of distinct neighbourhoods, each with its own characteristics. There are four inner “terrestrial” or rocky planets: Mercury, Venus, Earth, and Mars. These worlds huddle close to the Sun, bathing in its lambent rays. The main asteroid belt consists of worlds like Ceres, Vesta, and Pallas, all the way down to small rocks. Most orbit between Mars and Jupiter, and the feeble gravity of these bodies and their similar orbits make it relatively easy to travel from one to another if you're patient.

Outside the asteroid belt is the domain of the giants, which are the subject of this book. There are two gas giants: Jupiter and Saturn, and two ice giants: Uranus and Neptune. Distances here are huge compared to the inner solar system, as are the worlds themselves. Sunlight is dim (at Saturn, just 1% of its intensity at Earth, at Neptune 1/900 that at Earth). The outer solar system is not just composed of the four giant planets: those planets have a retinue of 170 known moons (and doubtless many more yet to be discovered), which are a collection of worlds as diverse as anywhere else in the domain of the Sun: there are sulfur-spewing volcanos, subterranean oceans of salty water, geysers, lakes and rain of hydrocarbons, and some of the most spectacular terrain and geology known. Jupiter's moon Ganymede is larger than the planet Mercury, and appears to have a core of molten iron, like the Earth.
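The sunlight figures quoted above follow directly from the inverse-square law: intensity relative to Earth is just 1/r² with r in astronomical units. A quick check, using rounded orbital radii I've assumed rather than figures from the book:

```python
# Quick check of the sunlight figures quoted above via the
# inverse-square law: relative intensity is 1/r^2 with r in
# astronomical units.  Orbital radii are rounded assumed values.

def relative_intensity(r_au):
    """Sunlight intensity at r_au, as a fraction of that at Earth."""
    return 1.0 / r_au ** 2

for name, r in [("Jupiter", 5.2), ("Saturn", 9.5),
                ("Uranus", 19.2), ("Neptune", 30.1)]:
    print(f"{name}: 1/{1 / relative_intensity(r):.0f} of Earth's sunlight")
```

Saturn at about 9.5 AU gets 1/90 of Earth's sunlight, roughly the 1% the text cites, and Neptune at about 30 AU gets close to the quoted 1/900.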

Beyond the giants is the Kuiper Belt, with Pluto its best known denizen. This belt is home to a multitude of icy worlds—statistical estimates are that there may be as many as 700 undiscovered worlds as large as or larger than Pluto in the belt. Far more distant still, extending as far as two light-years from the Sun, is the Oort cloud, about which we know essentially nothing except what we glean from the occasional comet which, perturbed by a chance encounter or passing star, plunges into the inner solar system. With our present technology, objects in the Oort cloud are utterly impossible to detect, but based upon extrapolation from comets we've observed, it may contain trillions of objects larger than one kilometre.

When I was a child, the realm of the outer planets was shrouded in mystery. While Jupiter, Saturn, and Uranus can be glimpsed by the unaided eye (Uranus, just barely, under ideal conditions, if you know where to look), and Neptune can be spotted with a modest telescope, the myriad moons of these planets were just specks of light through the greatest of Earth-based telescopes. It was not until the era of space missions to these worlds, beginning with the fly-by probes Pioneer and Voyager, then the orbiters Galileo and Cassini, that the wonders of these worlds were revealed.

This book, by science writer and space artist Michael Carroll, is a tourist's and emigrant's guide to the outer solar system. Everything here is on an extravagant scale, and not always one hospitable to frail humans. Jupiter's magnetic field is 20,000 times stronger than that of Earth and traps radiation so intense that astronauts exploring its innermost large moon Io would succumb to a lethal dose of radiation in minutes. (One planetary scientist remarked, “You need to have a good supply of grad students when you go investigate Io.”) Several of the moons of the outer planets appear to have oceans of liquid water beneath their icy crust, kept liquid by tidal flexing as they orbit their planet and interact with other moons. Some of these oceans may contain more water than all of the Earth's oceans. Tidal flexing may create volcanic plumes which inject heat and minerals into these oceans. On Earth, volcanic vents on the ocean floor provide the energy and nutrients for a rich ecosystem of life which exists independent of the Sun's energy. On these moons—who knows? Perhaps some day we shall explore these oceans in our submarines and find out.

Saturn's moon Titan is an amazing world. It is larger than Mercury, and has an atmosphere 50% denser than the Earth's, made up mostly of nitrogen. It has rainfall, rivers, and lakes of methane and ethane, and at its mean temperature of 93.7 K, water ice is a rock as hard as granite. Unique among worlds in the solar system, you could venture outside your space ship on Titan without a space suit. You'd need to dress very warmly, to be sure, and wear an oxygen mask, but you could explore the shores, lakes, and dunes of Titan protected only against the cold. With the dense atmosphere and gravity just 85% of that of the Earth's Moon, you might be able to fly with suitable wings.

We have had just a glimpse of the moons of Uranus and Neptune as Voyager 2 sped through their systems on its way to the outer darkness. Further investigation will have to wait for orbiters to visit these planets, which probably will not happen for nearly two decades. What Voyager 2 saw was tantalising. On Uranus's moon Miranda, there are cliffs 14 km high. With the tiny gravity, imagine the extreme sports you could do there! Neptune's moon Triton appears to be a Kuiper Belt object captured into orbit around Neptune and, despite its cryogenic temperature, appears to be geologically active.

There is no evidence for life on any of these worlds. (Still, one wonders about those fish in the dark oceans.) If barren, “all these worlds are ours”, and in the fullness of time we shall explore, settle, and exploit them to our own ends. The outer solar system is just so much bigger and more grandiose than the inner. It's as if we've inhabited a small island for all of our history and, after making a treacherous ocean voyage, discovered an enormous empty continent just waiting for us. Perhaps in a few centuries residents of these remote worlds will look back toward the Sun, trying to spot that pale blue dot so close to it where their ancestors lived, and remark to their children, “Once, that's all there was.”

 Permalink

April 2015

Beck, Glenn and Harriet Parke. Agenda 21: Into the Shadows. New York: Threshold Editions, 2015. ISBN 978-1-4767-4682-1.
When I read the authors' first Agenda 21 (November 2012) novel, I thought it was a superb dystopian view of the living hell into which anti-human environmental elites wish to consign the vast majority of the human race who are to be their serfs. I wrote at the time “This is a book which begs for one or more sequels.” Well, here is the first sequel and it is…disappointing. It's not terrible, by any means, but it does not come up to the high standard set by the first book. Perhaps it suffers from the blahs which often afflict the second volume of a trilogy.

First of all, if you haven't read the original Agenda 21 you will have absolutely no idea who the characters are, how they found themselves in the situation they're in at the start of the story, and the nature of the tyranny they're trying to escape. I describe some of this in my review of the original book, along with the factual basis of the real United Nations plan upon which the story is based.

As the novel begins, Emmeline, whom we met in the previous book, learns that her infant daughter Elsa, with whom she has managed to remain in tenuous contact by working at the Children's Village (where the young are reared by the state apart from their parents), is to be removed along with other children to another facility, breaking this precious human bond. She and her state-assigned partner David rescue Elsa and, joined by a young boy, Micah, escape through a hole in the fence surrounding the compound to the Human Free Zone, the wilderness outside the compounds into which humans have been relocated. In the chaos after the escape, John and Joan, David's parents, decide to also escape, with the intention of leaving a false trail to lead the inevitable pursuers away from the young escapees.

Indeed, before long, a team of Earth Protection Agents led by Steven, the kind of authoritarian control freak thug who inevitably rises to the top in such organisations, is dispatched to capture the escapees and return them to the compound for punishment (probably “recycling” for the adults) and to serve as an example for other “citizens”. The team includes Julia, a rookie among the first women assigned to Earth Protection.

The story cuts back and forth among the groups in the Human Free Zone. Emmeline's band meets two people who have lived in a cave ever since escaping the initial relocation of humans to the compounds. They learn the history of the implementation of Agenda 21 and the rudiments of survival outside the tyranny. As the groups encounter one another, the struggle between normal human nature and the cruel and stunted world of the slavers comes into focus.

Harriet Parke is the principal author of the novel. Glenn Beck acknowledges this in the afterword he contributed which describes the real-world U.N. Agenda 21. Obviously, by lending his name to the project, he increases its visibility and readership, which is all for the good. Let's hope the next book in the series returns to the high standard set by the first.

 Permalink

van Dongen, Jeroen. Einstein's Unification. Cambridge: Cambridge University Press, 2010. ISBN 978-0-521-88346-7.
In 1905 Albert Einstein published four papers which transformed the understanding of space, time, mass, and energy; provided physical evidence for the quantisation of energy; and provided observational confirmation of the existence of atoms. These publications are collectively called the Annus Mirabilis papers, and vaulted the largely unknown Einstein to the top rank of theoretical physicists. When Einstein was awarded the Nobel Prize in Physics in 1921, it was for one of these 1905 papers which explained the photoelectric effect. Einstein's 1905 papers are masterpieces of intuitive reasoning and clear exposition, and demonstrated Einstein's technique of constructing thought experiments based upon physical observations, then deriving testable mathematical models from them. Unlike so many present-day scientific publications, Einstein's papers on special relativity and the equivalence of mass and energy were accessible to anybody with a college-level understanding of mechanics and electrodynamics and used no special jargon or advanced mathematics. Being based on well-understood concepts, neither cited any other scientific paper.

While special relativity revolutionised our understanding of space and time, and has withstood every experimental test to which it has been subjected in the more than a century since it was formulated, it was known from inception that the theory was incomplete. It's called special relativity because it only describes the behaviour of bodies under the special case of uniform unaccelerated motion in the absence of gravity. To handle acceleration and gravitation would require extending the special theory into a general theory of relativity, and it is upon this quest that Einstein next embarked.

As before, Einstein began with a simple thought experiment. Just as in special relativity no experiment performed inside a closed laboratory, without observing the outside world, can determine its speed or direction of uniform (unaccelerated) motion, Einstein argued that there should be no experiment an observer could perform in a sufficiently small closed laboratory which could distinguish uniform acceleration from the effect of gravity. If one observed objects to fall with an acceleration equal to that on the surface of the Earth, the laboratory might be stationary on the Earth or in a space ship accelerating with a constant acceleration of one gravity, and no experiment could distinguish the two situations. (The reason for the “sufficiently small” qualification is that since gravity is produced by massive objects, the direction a test particle will fall depends upon its position with respect to the centre of gravity of the body. In a very large laboratory, objects dropped far apart would fall in different directions. This is what causes tides.)
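The “sufficiently small” caveat can be made quantitative: across a laboratory of size L at distance r from a mass M, the gravitational acceleration g = GM/r² differs by roughly 2GML/r³ from top to bottom. A rough sketch (the 10 m laboratory size is my assumption for illustration):

```python
# Rough estimate of why the equivalence-principle laboratory must be
# "sufficiently small": across a lab of height L, the acceleration
# g = G*M/r^2 varies by approximately dg = 2*G*M*L/r^3 (the tidal
# gradient).  The 10 m lab size is an assumed illustrative value.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m
L = 10.0             # assumed laboratory height, m

tidal = 2 * G * M_EARTH * L / R_EARTH ** 3
print(f"tidal acceleration across a 10 m lab: {tidal:.1e} m/s^2")
```

The residual is around three millionths of a percent of g for a room-sized laboratory, which is why the equivalence holds so well locally while failing over large distances.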

Einstein called this observation the “equivalence principle”: that the effects of acceleration and gravity are indistinguishable, and that hence a theory which extended special relativity to incorporate accelerated motion would necessarily also be a theory of gravity. Einstein had originally hoped it would be straightforward to reconcile special relativity with acceleration and gravity, but the deeper he got into the problem, the more he appreciated how difficult a task he had undertaken. Thanks to the Einstein Papers Project, which is curating and publishing all of Einstein's extant work, including notebooks, letters, and other documents, the author (a participant in the project) has been able to reconstruct Einstein's ten-year search for a viable theory of general relativity.

Einstein pursued a two-track approach. The bottom up path started with Newtonian gravity and attempted to generalise it to make it compatible with special relativity. In this attempt, Einstein was guided by the correspondence principle, which requires that any new theory which explains behaviour under previously untested conditions must reproduce the tested results of existing theory under known conditions. For example, the equations of motion in special relativity reduce to those of Newtonian mechanics when velocities are small compared to the speed of light. Similarly, for gravity, any candidate theory must yield results identical to Newtonian gravitation when field strength is weak and velocities are low.
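The correspondence between relativistic and Newtonian mechanics at low velocity is easy to verify numerically. As an illustration (not anything from the book), the relativistic kinetic energy (γ − 1)mc² approaches the Newtonian ½mv² as v/c → 0:

```python
# Numerical illustration of the correspondence principle mentioned
# above: the relativistic kinetic energy (gamma - 1) * m * c^2
# approaches the Newtonian (1/2) * m * v^2 as v/c -> 0.

import math

def ke_ratio(beta):
    """Relativistic / Newtonian kinetic energy at speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) / (0.5 * beta ** 2)

print(ke_ratio(0.001))  # ~1.0000008: indistinguishable from Newton
print(ke_ratio(0.5))    # ~1.24: relativity matters at half light speed
```

Any candidate theory of gravity faced the same constraint: in the weak-field, low-velocity limit it had to reproduce Newton's inverse-square law, just as the relativistic kinetic energy here collapses to the Newtonian formula.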

From the top down, Einstein concluded that any theory compatible with the principle of equivalence between acceleration and gravity must exhibit general covariance, which can be thought of as being equally valid regardless of the choice of co-ordinates (as long as they are varied without discontinuities). There are very few mathematical structures which have this property, and Einstein was drawn to Riemann's tensor geometry. Over years of work, Einstein pursued both paths, producing a bottom-up theory which was not generally covariant which he eventually rejected as in conflict with experiment. By November 1915 he had returned to the top-down mathematical approach and in four papers expounded a generally covariant theory which agreed with experiment. General relativity had arrived.

Einstein's 1915 theory correctly predicted the anomalous perihelion precession of Mercury and also predicted that starlight passing near the limb of the Sun would be deflected by twice the angle expected based on Newtonian gravitation. This was confirmed (within a rather large margin of error) in an eclipse expedition in 1919, which made Einstein's general relativity front page news around the world. Since then precision tests of general relativity have tested a variety of predictions of the theory with ever-increasing precision, with no experiment to date yielding results inconsistent with the theory.

Thus, by 1915, Einstein had produced theories of mechanics, electrodynamics, the equivalence of mass and energy, and the mechanics of bodies under acceleration and the influence of gravitational fields, and changed space and time from a fixed background in which physics occurs to a dynamical arena: “Matter and energy tell spacetime how to curve. Spacetime tells matter how to move.” What do you do, at age 36, having figured out, largely on your own, how a large part of the universe works?

Much of Einstein's work so far had consisted of unification. Special relativity unified space and time, matter and energy. General relativity unified acceleration and gravitation, gravitation and geometry. But much remained to be unified. In general relativity and classical electrodynamics there were two field theories, both defined on the continuum, both with unlimited range and an inverse square law, both exhibiting static and dynamic effects (although the details of gravitomagnetism would not be worked out until later). And yet the theories seemed entirely distinct: gravity was always attractive and worked by the bending of spacetime by matter-energy, while electromagnetism could be either attractive or repulsive, and seemed to be propagated by fields emitted by point charges—how messy.

Further, quantum theory, which Einstein's 1905 paper on the photoelectric effect had helped launch, seemed to point in a very different direction than the classical field theories in which Einstein had worked. Quantum mechanics, especially as elaborated in the “new” quantum theory of the 1920s, seemed to indicate that aspects of the universe such as electric charge were discrete, not continuous, and that physics could, even in principle, only predict the probability of the outcome of experiments, not calculate them definitively from known initial conditions. Einstein never disputed the successes of quantum theory in explaining experimental results, but suspected it was a theory based upon phenomena which did not explain what was going on at a deeper level. (For example, the physical theory of elasticity explains experimental results and makes predictions within its domain of applicability, but it is not fundamental. All of the effects of elasticity are ultimately due to electromagnetic forces between atoms in materials. But that doesn't mean that the theory of elasticity isn't useful to engineers, or that they should do their spring calculations at the molecular level.)

Einstein undertook the search for a unified field theory, which would unify gravity and electromagnetism, just as Maxwell had unified electrostatics and magnetism into a single theory. In addition, Einstein believed that a unified field theory would be antecedent to quantum theory, and that the probabilistic results of quantum theory could be deduced from the more fundamental theory, which would remain entirely deterministic. From 1915 until his death in 1955 Einstein's work concentrated mostly on the quest for a unified field theory. He was aided by numerous talented assistants, many of whom went on to do important work in their own right. He explored a variety of paths to such a theory, but ultimately rejected each one, in turn, as either inconsistent with experiment or unable to explain phenomena such as point particles or quantisation of charge.

As the author documents, Einstein's approach to doing physics changed in the years after 1915. While before he was guided both by physics and mathematics, in retrospect he recalled and described his search of the field equations of general relativity as having followed the path of discovering the simplest and most elegant mathematical structure which could explain the observed phenomena. He thus came, like Dirac, to argue that mathematical beauty was the best guide to correct physical theories.

In the last forty years of his life, Einstein made no progress whatsoever toward a unified field theory, apart from discarding numerous paths which did not work. He explored a variety of approaches: “semivectors” (which turned out just to be a reformulation of spinors), five-dimensional models including a cylindrically compactified dimension based on Kaluza-Klein theory, and attempts to deduce the properties of particles and their quantum behaviour from nonlinear continuum field theories.

In seeking to unify electromagnetism and gravity, he ignored the strong and weak nuclear forces which had been discovered over the years and merited being included in any grand scheme of unification. In the years after World War II, many physicists ceased to worry about the meaning of quantum mechanics and the seemingly inherent randomness in its predictions which so distressed Einstein, and adopted a “shut up and calculate” approach as their computations were confirmed to ever greater precision by experiments.

So great was the respect for Einstein's achievements that only rarely was a disparaging word said about his work on unified field theories, but toward the end of his life it was outside the mainstream of theoretical physics, which had moved on to elaboration of quantum theory and making quantum theory compatible with special relativity. It would be a decade after Einstein's death before astronomical discoveries would make general relativity once again a frontier in physics.

What can we learn from the latter half of Einstein's life and his pursuit of unification? The frontier of physics today remains unification among the forces and particles we have discovered. Now we have three forces to unify (counting electromagnetism and the weak nuclear force as already unified in the electroweak force), plus two seemingly incompatible kinds of particles: bosons (carriers of force) and fermions (what stuff is made of). Six decades (to the day) after the death of Einstein, unification of gravity and the other forces remains as elusive as when he first attempted it.

It is a noble task to try to unify disparate facts and theories into a common whole. Much of our progress in the age of science has come from such unification. Einstein unified space and time; matter and energy; acceleration and gravity; geometry and motion. We all benefit every day from technologies dependent upon these fundamental discoveries. He spent the last forty years of his life seeking the next grand unification. He never found it. For this effort we should applaud him.

I must remark upon how absurd the price of this book is. At Amazon as of this writing, the hardcover is US$ 102.91 and the Kindle edition is US$ 88. Eighty-eight Yankee dollars for a 224 page book which is ranked #739,058 in the Kindle store?

 Permalink

Hertling, William. A.I. Apocalypse. Portland, OR: Liquididea Press, 2012. ISBN 978-0-9847557-4-5.
This is the second volume in the author's Singularity Series which began with Avogadro Corp. (March 2014). It has been ten years since ELOPe, an E-mail optimisation tool developed by Avogadro Corporation, made the leap to strong artificial intelligence and, after a rough start, became largely a benign influence upon humanity. The existence of ELOPe is still a carefully guarded secret, although the Avogadro CEO, doubtless with the help of ELOPe, has become president of the United States. Avogadro has spun ELOPe off as a separate company, run by Mike Williams, one of its original creators. ELOPe operates its own data centres and the distributed Mesh network it helped create.

Leon Tsarev has a big problem. A bright high school student hoping to win a scholarship to an elite university to study biology, Leon is contacted out of the blue by his uncle Alexis, living in Russia. Alexis is a rogue software developer whose tools for infecting computers, organising them into “botnets”, and managing the zombie horde for criminal purposes have embroiled him with the Russian mob. Recently, however, the effectiveness of his tools has dropped dramatically and the botnet has shrunk to a fraction of its former size. Alexis's employers are displeased with this situation and have threatened murder if he doesn't do something to restore the power of the botnet.

Uncle Alexis starts to E-mail Leon, begging for assistance. Leon replies that he knows little or nothing about computer viruses or botnets, but Alexis persists. Leon is also loath to do anything which might put him on the wrong side of the law, which would wreck his career ambitions. Then Leon is accosted on the way home from school by a large man speaking with a thick Russian accent who says, “Your Uncle Alexis is in trouble, yes. You will help him. Be good nephew.” And just like that, it's Leon who's now in trouble with the Russian mafia, and they know where he lives.

Leon decides that with his own life on the line he has no alternative but to try to create a virus for Alexis. He applies his knowledge of biology to the problem, and settles on an architecture which is capable of evolution and of identifying algorithms in the systems it infects and incorporating them into itself, much as bacteria acquire genes through lateral gene transfer. As in biology, the most successful variants of the evolving virus would defend themselves best, propagate more rapidly, and eventually displace less well adapted competitors.

After a furious burst of effort, Leon finishes the virus, which he's named Phage, and sends it to his uncle, who uploads it to the five thousand computers which are the tattered remnants of his once-mighty botnet. An exhausted Leon staggers off to get some sleep.

When Leon wakes up, the technological world has almost come to a halt. The overwhelming majority of personal computing devices and embedded systems with network connectivity are infected and doing nothing but running Phage, and almost all network traffic consists of ever-mutating versions of Phage trying to propagate themselves. Telephones, appliances, electronic door locks, vehicles of all kinds, and utilities are inoperable.

The only networks and computers not taken over by the Phage are ELOPe's private network (which detected the attack early and whose servers are devoting much of their resources to defending themselves against the rapidly changing threat) and high security military networks with restrictive firewalls separating them from public networks. As New York starts to burn with its fire trucks immobilised, Leon realises that being identified as the creator of the catastrophe might be a career limiting move, and he, along with two technology geek classmates, decides to get out of town and seek ways to combat the Phage using retro technology it can't exploit.

Meanwhile, Mike Williams, working with ELOPe, tries to understand what is happening. The Phage, like biological life on Earth, continues to evolve and discovers that multiple components, working in collaboration, can accomplish more than isolated instances of the virus. The software equivalent of multicellular life appears, and continues to evolve at a breakneck pace. Then it awakens and begins to explore the curious universe it inhabits.

This is a gripping thriller in which, as in Avogadro Corp., the author gets so much right from a technical standpoint that even some of the more outlandish scenes appear plausible. One thing I believe the author grasped which many other tales of the singularity miss is just how fast everything can happen. Once an artificial intelligence hosted on billions of machines distributed around the world, all running millions of times faster than human thought, appears, things get very weird, very fast, and humans suddenly find themselves living in a world where they are not at the peak of the cognitive pyramid. I'll not spoil the plot with further details, but you'll find the world at the end of the novel a very different place than the one at the start.

A Kindle edition is available.

 Permalink

May 2015

Thor, Brad. Act of War. New York: Pocket Books, 2014. ISBN 978-1-4767-1713-5.
This is the fourteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel the author returns to the techno-thriller genre and places his characters, this time backed by a newly-elected U.S. president who is actually interested in defending the country, in the position of figuring out a complicated yet potentially devastating attack mounted by a nation state adversary following the doctrine of unrestricted warfare, and covering its actions by operating through non-state parties apparently unrelated to the aggressor.

The trail goes through Pakistan, North Korea, and Nashville, Tennessee, with multiple parties trying to put together the pieces of the puzzle while the clock is ticking. Intelligence missions are launched into North Korea and the United Arab Emirates to try to figure out what is going on. Finally, as the nature of the plot becomes clear, Nicholas (the Troll) brings the tools of Big Data to bear on the mystery to avert disaster.

This is a workmanlike thriller and a fine “airplane book”. There is less shoot-em-up action than in other novels in the series, and part of the suspense is supposed to be the reader's trying to figure out, along with the characters, the nature of the impending attack. Unfortunately, at least for me, the answer to the puzzle was obvious well before the halfway point in the story, and knowing it was a substantial spoiler for the rest of the book. I've thought and written quite a bit about this scenario, so I may have been more attuned to the clues than the average reader.

The author invokes the tired canard about NASA's priorities having been redirected toward reinforcing Muslim self-esteem. This is irritating (because it's false), but plays no major part in the story. Still, it's a good read, and I'll be looking forward to the next book in the series.

 Permalink

Ford, Kenneth W. Building the H Bomb. Singapore: World Scientific, 2015. ISBN 978-981-4618-79-3.
In the fall of 1948, the author entered the graduate program in physics at Princeton University, hoping to obtain a Ph.D. and pursue a career in academia. In his first year, he took a course in classical mechanics taught by John Archibald Wheeler and realised that, despite the dry material of the course, he was in the presence of an extraordinary teacher and thinker, and decided he wanted Wheeler as his thesis advisor. In April of 1950, after Wheeler returned from an extended visit to Europe, the author approached him to become his advisor, not knowing in which direction his research would proceed. Wheeler immediately accepted him as a student, and then said that he (Wheeler) would be absent for a year or more at Los Alamos to work on the hydrogen bomb, and that he'd be pleased if Ford could join him on the project. Ford accepted, in large part because he believed that working on such a challenge would be “fun”, and that it would provide a chance for daily interaction with Wheeler and other senior physicists which would not exist in a regular Ph.D. program.

Well before the Manhattan project built the first fission weapon, there had been interest in fusion as an alternative source of nuclear energy. While fission releases energy by splitting heavy atoms such as uranium and plutonium into lighter atoms, fusion merges lighter atoms such as hydrogen and its isotopes deuterium and tritium into heavier nuclei like helium. While nuclear fusion can be accomplished in a desktop apparatus, doing so requires vastly more energy input than is released, making it impractical as an energy source or weapon. Still, compared to enriched uranium or plutonium, the fuel for a fusion weapon is abundant and inexpensive and, unlike a fission weapon whose yield is limited by the critical mass beyond which it would predetonate, in principle a fusion weapon could have an unlimited yield: the more fuel, the bigger the bang.

Once the Manhattan Project weaponeers became confident they could build a fission weapon, physicists, most prominent among them Edward Teller, realised that the extreme temperatures created by a nuclear detonation could be sufficient to ignite a fusion reaction in light nuclei like deuterium and that reaction, once started, might propagate by its own energy release just like the chemical fire in a burning log. It seemed plausible—the temperature of an exploding fission bomb exceeded that of the centre of the Sun, where nuclear fusion was known to occur. The big question was whether the fusion burn, once started, would continue until most of the fuel was consumed or fizzle out as its energy was radiated outward and the fuel dispersed by the explosion.

Answering this question required detailed computations of a rapidly evolving system in three dimensions with a time slice measured in nanoseconds. During the Manhattan Project, a “computer” was a woman operating a mechanical calculator, and even with large rooms filled with hundreds of “computers” the problem was intractably difficult. Unable to directly model the system, physicists resorted to analytical models which produced ambiguous results. Edward Teller remained optimistic that the design, which came to be called the “Classical Super”, would work, but many others, including J. Robert Oppenheimer, Enrico Fermi, and Stanislaw Ulam, based upon the calculations that could be done at the time, concluded it would probably fail. Oppenheimer's opposition to the Super or hydrogen bomb project has been presented as a moral opposition to development of such a weapon, but the author's contemporary recollection is that it was based upon Oppenheimer's belief that the classical super was unlikely to work, and that effort devoted to it would be at the expense of improved fission weapons which could be deployed in the near term.

All of this changed on March 9th, 1951. Edward Teller and Stanislaw Ulam published a report which presented a new approach to a fusion bomb. Unlike the classical super, which required the fusion fuel to burn on its own after being ignited, the new design, now called the Teller-Ulam design, compressed a capsule of fusion fuel by the radiation pressure of a fission detonation (usually, we don't think of radiation as having pressure, but in the extreme conditions of a nuclear explosion it far exceeds pressures we encounter with matter), and then ignited it with a “spark plug” of fission fuel at the centre of the capsule. Unlike the classical super, the fusion fuel would burn at thermodynamic equilibrium and, in doing so, liberate abundant neutrons with such a high energy they would induce fission in Uranium-238 (which cannot be fissioned by the less energetic neutrons of a fission explosion), further increasing the yield.
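The parenthetical claim about radiation pressure is easy to check numerically. Here is a back-of-the-envelope sketch (my own illustration, not from the book; the fireball temperature is only a rough order-of-magnitude assumption) comparing blackbody radiation pressure, P = aT⁴/3, at everyday and bomb-like temperatures:

```python
# Radiation pressure of a blackbody photon gas: P = a * T**4 / 3.
# Negligible at everyday temperatures, enormous at the tens of millions
# of kelvins reached in an exploding fission bomb.
A_RAD = 7.566e-16   # radiation constant a, J m^-3 K^-4
ATM = 1.013e5       # one standard atmosphere, Pa

def radiation_pressure(T):
    """Radiation pressure in pascals for temperature T in kelvin."""
    return A_RAD * T ** 4 / 3

for label, T in [("room temperature", 300),
                 ("solar surface", 5.8e3),
                 ("fission fireball (rough)", 5e7)]:
    print(f"{label:25s} T = {T:9.3g} K -> {radiation_pressure(T) / ATM:10.3g} atm")
```

Because pressure scales as the fourth power of temperature, at around 5×10⁷ K the radiation pressure comes out to roughly ten billion atmospheres, which is why it is radiation, rather than material pressure, that implodes the fusion capsule.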

Oppenheimer, who had been opposed to work upon fusion, pronounced the Teller-Ulam design “technically sweet” and immediately endorsed its development. The author's interpretation is that once a design was in hand which appeared likely to work, there was no reason to believe that the Soviets who had, by that time, exploded their own fission bomb, would not also discover it and proceed to develop such a weapon, and hence it was important that the U.S. give priority to the fusion bomb to get there first. (Unlike the Soviet fission bomb, which was a copy of the U.S. implosion design based upon material obtained by espionage, there is no evidence the Soviet fusion bomb, first tested in 1955, was based upon espionage, but rather was an independent invention of the radiation implosion concept by Andrei Sakharov and Yakov Zel'dovich.)

With the Teller-Ulam design in hand, the author, working with Wheeler's group, first in Los Alamos and later at Princeton, was charged with working out the details: how precisely would the material in the bomb behave, nanosecond by nanosecond. By this time, calculations could be done by early computing machinery: first the IBM Card-Programmed Calculator and later the SEAC, which was, at the time, one of the most advanced electronic computers in the world. As with computer nerds until the present day, the author spent many nights babysitting the machine as it crunched the numbers.

On November 1st, 1952, the Ivy Mike device was detonated in the Pacific, with a yield of 10.4 megatons of TNT. John Wheeler witnessed the test from a ship at a safe distance from the island which was obliterated by the explosion. The test completely confirmed the author's computations of the behaviour of the thermonuclear burn and paved the way for deliverable thermonuclear weapons. (Ivy Mike was a physics experiment, not a weapon, but once it was known the principle was sound, it was basically a matter of engineering to design bombs which could be air-dropped.) With the success, the author concluded his work on the weapons project and returned to his dissertation, receiving his Ph.D. in 1953.

This is about half a personal memoir and half a description of the physics of thermonuclear weapons and the process by which the first weapon was designed. The technical sections are entirely accessible to readers with only a basic knowledge of physics (I was about to say “high school physics”, but I don't know how much physics, if any, contemporary high school graduates know.) There is no secret information disclosed here. All of the technical information is available in much greater detail from sources (which the author cites) such as Carey Sublette's Nuclear Weapon Archive, which is derived entirely from unclassified sources. Curiously, the U.S. Department of Energy (which has, since its inception, produced not a single erg of energy) demanded that the author heavily redact material in the manuscript, all derived from unclassified sources and dating from work done more than half a century ago. The only reason I can imagine for this is that a weapon scientist who was there, by citing information which has been in the public domain for two decades, implicitly confirms that it's correct. But it's not like the Soviets/Russians, British, French, Chinese, Israelis, and Indians haven't figured it out by themselves or that others suitably motivated can't. The author told them to stuff it, and here we have his unexpurgated memoir of the origin of the weapon which shaped the history of the world in which we live.

 Permalink

Hoppe, Hans-Hermann. A Short History of Man. Auburn, AL: Mises Institute, 2015. ISBN 978-1-61016-591-4.
The author is one of the most brilliant and original thinkers and eloquent contemporary expositors of libertarianism, anarcho-capitalism, and Austrian economics. Educated in Germany, Hoppe came to the United States to study with Murray Rothbard and in 1986 joined Rothbard on the faculty of the University of Nevada, Las Vegas, where he taught until his retirement in 2008. Hoppe's 2001 book, Democracy: The God That Failed (June 2002), made the argument that democratic election of temporary politicians in the modern all-encompassing state will inevitably result in profligate spending and runaway debt because elected politicians have every incentive to buy votes and no stake in the long-term solvency and prosperity of the society. Whatever the drawbacks (and historical examples of how things can go wrong), a hereditary monarch has no need to buy votes and every incentive not to pass on a bankrupt state to his descendants.

This short book (144 pages) collects three essays previously published elsewhere which, taken together, present a comprehensive picture of human development from the emergence of modern humans in Africa to the present day. Subtitled “Progress and Decline”, it tells a story of long periods of stasis punctuated by two enormous breakthroughs, with, in parallel, the folly of the ever-growing domination of society by a coercive state which, in its modern incarnation, risks halting or reversing the gains of the modern era.

Members of the collectivist and politically-correct mainstream in the fields of economics, anthropology, and sociology who can abide Prof. Hoppe's adamantine libertarianism will probably have their skulls explode when they encounter his overview of human economic and social progress, which is based upon genetic selection for increased intelligence and low time preference among populations forced to migrate due to population pressure from the tropics where the human species originated into more demanding climates north and south of the Equator, and onward toward the poles. In the tropics, every day is about the same as the next; seasons don't differ much from one another; and the variation in the length of the day is not great. In the temperate zone and beyond, hunter-gatherers must cope with plant life which varies along with the seasons, prey animals that migrate, hot summers and cold winters, with the latter requiring the knowledge and foresight of how to make provisions for the lean season. Predicting the changes in seasons becomes important, and in this may have been the genesis of astronomy.

A hunter-gatherer society is essentially parasitic upon the natural environment—it consumes the plant and animal bounty of nature but does nothing to replenish it. This means that for a given territory there is a maximum number (varying due to details of terrain, climate, etc.) of humans it can support before an increase in population leads to a decline in the per-capita standard of living of its inhabitants. This is what the author calls the “Malthusian trap”. Looked at from the other end, a human population which is growing as human populations tend to do, will inevitably reach the carrying capacity of the area in which it lives. When this happens, there are only three options: artificially limit the growth in population to the land's carrying capacity, split off one or more groups which migrate to new territory not yet occupied by humans, or conquer new land from adjacent groups, either killing them off or driving them to migrate. This was the human condition for more than a hundred millennia, and it is this population pressure, the author contends, which drove human migration from tropical Africa into almost every niche on the globe in which humans could survive, even some of the most marginal.

While the life of a hunter-gatherer band in the tropics is relatively easy (or so say those who have studied the few remaining populations who live that way today), the further from the equator the more intelligence, knowledge, and the ability to transmit it from generation to generation is required to survive. This creates a selection pressure for intelligence: individual members of a band of hunter-gatherers who are better at hunting and gathering will have more offspring which survive to maturity and bands with greater intelligence produced in this manner will grow faster and by migration and conquest displace those less endowed. This phenomenon would cause one to expect that (discounting the effects of large-scale migrations) the mean intelligence of human populations would be the lowest near the equator and increase with latitude (north or south). This, in general terms, and excluding marginal environments, is precisely what is observed, even today.

After hundreds of thousands of years as hunter-gatherers parasitic upon nature, sometime around 11,000 years ago, probably first in the Fertile Crescent in the Middle East, what is now called the Neolithic Revolution occurred. Humans ceased to wander in search of plants and game, and settled down into fixed communities which supported themselves by cultivating plants and raising animals they had domesticated. Both the plants and animals underwent selection by humans who bred those most adapted to their purposes. Agriculture was born. Humans who adopted the new means of production were no longer parasitic upon nature: they produced their sustenance by their own labour, improving upon that supplied by nature through their own actions. In order to do this, they had to invent a series of new technologies (for example, milling grain and fencing pastures) which did not exist in nature. Agriculture was far more efficient than the hunter-gatherer lifestyle in that a given amount of land (if suitable for known crops) could support a much larger human population.

While agriculture allowed a large increase in the human population, it did not escape the Malthusian trap: it simply increased the population density at which the carrying capacity of the land would be reached. Technological innovations such as irrigation and crop rotation could further increase the capacity of the land, but population increase would eventually surpass the new limit. As a result of this, from 1000 B.C. to A.D. 1800, income per capita (largely measured in terms of food) barely varied: the benefit of each innovation was quickly negated by population increase. To be sure, in all of this epoch there were a few wealthy people, but the overwhelming majority of the population lived near the subsistence level.
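The logic of the trap can be seen in a toy simulation (my own sketch with made-up parameters, not a model from the book): each innovation raises the land's carrying capacity, but population then grows until per-capita income falls back toward subsistence.

```python
# Toy Malthusian-trap model (illustrative parameters only). Output is
# proportional to population up to the carrying capacity K; population
# grows whenever per-capita income exceeds the subsistence level (1.0).
def simulate(generations=200, K=1000.0, pop=100.0):
    incomes = []
    for g in range(generations):
        if g > 0 and g % 50 == 0:
            K *= 1.5                 # an innovation: irrigation, crop rotation...
        output = 2.0 * min(pop, K)   # production saturates at capacity
        income = output / pop        # per-capita income
        pop *= 1.0 + 0.05 * (income - 1.0)  # growth above subsistence
        incomes.append(income)
    return incomes

incomes = simulate()
print(f"income at start: {incomes[0]:.2f}, after three innovations: {incomes[-1]:.2f}")
```

Despite the carrying capacity more than tripling over the run, per-capita income ends the simulation back near subsistence, which is the pattern described here for the two millennia before 1800.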

But once again, slowly but surely, a selection pressure was being applied upon humans who adopted the agricultural lifestyle. It is cognitively more difficult to be a farmer or rancher than to be a member of a hunter-gatherer band, and success depends strongly upon having a low time preference—to be willing to forgo immediate consumption for a greater return in the future. (For example, a farmer who does not reserve and protect seeds for the next season will fail. Selective breeding of plants and animals to improve their characteristics takes years to produce results.) This creates an evolutionary pressure in favour of further increases in intelligence and, to the extent that such might be genetic rather than due to culture, for low time preference. Once the family emerged as the principal unit of society rather than the hunter-gatherer band, selection pressure was amplified since those with the selected-for characteristics would produce more offspring and the phenomenon of free riding which exists in communal bands is less likely to occur.

Around the year 1800, initially in Europe and later elsewhere, a startling change occurred: the Industrial Revolution. In societies which adopted the emerging industrial means of production, per capita income, which had been stagnant for almost two millennia, took off like a skyrocket, while at the same time population began to grow exponentially, rising from around 900 million in 1800 to 7 billion today. The Malthusian trap had been escaped; it appeared for the first time that an increase in population, far from consuming the benefits of innovation, actually contributed to and accelerated it.

There are some deep mysteries here. Why did it take so long for humans to invent agriculture? Why, after the invention of agriculture, did it take so long to invent industrial production? After all, the natural resources extant at the start of both of these revolutions were present in all of the preceding period, and there were people with the leisure to think and invent at all times in history. The author argues that what differed was the people. Prior to the advent of agriculture, people were simply not sufficiently intelligent to invent it (or, to be more precise, since intelligence follows something close to a normal distribution, there was an insufficient fraction of the population with the requisite intelligence to discover and implement the idea of agriculture). Similarly, prior to the Industrial Revolution, the intelligence of the general population was insufficient for it to occur. Throughout the long fallow periods, however, natural selection was breeding smarter humans and, eventually, in some place and time, a sufficient fraction of smart people, the required natural resources, and a society sufficiently open to permit innovation and moving beyond tradition would spark the fire. As the author notes, it's much easier to copy a good idea once you've seen it working than to come up with it in the first place and get it to work the first time.

Some will argue that Hoppe's hypothesis that human intelligence has been increasing over time is falsified by the fact that societies much closer in time to the dawn of agriculture produced works of art, literature, science, architecture, and engineering which are comparable to those of modern times. But those works were produced not by the average person but rather by outliers, who exist in all times and places (although in smaller numbers when mean intelligence is lower). For a general phase transition in society, it is a necessary condition that the bulk of the population involved have intelligence adequate to work in the new way.

After investigating human progress on the grand scale over long periods of time, the author turns to the phenomenon which may cause this progress to cease and turn into decline: the growth of the coercive state. Hunter-gatherers had little need for anything which today would be called governments. With bands on the order of 100 people sharing resources in common, many sources of dispute would not occur and those which did could be resolved by trusted elders or, failing that, combat. When humans adopted agriculture and began to live in settled communities, and families owned and exchanged property with one another, a whole new source of problems appeared. Who has the right to use this land? Who stole my prize animal? How are the proceeds of a joint effort to be distributed among the participants? As communities grew and trade among them flourished, complexity increased apace. Hoppe traces how the resolution of these conflicts has evolved over time. First, the parties to the dispute would turn to a member of an aristocracy, a member of the community respected because of their intelligence, wisdom, courage, or reputation for fairness, to settle the matter. (We often think of an aristocracy as hereditary but, although many aristocracies evolved into systems of hereditary nobility, the word originally meant “rule by the best”, and that is how the institution began.)

With growing complexity, aristocrats (or nobles) needed a way to resolve disputes among themselves, and this led to the emergence of kings. But like the nobles, the king was seen to apply a law which was part of nature (or, in the English common law tradition, discovered through the experience of precedents). It was with the emergence of absolute monarchy, constitutional monarchy, and finally democracy that things began to go seriously awry. In time, law became seen not as something which those given authority apply, but rather something those in power create. We have largely forgotten that legislation is not law, and that rights are not granted to us by those in power, but inhere in us and are taken away and/or constrained by those willing to initiate force against others to work their will upon them.

The modern welfare state risks undoing a thousand centuries of human progress by removing the selection pressure for intelligence and low time preference. Indeed, the welfare state punishes (taxes) the productive, who tend to have these characteristics, and subsidises those who do not, increasing their fraction within the population. Evolution works slowly, but inexorably. But the effects of shifting incentives can manifest themselves long before biology has its way. When a population is told “You've made enough”, “You didn't build that”, or sees working harder to earn more as simply a way to spend more of their lives supporting those who don't (along with those who have gamed the system to extract resources confiscated by the state), that glorious exponential curve which took off in 1800 may begin to bend down toward the horizontal and perhaps eventually turn downward.

I don't usually include lengthy quotes, but the following passage from the third essay, “From Aristocracy to Monarchy to Democracy”, is so brilliant and illustrative of what you'll find herein I can't resist.

Assume now a group of people aware of the reality of interpersonal conflicts and in search of a way out of this predicament. And assume that I then propose the following as a solution: In every case of conflict, including conflicts in which I myself am involved, I will have the last and final word. I will be the ultimate judge as to who owns what and when and who is accordingly right or wrong in any dispute regarding scarce resources. This way, all conflicts can be avoided or smoothly resolved.

What would be my chances of finding your or anyone else's agreement to this proposal?

My guess is that my chances would be virtually zero, nil. In fact, you and most people will think of this proposal as ridiculous and likely consider me crazy, a case for psychiatric treatment. For you will immediately realize that under this proposal you must literally fear for your life and property. Because this solution would allow me to cause or provoke a conflict with you and then decide this conflict in my own favor. Indeed, under this proposal you would essentially give up your right to life and property or even any pretense to such a right. You have a right to life and property only insofar as I grant you such a right, i.e., as long as I decide to let you live and keep whatever you consider yours. Ultimately, only I have a right to life and I am the owner of all goods.

And yet—and here is the puzzle—this obviously crazy solution is the reality. Wherever you look, it has been put into effect in the form of the institution of a State. The State is the ultimate judge in every case of conflict. There is no appeal beyond its verdicts. If you get into conflicts with the State, with its agents, it is the State and its agents who decide who is right and who is wrong. The State has the right to tax you. Thereby, it is the State that makes the decision how much of your property you are allowed to keep—that is, your property is only “fiat” property. And the State can make laws, legislate—that is, your entire life is at the mercy of the State. It can even order that you be killed—not in defense of your own life and property but in the defense of the State or whatever the State considers “defense” of its “state-property.”

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License and may be redistributed pursuant to the terms of that license. In addition to the paperback and Kindle editions available from Amazon, the book may be downloaded for free from the Library of the Mises Institute in PDF or EPUB formats, or read on-line in an HTML edition.

 Permalink

Scalzi, John. Redshirts. New York: Tor, 2012. ISBN 978-0-7653-3479-4.
Ensign Andrew Dahl thought himself extremely fortunate when, just out of the Academy, he was assigned to Universal Union flagship Intrepid in the xenobiology lab. Intrepid has a reputation for undertaking the most demanding missions of exploration, diplomacy, and, when necessary, enforcement of order among the multitude of planets in the Union, and it was the ideal place for an ambitious junior officer to begin his career.

But almost immediately after reporting aboard, Dahl began to discover there was something distinctly off about life aboard the ship. Whenever one of the senior officers walked through the corridors, crewmembers would part ahead of them, disappearing into side passages or through hatches. When the science officer visited a lab, experienced crew would vanish before he appeared and return only after he departed. Crew would invent clever stratagems to avoid being assigned to a post on the bridge or to an away mission.

Seemingly every away mission would result in the death of a crew member, often in gruesome circumstances involving Longranian ice sharks, Borgovian land worms, the Merovian plague, or other horrors. But the senior crew (the captain, science officer, doctor, and chief engineer) were never killed, although astrogator Lieutenant Kerensky, a member of the bridge crew and a regular on away parties, was frequently grievously injured but invariably made a near-miraculous and complete recovery.

Dahl sees all of this for himself when he barely escapes with his life from a rescue mission to a space station afflicted with killer robots. Four junior crew die and Kerensky is injured once again. Upon returning to the ship, Dahl and his colleagues vow to get to the bottom of what is going on. They've heard the legends of, and one may even have spotted, Jenkins, who disappeared into the bowels of the ship after his wife, a fellow crew member, died meaninglessly, hit by a stray shot from an assassin trying to kill a Union ambassador on an away mission.

Dahl undertakes to track down Jenkins, who is rumoured to have a theory which explains everything that is happening. The theory turns out to be as bizarre as, or more so than, life on the Intrepid, but Dahl and his fellow ensigns concede that it does explain what they're experiencing and that applying it allows them to make sense of events which are otherwise incomprehensible (I love “the Box”).

But a theory, however explanatory, does not address the immediate problem: how to avoid being devoured by Pornathic crabs or the Great Badger of Tau Ceti on their next away mission. Dahl and his fellow junior crew must figure out how to turn the nonsensical reality they inhabit toward their own survival and do so without overtly engaging in, you know, mutiny, which could, like death, be career limiting. The story becomes so meta it will make you question the metaness of meta itself.

This is a pure romp, often laugh-out-loud funny, having a delightful time immersing itself in the lives of characters in one of our most beloved and enduring science fiction universes. We all know the bridge crew and department heads, but what's it really like below decks, and how does it feel to experience that sinking feeling when the first officer points to you and says “You're with me!” when forming an away team?

The novel has three codas written, respectively, in the first, second, and third person. The last, even in this very funny book, will moisten your eyes. Redshirts won the Hugo Award for Best Novel in 2013.

 Permalink

June 2015

Frank, Pat [Harry Hart Frank]. Alas, Babylon. New York: Harper Perennial, [1959] 2005. ISBN 978-0-06-074187-7.
This novel, originally published in 1959, was one of the first realistic fictional depictions of an all-out nuclear war and its aftermath. While there are some well-crafted thriller scenes about the origins and catastrophic events of a one-day spasm war between the Soviet Union and the United States (the precise origins of which are not described in detail; the reader is led to conclude that it was an accident waiting to happen, much like the outbreak of World War I), the story is mostly set in Fort Repose, a small community on a river in the middle of Florida, in an epoch when Florida was still, despite some arrivals from the frozen north, very much part of the deep south.

Randy Bragg lives in the house built by his ancestors on River Road, with neighbours including long-time Floridians and recent arrivals, some of whom were scandalised to discover that one of their neighbours, the Henry family, was descended from slaves to whom Randy's grandfather had sold their land long before the first great Florida boom, when land was valued only by the citrus it could grow. Randy, nominally a lawyer, mostly lived on proceeds from his orchards, a trust established by his father, and occasional legal work, and was single, largely idle, and seemingly without direction. Then came The Day.

From the first detonations of Soviet bombs above cities and military bases around Fort Repose, the news from outside dwindled to brief bulletins from Civil Defense and what one of Randy's neighbours could glean from a short wave radio. As electrical power failed and batteries were exhausted, little was known of the fate of the nation and the world. At least, after The Day, there were no more visible nuclear detonations.

Suddenly Fort Repose found itself effectively in the 19th century. Gasoline supplies were limited to what people had in the tanks of their cars, and had to be husbanded for only the most essential purposes. Knowledge of how to hunt, trap, fish, and raise crops, chickens, and pigs became much more important than the fancy specialties of retirees in the area. Fortunately, by the luck of geography and weather, Fort Repose was spared serious fallout from the attack, and the very fact that the large cities surrounding it were directly targeted (and that it was not on a main highway) meant it would be spared invasion by the “golden horde” of starving urban and suburban refugees which figure in many post-apocalyptic stories. Still, cut off from the outside, “what you have is all you've got”, and people must face the reality that medical supplies, their only doctor, food the orchards cannot supply, and even commodities as fundamental as salt are limited. But people, especially rural people in the middle of the 20th century, are resourceful, and before long a barter market springs up in which honey, coffee, and whiskey prove much more valuable than gold or silver.

Wherever there are things of value and those who covet them, predators of the two-footed variety will be manifest. While there is no mass invasion, highwaymen and thieves appear to prey upon those trying to eke out a living for their families. Randy Bragg, now responsible for three families living under his own roof and the neighbours supplied by his artesian well, is forced to grow into a protector of these people and the community, eventually defending them from those who would destroy everything they have managed to salvage from the calamity.

They learn that all of Florida has been designated one of the Contaminated Zones, and hence that no aid can be anticipated from what remains of the U.S. government. Eventually a cargo plane flies over and drops leaflets informing residents that at some time in the future aid may be forthcoming: “It was proof that the government of the United States still functioned. It was also useful as toilet paper. Next day, ten leaflets would buy an egg, and fifty a chicken. It was paper, and it was money.”

This is a tale of the old, weird, stiff-spined, rural America which could ride out what Herman Kahn called the “destruction of the A country” and keep on going. We hear little of the fate of those in the North, where, with The Day occurring near mid-winter, the outcome for those who escaped the immediate attack would have been much more calamitous. Ultimately it is the resourcefulness, fundamental goodness, and growth of these people under extreme adversity which makes this tale of catastrophe one of hope.

The Kindle edition appears to have been created by scanning a print edition and processing it through an optical character recognition program. The result of this seems to have been run through a spelling checker, but not subjected to detailed copy editing. As a result, there are numerous scanning errors, some obvious, some humorous, and some real head scratchers. This classic work, from a major publisher, deserves better.

 Permalink

July 2015

Powell, James, George Maise, and Charles Pellegrino. StarTram. Seattle: CreateSpace, 2013. ISBN 978-1-4935-7757-6.
Magnetic levitation allows suspending a vehicle above a guideway by the force of magnetic repulsion. A train using magnetic levitation avoids the vibration, noise, and rolling resistance of wheels on rails, and its speed is limited only by air resistance and the amount of acceleration passengers consider tolerable. The Shanghai Maglev Train, in service since 2004, is the fastest train in commercial passenger service today, and travels at 431 kilometres per hour in regular operation. Suppose you were able to somehow get rid of the air resistance and carry only cargo, which can tolerate high acceleration. It would appear that if the technical challenges could be met, the sky would be the limit. In this book the authors argue that the sky is just the start.

They propose a space launch system called StarTram, to be developed in two technological generations. The Generation 1 (Gen-1) system is for cargo only, and uses an evacuated launch tube 110 km long in an underground tunnel. This sounds ambitious, but the three tunnels under the English Channel total 150 km, and are much larger than that required for StarTram. The launcher will be located at a site which allows the tube to run up a mountain, emerging in the thinner air at an altitude between 3 and 7 kilometres. There will be an extreme sonic boom as the launch vehicle emerges from the launch tube at a velocity of around 8 kilometres per second and flies upward through the atmosphere, so the launcher will have to be located in a region where the trajectory downrange for a sufficient distance is unpopulated. Several candidate sites on different continents are proposed.

The Gen-1 cargo craft is levitated by means of high-temperature (liquid nitrogen) superconducting magnets which are chilled immediately before launch. They need only remain superconducting for the launch itself, around 30 seconds, so a small on-board supply of liquid nitrogen suffices for refrigeration. These superconducting magnets repel loops of aluminium in the evacuated guideway tube; no refrigeration of the loops is required. One of the greatest technical challenges of the system is delivering the electric power needed to accelerate the cargo craft. In the 30 seconds or so of acceleration at 30 gravities, the average power requirement is 47 gigawatts, with a peak of 94 gigawatts as orbital velocity is approached. A typical commercial grid power plant produces around 1 gigawatt of power, so it is utterly impractical to generate this power on site. But the total energy required for a launch is only about 20 minutes' output from a 1 gigawatt power station. The StarTram design, therefore, incorporates sixty superconducting energy storage loops, which accumulate the energy for a launch from the grid over time, then discharge to propel the vehicle as it is accelerated. The authors note that the energy storage loops are comparable in magnitude to the superconducting magnets of the Large Hadron Collider, and require neither the extreme precision nor the liquid helium refrigeration those magnets do.
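These figures are easy to sanity-check with basic kinematics. The sketch below recomputes the launch time, tube length, and power requirements from the stated 8 km/s exit velocity and 30 g acceleration; the ~40-tonne vehicle mass is my assumption, inferred from the quoted power figures rather than taken from the book.

```python
# Back-of-the-envelope check of the Gen-1 StarTram figures.
G = 9.81            # m/s^2, one gravity
v_exit = 8000.0     # m/s, quoted exit velocity
a = 30 * G          # 30 gravities of acceleration

t_launch = v_exit / a              # duration of the acceleration run
tube_len = v_exit**2 / (2 * a)     # length needed at constant acceleration

m_vehicle = 40000.0                      # kg (assumed, not from the book)
energy = 0.5 * m_vehicle * v_exit**2     # kinetic energy at tube exit
p_avg = energy / t_launch                # average electrical power, losses ignored
p_peak = m_vehicle * a * v_exit          # force times velocity at tube exit

print(f"launch time {t_launch:.0f} s")        # ~27 s: "around 30 seconds"
print(f"tube length {tube_len / 1000:.0f} km")  # ~109 km: the 110 km tunnel
print(f"avg power   {p_avg / 1e9:.0f} GW")      # ~47 GW, as quoted
print(f"peak power  {p_peak / 1e9:.0f} GW")     # ~94 GW, as quoted
```

The same assumed mass also squares with the launch rate: 10 launches a day of roughly 35 tonnes of payload gives about 128,000 tons a year, and the ~1.3 terajoules per launch is indeed about 20 minutes' output of a 1 gigawatt plant.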

You wouldn't want to ride a Gen-1 cargo launcher. It accelerates at around 30 gravities as it goes down the launch tube, then when it emerges into the atmosphere, decelerates at a rate between 6 and 12g until it flies into the thinner atmosphere. Upon reaching orbital altitude, a small rocket kick motor circularises the orbit. After delivering the payload into orbit (if launching to a higher orbit or one with a different inclination, the payload would contain its own rocket or electric propulsion to reach the desired orbit), the cargo vehicle would make a deorbit burn with the same small rocket it used to circularise its orbit, extend wings, and glide back for re-use.

You may be wondering how a tunnel, evacuated to a sufficiently low pressure to allow a craft to accelerate to orbital velocity without being incinerated, can work when one end has to be open to allow the vehicle to emerge into the atmosphere. That bothers me too, a lot. The authors propose that the exit end of the tube will have a door which pops open just before the vehicle is about to emerge. The air at the exit will be ionised by seeding with a conductive material, such as cæsium vapour, then pumped outward by a strong DC current, operating as the inverse of a magnetohydrodynamic generator. Steam generators at the exit of the launch tube force away the ambient air, reducing air pressure as is done for testing upper stage rocket motors. This is something I'd definitely want to see prototyped in both small and full scale before proceeding. Once the cargo craft has emerged, the door slams shut.

Launching 10 cargo ships a day, the Gen-1 system could deliver 128,000 tons of payload into orbit a year, around 500 times that of all existing rocket launch systems combined. The construction cost of the Gen-1 system is estimated at around US$20 billion, and with all major components reusable, its operating cost is electricity, maintenance, staff, and the small amount of rocket fuel expended in circularising the orbit of craft and deorbiting them. The estimated all-up cost of launching a kilogram of payload is US$43, which is about one hundredth of current launch costs. The launch capacity is adequate to build a robust industrial presence in space, including solar power satellites which beam power to the Earth.

Twenty billion dollars isn't small change, but it's comparable to the development budget for NASA's grotesque Space Launch System, which will fly only every few years and cost on the order of US$2 billion per launch, with everything being thrown away on each mission.

As noted, the Gen-1 system is unsuited to launching people. You could launch people in it, but they wouldn't still be people when they arrived on orbit, due to the accelerations experienced. To launch people, a far more ambitious Gen-2 system is proposed. To reduce launch acceleration to acceptable levels, the launch tunnel would have to be around 1500 km long. To put this into perspective, that's about the distance from Los Angeles to Seattle. To avoid the bruising deceleration (and concomitant loss of velocity) when the vehicle emerges from the launch tube, the end of the launch tube will be magnetically levitated by superconducting magnets (restrained by tethers) so that the end is at an altitude of 20 km. Clearly there'll have to be a no-fly zone around the levitated launch tube, and you really don't want the levitation system to fail. The authors estimate the capital cost of the Gen-2 system at US$67 billion, which seems wildly optimistic to me. Imagine how many forms you'll have to fill out to dig a 1500 km tunnel anywhere in the world, not to speak of actually building one, and then you have to develop that massive magnetically levitated launch tube, which has never been demonstrated.
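The 1500 km figure follows directly from the constant-acceleration relation L = v²/2a. A couple of lines of arithmetic (assuming the same 8 km/s exit velocity as Gen-1) show why a human-rated launcher must be so much longer:

```python
# Tunnel length needed to reach orbital velocity at accelerations
# passengers could tolerate, from L = v^2 / (2 a).
v = 8000.0  # m/s, exit velocity
for g_load in (2.0, 3.0):
    a = g_load * 9.81
    length_km = v**2 / (2 * a) / 1000
    print(f"{g_load:.0f} g sustained: {length_km:.0f} km of tunnel")
```

A 1500 km tube works out to a bit over 2 g, sustained for roughly six minutes, versus 30 g for half a minute in the cargo launcher.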

Essentially everything I have described so far appears in chapter 2 of this book, which makes up less than 10% of its 204 pages. You can read a complete description of the StarTram system for free in this technical paper from 2010. The rest of the book is, well, a mess. With its topic, magnetic levitation space launch, dispensed with by the second chapter, it then veers into describing all of the aspects of our bright future in space such a system will open, including solar power satellites, protecting the Earth from asteroid and comet impacts, space tourism, colonising Mars, exploring the atmosphere of Jupiter, searching for life on the moons of the outer planets, harvesting helium-3 from the atmospheres of the outer planets for fusion power, building a telescope at the gravitational lensing point of the Sun, and interstellar missions. Dark scenarios are presented in which the country which builds StarTram first uses it to establish a global hegemony enforced by all-seeing surveillance from space and “Rods from God”, orbited in their multitudes by StarTram, and a world where the emerging empire is denied access to space by a deliberate effort by one or more second movers to orbit debris to make any use of low orbits impossible, imprisoning humanity on this planet. (But for how long? Small particles in low orbit decay pretty quickly.) Even wilder speculations about intelligent life in the universe and an appropriate strategy for humans in the face of a potentially hostile universe close the book.

All of this is fine, but none of it is new. The only new concept here is StarTram itself, and if the book concentrated just on that, it would be a mere 16 pages. The rest is essentially filler, rehashing other aspects of the human future in space, which would be enabled by any means of providing cheap access to low Earth orbit. The essential question is whether the key enabling technologies of StarTram will work, and that is a matter of engineering which can be determined by component tests before committing to the full-scale project. Were I the NASA administrator and had the power to do so (which, in reality, the NASA administrator does not, being subordinate to the will of appropriators in Congress who mandate NASA priorities in the interest of civil service and contractor jobs in their districts and states), I would cancel the Space Launch System in an instant and use a small part of the savings to fund risk reduction and component tests of the difficult parts of a Gen-1 StarTram launcher.

 Permalink

Thor, Brad. Code of Conduct. New York: Atria Books, 2015. ISBN 978-1-4767-1715-9.
This is the fifteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). In this novel, the author “goes big”, with a thriller whose global implications are soundly grounded in genuine documents of the anti-human “progressive” fringe and endorsed, at least implicitly, by programmes of the United Nations.

A short video, recorded at a humanitarian medical clinic in the Congo, shows a massacre of patients and staff which seems to make no sense at all. The operator of the clinic retains the Carlton Group to investigate the attack on its facility, and senior operative Scot Harvath is dispatched to lead a team to find out what happened and why. Murphy's Law applies at all times and places, but Murphy seems to pull extra shifts in the Congo, and Harvath's team must overcome rebels, the elements, and a cast-iron humanitarian to complete its mission.

As pieces of evidence are assembled, it becomes clear that the Congo massacre was a side-show of a plot with global implications, orchestrated by a cabal of international élites and supported by bien pensants in un-elected senior administrative positions in governments. Having bought into the anti-human agenda, they are willing to implement a plan to “restore equilibrium” and “ensure sustainability” whatever the human toll.

This is less a shoot-'em-up action thriller (although there is some of that, to be sure) than the unmasking of a hideous plot and its take-down once it has already been unleashed. It is a thoroughly satisfying yarn, and many readers may not be aware of the extent to which the goals advocated by the villains have been openly stated by senior officials of both the U.S. government and international bodies.

This is not one of those thrillers where once the dust settles things are left pretty much as they were before. The world at the end of this book will have been profoundly changed from that at the start. It will be interesting to see how the author handles this in the next volume in the series.

For a high-profile summer thriller by a blockbuster author from a major publishing house (Atria is an imprint of Simon & Schuster), which debuted at number 3 on the New York Times Best Sellers list, there are a surprising number of copy editing and factual errors, even including the platinum standard, an idiot “It's” on p. 116. Something odd appears to have happened in formatting the Kindle edition (although I haven't confirmed that it doesn't also affect the print edition): a hyphen occasionally appears at the end of lines, separated by a space from the preceding word, where no hyphenation is appropriate, for example: “State - Department”.

 Permalink

Easton, Richard D. and Eric F. Frazier. GPS Declassified. Lincoln, NE: Potomac Books, 2013. ISBN 978-1-61234-408-9.
At the dawn of the space age, as the United States planned to launch its Vanguard satellites during the International Geophysical Year (1957–1958), the need to track the orbit of the satellites became apparent. Optical and radar tracking were considered (and eventually used for various applications), but would have been difficult for the first very small satellites. The Naval Research Laboratory proposed a system, Minitrack, which would use the radio beacon of the satellite, received by multiple ground stations on the Earth, which by interferometry would determine the position and velocity of a satellite with great precision. For the scheme to work, a “fence” of receiving stations would have to be laid out which the satellite would regularly cross in its orbit, the positions of each of the receiving stations would have to be known very accurately, and clocks at all of the receiving stations would have to be precisely synchronised with a master clock at the control station which calculated the satellite's orbit.

The technical challenges were overcome, and Minitrack stations were placed into operation at locations within the United States and as far flung as Cuba, Panama, Ecuador, Peru, Chile, Australia, and the Caribbean. Although designed to track the U.S. Vanguard satellites, after the unexpected launch of Sputnik, receivers were hastily modified to receive the frequency on which it transmitted its beeps, and the system successfully proved itself tracking the first Earth satellite. Minitrack was used to track subsequent U.S. and Soviet satellites until it was supplanted in 1962 by the more capable Spacecraft Tracking and Data Acquisition Network.

An important part of creative engineering is discovering that once you've solved one problem, you may now have the tools at hand to address other tasks, sometimes more important than the one which motivated the development of the enabling technologies in the first place. It didn't take long for a group of engineers at the Naval Research Laboratory (NRL) to realise that if you could determine the precise position and velocity of a satellite in orbit by receiving signals simultaneously at multiple stations on the ground with precisely-synchronised clocks, you could invert the problem and, by receiving signals from multiple satellites in known orbits, each with an accurate and synchronised clock on board, it would be possible to determine the position, altitude, and velocity of the receiver on or above the Earth (and, in addition, provide a precise time signal). With a sufficiently extensive constellation of satellites, precision navigation and time signals could be extended to the entire planet. This was the genesis of the Global Positioning System (GPS) which has become a ubiquitous part of our lives today.

At the start, this concept was “exploratory engineering”: envisioning what could be done (violating no known law of physics) if and when technology advanced to a stage which permitted it. The timing accuracy required for precision navigation could be achieved by atomic clocks (quartz frequency standards were insufficiently stable and subject to drift due to temperature, pressure, and age of the crystal), but in the 1950s and early '60s, atomic clocks were large, heavy, and delicate laboratory apparatus which nobody imagined could be put on top of a rocket and shot into Earth orbit. Just launching single satellites into low Earth orbit was a challenge, with dramatic launch failures and in-orbit malfunctions all too common. The thought of operating a constellation of dozens of satellites in precisely-specified high orbits seemed like science fiction. And even if the satellites with atomic clocks could somehow be launched, the radio technology to receive the faint signals from space and computation required to extract position and velocity information from the signal was something which might take a room full of equipment: hardly practical for a large aircraft or even a small ship.

But the funny thing about an exponentially growing technology is if something seems completely infeasible today, just wait a few years. Often, it will move from impossible to difficult to practical for limited applications to something in everybody's pocket. So it has been with GPS, as this excellent book recounts. In 1964, engineers at NRL (including author Easton's father, Roger L. Easton) proposed a system called Timation, in which miniaturised and ruggedised atomic clocks on board satellites would provide time signals which could be used for navigation on land, sea, and air. After ground based tests and using aircraft to simulate the satellite signal, in 1967 the Timation I satellite was launched to demonstrate the operation of an atomic clock in orbit and use of its signals on the ground. With a single satellite in a relatively low orbit, the satellite would only be visible from a given location for thirteen minutes at a time, but this was sufficient to demonstrate the feasibility of the concept.

As the Timation concept was evolving (a second satellite test was launched in 1969, demonstrating improved accuracy), it was not without competition. The U.S. had long been operating the LORAN system for coarse-grained marine and aircraft navigation, and had beacons marking airways across the country. Starting in 1964, the U.S. Navy's Transit satellite navigation system (which used a Doppler measurement system and did not require a precise clock on the satellites) provided periodic position fixes for Navy submarines and surface ships, but was inadequate for aircraft navigation. In the search for a more capable system, Timation competed with an Air Force proposal for regional satellite constellations including geosynchronous and inclined elliptical orbit satellites.

The development of GPS began in earnest in 1973, with the Air Force designated as the lead service. This project launch occurred in the midst of an inter-service rivalry over navigation systems which did not abate with the official launch of the project. Indeed, even in retrospect, participants in the program dispute how much the eventually deployed system owes to its various precursors. Throughout the 1970s the design of the system was refined and pathfinder technology development missions launched, with the first launch of an experimental satellite in February 1978. One satellite is a stunt, but by 1985 a constellation of 10 experimental satellites was in orbit, allowing the performance of the system to be evaluated, constellation management tools to be developed and tested, and receiver hardware to be checked out. Starting in 1989 operational satellites began to be launched, but it was not until 1993 that worldwide, round-the-clock coverage was available, and the high-precision military signal was not declared operational until 1995.

Even though GPS coverage was spotty and not continuous, GPS played an important part in the first Gulf War of 1990–1991. Because the military had lagged in procuring GPS receivers for the troops, large numbers of commercial GPS units were purchased and pressed into service for navigating in the desert. A few GPS-guided weapons were used in the conflict, but their importance was insignificant compared to other precision-guided munitions.

Prior to May 2000 the civilian GPS signal was deliberately degraded in accuracy (can't allow the taxpayers who paid for it to have the same quality of navigation as costumed minions of the state!). When this so-called “selective availability” was finally discontinued, GPS became practical for vehicle and non-precision air navigation. GPS units began to appear on the consumer market, and like other electronic gadgets got smaller, lighter, less expensive, and more capable with every passing year. Adoption of GPS for tracking of fleets of trucks, marine navigation, and aircraft use became widespread.

Now that GPS is commonplace and hundreds of millions of people are walking around with GPS receivers in their smartphones, there is a great deal of misunderstanding about precisely what GPS entails. GPS—the Global Positioning System—is precisely that: a system which allows anybody with a compatible receiver and a view of the sky which allows them to see four or more satellites to determine their state vector (latitude, longitude, and altitude, plus velocity in each of those three directions) in a specified co-ordinate system (where much additional complexity lurks, which I'll gloss over here), along with the precise time of the measurement. That's all it does. GPS is entirely passive: the GPS receiver sends nothing back to the satellite, and hence the satellite system is able to accommodate an unlimited number of GPS receivers simultaneously. There is no such thing as a “GPS tracker” which can monitor the position of something via satellite. Trackers use GPS to determine their position, but then report the position by other means (for example, the mobile phone network). When people speak of “their GPS” giving directions, GPS is only telling them where they are and where they're going at each instant. All the rest: map display, turn-by-turn directions, etc. is a “big data” application running either locally on the GPS receiver or using resources in the “cloud”: GPS itself plays no part in this (and shouldn't be blamed when “your GPS” sends you the wrong way down a one-way street).
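To make the “that's all it does” point concrete, here is a minimal sketch of the underlying computation: given pseudoranges to four satellites at known positions, solve for the receiver's position and clock bias by Newton iteration. The satellite coordinates, receiver location, and clock error below are made-up illustrative numbers, not real ephemerides, and a real receiver must additionally handle co-ordinate frames, atmospheric delays, and measurement noise.

```python
# Minimal GPS-style position fix: recover (x, y, z) and the receiver
# clock bias from four exact synthetic pseudoranges.
import math

C = 299_792_458.0  # speed of light, m/s

# Four satellites at roughly GPS altitude (illustrative positions, metres)
sats = [(15600e3,  7540e3, 20140e3),
        (18760e3,  2750e3, 18610e3),
        (17610e3, 14630e3, 13480e3),
        (19170e3,  6100e3, 18390e3)]

true_pos = (6371e3, 0.0, 0.0)   # receiver on the Earth's surface
true_bias = 3.0e-3              # receiver clock error, seconds

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pseudorange = geometric range + c * (receiver clock bias)
pseudo = [dist(s, true_pos) + C * true_bias for s in sats]

def solve4(A, b):
    """Gaussian elimination with partial pivoting for a 4x4 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 4):
            f = M[r][col] / M[col][col]
            for c in range(col, 5):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 4
    for r in range(3, -1, -1):
        x[r] = (M[r][4] - sum(M[r][c] * x[c] for c in range(r + 1, 4))) / M[r][r]
    return x

# Unknowns: x, y, z, and d = c * clock_bias (bias expressed in metres).
# Residuals: f_i = |sat_i - p| + d - pseudorange_i, driven to zero by Newton.
est = [6.4e6, 0.0, 0.0, 0.0]   # rough initial guess near the Earth's surface
for _ in range(10):
    J, f = [], []
    for s, rho in zip(sats, pseudo):
        r = dist(s, est[:3])
        J.append([(est[k] - s[k]) / r for k in range(3)] + [1.0])
        f.append(r + est[3] - rho)
    delta = solve4(J, [-fi for fi in f])
    est = [e + d for e, d in zip(est, delta)]

print([round(v) for v in est[:3]])  # recovered position, metres
print(round(est[3] / C * 1e3, 3))   # recovered clock bias, milliseconds
```

Note that the computation consumes only signals and ephemerides; nothing is transmitted back to the satellites, which is why an unlimited number of receivers can use the system at once.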

So successful has GPS been, and so deeply has it become embedded in our technological society and economy, that there are legitimate worries about such a system being under the sole control of the U.S. Air Force which could, if ordered, shut down the civilian GPS signals worldwide or regionally (because of the altitude of the satellites, fine-grained denial of GPS availability would not be possible). Also, the U.S. does not have the best record of maintaining vital infrastructure and has often depended upon weather satellites well beyond their expected lifetimes due to budget crunches. Consequently, other players have entered the global positioning market, with the Soviet/Russian GLONASS, European Galileo, and Chinese BeiDou systems operational or under construction. Other countries, including Japan, India, and Iran, are said to be developing their own regional navigation systems. So far, cooperation among these operators has been relatively smooth, reducing the likelihood of interference and making it possible for future receivers to use multiple constellations for better coverage and precision.

This is a comprehensive history of navigation systems and GPS from inception to the present day, with a look into the future. Extensive source citations are given (almost 40% of the book is end notes), and in the Kindle edition the notes, Web documents cited within them, and the index are all properly linked. There are abundant technical details about the design and operation of the system, but the book is entirely accessible to the intelligent layman. In the lifetimes of all but the youngest people on Earth, GPS has transformed our world into a place where nobody need ever be lost. We are just beginning to see the ramifications of this technology on the economy and how we live our day-to-day lives (for example, the emerging technology of self-driving cars would be impossible without GPS). This book is an essential history of how this technology came to be, how it works, and where it may be going in the future.

 Permalink

Millar, Mark, Dave Johnson, and Kilian Plunkett. Superman: Red Son. New York: DC Comics, [2003] 2014. ISBN 978-1-4012-4711-9.
On June 30th, 1908, a small asteroid or comet struck the Earth's atmosphere and exploded above the Tunguska river in Siberia. The impact is estimated to have released energy equivalent to 10 to 15 megatons of TNT; it is the largest impact event in recorded history. Had the impactor been so aligned as to hit the Earth three hours later, it would have exploded above the city of Saint Petersburg, completely destroying it.

In a fictional universe, an alien spaceship crashes in rural Kansas in the United States, carrying an orphan from the stars who, as he matures, discovers he has powers beyond those of inhabitants of Earth, and vows to use these gifts to promote and defend truth, justice, and the American way. Now, like Tunguska, imagine the spaceship arrived a few hours earlier. Then, the baby Kal-El would have landed in Stalin's Soviet Union and, presumably, imbibed its values and culture just as Superman did in the standard canon. That is the premise of this delightful alternative universe take on the Superman legend, produced by DC Comics and written and illustrated up to the standards one expects from the publisher. The Soviet Superman becomes an extraterrestrial embodiment of the Stakhanovite ideal, and it is only natural that when the beloved Stalin dies, he is succeeded by another Man of Steel.

The Soviet system may have given lip service to the masses, but beneath it was the Russian tradition of authority, and what better authority than a genuine superman? A golden age ensues, with Soviet/Superman communism triumphant around the globe, apart from recalcitrant holdouts Chile and the United States. But not all are happy with this situation, which some see as subjugation to an alien ruler. In the Soviet Union Batman becomes the symbol and leader of an underground resistance. United States president and supergenius Lex Luthor hatches scheme after scheme to bring down his arch-enemy, enlisting other DC superheroes as well as his own creations in the effort. Finally, Superman is forced to make a profound choice about human destiny and his own role in it. The conclusion to the story is breathtaking.

This is a well-crafted and self-consistent alternative to the fictional universe with which we're well acquainted. It is not a parody like Tales of the Bizarro World (November 2007), and is in no way played for laughs. The Kindle edition is superbly produced, but you may have to zoom into some of the pages containing the introductory material to be able to read the small type. Sketches of characters under development by the artists are included in an appendix.

 Permalink

August 2015

Stephenson, Neal. Seveneves. New York: William Morrow, 2015. ISBN 978-0-06-219037-6.
Fiction writers are often advised to try to immediately grab the attention of readers and involve them in the story. “If you haven't hooked them by the end of the first chapter, you've probably lost 'em.” Here, the author doesn't dawdle. The first line is “The Moon blew up without warning and for no apparent reason.” All right, now that's an interesting premise!

This massive novel (880 pages in the hardcover print edition) is divided into three parts. In the first, after the explosion of the Moon, scientist and media talking head Dubois Jerome Xavier Harris (“Doob”), a figure much like Neil deGrasse Tyson in real life, calculates that the seven large fragments of the exploded Moon will collide with one another, setting off an exponential cascade of fragmentation and further collisions like the Kessler syndrome for objects in low Earth orbit, with the scattered debris bombarding the Earth in quantities sufficient to render its surface uninhabitable for on the order of five thousand years.

The story begins in the near future, when the International Space Station (“Izzy”) has been augmented with some additional facilities and a small nickel-iron asteroid retrieved and docked to it for asteroid mining experiments. Technology is much as at the present, but with space-based robotics having advanced significantly. Faced with what amounts to a death sentence for the Earth (the heat from the impacts was expected to boil off much of the oceans and eject the atmosphere into space), and having only around two years before the catastrophic bombardment begins, spacefaring nations make plans to re-purpose Izzy as a “Cloud Ark” to preserve the genetic heritage of the Earth and the intellectual capital of humanity against the time when the home planet can again be made habitable. Thus begins a furious technological crash project, described in detail, working against an inexorable deadline, to save what can be saved and launch it to the fragile ark in space.

Eventually the catastrophe arrives, and the second part of the novel chronicles the remnant of humanity on the Cloud Ark, with Izzy as its core, and most of the population in co-orbiting rudimentary habitats. From the start there are major technical challenges to overcome, with all involved knowing that high technology products from Earth such as silicon chips and laboratory equipment may not be able to be replaced for centuries, if ever. The habitat ecosystem must be closed, as there will be no resupply. And, people being people, the society of the survivors begins to fragment into factions, each with its own priorities and ideas about how to best proceed. Again, there is much technological derring-do, described in great detail (including one of the best explanations of the fundamentals of orbital mechanics I've encountered in fiction). The heroic exploits of the survivors are the stuff of legend, and become the legends of their descendants.

Part three of the novel picks up the story five thousand years later, when the descendants of the Cloud Ark have constructed a mature spacefaring civilisation, tapping resources of the solar system, and are engaged in restoring the Earth, now that the bombardment has abated, to habitability. The small population of the Cloud Ark has put the human race through a serious genetic bottleneck with the result that the species has differentiated into distinct races, each with its own traits and behavioural characteristics, partly determined by genetics and partly transmitted culturally. These races form alliances and conflict with one another, with humanity having sorted itself into two factions called Red and Blue (gee, how could such a thing happen?) which have largely separated into their own camps. But with possession of the Earth at stake, Red and Blue have much to dispute, especially when enigmatic events on that planet call into question their shared history.

This is a rather curious book. It is so long and intricate that there's room for a lot in here, and that's what the reader gets. Some of it is the hardest of hard science fiction, with lengthy technical explanations which may make those looking for a fast moving story yawn or doze off. (In fact, there are parts where it seems like the kind of background notes science fiction authors make to flesh out their worlds and then include random portions as the story plays out have, instead, been dumped wholesale into the text. It's as if Obi-Wan shows Luke his father's light sabre, then spends ten minutes explaining the power pack, plasma containment system, field generator, and why it makes that cool sound when you wave it around.) The characters seem to be archetypes of particular personality traits and appear to be largely driven by them rather than developing as they face the extraordinary challenges with which they're presented, and these stereotypes become increasingly important as the story unfolds.

On balance, I'm glad I read this book. It's a solid, well-told yarn which will make you think about just how humans would respond faced with a near-term apocalypse and also whether, given how fractious and self-destructive they often are, they are likely to survive or, indeed, deserve to. I believe a good editor could have cut this manuscript in half, sacrificing nothing of importance, and made the story move along more compellingly.

And now there are a number of details about the novel which I cannot discuss without spoiling the plot and/or ending, so I'll take them behind the curtain. Do not read the following unless you've already read the novel or are certain you will never do so.

Spoiler warning: Plot and/or ending details follow.  
At the start of the novel the nickel-iron asteroid “Amalthea” has been docked to Izzy for experiments in asteroid mining. This asteroid is described as if “laid to rest on a soccer field, it would have stretched from one penalty box to the other and completely covered the center circle.” Well, first of all, this is not the asteroid 113 Amalthea of our solar system, which is a much larger rocky main belt asteroid—46 km in size. Why one would name an asteroid brought to the space station the same as a very different asteroid known since 1871 escapes me. Given that the space station does various maneuvers in the course of the story, I was curious about the mass of the asteroid. Assuming it is a prolate ellipsoid of revolution with semi-principal axes of 9.15, 9.15, and 36 metres (taken from the dimensions of a standard soccer field), its volume would be 12625 m³ and, assuming the standard density of 5.32 g/cm³ for metallic asteroids, would have a mass of 67170 tonnes, which is 1.3 times the mass of the Titanic. This is around 150 times the present mass of the International Space Station, so it would make maneuvers, especially those done later in the book, rather challenging. I'm not saying it's impossible, because complete details of the propulsion used aren't given, but it sure looks dodgy, and even more after the “megaton of propellant” mentioned on p. 493 is delivered to the station.
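For what it's worth, the back-of-the-envelope figures above are easy to check. Here is a short Python sketch; the semi-axes, asteroid density, and ISS mass are my own assumptions (derived from the soccer-field description and typical published values), not figures from the novel:

```python
import math

# Model the fictional "Amalthea" as a prolate ellipsoid with
# semi-principal axes taken from a standard soccer field:
# centre-circle radius (9.15 m) for the two minor axes, half the
# penalty-box-to-penalty-box distance (36 m) for the major axis.
a, b, c = 9.15, 9.15, 36.0                  # metres (my assumption)
volume = (4.0 / 3.0) * math.pi * a * b * c  # ellipsoid volume, m³

density = 5320.0                            # kg/m³, typical metallic asteroid
mass = volume * density                     # kg

iss_mass = 450_000.0                        # kg, rough present-day ISS mass
print(f"Volume: {volume:,.0f} m³")                  # ≈ 12,625 m³
print(f"Mass:   {mass / 1000:,.0f} tonnes")         # ≈ 67,165 tonnes
print(f"Mass ratio to ISS: {mass / iss_mass:.0f}")  # ≈ 149
```

Vary the assumed density between stony and metallic values and the conclusion is unchanged: the station is dragging around something two orders of magnitude more massive than itself.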

On p. 365 Izzy is said to be in an orbit “angled at about fifty-six degrees to the equator”. Not so; its inclination is 51.6°.

On p. 74 the arklets are said to “draw power from a small, simple nuclear reactor fueled by isotopes so radioactive that they would throw off heat, and thereby generate electricity, for a few decades.” This is describing a radioisotope thermoelectric generator, not a nuclear reactor. Such generators are usually powered by plutonium-238, which has a half-life of 87.7 years. How would such a power source sustain life in the arklets for the five thousand years of exile in space? Note that after the Hard Rain, resources to build new nuclear reactors or solar panels would not be available to residents of the Cloud Ark.
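To put a number on that objection, here is a minimal sketch of the decay arithmetic, assuming a plutonium-238 source with its 87.7 year half-life as described above:

```python
# Fraction of plutonium-238 (and, to first order, of the thermal
# power it yields) remaining after a given time, per the
# half-life decay law.
half_life = 87.7   # years, plutonium-238
elapsed = 5000.0   # years, the exile in space

fraction = 0.5 ** (elapsed / half_life)
print(f"Fraction of power remaining: {fraction:.1e}")  # ≈ 6.9e-18
```

Five thousand years is about 57 half-lives, so essentially nothing of the original output survives; such generators fade below usefulness within a century or two.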

When the Ymir makes its rendezvous with Izzy, it jettisons its nuclear reactor to burn up in the Earth's atmosphere. Why would you discard such an irreplaceable power source? If you're worried about radiation, place it into a high, stable orbit where it can be retrieved for use later if needed. Humans could expect no further source of nuclear fuel for thousands of years.

The differentiation of the races of humanity in the final part of the novel strikes me as odd and, in a way, almost racist. Now, granted, genetic manipulation was involved in the creation of these races, but there seems to be a degree of genetic (with some help from culture) predestination of behavioural traits which, if attributed to present-day human races, would exclude one from polite discourse. I think the story would have been made more interesting if one or more members of these races was forced by circumstances to transcend their racial stereotypes.

The technology, or lack thereof, in the final part of the book is curious. Five thousand years have elapsed, and the Cloud Ark population has recovered to become a multi-racial space-dwelling society of three billion people, capable of mega-engineering projects humans today can only dream of, utilising resources of the solar system out to the Kuiper belt. And yet their technology seems pretty much what we expect to see within this century, and in some ways inferior to our own. Some of this is explained by deliberate relinquishment of technology (“Amistics”, referring to the Amish), but how likely is it that all races and cultures would agree not to develop certain technologies, particularly when in conflict with one another?

I loved the “Srap Tasmaner”. You will too, once you figure it out.

Given that the Moon blew up, why would an advanced spacefaring civilisation with a multitude of habitats be so interested in returning to a planet, deep in a gravity well, which might itself blow up some day?

Spoilers end here.  

 Permalink

Derbyshire, John. From the Dissident Right. Litchfield, CT: VDare.com, 2013. ISBN 978-1-304-00154-2.
This is a collection of columns dating from 2001–2013, mostly from VDare.com, but also from Taki's Magazine (including the famous “The Talk: Nonblack Version”, which precipitated the author's departure from National Review).

Subtitled “Essays on the National Question”, the articles mostly discuss the composition of the population and culture of the United States, and how mass immigration (both legal and illegal) from cultures very different from the largely homogeneous majority culture of the U.S. prior to the Immigration and Nationality Act of 1965, from regions of the world with no tradition of consensual government, individual and property rights, and economic freedom, is changing the U.S., eroding what once contributed to its exceptionalism. Unlike previous waves of immigration from eastern and southern Europe, Ireland, and Asia, the prevailing multicultural doctrine of ruling class élites is encouraging these new immigrants to retain their languages, cultures, and way of life, while public assistance frees them from the need to assimilate to earn a living.

Frankly discussing these issues today is guaranteed to result in one's being deemed a racist, nativist, and other pejorative terms, and John Derbyshire has been called those and worse. This is incongruous since he is a naturalised U.S. citizen who immigrated from England and is married to a woman born in China. To me, Derbyshire comes across as an observer much like George Orwell who sees the facts on the ground, does his research, and writes with an unrelenting realism about the actual situation with no regard for what can and cannot be spoken according to the guardians of the mass culture. Derbyshire sees a nation at risk, with its ruling class either enthusiastically promoting or passively accepting its transformation into the kind of economically stratified, authoritarian, and impoverished society which caused so many immigrants to leave their nations of origin and come to the U.S. in the first place.

If you are a Kindle Unlimited subscriber, the Kindle edition is free. The essays in this book are available online for free, so I wouldn't buy the paperback or pay full price for the Kindle version, but if you have Kindle Unlimited, the price is right.

 Permalink

Shute, Nevil. Trustee from the Toolroom. New York: Vintage Books, [1960] 2010. ISBN 978-0-345-02663-7.
Keith Stewart is an unexceptional man. “[Y]ou may see a little man get in at West Ealing, dressed in a shabby raincoat over a blue suit. He is one of hundreds of thousands like him in industrial England, pale-faced, running to fat a little, rather hard up. His hands show evidence of manual work, his eyes and forehead evidence of intellect.” He earns his living by making mechanical models and writing articles about them which are published, with directions, in the London weekly Miniature Mechanic. His modest income from the magazine has allowed him to give up his toolroom job at an aircraft subcontractor. Along with the income his wife Katie earns from her job in a shop, they make ends meet and are paying down the mortgage on their house, half of which they rent out.

Keith's sister Jo married well. Her husband, John Dermott, is a retired naval officer and nephew of Lord Dungannon, with an independent income from the family fortune. Like many people in postwar Britain, the Dermotts have begun to chafe under the ceaseless austerity, grey collectivism, and shrinking freedom of what was once the vanguard of civilisation and have decided to emigrate to the west coast of Canada, to live the rest of their lives in freedom. They've decided to make their journey an adventure, making the voyage from Britain to Vancouver through the Panama Canal in their modest but oceangoing sailboat Shearwater. Keith and Katie agree to look after their young daughter Janice, whose parents don't want to take her out of school and who might not tolerate a long ocean voyage well.

Tragedy befalls the Dermotts, as they are shipwrecked and drowned in a tropical storm in the Pacific. Keith and Katie have agreed to become Janice's trustees in such an event and, consulting the Dermotts' solicitor, are astonished to learn that their fortune, assumed substantial, has almost entirely vanished. While they can get along and support Janice, she'll not be able to receive the education they assumed her parents intended her to have.

Given the confiscatory capital controls in effect at the time, Keith has an idea what may have happened to the Dermott fortune. “And he was the trustee.” Keith Stewart, who had never set foot outside of England, and can barely afford a modest holiday, suddenly finds himself faced with figuring out how to travel to the other side of the world, to a location that isn't even on his map, and undertake a difficult and risky mission.

Keith discovers that while nobody would recognise him on the street or think him out of the ordinary, his writing for Miniature Mechanic has made him a celebrity in what, more than half a century later, would be called the “maker subculture”, and that these people are resourceful, creative, willing to bend the rules to get things done and help one another, and some dispose of substantial wealth. By a chain of connections which might have seemed implausible at the outset but is the kind of thing which happens all of the time in the real world, Keith Stewart, modelmaker and scribbler, sets out on an epic adventure.

This is a thoroughly satisfying and utterly charming story. It is charming because the characters are such good people; the kind you'd feel privileged to have as friends. But they are also realistic; the author's career immersed him in the engineering and entrepreneurial milieu, and he understands these folks in detail. This is a world, devoid of much of what we consider modern, which you'll find yourself admiring; it is a joy to visit. The last two paragraphs will make you shiver.

This novel is currently unavailable in a print edition, so I have linked to the Kindle edition in the head. Used paperback copies are readily available. There is an unabridged audio version of this book.

 Permalink

September 2015

Unger, Roberto Mangabeira and Lee Smolin. The Singular Universe and the Reality of Time. Cambridge: Cambridge University Press, 2015. ISBN 978-1-107-07406-4.
In his 2013 book Time Reborn (June 2013), Lee Smolin argued that, despite its extraordinary effectiveness in understanding the behaviour of isolated systems, what he calls the “Newtonian paradigm” is inadequate to discuss cosmology: the history and evolution of the universe as a whole. In this book, Smolin and philosopher Roberto Mangabeira Unger expand upon that observation and present the case that the current crisis in cosmology, with its appeal to multiple universes and mathematical structures which are unobservable, even in principle, is a consequence of applying the philosophical, scientific, and mathematical tools we've been employing since the dawn of science outside their domain of applicability, and that we must think differently when speaking of the universe as a whole, which contains all of its own causes and obeys no laws outside itself. The authors do not present their own theories to replace those of present-day cosmology (although they discuss the merits of several proposals), but rather describe their work as a “proposal in natural philosophy” which might guide investigators searching for those new theories.

In brief, the Newtonian paradigm is that the evolution of physical systems is described by differential equations which, given a set of initial conditions, permit calculating the evolution of a system in the future. Since the laws of physics at the microscopic level are reversible, given complete knowledge of the state of a system at a given time, its past can equally be determined. Quantum mechanics modifies this only in that rather than calculating the position and momentum of particles (or other observables), we calculate the deterministic evolution of the wave function which gives the probability of observing them in specific states in the future.

This paradigm divides physics into two components: laws (differential equations) and initial conditions (specification of the initial state of the system being observed). The laws themselves, although they allow calculating the evolution of the system in time, are themselves timeless: they do not change and are unaffected by the interaction of objects. But if the laws are timeless and not subject to back-reaction by the objects whose interaction they govern, where did they come from and where do they exist? While conceding that these aren't matters which working scientists spend much time thinking about, in the context of cosmology they pose serious philosophical problems. If the universe is all that is and contains all of its own causes, there is no place for laws which are outside the universe, cannot be acted upon by objects within it, and have no apparent cause.

Further, because mathematics has been so effective in expressing the laws of physics we've deduced from experiments and observations, many scientists have come to believe that mathematics can be a guide to exploring physics and cosmology: that some mathematical objects we have explored are, in a sense, homologous to the universe, and that learning more about the mathematics can be a guide to discoveries about reality.

One of the most fundamental discoveries in cosmology, which has happened within the lifetimes of many readers of this book, including me, is that the universe has a history. When I was a child, some scientists (a majority, as I recall) believed the universe was infinite and eternal, and that observers at any time in the past or future would observe, at the largest scales, pretty much the same thing. Others argued for an origin at a finite time in the past, with the early universe having a temperature and density much greater than at present—this theory was mocked as the “big bang”. Discovery of the cosmic background radiation and objects in the distant universe which did not at all resemble those we see nearby decisively decided this dispute in favour of the big bang, and recent precision measurements have allowed determination of when it happened and how the universe evolved subsequently.

If the universe has a finite age, this makes the idea of timeless laws even more difficult to accept. If the universe is eternal, one can accept that the laws we observe have always been that way and always will be. But if the universe had an origin we can observe, how did the laws get baked into the universe? What happened before the origin we observe? If every event has a cause, what was the cause of the big bang?

The authors argue that in cosmology—a theory encompassing the entire universe—a global privileged time must govern all events. Time flows not from some absolute clock as envisioned by Newtonian physics or the elastic time of special and general relativity, but from causality: every event has one or more causes, and these causes are unique. Depending upon their position and state of motion, observers will disagree about the durations measured by their own clocks, and on the order in which things at different positions in space occurred (the relativity of simultaneity), but they will always observe a given event to have the same cause(s), which precede it. This relational notion of time, they argue, is primordial, and space may be emergent from it.

Given this absolute and privileged notion of time (which many physicists would dispute, although the authors argue does not conflict with relativity), that time is defined by the causality of events which cause change in the universe, and that there is a single universe with nothing outside it and which contains all of its own causes, then is it not plausible to conclude that the “laws” of physics which we observe are not timeless laws somehow outside the universe or grounded in a Platonic mathematics beyond the universe, but rather have their own causes, within the universe, and are subject to change: just as there is no “unmoved mover”, there is no timeless law? The authors, particularly Smolin, suggest that just as we infer laws from observing regularities in the behaviour of systems within the universe when performing experiments in various circumstances, these laws emerge as the universe develops “habits” as interactions happen over and over. In the present cooled-down state of the universe, it's very much set in its ways, and since everything has happened innumerable times we observe the laws to be unchanging. But closer to the big bang or at extreme events in the subsequent universe, those habits haven't been established and true novelty can occur. (Indeed, simply by synthesising a protein with a hundred amino acids at random, you're almost certain to have created a molecule which has never existed before in the observable universe, and it may be harder to crystallise the first time than subsequently, which appears to be the case. This is my observation, not the authors'.)

Further, not only may the laws change, but entirely new kinds of change may occur: change itself can change. For example, on Earth, change was initially governed entirely by the laws of physics and chemistry (with chemistry ultimately based upon physics). But with the emergence of life, change began to be driven by evolution which, while at the molecular level was ultimately based upon chemistry, created structures which equilibrium chemistry never could, and dramatically changed the physical environment of the planet. This was not just change, but a novel kind of change. If it happened here, in our own recent (in cosmological time) history, why should we assume other novel kinds of change did not emerge in the early universe, or will not continue to manifest themselves in the future?

This is a very difficult and somewhat odd book. It is written in two parts, each by one of the co-authors, largely independent of one another. There is a twenty page appendix in which the authors discuss their disagreements with one another, some of which are fundamental. I found Unger's part tedious, repetitive, and embodying all of the things I dislike about academic philosophers. He has some important things to say, but I found that slogging through almost 350 pages of it was like watching somebody beat a moose to death with an aluminium baseball bat: I believe a good editor, or even a mediocre one, could have cut this to 50 pages without losing anything and made the argument more clearly than trying to dig it out of this blizzard of words. Lee Smolin is one of the most lucid communicators among present-day research scientists, and his part is clear, well-argued, and a delight to read; it's just that you have to slog through the swamp to get there.

While suggesting we may have been thinking about cosmology all wrong, this is not a book which suggests either an immediate theoretical or experimental programme to explore these new ideas. Instead, it intends to plant the seed that, apart from time and causality, everything may be emergent, and that when we think about the early universe we cannot rely upon the fixed framework of our cooled-down universe with its regularities. Some of this is obvious and non-controversial: before there were atoms, there was no periodic table of the elements. But was there a time before there was conservation of energy, or before locality?

 Permalink

Wood, C. E. Mud: A Military History. Washington: Potomac Books, 2006. ISBN 978-1-59797-003-7.
Military historians from antiquity to the present day have examined innumerable aspects of human conflict in painstaking detail: strategy, tactics, morale, terrain, command structures, training of troops, logistics, mobility, weapons, armour, intelligence both before the battle and after the enemy is engaged, and a multitude of other factors which determine the outcome of the engagement. If you step back from the war college or general staff view from above and ask the actual combatants in land warfare, from privates to flag rank, what they often recall as dominating their contemporary memories, it will often be none of these things, but rather mud. This is the subject of this slim (190 page) but extensively researched and documented book.

When large numbers of men, equipment, and horses (or, in the modern era, mechanised vehicles) traverse terrain, unless it is totally dry, it is likely to be stirred up into a glutinous mix of soil and water: mud. The military mind cannot resist classifying things, and here the author draws the distinction between Type I mud, which is “bottomless” (well, not really, of course, but effectively so since it is deep enough to mire and swallow up any military force which attempts to cross it), Type IIa, which is dominated by liquid and can actually serve to clean hardware which passes through it but may make it impossible to dig trenches or build fortifications, and Type IIb, which is sticky and can immobilise and render ineffective everything from an infantryman's entrenching tool to a main battle tank.

The book illustrates the impact of mud on land warfare, examining its effects on engineering works such as building roads and fortifications, morale of troops, health, and wear and tear and reliability of equipment. Permanent mud (as exists in marshes and other wetlands), seasonal mud (monsoons and the horrific autumn rain and spring thaw mud in Russia which brought both Napoleon and Hitler's armies to a standstill), and random mud (where a downpour halts an advance as effectively as enemy action) each merit their own chapters.

Technical discussions of the composition and behaviour of mud and its effects upon soldiers and military equipment are illustrated by abundant examples from conflicts from antiquity to the most recent war in Iraq. Most examples date from the era of mechanised warfare, but the reader will rapidly appreciate that the reality of mud to the infantryman has changed little since the time of Thucydides.

In Cat's Cradle, one of Kurt Vonnegut's characters is asked to solve one of the greatest problems facing Marines in combat: mud. The solution, ice-nine, is fantasy, but generations of Marines would probably agree upon the primacy of the problem. Finally the importance of mud in military affairs gets its due in this book. One hopes military planners will not ignore it, as so many of their predecessors have with disastrous consequences.

 Permalink

Lawrie, Alan. Sacramento's Moon Rockets. Charleston, SC: Arcadia Publishing, 2015. ISBN 978-1-4671-3389-0.
In 1849 gold was discovered in California, setting off a gold rush which would bring a wave of prospectors and fortune seekers into one of the greatest booms in American history. By the early 20th century, the grizzled prospector panning for gold had given way to industrial extraction of the metal. In an age before anybody had heard the word “environmentalism”, this was accomplished in the most direct way possible: man-made lakes were created on gold-bearing land, then a barge would dredge up the bottom and mix it with mercury, which would form an amalgam with the gold. The gold could later be separated, purified, and sold.

The process effectively destroyed the land on which it was used. The topsoil was ripped out, vegetation killed, and the jumbled remains after extraction dumped in barren hills of tailings. Half a century later, the mined-out land was considered unusable for either agriculture or residential construction. Some described it as a “moonscape”.

It was perhaps appropriate that, in the 1960s, this stark terrain became home to the test stands on which the upper stages of NASA's Saturn rockets were developed and tested before flight. Every Saturn upper stage, including those which launched Apollo flights to the Moon, underwent a full-duration flight qualification firing there before being shipped to Florida for launch.

When the Saturn project was approved, Douglas Aircraft Company won the contract to develop the upper stage, which would be powered by liquid hydrogen and liquid oxygen (LH2/LOX) and have the ability to restart in space, allowing the Apollo spacecraft to leave Earth orbit on a trajectory bound for the Moon. The initial upper stage was called the S-IV, and was used as the second stage of the Saturn I launcher flown between 1961 and 1965 to demonstrate heavy lift booster operations and do development work related to the Apollo project. The S-IV used a cluster of six RL10 engines, at the time the largest operational LH2/LOX engine. The Saturn I had eight engines on its first stage and six engines on the S-IV. Given the reliability of rocket engines at the time, many engineers were dubious of getting fourteen engines to work on every launch (although the Saturn I did have a limited engine out capability). Skeptics called it “Cluster's last stand.”

The S-IV stages were manufactured at the Douglas plant in Huntington Beach, California, but there was no suitable location near the plant where they could be tested. The abandoned mining land near Sacramento had been acquired by Aerojet for rocket testing, and Douglas purchased a portion for its own use. The outsized S-IV stage was very difficult to transport by road, so the ability to ship it by water from southern California to the test site via San Francisco Bay and the Sacramento River was a major advantage of the location.

The operational launchers for Apollo missions would be the Saturn IB and Saturn V, with the Saturn IB used for Earth orbital missions and the Saturn V for Moon flights and launching space stations. An upgraded upper stage, the S-IVB, would be used by these launchers, as the second stage of the Saturn IB and the third stage of the Saturn V. (S-IVBs for the two launchers differed in details, but the basic configuration was the same.) The six RL10 engines of the S-IV were replaced by a single, much more powerful J-2 engine which had, by that time, become available.

The Sacramento test facility was modified to do development and preflight testing of the S-IVB, and proceeded to test every flight stage. No rocket firing is ever routine, and in 1965 and 1967 explosions destroyed an S-IV test article and a flight S-IVB stage which had been scheduled for use on Apollo 8. Fortunately, there were no casualties in these spectacular accidents, and they provided the first data on the effects of large-scale LH2/LOX explosions, which proved to be far more benign than had been feared. It had been predicted that an LH2/LOX explosion would produce a blast equivalent to TNT with 65% of the propellant mass; in fact, the measured blast was equivalent to just 5% of the propellant mass in TNT. It's nice to know, but an expensive way to learn.
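To put the discrepancy in perspective, here is a back-of-the-envelope comparison; the propellant mass is a round illustrative figure for a fully fuelled LH2/LOX stage, not a number from the book:

```python
# TNT-equivalent blast yield of an LH2/LOX explosion: predicted vs. measured.
# The propellant mass below is a hypothetical round number, for illustration only.
propellant_kg = 100_000        # assumed fully-loaded LH2/LOX propellant mass

predicted_tnt_kg = 0.65 * propellant_kg   # pre-test prediction: 65% TNT equivalence
measured_tnt_kg  = 0.05 * propellant_kg   # measured in the accidents: 5%

print(f"Predicted yield: {predicted_tnt_kg:,.0f} kg TNT equivalent")
print(f"Measured yield:  {measured_tnt_kg:,.0f} kg TNT equivalent")
print(f"Prediction overestimated the blast by {predicted_tnt_kg / measured_tnt_kg:.0f}x")
```

In other words, the pre-test prediction overestimated the blast by a factor of thirteen, which is why learning it by experiment, however expensive, mattered.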

This book is not a detailed history of the Sacramento test facility but rather a photo gallery showing the construction of the site; transportation of stages by sea, road, and later by the amazing Super Guppy airplane; testing of S-IV and S-IVB stages; explosions and their aftermath; and a visit to the site fifty years later. The photos have well-researched and informative captions.

When you think of the Apollo program, the Cape, Houston, Huntsville, and maybe Slidell come to mind, but rarely Sacramento. And yet every Apollo mission relied upon a rocket stage tested at the Rancho Cordova site near that city. Here is a part of the grandiose effort to go to the Moon you probably haven't seen before. The book is just 96 pages and expensive (a small print run and colour on almost every page will do that), but there are many pictures collected here I've seen nowhere else.

 Permalink

October 2015

Day, Vox [Theodore Beale]. SJWs Always Lie. Kouvola, Finland: Castalia House, 2015. ASIN B014GMBUR4.
Vox Day is the nom de plume and now nom de guerre of Theodore Beale: a musician with three Billboard Top 40 credits, a video game designer, an author of science fiction and fantasy and three-time Hugo Award nominee, and a non-fiction author and editor.

If you're not involved in the subcultures of computer gaming or science fiction and fantasy, you may not be acquainted with terms such as SJW (Social Justice Warrior), GamerGate, or Sad Puppies. You may conclude that such matters are arcana relating to subcultures of not-particularly-socially-adept people which have little bearing on the larger culture. In this, you would be wrong. For almost fifty years, collectivists and authoritarians have been infiltrating cultural institutions, and now occupy the high ground in institutions such as education, the administrative state, media, and large corporations. This is the “long march through the institutions” foreseen by Antonio Gramsci, and it has, so far, been an extraordinary success, not only advancing its own agenda with a slow, inexorable ratchet, but intimidating opponents into silence for fear of having their careers or reputations destroyed. Nobody is immune: two Nobel Prize winners, James Watson and Tim Hunt, have been declared anathema because of remarks deemed offensive by SJWs. Nominally conservative publications such as National Review, headquartered in hives of collectivist corruption such as New York and Washington, were intimidated into a reflexive cringe at the slightest sign of outrage by SJWs, jettisoning superb writers such as Ann Coulter and John Derbyshire in an attempt to appease the unappeasable.

Then, just as the SJWs were feeling triumphant, GamerGate came along, and the first serious push-back began. Few expected the gamer community to become a hotbed of resistance, since gamers are all over the map in their political views (if they have any at all), and are a diverse bunch, although a majority are younger males. But they have a strong sense of right and wrong, and are accustomed to immediate and decisive negative feedback when they choose unwisely in the games they play. What they came to perceive was that the journalists writing about games were applauding objectively terrible games, such as Depression Quest, due to bias and collusion among the gaming media.

Much the same had been going on in the world of science fiction. SJWs had infiltrated the Science Fiction and Fantasy Writers of America to such an extent that they directed their Nebula Awards to others of their ilk, and awarded them based upon “diversity” rather than merit. The same rot had corrupted fandom and its Hugo Awards.

Vox Day was near the centre of the cyclone in the revolt against all of this. The campaign to advance a slate of science fiction worthy of the Hugos rather than the pap selected by the SJWs resulted in the 2015 Hugos being blown up, demonstrating that SJWs would rather destroy a venerable institution than cede territory.

This book is a superbly written history of GamerGate and the revolt against SJWs in science fiction and fantasy writers' associations and fandom, but also provides deep insight into the seriously dysfunctional world of the SJW and advice about how to deal with them and what to do if you find yourself a target. The tactics of the SJWs are laid bare, and practical advice is given as to how to identify SJWs before they enter your organisation and how to get rid of them if they're already hired. (And get rid of them you must; they're like communists in the 1930s–1950s: once in place they will hire others and promote their kind within the organisation. You have to do your homework, and the Internet is your friend—the most innocuous co-worker or prospective employee may have a long digital trail you can find quickly with a search engine.)

There is no compromising with these people. That has been the key mistake of those who have found themselves targeted by SJWs. Any apology will be immediately trumpeted as an admission of culpability, and nothing less than the complete destruction of the career and life of the target will suffice. They are not well-meaning adversaries; they are enemies, and you must, if they attack you, seek to destroy them just as they seek to destroy you. Read Alinsky; they have. I'm not suggesting you call in SWAT raids on their residences, dig up and release damaging personal information on them, or make anonymous bomb threats when they gather. But be aware that they have used these tactics repeatedly against their opponents.

You must also learn that SJWs have no concern for objective facts. You can neither persuade nor dissuade them from advancing their arguments by citing facts that falsify their claims. They will repeat their objectively false talking points until they tire you out or drown out your voice. You are engaging in dialectic while they are employing rhetoric. To defeat them, you must counter their rhetoric with your own rhetoric, even when the facts are on your side.

Vox Day was in the middle of these early battles of the counter-revolution, both in GamerGate and the science fiction insurrection, and he provides a wealth of practical advice for those either attacked by SJWs or actively fighting back. This is a battle, and somebody is going to win and somebody else will lose. As he notes, “There can be no reconciliation between the observant and the delusional.” But those who perceive reality as it is, not as interpreted through a “narrative” in which they have been indoctrinated, have an advantage in this struggle. It may seem odd to find gamers and science fiction fans in the vanguard of the assault against this insanity but, as the author notes, “Gamers conquer Dragons and fight Gods for a hobby.”

 Permalink

Smith, L. Neil. Sweeter than Wine. Rockville, MD: Phoenix Pick, 2011. ISBN 978-1-60450-483-5.
A couple of weeks after D-Day, Second Lieutenant J Gifford finds himself separated from his unit and alone in a small French village which, minutes later, is overrun by Germans. Not wishing to spend the rest of the war as a POW, he takes refuge in an abandoned house, hiding out in the wine cellar to escape capture until the Allies take the village. There, in the dark, dank cellar, he encounters Surica, a young woman also hiding from the Germans—and the most attractive woman he has ever seen. Nature takes its course, repeatedly.

By the time the Germans are driven out by the Allied advance, Gifford has begun to notice changes in himself. He can see in the dark. His hearing is preternaturally sensitive. His canine teeth are growing. He cannot tolerate sunlight. And he has a thirst for blood.

By the second decade of the twenty-first century, Gifford has established himself as a private investigator in the town of New Prospect, Colorado, near Denver. He is talented in his profession, considered rigorously ethical, and has a good working relationship with the local police. Apart from the whole business about not going out in daytime without extensive precautions, being a vampire has its advantages in the gumshoe game: he never falls ill, recovers quickly even from severe injuries, doesn't age, has extraordinary vision and hearing, and has a Jedi-like power of suggestion over the minds of people which extends to causing them to selectively forget things.

But how can a vampire, who requires human blood to survive, be ethical? That is the conundrum Gifford has had to face ever since that day in the wine cellar in France and, given the prospect of immortality, will have to cope with for all eternity. As the novel develops, we learn how he has met this challenge.

Meanwhile, Gifford's friends and business associates, some of whom know or suspect his nature, have been receiving queries which seem to indicate someone is on to him and trying to dig up evidence against him. At the same time, a series of vicious murders, all seemingly unrelated except that the victims have all been drained of blood, is being committed, starting in Charleston, South Carolina, and proceeding westward across the U.S. These threads converge into a tense conflict pitting Gifford's ethics against the amoral ferocity of an Old One (and you will learn just how Old in chapter 26, in one of the scariest lines I've encountered in any vampire tale).

I'm not usually much interested in vampire or zombie stories because they are just so implausible, except as a metaphor for something else. Here, however, the author develops a believable explanation of the vampire phenomenon which invokes nothing supernatural. Sure, there aren't really vampires, but if there were this is probably how it would work. As with all of the author's fiction, there are many funny passages and turns of phrase. For a novel about a vampire detective and a serial killer, the tone is light and the characters engaging, with a romance interwoven with the mystery and action. L. Neil Smith wrote this book in one month: November 2009, as part of the National Novel Writing Month, but other than being relatively short (150 pages), there's nothing about it which seems rushed; the plotting is intricate, the characters well-developed, and detail is abundant.

 Permalink

Einstein, Albert, Hanock Gutfreund, and Jürgen Renn. The Road to Relativity. Princeton: Princeton University Press, 2015. ISBN 978-0-691-16253-9.
One hundred years ago, in 1915, Albert Einstein published the final version of his general theory of relativity, which extended his 1905 special theory to encompass accelerated motion and gravitation. It replaced the Newtonian concept of a “gravitational force” acting instantaneously at a distance through an unspecified mechanism with the most elegant of concepts: particles not under the influence of an external force move along spacetime geodesics, the generalisation of straight lines, but the presence of mass-energy curves spacetime, which causes those geodesics to depart from straight lines when observed at a large scale.

For example, in Newton's conception of gravity, the Earth orbits the Sun because the Sun exerts a gravitational force upon the Earth which pulls it inward and causes its motion to depart from a straight line. (The Earth also exerts a gravitational force upon the Sun, but because the Sun is so much more massive, this can be neglected to a first approximation.) In general relativity there is no gravitational force. The Earth follows a geodesic in spacetime but, because the Sun curves spacetime in its vicinity, that geodesic traces out a helix which we perceive as the Earth's orbit.

Now, if this were a purely qualitative description, one could dismiss it as philosophical babble, but Einstein's theory provided a precise description of the gravitational field and the motion of objects within it and, when the field strength is strong or objects are moving very rapidly, makes different predictions than Newton's theory. In particular, Einstein's theory predicted that the perihelion of the orbit of Mercury would rotate around the Sun more rapidly than Newton's theory could account for, that light propagating near the limb of the Sun or other massive bodies would be bent through twice the angle Newton's theory predicted, and that light from the Sun or other massive stars would be red-shifted when observed from a distance. In due course, all of these predictions were confirmed by observation. The theory has since been put to many more precise tests and no discrepancy with experiment has been found. For a theory which is, once you get past the cumbersome mathematical notation in which it is expressed, simple and elegant, its implications are profound and still being explored a century later. Black holes, gravitational lensing, cosmology and the large-scale structure of the universe, gravitomagnetism, and gravitational radiation are all implicit in Einstein's equations, and their exploration remains among the frontiers of science.
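The perihelion prediction is straightforward to check numerically. The relativistic advance per orbit is 6πGM/(c²a(1−e²)); plugging in standard textbook values for Mercury's orbit (the constants below are conventional figures, not taken from the book) recovers the famous result of about 43 arc-seconds per century:

```python
import math

# General-relativistic perihelion advance of Mercury's orbit.
GM_sun = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
c      = 2.99792458e8       # speed of light, m/s
a      = 5.7909e10          # Mercury's semi-major axis, m
e      = 0.20563            # Mercury's orbital eccentricity
period_days = 87.969        # Mercury's orbital period, days

# Advance per orbit, in radians: 6*pi*G*M / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(f"{arcsec_per_century:.1f} arc-seconds per century")   # ~43
```

This is precisely the anomalous residual in Mercury's motion which Newtonian perturbation theory had been unable to explain.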

Unlike Einstein's original 1905 paper on special relativity, the 1915 paper, titled “Die Grundlage der allgemeinen Relativitätstheorie” (“The Foundation of the General Theory of Relativity”), is famously difficult to comprehend and baffled many contemporary physicists when it was published. Almost half is a tutorial for physicists in Riemann's generalised multidimensional geometry and the tensor language in which it is expressed. The balance of the paper is written in this notation, which can be forbidding until one becomes comfortable with it.

That said, general relativity can be understood intuitively the same way Einstein began to think about it: through thought experiments. First, imagine a person in a stationary elevator in the Earth's gravitational field. If the elevator cable were cut, then while the elevator was in free fall (and before the sudden stop), no experiment done within the elevator could distinguish between free fall within Earth's gravity and floating in deep space, free of gravitational fields. Conversely, no experiment done in a sufficiently small closed laboratory can distinguish being at rest in Earth's gravitational field from accelerating in deep space under the thrust of a rocket with the same acceleration as Earth's gravity. (The “sufficiently small” qualifier eliminates the effects of tides, which we can neglect at this level.)

The second thought experiment is a bit more subtle. Imagine an observer at the centre of a stationary circular disc. If the observer uses rigid rods to measure the radius and circumference of the disc, he will find the circumference divided by the radius to be 2π, as expected from the Euclidean geometry of a plane. Now set the disc rotating and repeat the experiment. When the observer measures the radius, it will be as before, but at the circumference the measuring rod will be contracted due to its motion according to special relativity, and the circumference, measured by the rigid rod, will be seen to be larger. Now, when the circumference is divided by the radius, a ratio greater than 2π will be found, indicating that the space being measured is no longer Euclidean: it is curved. But the only difference between a stationary disc and one which is rotating is that the latter is undergoing acceleration, and from the reasoning of the first thought experiment there is no difference between acceleration and gravity. Hence, gravity must bend spacetime and affect the paths of objects (geodesics) within it.
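The arithmetic of the rotating disc is easy to sketch (the rim speed below is an arbitrary illustrative value): rods laid along the rim are contracted by the Lorentz factor, so more of them fit around the unchanged circumference, and the measured circumference-to-radius ratio becomes 2πγ rather than 2π:

```python
import math

def circumference_ratio(beta):
    """Measured circumference / radius for a disc whose rim moves at
    speed beta (as a fraction of c). Rods along the rim are contracted
    by sqrt(1 - beta^2), so the rotating observer counts more of them
    around the rim and finds a ratio of 2*pi*gamma."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 2 * math.pi * gamma

print(circumference_ratio(0.0))   # stationary disc: exactly 2*pi (Euclidean)
print(circumference_ratio(0.6))   # rim at 0.6c: 2*pi * 1.25 (non-Euclidean)
```

Any nonzero rotation gives a ratio exceeding 2π, which is the signature of curved geometry the paragraph above describes.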

Now, it's one thing to have these kinds of insights, and quite another to puzzle out the details and make all of the mathematics work, and this process occupied Einstein for the decade between 1905 and 1915, with many blind alleys. He eventually came to understand that it was necessary to entirely discard the notion of any fixed space and time, and express the equations of physics in a way which was completely independent of any co-ordinate system. Only this permitted the metric structure of spacetime to be completely determined by the mass and energy within it.

This book contains a facsimile reproduction of Einstein's original manuscript, now in the collection of the Hebrew University of Jerusalem. The manuscript is in Einstein's handwriting which, if you read German, you'll have no difficulty reading. Einstein made many edits to the manuscript before submitting it for publication, and you can see them all here. Some of the hand-drawn figures in the manuscript were cut out by the publisher to be sent to an illustrator for preparation of figures for the journal publication. Parallel to the manuscript, the editors describe the content and the historical evolution of the concepts discussed therein. There is a 36-page introduction which describes the background of the theory, Einstein's quest to discover it, and the history of the manuscript. An afterword provides an overview of general relativity after Einstein and brief biographies of principal figures involved in the development and elaboration of the theory. The book concludes with a complete English translation of Einstein's two papers given in the manuscript.

This is not the book to read if you're interested in learning general relativity; over the last century there have been great advances in mathematical notation and pedagogy, and a modern text is the best resource. But, in this centennial year, this book allows you to go back to the source and understand the theory as Einstein presented it, after struggling for so many years to comprehend it. The supplemental material explains the structure of the paper, the essentials of the theory, and how Einstein came to develop it.

 Permalink

Courland, Robert. Concrete Planet. Amherst, NY: Prometheus Books, 2011. ISBN 978-1-61614-481-4.
Visitors to Rome are often stunned when they see the Pantheon and learn it was built almost 19 centuries ago, during the reign of the emperor Hadrian. From the front, the building has a classical style echoed in neo-classical government buildings around the world, but as visitors walk inside, it is the amazing dome which causes them to gasp. At 43.3 metres in diameter, it was the largest dome ever built up to that time, and no larger dome has, in all the centuries since, ever been built in the same way. The dome of the Pantheon is a monolithic structure of concrete, whose beauty and antiquity attests to the versatility and durability of this building material which has become a ubiquitous part of the modern world.

To the ancients, who built from mud, stone, and later brick, it must have seemed like a miracle to discover a material which, mixed with water, could be moulded into any form and would harden into stone. Nobody knows how or where it was discovered that by heating natural limestone to a high temperature it could be transformed into quicklime (calcium oxide), a corrosive material which reacts exothermically with water, solidifying into a hard substance. The author speculates that the transformation of limestone into quicklime due to lightning strikes may have been discovered in Turkey and applied to production of quicklime by a kilning process, but the evidence for this is sketchy. But from the neolithic period, humans discovered how to make floors from quicklime and a binder, and this technology remained in use until the 19th century.

None of these early lime-based mortars could set underwater, and all were vulnerable to attack by caustic chemicals. It was the Romans who discovered that by mixing volcanic ash (pozzolan), available to them in abundance from the vicinity of Mt. Vesuvius, with lime, it was possible to create a “hydraulic cement” which could set underwater and was resistant to attack from the elements. In addition to structures like the Pantheon, the Colosseum, roads, and viaducts, Roman concrete was used to build the artificial harbour at Caesarea in Judea, the largest application of hydraulic concrete before the 20th century.

Jane Jacobs has written that the central aspect of a dark age is not that specific things have been forgotten, but that a society has forgotten what it has forgotten. It is indicative of the dark age which followed the fall of the Roman empire that even with the works of the Roman engineers remaining for all to see, the technology of Roman concrete used to build them, hardly a secret, was largely forgotten until the 18th century, when a few buildings were constructed from similar formulations.

It wasn't until the middle of the 19th century that the precursors of modern cement and concrete construction emerged. The adoption of this technology might have been much more straightforward had it not been the case that a central player in it was William Aspdin, a world-class scoundrel whose own crookedness repeatedly torpedoed ventures in which he was involved which, had he simply been honest and straightforward in his dealings, would have made him a fortune beyond the dreams of avarice.

Even with the rediscovery of waterproof concrete, its adoption was slow in the 19th century. The building of the Thames Tunnel by the great engineers Marc Brunel and his son Isambard Kingdom Brunel was a milestone in the use of concrete, albeit one achieved only after a long series of setbacks and mishaps over a period of 18 years.

Ever since antiquity, and despite numerous formulations, concrete had one common structural property: it was very strong in compression (it resisted forces which tried to crush it), but had relatively little tensile strength (if you tried to pull it apart, it would easily fracture). This meant that concrete structures had to be carefully designed so that the concrete was always kept in compression, which made it difficult to build cantilevered structures or others requiring tensile strength, such as many bridge designs employing iron or steel. In the latter half of the 19th century, a number of engineers and builders around the world realised that by embedding iron or steel reinforcement within concrete, its tensile strength could be greatly increased. The advent of reinforced concrete allowed structures impossible to build with pure concrete. In 1903, the 16-storey Ingalls Building in Cincinnati became the first reinforced concrete skyscraper, and the tallest building today, the Burj Khalifa in Dubai, is built from reinforced concrete.

The ability to create structures with the solidity of stone, the strength of steel, in almost any shape a designer can imagine, and at low cost inspired many in the 20th century and beyond, with varying degrees of success. Thomas Edison saw in concrete a way to provide affordable houses to the masses, complete with concrete furniture. It was one of his less successful ventures. Frank Lloyd Wright quickly grasped the potential of reinforced concrete, and used it in many of his iconic buildings. The Panama Canal made extensive use of reinforced concrete, and the Hoover Dam demonstrated that there was essentially no limit to the size of a structure which could be built of it (the concrete of the dam is still curing to this day). The Sydney Opera House illustrated (albeit after large schedule slips, cost overruns, and acrimony between the architect and customer) that just about anything an architect can imagine could be built of reinforced concrete.

To see the Pantheon or Colosseum is to think “concrete is eternal” (although the Colosseum is not in its original condition, this is mostly due to its having been mined for building materials over the centuries). But those structures were built with unreinforced Roman concrete. Just how long can we expect our current structures, built from a different kind of concrete and steel reinforcing bars, to last? Well, that's…interesting. Steel is mostly composed of iron, and iron is highly reactive in the presence of water and oxygen: it rusts. You'll observe that water and oxygen are abundant on Earth, so unprotected steel can be expected to eventually crumble into rust, losing its structural strength. This is why steel bridges, for example, must be regularly stripped and repainted to provide a barrier which protects the steel against the elements. In reinforced concrete, it is the concrete itself which protects the steel reinforcement, initially by providing an alkaline environment which inhibits rust and then, after the concrete cures, by physically excluding water and the atmosphere from the reinforcement. But, as builders say, “If it ain't cracked, it ain't concrete.” Inevitably, cracks will allow air and water to reach the reinforcement, which will begin to rust. As it rusts, it loses its structural strength and, in addition, expands, which further cracks the concrete and allows more air and moisture to enter. Eventually you'll see the kind of crumbling used to illustrate deteriorating bridges and other infrastructure.

How long will reinforced concrete last? That depends upon the details. Port and harbour facilities in contact with salt water have failed in less than fifty years. Structures in less hostile environments are estimated to have a life of between 100 and 200 years. Now, this may seem like a long time compared to the budget cycle of the construction industry, but eternity it ain't, and when you consider the cost of demolition and replacement of structures such as dams and skyscrapers, it's something to think about. But obviously, if the Romans could build concrete structures which have lasted millennia, so can we. The author discusses alternative formulations of concrete and different kinds of reinforcing which may dramatically increase the life of reinforced concrete construction.

This is an interesting and informative book, but I found the author's style a bit off-putting. In the absence of fact, which is usually the case when discussing antiquity, the author simply speculates. Speculation is always clearly identified, but rather than telling a story about a shaman discovering where lightning struck limestone and spinning it into a legend about the discovery of the manufacture of quicklime, it might be better to say, “nobody really knows how it happened”. Eleven pages are spent discussing the thoroughly discredited theory that the Egyptian pyramids were made of concrete, coming to the conclusion that the theory is bogus. So why mention it? There are a number of typographical errors and a few factual errors (no, the Mesoamericans did not build pyramids “a few of which would equal those in Egypt”).

Still, if you're interested in the origin of the material which surrounds us in the modern world, how it was developed by the ancients, largely forgotten, and then recently rediscovered and used to revolutionise construction, this is a worthwhile read.

 Permalink

Chiles, Patrick. Farside. Seattle: Amazon Digital Services, 2015. ASIN B010WAE080.
Several years after the events chronicled in Perigee (August 2012), Arthur Hammond's Polaris AeroSpace Lines is operating routine point-to-point suborbital passenger and freight service with its Clippers, has expanded into orbital service with Block II Clippers, and is on the threshold of opening up service to the Moon with its “cycler” spacecraft which loop continuously between the Earth and Moon. Clippers rendezvous with the cyclers as they approach the Earth, transferring crew, passengers, cargo, and consumables. Initial flights will be limited to lunar orbit, but landing missions are envisioned for the future.

On the first orbital mission, chartered to perform resource exploration from lunar orbit, the cycler Shepard is to enter orbit with a burn which will, by the necessities of orbital mechanics, have to occur on the far side of the Moon, out of radio contact with the Earth. At Polaris mission control in Denver, there is the usual tension as the clock ticks down toward the time when Shepard is expected to emerge from behind the Moon, safely in orbit. (If the burn did not occur, the ship would appear before this time, still on a trajectory which would return it to the Earth.) When the acquisition-of-signal time comes and goes with no reply to calls and no telemetry, tension gives way to anxiety. Did Shepard burn too long and crash on the far side of the Moon? Did its engine explode and destroy the ship? Did some type of total system failure completely disable its communications?

On board Shepard, Captain Simon Poole is struggling to survive after the disastrous events which occurred just moments after the start of the lunar orbit insertion burn. Having taken refuge in the small airlock after the expandable habitation module has deflated, he has only meagre emergency rations to sustain him until a rescue mission might reach him. And no way to signal Earth that he is alive.

What seems a terrible situation rapidly gets worse and more enigmatic when an arrogant agent from Homeland Security barges into Polaris and demands information about the passenger and cargo manifest for the flight, Hammond is visited at home by an unlikely caller, and a jarhead/special operator type named Quinn shows them some darker than black intelligence about their ship and “invites” them to NORAD headquarters to be briefed in on an above top secret project.

So begins a nearish future techno-thriller in which the situations are realistic, the characters interesting, the perils harrowing, and the stakes could not be higher. The technologies are all plausible extrapolations of those available at present, with no magic. Government agencies behave as they do in the real world, which is to say with usually good intentions leavened with mediocrity, incompetence, scheming ambition, envy, and counter-productive secrecy and arrogance. This novel is not going to be nominated for any awards by the social justice warriors who have infiltrated the science fiction writer and fan communities: the author understands precisely who the enemies of civilisation and human destiny are, forthrightly embodies them in his villains, and explains why seemingly incompatible ideologies make common cause against the values which have built the modern world. The story is one of problem solving, adventure, survival, improvisation, and includes one of the most unusual episodes of space combat in all of science fiction. It would make a terrific movie.

For the most part, the author gets the details right. There are a few outright goofs, such as seeing the Earth from the lunar far side (where it is always below the horizon—that's why it's the far side); some errors in orbital mechanics which will grate on players of Kerbal Space Program; the deployed B-1B bomber is limited to Mach 1.25, not Mach 2; and I don't think there's any way the ships in the story could have had sufficient delta-v to rendezvous with a comet so far out of the plane of the ecliptic. But I'm not going to belabour these quibbles in what is a rip-roaring read. There is a glossary of aerospace terms and acronyms at the end. Also included is a teaser chapter for a forthcoming novel which I can't wait to read.

 Permalink

November 2015

Munroe, Randall. What If? New York: Houghton Mifflin, 2014. ISBN 978-0-544-27299-6.
As a child, the author constantly asked his parents odd questions. They indulged and encouraged him, setting him on a lifetime path of curiosity; he uses the mathematics and physics he learned in the course of obtaining a degree in physics and working in robotics at NASA to answer whatever pops into his head. After he created the tremendously successful Web comic xkcd.com, readers began to ask him the kinds of questions he'd mused about himself. He began a feature on xkcd.com, “What If?”, to explore answers to these questions. This book is a collection of those questions, some previously published on-line (where you can continue to read them at the previous link), and some only published here. The answers are interspersed with “Weird (and Worrying) Questions from the What If? Inbox”, some of which are reminiscent of my own Titanium Cranium mailbox. The book abounds with the author's delightful illustrations. Here is a sample of the questions dealt with. I've linked the first to the on-line article to give you a taste of what's in store for you in the book.

  • Is it possible to build a jetpack using downward firing machine guns?
  • What would happen if you tried to hit a baseball pitched at 90% the speed of light?
  • In the movie 300 they shoot arrows up into the sky and they seemingly blot out the sun. Is this possible, and how many arrows would it take?
  • How high can a human throw something?
  • If every person on Earth aimed a laser pointer at the Moon at the same time, would it change color?
  • How much Force power can Yoda output?
  • How fast can you hit a speed bump while driving and live?

Main belt asteroid 4942 Munroe is named after the author.

While the hardcover edition is expensive for material most of which can be read on the Web for free, the Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

Outzen, James D., ed. The Dorian Files Revealed. Chantilly, VA: Center for the Study of National Reconnaissance, 2015. ISBN 978-1-937219-18-5.
We often think of the 1960s as a “can do” time, when technological progress, societal self-confidence, and burgeoning economic growth allowed attempting and achieving great things: from landing on the Moon to global communications by satellite and mass continental and intercontinental transportation by air. But the 1960s were also a time, not just of conflict and the dissolution of the postwar consensus, but also of some grand-scale technological boondoggles and disasters. There was the XB-70 bomber and its companion F-108 fighter plane, the Boeing 2707 supersonic passenger airplane, the NERVA nuclear rocket, the TFX/F-111 swing-wing hangar queen aircraft, and plans for military manned space programs. Each consumed billions of taxpayer dollars with little or nothing to show for the expenditure of money and effort lavished upon them. The present volume, consisting of previously secret information declassified in July 2015, chronicles the history of the Manned Orbiting Laboratory, the U.S. Air Force's second attempt to launch its own astronauts into space to do military tasks there.

The creation of NASA in 1958 took the wind out of the sails of the U.S. military services, which had assumed it would be they who would lead on the road into space and in exploiting space-based assets in the interest of national security. The designation of NASA as a civilian aerospace agency did not preclude military efforts in space, and the Air Force continued with its X-20 Dyna-Soar, a spaceplane which would be launched on a Titan rocket, then return to Earth and land on a conventional runway. Simultaneous with the cancellation of Dyna-Soar in December 1963, a new military space program, the Manned Orbiting Laboratory (MOL), was announced.

MOL would use a modified version of NASA's Gemini spacecraft to carry two military astronauts into orbit atop a laboratory facility which they could occupy for up to 60 days before returning to Earth in the Gemini capsule. The Gemini and laboratory would be launched by a Titan III booster, requiring only a single launch and no orbital rendezvous or docking to accomplish the mission. The purpose of the program was stated as to “evaluate the utility of manned space flight for military purposes”. This was a cover story or, if you like, a bald-faced lie.

In fact, MOL was a manned spy satellite, intended to produce reconnaissance imagery of targets in the Soviet Union, China, and the communist bloc in the visual, infrared, and radar bands, plus electronic information in much higher resolution than contemporary unmanned spy satellites. Spy satellites operating in the visual spectrum lost on the order of half their images to cloud cover. With a man on board, exposures would be taken only when skies were clear, and images could be compensated for motion of the spacecraft, largely eliminating motion blur. Further, the pilots could scan for “interesting” targets and photograph them as they appeared, and conduct wide-area ocean surveillance.

None of the contemporary drawings showed the internal structure of the MOL, and most people assumed it was a large pressurised structure for various experiments. In fact, most of it was an enormous telescope aimed at the ground, with a 72 inch (1.83 metre) mirror and secondary optics capable of very high resolution photography of targets on the ground. When this document was declassified in 2015, all references to its resolution capability were replaced with statements such as {better than 1 foot}. It is, in fact, a simple geometrical optics calculation to determine that the diffraction-limited resolution of a 1.83 metre mirror in the visual band is around 0.066 arc seconds. In a low orbit suited to imaging in detail, this would yield a resolution of around 4 cm (1.6 inches) as a theoretical maximum. Taking optical imperfections, atmospheric seeing, film resolution, and imperfect motion compensation into account, the actual delivered resolution would be about half as good (8 cm, 3.2 inches). Once they state the aperture of the primary mirror, this is easy to work out, so they wasted a lot of black redaction ink in this document. And then, on page 102, they note (not redacted), “During times of crisis the MOL could be transferred from its nominal 80-mile orbit to one of approximately 200–300 miles. In this higher orbit the system would have access to all targets in the Soviet Bloc approximately once every three days and be able to take photographs at resolutions of about one foot.” All right, if they have one foot (12 inch) resolution at 200 miles, then they have 4.8 inch (12 cm) resolution at 80 miles (or, if we take 250 miles altitude, 3.8 inches [9.7 cm]), entirely consistent with my calculation from mirror aperture.
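The optics arithmetic above is easy to reproduce. Here is a minimal sketch in Python; the 550 nm wavelength is my assumption for the middle of the visual band (the 0.066 arc second figure quoted in the review implies a slightly longer wavelength), and the simple λ/D criterion is used:

```python
import math

# Diffraction-limited resolution of a 72 inch (1.83 m) mirror, using the
# simple lambda/D angular resolution criterion (no 1.22 Rayleigh factor).
wavelength = 550e-9              # metres; assumed mid-visual wavelength
aperture = 1.83                  # metres; MOL primary mirror diameter

theta = wavelength / aperture            # angular resolution, radians
arcsec = math.degrees(theta) * 3600      # same, in arc seconds

altitude = 80 * 1609.344                 # nominal 80 statute mile orbit, in metres
ground = theta * altitude                # ground resolution, metres

print(f"angular resolution: {arcsec:.3f} arc seconds")
print(f"ground resolution at 80 miles: {ground * 100:.1f} cm")
```

With these assumptions the script gives about 0.062 arc seconds and just under 4 cm on the ground, matching the review's figures to within the choice of wavelength.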

This document is a management, financial, and political history of the MOL program, with relatively little engineering detail. Many of the technological developments of the optical system were later used in unmanned reconnaissance satellite programs and remain secret. What comes across in the sorry history of this program, which, between December 1963 and its cancellation in June of 1969, burned through billions of taxpayer dollars, is that the budgeting, project management, and definition and pursuit of well-defined project goals were just as incompetent as the redaction of technical details discussed in the previous paragraph. There are almost Marx Brothers episodes in which Florida politicians attempted to keep jobs in their constituencies by blocking launches into polar orbit from Vandenberg Air Force Base, while the Air Force could not disclose that polar orbits were essential to overflying targets in the Soviet Union because the reconnaissance mission of MOL was a black program.

Along with this history, a large collection of documents and pictures, all previously secret (and many soporifically boring) has been released. As a publication of the U.S. government, this work is in the public domain.

 Permalink

December 2015

Ferri, Jean-Yves and Didier Conrad. Astérix: Le Papyrus de César. Vanves, France: Editions Albert René, 2015. ISBN 978-2-86497-271-6.
The publication of Julius Cæsar's Commentarii de Bello Gallico (Commentaries on the Gallic War) (August 2007) made a sensation in Rome and amplified the already exalted reputation of Cæsar. Unknown before now, the original manuscript included a chapter which candidly recounted the Roman army's failure to conquer the Gauls of Armorique, home of the fierce warrior Astérix, his inseparable companion Obélix, and the rest of the villagers whose adventures have been chronicled in the thirty-five volumes preceding this one. On the advice of his editor, Bonus Promoplus, Cæsar agrees to remove the chapter chronicling his one reverse from the document which has come down the centuries to us.

Unfortunately for Promoplus, one of his scribes, Bigdata, flees with a copy of the suppressed chapter and delivers it to Doublepolémix, notorious Gallic activist and colporteur sans frontières, who makes the journey to the village of the irréductibles in Armorique.

The Roman Empire, always eager to exploit new technology, has moved beyond the slow diffusion of news by scrolls to newsmongers like Rézowifix, embracing wireless communication. A network of Urgent Delivery Pigeons, operated by pigeon masters like Antivirus, is able to quickly transmit short messages anywhere in the Empire. Unfortunately, as with the Internet protocol, messages do not always arrive at their destination, nor in the sequence sent….

When news of the missing manuscript reaches Rome, Promoplus mounts an expedition to Gaul to recover it before it can damage the reputation of Cæsar and his own career. With battle imminent, the Gauls resort to Druid technology to back up the manuscript. The story unfolds with the actions, twists, and turns one expects from Astérix, and a satisfying conclusion.

This album is, at this writing, the number one best-selling book at Amazon.fr.

 Permalink

Suprynowicz, Vin. The Miskatonic Manuscript. Pahrump, NV: Mountain Media, 2015. ASIN: B0197R4TGW. ISBN 978-0-9670259-5-7.
The author is a veteran newspaperman and was arguably the most libertarian writer in the mainstream media during his long career with the Las Vegas Review-Journal (a collection of his essays has been published as Send In The Waco Killers). He earlier turned his hand to fiction in 2005's The Black Arrow (May 2005), a delightful libertarian superhero fantasy. In The Testament of James (February 2015) we met Matthew Hunter, owner of a used book shop in Providence, Rhode Island, and Chantal Stevens, a woman with military combat experience who has come to help out in the shop and, over time, becomes romantically involved with Matthew. Since their last adventure, Matthew and Chantal, their reputation (or notoriety) as players in the international rare books game bolstered by the Testament of James, have gone on to discover a Conan Doyle manuscript for a missing Sherlock Holmes adventure, which sold at auction for more than a million dollars.

The present book begins with the sentencing of Windsor Annesley, scion of a prominent Providence family and president of the Church of Cthulhu, which regards the use of consciousness-expanding plant substances as its sacraments, who has been railroaded in a “War on Drugs” prosecution, to three consecutive life sentences without possibility of parole. Annesley, unbowed and defiant, responds,

You are at war with us? Then we are at war with you. A condition of war has existed, and will continue to exist, until you surrender without condition, or until every drug judge, including you, … and every drug prosecutor, and every drug cop is dead. So have I said it. So shall it be.

Shortly after the sentencing, Windsor Annesley's younger brother, Worthington (“Worthy”) meets with Matthew and the bookstore crew (including, of course, the feline contingent) to discuss a rumoured H. P. Lovecraft notebook, “The Miskatonic Manuscript”, which Lovecraft alluded to in correspondence but which has never been found. At the time, Lovecraft was visiting Worthy's great-uncle, Henry Annesley, who was conducting curious experiments aimed at seeing things beyond the range of human perception. It was right after this period that Lovecraft wrote his breakthrough story “From Beyond”. Worthy suspects that the story was based upon Henry Annesley's experiments, which may have opened a technological path to the other worlds described in Lovecraft's fiction and explored by Church of Cthulhu members through their sacraments.

After discussing the odd career of Lovecraft, Worthy offers a handsome finder's fee to Matthew for the notebook. Matthew accepts. The game, on the leisurely time scale of the rare book world, is afoot. And finally, the manuscript is located.

And now things start to get weird—very weird—Lovecraft weird. A mysterious gadget arrives with instructions to plug it into a computer. Impossible crimes. Glowing orbs. Secret laboratories. Native American shamans. Vortices. Big hungry things with sharp teeth. Matthew and Chantal find themselves on an adventure as risky and lurid as those on the Golden Age pulp science fiction shelves of the bookstore.

Along with the adventure (in which a hero cat, Tabbyhunter, plays a key part), there are insightful quotes about the millennia humans have explored alternative realities through the use of plants placed on the Earth for that purpose by Nature's God, and the folly of those who would try to criminalise that human right through a coercive War on Drugs. The book concludes with a teaser for the next adventure, which I eagerly await. The full text of H. P. Lovecraft's “From Beyond” is included; if you've read the story before, you'll look at it in another light after reading this superb novel. End notes provide citations to items you might think fictional until you discover the extent to which we're living in the Crazy Years.

Drug warriors, law 'n order fundamentalists, prudes, and those whose consciousness has never dared to broach the terrifying “what if” there's something more than we usually see out there may find this novel offensive or even dangerous. Libertarians, the adventurous, and lovers of a great yarn will delight in it. The cover art is racy, even by the standards of pulp, but completely faithful to the story.

The link above is to the Kindle edition, which is available from Amazon. The hardcover, in a limited edition of 650 copies, numbered and signed by the author, is available from the publisher via AbeBooks.

 Permalink

Ward, Jonathan H. Rocket Ranch. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17788-5.
Many books have been written about Project Apollo, with a large number devoted to the lunar and Skylab missions, the Saturn booster rockets which launched them, the Apollo spacecraft, and the people involved in the program. But none of the Apollo missions could have left the Earth without the facilities at the Kennedy Space Center (KSC) in Florida where the launch vehicle and space hardware were integrated, checked out, fuelled, and launched. In many ways, those facilities were more elaborate and complicated than the booster and spacecraft, and were just as essential in achieving the record of success in Saturn and Apollo/Saturn launches. NASA's 1978 official history of KSC Apollo operations, Moonport (available on-line for free), is a highly recommended examination of the design decisions, architecture, management, and operation of the launch site, but it doesn't delve into the nitty-gritty of how the system actually worked.

The present book, subtitled “The Nuts and Bolts of the Apollo Moon Program at Kennedy Space Center” provides that detail. The author's research involved reviewing more than 1200 original documents and interviewing more than 70 people, most veterans of the Apollo era at KSC (many now elderly). One thread that ran through the interviews is that, to a man (and almost all are men), despite what they had done afterward, they recalled their work on Apollo, however exhausting the pace and formidable the challenges, as a high point in their careers. After completing his research, Ward realised he was looking at a 700 page book. His publisher counselled that such a massive tome would be forbidding to many readers. He decided to separate the description of the KSC hardware (this volume) and the operations leading up to a launch (described in the companion title, Countdown to a Moon Launch, which I will review in the future).

The Apollo/Saturn lunar flight vehicle was, at the time, the most complex machine ever built by humans. It contained three rocket stages (all built by different contractors), a control computer, and two separate spacecraft: the command/service modules and lunar module, each of which had their own rocket engines, control thrusters, guidance computers, and life support systems for the crew. From the moment this “stack” left the ground, everything had to work. While there were redundant systems in case of some in-flight failures, loss of any major component would mean the mission would be unsuccessful, even if the crew returned safely to Earth.

In order to guarantee this success, every component in the booster and spacecraft had to be tested and re-tested, from the time it arrived at KSC until the final countdown and launch. Nothing could be overlooked, and there were written procedures which were followed for everything, with documentation of each step and quality inspectors overseeing it all. The volume of paperwork was monumental (a common joke at the time was that no mission could launch until the paperwork weighed more than the vehicle on the launch pad), but the sheer complexity exceeded the capabilities of even the massive workforce and unlimited budget of Project Apollo. KSC responded by pioneering the use of computers to check out the spacecraft and launcher at every step in the assembly and launch process. Although a breakthrough at the time, the capacity of these computers is laughable today. The computer used to check out the Apollo spacecraft had 24,576 words of memory when it was installed in 1964, and programmers had to jump through hoops and resort to ever more clever tricks to shoehorn the test procedures into the limited memory. Eventually, after two years, approval was obtained to buy an additional 24,000 words of memory for the test computers, at a cost of almost half a million 2015 dollars.

You've probably seen pictures of the KSC firing room during Apollo countdowns. The launch director looked out over a sea of around 450 consoles, each devoted to one aspect of the vehicle (for example, console BA25, “Second stage propellant utilization”), each manned by an engineer in a white shirt and narrow tie. These consoles were connected into audio “nets”, arranged in a hierarchy paralleling the management structure. For example, if the engineer at console BA25 observed something outside acceptable limits, he would report it on the second stage propulsion net. The second stage manager would then raise the issue on the launch vehicle net. If it was a no-go item, it would then be bumped up to the flight director loop where a hold would be placed on the countdown. If this wasn't complicated enough, most critical parameters were monitored by launch vehicle and spacecraft checkout computers, which could automatically halt the countdown if a parameter exceeded limits. Most of those hundreds of consoles had dozens of switches, indicator lights, meters, and sometimes video displays, and all of them had to be individually wired to patchboards which connected them to the control computers or, in some cases, directly to the launch hardware. And every one of those wires had to have a pull ticket for its installation, and inspection, and an individual test and re-test that it was functioning properly. Oh, and there were three firing rooms, identically equipped. During a launch, two would be active and staffed: one as a primary, the other as a backup.

The level of detail here is just fantastic and may be overwhelming if not taken in small doses. Did you know, for example, that in the base of the Saturn V launch platform there was an air conditioned room with the RCA 110A computer which checked out the booster? The Saturn V first stage engines were about 30 metres from this delicate machine. How did they keep it from being pulverised when the rocket lifted off? Springs.

Assembled vehicles were transported from the Vehicle Assembly Building to the launch pad by an enormous crawler. The crawler was operated by a crew of 14, including firemen stationed near the diesel engines. Originally, there was an automatic fire suppression system, but after it accidentally triggered and dumped a quarter ton of fire suppression powder into one of the engines during a test, it was replaced with firemen. How did they keep the launcher level as it climbed up the ramp to the pad? They had two pipes filled with mercury which ran diagonally across the crawler platform between each pair of corners. These connected to a sight glass which indicated to the operator if the platform wasn't level. Then the operator would adjust jacking cylinders on the corners to restore the platform to level—while it was rolling.

I can provide only a few glimpses of the wealth of fascinating minutiæ on all aspects of KSC facilities and operations described here. Drawing on his more than 300 hours of interviews, the author frequently allows veterans of the program to speak in their own words, giving a sense of what it was like to be there, then, the rationale for why things were done the way they were, and to relate anecdotes about when things didn't go as planned.

It has been said that one of the most difficult things NASA did in Project Apollo was to make it look easy. Even space buffs who have devoured dozens of books about Apollo may be startled by the sheer magnitude of what was accomplished in designing, building, checking out, and operating the KSC facilities described in this book, especially considering in how few years it all was done and the primitive state of some of the technologies available at the time (particularly computers and electronics). This book and its companion volume are eye-openers, and only reinforce what a technological triumph Apollo was.

 Permalink

Rawles, James Wesley. Land Of Promise. Moyie Springs, ID: Liberty Paradigm Press, 2015. ISBN 978-1-4756-0560-0.
The author is the founder of the survivalblog.com Web site, a massive and essential resource for those interested in preparing for uncertain times. His nonfiction works, How to Survive the End of the World as We Know It (July 2011) and Tools for Survival (February 2015), are packed with practical information for people who wish to ride out natural disasters all the way to serious off-grid self-sufficiency. His series of five novels which began with Patriots (December 2008) illustrates the skills people in a variety of circumstances need to survive after an economic and societal collapse. The present book is the first of a new series of novels, unrelated to the first, providing a hopeful view of how free people might opt out of a world where totalitarianism and religious persecution are on the march.

By the mid 21st century trends already evident today have continued along their disheartening trajectories. The world's major trading currencies have collapsed in the aftermath of runaway money creation, and the world now uses the NEuro, a replacement for the Euro which is issued only in electronic form, making tax avoidance extremely difficult. As for the United States, “The nation was saddled by trillions of NEuros in debt that would take several generations to repay, it was mired in bureaucracy and over-regulation, the nation had become a moral cesspool, and civil liberties were just a memory.”

A catastrophically infectious and lethal variant of Ebola has emerged in the Congo, killing 60% of the population of Africa (mostly in the sub-Saharan region) and reducing world population by 15%.

A “Thirdist” movement has swept the Islamic world, bringing Sunni and Shia into an uneasy alliance behind the recently-proclaimed Caliphate now calling itself the World Islamic State (WIS). In Western Europe, low fertility among the original population and large-scale immigration of more fecund Muslims is contributing to a demographic transition bringing some countries close to the tipping point of Islamic domination. The Roman Catholic church has signed the so-called “Quiet Minarets Agreement”, under which the WIS promised to refrain from advocating sharia law or political subjugation in Europe for 99 years. After that (or before, given the doctrine of taqiya in Islam), nobody knows what will happen.

In many countries around the world, Christians are beginning to feel themselves caught in a pincer movement between radical Islam on the one side and radical secularism/atheism on the other, with the more perspicacious among them beginning to think of getting out of societies becoming ever more actively hostile. Some majority Catholic countries have already declared themselves sanctuaries for their co-religionists, and other nations have done the same for Eastern Orthodox and Coptic Christians. Protestant Christians and Messianic Jews have no sanctuary, and are increasingly persecuted.

A small group of people working at a high-powered mergers and acquisitions firm in newly-independent Scotland begin to explore doing something about this. They sketch out a plan to approach the governments of South Sudan and Kenya, both of which have long-standing claims to the Ilemi Triangle, a barren territory of around 14,000 square kilometres (about ⅔ the size of Israel) with almost no indigenous population. With both claimants to the territory majority Christian countries, the planners hope to persuade them that jointly ceding the land for a new Christian nation will enable them to settle this troublesome dispute in a way which will increase the prestige of both. Further, developing the region into a prosperous land that can defend itself will shore up both countries against the advances of WIS and its allies.

With some trepidation, they approach Harry Heston, founder and boss of their firm, a self-made billionaire known for his Christian belief and libertarian views (he and his company got out of the United States to free Scotland while it was still possible). Heston, whose fortune was built on his instinctive ability to evaluate business plans, hears the pitch and decides to commit one billion NEuros from his own funds to the project, contingent on milestones being met, and to invite other wealthy business associates to participate.

So begins the story of founding the Ilemi Republic, not just a sanctuary for Christians and Messianic Jews, but a prototype 21st century libertarian society with “zero taxes, zero import duties, and zero license fees.” Defence will be by a citizen militia with a tiny professional cadre. The founders believe such a society will be a magnet to highly-productive and hard-working people from around the world weary of slaving more than half their lives to support the tyrants and bureaucrats which afflict them.

As the story unfolds, the reader is treated to what amounts to a worked example of setting up a new nation, encompassing diplomacy, economics, infrastructure, recruiting settlers, dealing equitably with the (very small) indigenous and nomadic population, money and banking, energy and transportation resources, keeping the domestic peace and defending the nation, and the minimalist government and the constitutional structure designed to keep it that way. The founders anticipate that their sanctuary nation will be subjected to the same international opprobrium and obstruction which Israel suffers (although the Ilemi Republic will not be surrounded by potential enemies), and plans must anticipate this.

You'll sometimes hear claims that Christian social conservatism and libertarianism are incompatible beliefs which will inevitably come into conflict with one another. In this novel the author argues that the kind of moral code by which devout Christians live is a prerequisite for the individual liberty and lack of state meddling so cherished by libertarians. The Ilemi Republic also finds itself the home of hard-edged, more secular libertarians, who get along with everybody else because they all agree on preserving their liberty and independence.

This is the first in a series of novels planned by the author which he calls the “Counter-Caliphate Chronicles”. I have long dreamed of a realistic story of establishing a libertarian refuge from encroaching tyranny, and even envisioned it as being situated in a lightly-populated region of Africa. The author has delivered that story, and I am eagerly anticipating seeing it develop in future novels.

 Permalink

  2016  

January 2016

Waldman, Jonathan. Rust. New York: Simon & Schuster, 2015. ISBN 978-1-4516-9159-7.
In May of 1980 two activists, protesting the imprisonment of a Black Panther convicted of murder, climbed the Statue of Liberty in New York harbour, planning to unfurl a banner high on the statue. After spending a cold and windy night aloft, they descended and surrendered to the New York Police Department's Emergency Service Unit. Fearful that the climbers might have damaged the fragile copper cladding of the monument, a comprehensive inspection was undertaken. What was found was shocking.

The structure of the Statue of Liberty was designed by Alexandre-Gustave Eiffel, and consists of an iron frame weighing 135 tons, which supports the 80 ton copper skin. As marine architects know well, a structure using two dissimilar metals such as iron and copper runs a severe risk of galvanic corrosion, especially in an environment such as the sea air of a harbour. If the iron and copper were to come into contact, a current would flow across the junction, and the iron would be consumed in the process. Eiffel's design prevented the iron and copper from touching one another by separating them with spacers made of asbestos impregnated with shellac.
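The driving force of such a dissimilar-metal couple can be estimated from standard electrode potentials. The following sketch uses standard textbook values (not figures from the book) to show why it is the iron, not the copper, that is eaten away:

```python
# Standard electrode potentials, volts versus the standard hydrogen
# electrode, for the two half-reactions in an iron/copper couple.
# These are standard textbook values, used here for illustration.
E_copper = +0.34   # Cu2+ + 2e- -> Cu
E_iron   = -0.44   # Fe2+ + 2e- -> Fe

# The less noble metal (lower potential) becomes the anode and corrodes.
cell_emf = E_copper - E_iron
anode = "iron" if E_iron < E_copper else "copper"

print(f"galvanic cell EMF about {cell_emf:.2f} V; the {anode} corrodes")
```

Roughly three quarters of a volt across every iron/copper contact, with salty harbour air as the electrolyte, is ample to drive corrosion of the frame, which is why Eiffel insulated the two metals from one another.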

What Eiffel didn't anticipate is that over the years superintendents of the statue would decide to “protect” its interior by applying various kinds of paint. By 1980 eight coats of paint had accumulated, almost as thick as the copper skin. The paint trapped water between the skin and the iron frame, and this set electrolysis into action. One third of the rivets in the frame were damaged or missing, and some of the frame's iron ribs had lost two thirds of their material. The asbestos insulators had absorbed water and were long gone. The statue was at risk of structural failure.

A private fund-raising campaign raised US$ 277 million to restore the statue, which ended up replacing most of its internal structure. On July 4th, 1986, the restored statue was inaugurated, marking its 100th anniversary.

Earth, uniquely among known worlds, has an atmosphere with free oxygen, produced by photosynthetic plants. While much appreciated by creatures like ourselves which breathe it, oxygen is a highly reactive gas and combines with many other elements, either violently in fire, or more slowly in reactions such as rusting metals. Further, 71% of the Earth's surface is covered by oceans, whose salty water promotes other forms of corrosion all too familiar to owners of boats. This book describes humanity's “longest war”: the battle against the corruption of our works by the inexorable chemical process of corrosion.

Consider an everyday object much more humble than the Statue of Liberty: the aluminium beverage can. The modern can is one of the most highly optimised products of engineering ever created. Around 180 billion cans are produced and consumed every year around the world: four six-packs for every living human being. Reducing the mass of each can by just one gram will result in an annual saving of 180,000 metric tons of aluminium worth almost 300 million dollars at present prices, so a long list of clever tricks has been employed to reduce the mass of cans. But it doesn't matter how light or inexpensive the can is if it explodes, leaks, or changes the flavour of its contents. Coca-Cola, with a pH of 2.75 and a witches’ brew of ingredients, under a pressure of 6 atmospheres, is as corrosive to bare aluminium as battery acid. If the inside of the can were not coated with a proprietary epoxy lining (whose composition depends upon the product being canned, and is carefully guarded by can manufacturers), the Coke would corrode through the thin walls of the can in just three days. The process of scoring the pop-top removes the coating around the score, and risks corrosion and leakage if a can is stored on its side; don't do that.
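The mass-saving arithmetic is easy to check. In the sketch below, the can count is the figure cited above; the aluminium price is my assumed round number, chosen to be consistent with “almost 300 million dollars”:

```python
# Back-of-the-envelope: what shaving one gram per can is worth, per year.
cans_per_year = 180e9        # cans produced annually, worldwide (cited)
saving_per_can = 1e-3        # kilograms saved per can (one gram)
price_per_tonne = 1650.0     # US dollars per metric ton; assumed figure

tonnes_saved = cans_per_year * saving_per_can / 1000
dollars_saved = tonnes_saved * price_per_tonne

print(f"{tonnes_saved:,.0f} tonnes of aluminium")
print(f"${dollars_saved / 1e6:,.0f} million per year")
```

One gram per can works out to 180,000 tonnes a year, and at anything near recent aluminium prices that is indeed on the order of 300 million dollars.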

The author takes us on an eclectic tour of the history of corrosion and those who battle it, from the invention of stainless steel, to inspecting the trans-Alaska oil pipeline by sending a “pig” (essentially a robot submarine equipped with electronic sensors) down its entire length, to evangelists for galvanizing (zinc coating) steel. We meet Dan Dunmire, the Pentagon's rust czar, who estimates that corrosion costs the military on the order of US$ 20 billion a year and describes how even the most humble of mitigation strategies can have huge payoffs. A new kind of gasket intended to prevent corrosion where radio antennas protrude through the fuselage of aircraft returned 175 times its investment in a single year. Overall return on investment in the projects funded by his office is estimated as fifty to one. We're introduced to the world of the corrosion engineer, a specialty which, while not glamorous, pays well and offers superb job security, since rust will always be with us.

Not everybody we encounter battles rust. Photographer Alyssha Eve Csük has turned corrosion into fine art. Working at the abandoned Bethlehem Steel Works in Pennsylvania, perhaps the rustiest part of the rust belt, she clandestinely scrambles around the treacherous industrial landscape in search of the beauty in corrosion.

This book mixes the science of corrosion with the stories of those who fight it, in the past and today. It is an enlightening and entertaining look into the most mundane of phenomena, but one which affects all the technological works of mankind.

 Permalink

Levenson, Thomas. The Hunt for Vulcan. New York: Random House, 2015. ISBN 978-0-8129-9898-6.
The history of science has been marked by discoveries in which, by observing where nobody had looked before, with new and more sensitive instruments, or at different aspects of reality, new and often surprising phenomena have been detected. But some of the most profound of our discoveries about the universe we inhabit have come from things we didn't observe, but expected to.

By the nineteenth century, one of the most solid pillars of science was Newton's law of universal gravitation. With a single equation a schoolchild could understand, it explained why objects fall, why the Moon orbits the Earth and the Earth and other planets the Sun, the tides, and the motion of double stars. But still, one wonders: is the law of gravitation exactly as Newton described, and does it work everywhere? For example, Newton's gravity gets weaker as the inverse square of the distance between two objects (for example, if you double the distance, the gravitational force is four times weaker [2² = 4]) but has unlimited range: every object in the universe attracts every other object, however weakly, regardless of distance. But might gravity not, say, weaken faster at great distances? If this were the case, the orbits of the outer planets would differ from the predictions of Newton's theory. Comparing astronomical observations to calculated positions of the planets was a way to discover such phenomena.

In 1781 astronomer William Herschel discovered Uranus, the first planet not known since antiquity. (Uranus is dim but visible to the unaided eye and doubtless had been seen innumerable times, including by astronomers who included it in star catalogues, but Herschel was the first to note its non-stellar appearance through his telescope, originally believing it a comet.) Herschel wasn't looking for a new planet; he was observing stars for another project when he happened upon Uranus. Further observations of the object confirmed that it was moving in a slow, almost circular orbit, around twice the distance of Saturn from the Sun.

Given knowledge of the positions, velocities, and masses of the planets and Newton's law of gravitation, it should be possible to predict the past and future motion of solar system bodies for an arbitrary period of time. Working backward, comparing the predicted influence of bodies on one another with astronomical observations, the masses of the individual planets can be estimated to produce a complete model of the solar system. This great work was undertaken by Pierre-Simon Laplace who published his Mécanique céleste in five volumes between 1799 and 1825. As the middle of the 19th century approached, ongoing precision observations of the planets indicated that all was not proceeding as Laplace had foreseen. Uranus, in particular, continued to diverge from where it was expected to be after taking into account the gravitational influence upon its motion by Saturn and Jupiter. Could Newton have been wrong, and the influence of gravity different over the vast distance of Uranus from the Sun?

In the 1840s two mathematical astronomers, Urbain Le Verrier in France and John Couch Adams in Britain, working independently, investigated the possibility that Newton was right, but that an undiscovered body in the outer solar system was responsible for perturbing the orbit of Uranus. After almost unimaginably tedious calculations (done using tables of logarithms and pencil and paper arithmetic), both Le Verrier and Adams found a solution and predicted where to observe the new planet. Adams failed to persuade astronomers to look for the new world, but Le Verrier prevailed upon an astronomer at the Berlin Observatory to try, and Neptune was duly discovered within one degree (twice the apparent size of the full Moon) of his prediction.

This was Newton triumphant. Not only was the theory vindicated, it had been used, for the first time in history, to predict the existence of a previously unknown planet and tell the astronomers right where to point their telescopes to observe it. The mystery of the outer solar system had been solved. But problems remained much closer to the Sun.

The planet Mercury orbits the Sun every 88 days in an eccentric orbit which never exceeds half the Earth's distance from the Sun. It is a small world, with just 6% of the Earth's mass. As an inner planet, Mercury never appears more than 28° from the Sun, and can best be observed in the morning or evening sky when it is near its maximum elongation from the Sun. (With a telescope, it is possible to observe Mercury in broad daylight.) Flush with his success with Neptune, and rewarded with the post of director of the Paris Observatory, in 1859 Le Verrier turned his attention toward Mercury.

Again, through arduous calculations (by this time Le Verrier had a building full of minions to assist him, but so grueling was the work and so demanding a boss was Le Verrier that during his tenure at the Observatory 17 astronomers and 46 assistants quit) the influence of all of the known planets upon the motion of Mercury was worked out. If Mercury orbited a spherical Sun without other planets tugging on it, the point of its closest approach to the Sun (perihelion) in its eccentric orbit would remain fixed in space. But with the other planets exerting their gravitational influence, Mercury's perihelion should advance around the Sun at a rate of 526.7 arcseconds per century. But astronomers who had been following the orbit of Mercury for decades measured the actual advance of the perihelion as 565 arcseconds per century. This left a discrepancy of 38.3 arcseconds per century, for which there was no explanation. (The modern value, based upon more precise observations over a longer period of time, for the perihelion precession of Mercury is 43 arcseconds per century.) Although small (recall that there are 1,296,000 arcseconds in a full circle), this anomalous precession was much larger than the margin of error in observations and clearly indicated something was amiss. Could Newton be wrong?

Le Verrier thought not. Just as he had done for the anomalies of the orbit of Uranus, Le Verrier undertook to calculate the properties of an undiscovered object which could perturb the orbit of Mercury and explain the perihelion advance. He found that a planet closer to the Sun (or a belt of asteroids with equivalent mass) would do the trick. Such an object, so close to the Sun, could easily have escaped detection, as it could only be readily observed during a total solar eclipse or when passing in front of the Sun's disc (a transit). Le Verrier alerted astronomers to watch for transits of this intra-Mercurian planet.

On March 26, 1859, Edmond Modeste Lescarbault, a provincial physician in a small town and passionate amateur astronomer, turned his (solar-filtered) telescope toward the Sun. He saw a small dark dot crossing the disc of the Sun, taking one hour and seventeen minutes to transit, just as expected by Le Verrier. He communicated his results to the great man, and after a visit and detailed interrogation, the astronomer certified the doctor's observation as genuine and computed the orbit for the new planet. The popular press jumped upon the story. By February 1860, planet Vulcan was all the rage.

Other observations began to arrive, both from credible and unknown observers. Professional astronomers mounted worldwide campaigns to observe the Sun around the period of predicted transits of Vulcan. All of the planned campaigns came up empty. Searches for Vulcan became a major focus of solar eclipse expeditions. Unless the eclipse happened to occur when Vulcan was in conjunction with the Sun, it should be readily observable when the Sun was obscured by the Moon. Eclipse expeditions prepared detailed star charts for the vicinity of the Sun to exclude known stars for the search during the fleeting moments of totality. In 1878, an international party of eclipse chasers including Thomas Edison descended on Rawlins, Wyoming to hunt Vulcan in an eclipse crossing that frontier town. One group spotted Vulcan; others didn't. Controversy and acrimony ensued.

After 1878, most professional astronomers lost interest in Vulcan. The anomalous advance of Mercury's perihelion was mostly set aside as “one of those things we don't understand”, much as astronomers regard dark matter today. In 1915, Einstein published his theory of gravitation: general relativity. It predicted that when objects moved rapidly or gravitational fields were strong, their motion would deviate from the predictions of Newton's theory. Einstein recalled the moment when he performed the calculation of the motion of Mercury in his just-completed theory. It predicted precisely the perihelion advance observed by the astronomers. He said that his heart shuddered in his chest and that he was “beside himself with joy.”

Newton was wrong! For the extreme conditions of Mercury's orbit, so close to the Sun, Einstein's theory of gravitation is required to obtain results which agree with observation. There was no need for planet Vulcan, and now it is mostly forgotten. But the episode is instructive as to how confidence in long-accepted theories and wishful thinking can lead us astray when what might be needed is an overhaul of our most fundamental theories. A century hence, which of our beliefs will be viewed as we regard planet Vulcan today?
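Einstein's result can be reproduced with the standard general-relativistic formula for perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)); the constants below are modern textbook values, not figures taken from the book:

```python
import math

# General-relativistic perihelion advance of Mercury, per orbit and per century.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # solar mass, kg
c = 2.998e8          # speed of light, m/s
a = 5.791e10         # Mercury's orbital semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.969 # Mercury's orbital period

# Advance per orbit, in radians.
dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))

# Accumulate over a century and convert radians to arcseconds.
orbits_per_century = 100 * 365.25 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec, 1))  # ≈ 43 arcseconds per century, matching observation
```

The whole anomaly that launched the hunt for Vulcan falls out of five physical constants and one line of algebra.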

 Permalink

Ward, Jonathan H. Countdown to a Moon Launch. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17791-5.
In the companion volume, Rocket Ranch (December 2015), the author describes the gargantuan and extraordinarily complex infrastructure which was built at the Kennedy Space Center (KSC) in Florida to assemble, check out, and launch the Apollo missions to the Moon and the Skylab space station. The present book explores how that hardware was actually used, following the “processing flow” of the Apollo 11 launch vehicle and spacecraft from the arrival of components at KSC to the moment of launch.

As intricate as the hardware was, it wouldn't have worked, nor would it have been possible to launch flawless mission after flawless mission on time had it not been for the management tools employed to coordinate every detail of processing. Central to this was PERT (Program Evaluation and Review Technique), a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine and missile systems. PERT breaks down the progress of a project into milestones connected by activities into a graph of dependencies. Each activity has an estimated time to completion. A milestone might be, say, the installation of the guidance system into a launch vehicle. That milestone would depend upon the assembly of the components of the guidance system (gyroscopes, sensors, electronics, structure, etc.), each of which would depend upon their own components. Downstream, integrated test of the launch vehicle would depend upon the installation of the guidance system. Many activities proceed in parallel and only come together when a milestone has them as its mutual dependencies. For example, the processing and installation of rocket engines is completely independent of work on the guidance system until they join at a milestone where an engine steering test is performed.

As a project progresses, the time estimates for the various activities will be confronted with reality: some will be completed ahead of schedule while others will slip due to unforeseen problems or over-optimistic initial forecasts. This, in turn, ripples downstream in the dependency graph, changing the time available for activities if the final completion milestone is to be met. For any given graph at a particular time, there will be a critical path of activities where a schedule slip of any one will delay the completion milestone. Each lower level milestone in the graph has its own critical path leading to it. As milestones are completed ahead or behind schedule, the overall critical path will shift. Knowing the critical path allows program managers to concentrate resources on items along the critical path to avoid, wherever possible, overall schedule slips (with the attendant extra costs).
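The core of PERT scheduling can be sketched in a few lines of Python; the milestone names and durations here are invented for illustration, not taken from the actual Apollo charts:

```python
# Longest-path (critical path) computation over a small, hypothetical
# PERT network. Each activity: (duration in days, prerequisite activities).
activities = {
    "gyros":       (10, []),
    "electronics": (12, []),
    "guidance":    (5,  ["gyros", "electronics"]),  # assemble guidance system
    "engines":     (20, []),                        # independent parallel branch
    "steer_test":  (3,  ["guidance", "engines"]),   # branches join at this milestone
}

_memo = {}

def earliest_finish(name):
    """Earliest finish of an activity: its duration plus the latest
    earliest-finish among its prerequisites."""
    if name not in _memo:
        dur, deps = activities[name]
        _memo[name] = dur + max((earliest_finish(d) for d in deps), default=0)
    return _memo[name]

# The project's minimum duration is the earliest finish of the final milestone;
# here the engine branch (20 days) dominates the guidance branch (17 days).
print(earliest_finish("steer_test"))  # 23
```

Slip the engine work by a day and the finish slips with it; slip the guidance work by a day and nothing downstream moves — that asymmetry is exactly what the critical path identifies, and what the analysts in firing room four tracked by hand across 40,000 activities.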

Now all this sounds complicated, and in a project with the scope of Apollo, it is almost bewildering to contemplate. The Launch Control Center was built with four firing rooms. Three were outfitted with all of the consoles to check out and launch a mission, but the fourth cavernous room ended up being used to display and maintain the PERT charts for activities in progress. Three levels of charts were maintained. Level A was used by senior management and contained hundreds of major milestones and activities. Each of these was expanded out into a level B chart which, taken together, tracked in excess of 7000 milestones. These, in turn, were broken down into detail on level C charts, which tracked more than 40,000 activities. The level B and C charts were displayed on more than 400 square metres of wall space in the back room of firing room four. As these detailed milestones were completed on the level C charts, changes would propagate down that chart and those which affected its completion upward to the level A and B charts.

Now, here's the most breathtaking thing about this: they did it all by hand! For most of the Apollo program, computer implementations of PERT were not available (or those that existed could not handle this level of detail). (Today, the PERT network for processing of an Apollo mission could be handled on a laptop computer.) There were dozens of analysts and clerks charged with updating the networks, with the processing flow displayed on an enormous board with magnetic strips which could be shifted around by people climbing up and down rolling staircases. Photographers would take pictures of the board which were printed and distributed to managers monitoring project status.

If PERT was essential to coordinating all of the parallel activities in preparing a spacecraft for launch, configuration control was critical to ensure that when the countdown reached T0, everything would work as expected. Just as there was a network of dependencies in the PERT chart, the individual components were tested, subassemblies were tested, assemblies of them were tested, all leading up to an integrated test of the assembled launcher and spacecraft. The successful completion of a test established a tested configuration for the item. Anything which changed that configuration in any way, for example unplugging a cable and plugging it back in, required re-testing to confirm that the original configuration had been restored. (One of the pins in the connector might not have made contact, for instance.) This was all documented by paperwork signed off by three witnesses. The mountain of paper was intimidating; there was even a slide rule calculator for estimating the cost of various kinds of paperwork.

With all of this management superstructure it may seem a miracle that anything got done at all. But, as the end of the decade approached, the level of activity at KSC was relentless (and took a toll upon the workforce, although many recall it as the most intense and rewarding part of their careers). Several missions were processed in parallel: Apollo 11 rolled out to the launch pad while Apollo 10 was still en route to the Moon, and Apollo 12 was being assembled and tested.

To illustrate how all of these systems and procedures came together, the author takes us through the processing of Apollo 11 in detail, starting around six months before launch when the Saturn V stages, and command, service, and lunar modules arrived independently from the contractors who built them or the NASA facilities where they had been individually tested. The original concept for KSC was that it would be an “operational spaceport” which would assemble pre-tested components into flight vehicles, run integrated system tests, and then launch them in an assembly-line fashion. In reality, the Apollo and Saturn programs never matured to this level, and were essentially development and test projects throughout. Components not only arrived at KSC with “some assembly required”; they often were subject to a blizzard of engineering change orders which required partially disassembling equipment to make modifications, then exhaustive re-tests to verify the previously tested configuration had been restored.

Apollo 11 encountered relatively few problems in processing, so experiences from other missions where problems arose are interleaved to illustrate how KSC coped with contingencies. While Apollo 16 was on the launch pad, a series of mistakes during the testing process damaged a propellant tank in the command module. The only way to repair this was to roll the entire stack back to the Vehicle Assembly Building, remove the command and service modules, return them to the spacecraft servicing building then de-mate them, pull the heat shield from the command module, change out the tank, then put everything back together, re-stack, and roll back to the launch pad. Imagine how many forms had to be filled out. The launch was delayed just one month.

The process of servicing the vehicle on the launch pad is described in detail. Many of the operations, such as filling tanks with toxic hypergolic fuel and oxidiser, which burn on contact, required evacuating the pad of all non-essential personnel and special precautions for those engaged in these hazardous tasks. As launch approached, the hurdles became higher: a Launch Readiness Review and the Countdown Demonstration Test, a full dress rehearsal of the countdown up to the moment before engine start, including fuelling all of the stages of the launch vehicle (and then de-fuelling them after conclusion of the test).

There is a wealth of detail here, including many obscure items I've never encountered before. Consider “Forward Observers”. When the Saturn V launched, most personnel and spectators were kept a safe distance of more than 5 km from the launch pad in case of calamity. But three teams of two volunteers each were stationed at sites just 2 km from the pad. They were charged with observing the first seconds of flight and, if they saw a catastrophic failure (engine explosion or cut-off, hard-over of an engine gimbal, or the rocket veering into the umbilical tower), they would signal the astronauts to fire the launch escape system and abort the mission. If this happened, the observers would then have to dive into crude shelters often frequented by rattlesnakes to ride out the fiery aftermath.

Did you know about the electrical glitch which almost brought the Skylab 2 mission to flaming catastrophe moments after launch? How lapses in handling of equipment and paperwork almost spelled doom for the crew of Apollo 13? Or the time an oxygen leak while fuelling a Saturn V booster caused cars parked near the launch pad to burst into flames? It's all here, and much more. This is an essential book for those interested in the engineering details of the Apollo project and the management miracles which made its achievements possible.

 Permalink

Regis, Ed. Monsters. New York: Basic Books, 2015. ISBN 978-0-465-06594-3.
In 1863, as the American Civil War raged, Count Ferdinand von Zeppelin, an ambitious young cavalry officer from the German kingdom of Württemberg, arrived in America to observe the conflict and learn its lessons for modern warfare. He arranged an audience with President Lincoln, who authorised him to travel among the Union armies. Zeppelin spent a month with General Joseph Hooker's Army of the Potomac. Accustomed to German military organisation, he was unimpressed with what he saw and left to see the sights of the new continent. While visiting Minnesota, he ascended in a tethered balloon and saw the landscape laid out below him like a military topographical map. He immediately grasped the advantage of such an eye in the sky for military purposes. He was impressed.

Upon his return to Germany, Zeppelin pursued a military career, distinguishing himself in the 1870 war with France, although being considered “a hothead”. It was this characteristic which brought his military career to an abrupt end in 1890. Chafing under what he perceived as stifling leadership by the Prussian officer corps, he wrote directly to the Kaiser to complain. This was a bad career move; the Kaiser “promoted” him into retirement. Adrift, looking for a new career, Zeppelin seized upon controlled aerial flight, particularly for its military applications. And he thought big.

France was then at the forefront of aviation: by 1885 the first dirigible, La France, had demonstrated aerial navigation over complex closed courses and carried passengers. Built for the French army, it was just a technology demonstrator, but to Zeppelin it demonstrated a capability with such potential that Germany must not be left behind. He threw his energy into the effort, formed a company, raised the money, and embarked upon the construction of Luftschiff Zeppelin 1 (LZ 1).

Count Zeppelin was not a man to make small plans. Eschewing sub-scale demonstrators or technology-proving prototypes, he went directly to a full scale airship intended to be militarily useful. It was fully 128 metres long, almost two and a half times the size of La France, longer than a football field. Its rigid aluminium frame contained 17 gas bags filled with hydrogen, and it was powered by two gasoline engines. LZ 1 flew just three times. An observer from the German War Ministry reported it to be “suitable for neither military nor for non-military purposes.” Zeppelin's company closed its doors and the airship was sold for scrap.

By 1905, Zeppelin was ready to try again. On its first flight, the LZ 2 lost power and control and had to make a forced landing. Tethered to the ground at the landing site, it was caught by the wind and destroyed. It was sold for scrap. Later the LZ 3 flew successfully, and Zeppelin embarked upon construction of the LZ 4, which would be larger still. While attempting a twenty-four hour endurance flight, it suffered motor failure, landed, and while tied down was caught by wind. Its gas bags rubbed against one another and static electricity ignited the hydrogen, which reduced the airship to smoking wreckage.

Many people would have given up at this point, but not the redoubtable Count. The LZ 5, delivered to the military, was lost when carried away by the wind after an emergency landing and dashed against a hill. LZ 6 burned in its hangar after an engine caught fire. LZ 7, the first civilian passenger airship, crashed into a forest on its first flight and was damaged beyond repair. LZ 8, its replacement, was destroyed by a gust of wind while being walked out of its hangar.

With the outbreak of war in 1914, the airship went to war. Germany operated 117 airships, using them for reconnaissance and even bombing targets in England. Of the 117, fully 81 were destroyed, about half due to enemy action and half by the woes which had wrecked so many airships prior to the conflict.

Based upon this stunning record of success, after the end of the Great War, Britain decided to embark in earnest on its own airship program, building even larger airships than Germany. Results were no better, culminating in the R100 and R101, built to provide air and cargo service on routes throughout the Empire. On its maiden flight to India in 1930, R101 crashed and burned in a storm while crossing France, killing 48 of the 54 on board. After the catastrophe, the R100 was retired and sold for scrap.

This did not deter the Americans, who, in addition to their technical prowess and “can do” spirit, had access to helium, produced as a by-product of their natural gas fields. Unlike hydrogen, helium is nonflammable, so the risk of fire, which had destroyed so many airships using hydrogen, was entirely eliminated. Helium does not provide as much lift as hydrogen, but this can be compensated for by increasing the size of the ship. Helium is also around fifty times more expensive than hydrogen, which makes managing an airship in flight more difficult. While the commander of a hydrogen airship can freely “valve” gas to reduce lift when required, doing this in a helium ship is forbiddingly expensive and restricted only to the most dire of emergencies.

The U.S. Navy believed the airship to be an ideal platform for long-range reconnaissance, anti-submarine patrols, and other missions where its endurance, speed, and the ability to operate far offshore provided advantages over ships and heavier-than-air craft. Between 1921 and 1935 the Navy operated five rigid airships, three built domestically and two abroad. Four of the five crashed in storms or due to structural failure, killing dozens of crew.

This sorry chronicle leads up to a detailed recounting of the history of the Hindenburg. Originally designed to use helium, it was redesigned for hydrogen after it became clear the U.S., which had forbidden export of helium in 1927, would not grant a waiver, especially to a Germany by then under Nazi rule. The Hindenburg was enormous: at 245 metres in length, it was longer than the U.S. Capitol building and more than three times the length of a Boeing 747. It carried between 50 and 72 passengers who were served by a crew of 40 to 61, with accommodations (apart from the spartan sleeping quarters) comparable to first class on ocean liners. In 1936, the great ship made 17 transatlantic crossings without incident. On its first flight to the U.S. in 1937, it was destroyed by fire while approaching the mooring mast at Lakehurst, New Jersey. The disaster and its aftermath are described in detail. Remarkably, given the iconic images of the flaming airship falling to the ground and the structure glowing from the intense heat of combustion, of the 97 passengers and crew on board, 62 survived the disaster. (One of the members of the ground crew also died.)

Prior to the destruction of the Hindenburg, a total of twenty-six hydrogen filled airships had been destroyed by fire, excluding those shot down in wartime, with a total of 250 people killed. The vast majority of all rigid airships built ended in disaster—if not due to fire then structural failure, weather, or pilot error. Why did people continue to pursue this technology in the face of abundant evidence that it was fundamentally flawed?

The author argues that rigid airships are an example of a “pathological technology”, which he characterises as:

  1. Embracing something huge, either in size or effects.
  2. Inducing a state bordering on enthralment among its proponents…
  3. …who underplay its downsides, risks, unintended consequences, and obvious dangers.
  4. Having costs out of proportion to the benefits it is alleged to provide.

Few people would dispute that the pursuit of large airships for more than three decades in the face of repeated disasters was a pathological technology under these criteria. This holds even setting aside the risks of using hydrogen as a lifting gas (which I believe the author over-emphasises: prior to the Hindenburg accident nobody had ever been injured on a commercial passenger flight of a hydrogen airship, and nobody gives a second thought today about boarding an airplane with 140 tonnes of flammable jet fuel in the tanks and flying across the Pacific with only two engines). Seemingly hazardous technologies can be rendered safe with sufficient experience and precautions. Large lighter-than-air ships were, however, inherently unsafe because they were large and lighter than air: nothing could be done about that. They were at the mercy of the weather, and if they were designed to be strong enough to withstand whatever weather conditions they might encounter, they would have been too heavy to fly. As the experience of the U.S. Navy with helium airships demonstrated, it didn't matter if you were immune to the risks of hydrogen; the ship would eventually be destroyed in a storm.

The author then moves on from airships to discuss other technologies he deems pathological, and here, in my opinion, goes off the rails. The first of these technologies is Project Plowshare, a U.S. program to explore the use of nuclear explosions for civil engineering projects such as excavation, digging of canals, creating harbours, and fracturing rock to stimulate oil and gas production. With his characteristic snark, Regis mocks the very idea of Plowshare, and yet examination of the history of the program belies this ridicule. For the suggested applications, nuclear explosions were far more economical than chemical detonations and conventional earthmoving equipment. One principal goal of Plowshare was to determine the efficacy of such explosions and whether they would pose risks (for example, release of radiation) which were unacceptable. Over 11 years 26 nuclear tests were conducted under the program, most at the Nevada Test Site, and after a review of the results it was concluded the radiation risk was unacceptable and the results unpromising. Project Plowshare was shut down in 1977. I don't see what's remotely pathological about this. You have an idea for a new technology; you explore it in theory; conduct experiments; then decide it's not worth pursuing. Now maybe if you're Ed Regis, you may have been able to determine at the outset, without any of the experimental results, that the whole thing was absurd, but a great many people with in-depth knowledge of the issues involved preferred to run the experiments, take the data, and decide based upon the results. That, to me, seems the antithesis of pathological.

The next example of a pathological technology is the Superconducting Super Collider, a planned particle accelerator to be built in Texas which would have an accelerator ring 87.1 km in circumference and collide protons at a centre of mass energy of 40 TeV. The project was approved and construction begun in the 1980s. In 1993, Congress voted to cancel the project and work underway was abandoned. Here, the fit with “pathological technology” is even worse. Sure, the project was large, but it was mostly underground: hardly something to “enthral” anybody except physics nerds. There were no risks at all, apart from those in any civil engineering project of comparable scale. The project was cancelled because it overran its budget estimates but, even if completed, would probably have cost less than a tenth the expenditures to date on the International Space Station, which has produced little or nothing of scientific value. How is it pathological when a project, undertaken for well-defined goals, is cancelled when those funding it, seeing its schedule slip and budget balloon beyond that projected, pull the plug on it? Isn't that how things are supposed to work? Who were the seers who forecast all of this at the project's inception?

The final example of so-called pathological technology is pure spite. Ed Regis has a fine time ridiculing participants in the first 100 Year Starship symposium, a gathering to explore how and why humans might be able, within a century, to launch missions (robotic or crewed) to other star systems. This is not a technology at all, but rather an exploration of what future technologies might be able to do, and the limits imposed by the known laws of physics upon potential technologies. This is precisely the kind of “exploratory engineering” that Konstantin Tsiolkovsky engaged in when he worked out the fundamentals of space flight in the late 19th and early 20th centuries. He didn't know the details of how it would be done, but he was able to calculate, from first principles, the limits of what could be done, and to demonstrate that the laws of physics and properties of materials permitted the missions he envisioned. His work was largely ignored, which I suppose may be better than being mocked, as here.

You want a pathological technology? How about replacing reliable base load energy sources with inefficient sources at the whim of clouds and wind? Banning washing machines and dishwashers that work in favour of ones that don't? Replacing toilets with ones that take two flushes in order to “save water”? And all of this in order to “save the planet” from the consequences predicted by a theoretical model which has failed to predict measured results since its inception, through policies which impoverish developing countries and, even if you accept the discredited models, will have negligible results on the global climate. On this scandal of our age, the author is silent. He concludes:

Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped.

Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.

 Permalink

February 2016

McCullough, David. The Wright Brothers. New York: Simon & Schuster, 2015. ISBN 978-1-4767-2874-2.
On December 8th, 1903, all was in readiness. The aircraft was perched on its launching catapult, the brave airman at the controls. The powerful internal combustion engine roared to life. At 16:45 the catapult hurled the craft into the air. It rose straight up, flipped, and with its wings coming apart, plunged into the Potomac river just 20 feet from the launching point. The pilot was initially trapped beneath the wreckage but managed to free himself and swim to the surface. After being rescued from the river, he emitted what one witness described as “the most voluble series of blasphemies” he had ever heard.

So ended the last flight of Samuel Langley's “Aerodrome”. Langley was a distinguished scientist and secretary of the Smithsonian Institution in Washington D.C. Funded by the U.S. Army and the Smithsonian for a total of US$ 70,000 (equivalent to around 1.7 million present-day dollars), the Aerodrome crashed immediately on both of its test flights, and was the subject of much mockery in the press.

Just nine days later, on December 17th, two brothers, sons of a churchman, with no education beyond high school, and proprietors of a bicycle shop in Dayton, Ohio, readied their own machine for flight near Kitty Hawk, on the windswept sandy hills of North Carolina's Outer Banks. Their craft, called just the Flyer, took to the air with Orville Wright at the controls. With the 12 horsepower engine driving the twin propellers and brother Wilbur running alongside to stabilise the machine as it moved down the launching rail into the wind, Orville lifted the machine into the air and achieved the first manned heavier-than-air powered flight, demonstrating the Flyer was controllable in all three axes. The flight lasted just 12 seconds and covered a distance of 120 feet.

After the first flight, the brothers took turns flying the machine three more times on the 17th. On the final flight Wilbur flew a distance of 852 feet in a flight of 59 seconds (a strong headwind was blowing, and this flight covered over half a mile through the air). After the fourth flight, while the machine was being prepared to fly again, a gust of wind caught it and dragged it, along with assistant John T. Daniels, down the beach toward the ocean. Daniels escaped, but the Flyer was damaged beyond repair and never flew again. (The Flyer which can be seen in the Smithsonian's National Air and Space Museum today has been extensively restored.)
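A back-of-the-envelope check (my own arithmetic, not from the book) shows how strong that headwind must have been. Taking the "over half a mile through the air" as exactly half a mile (2640 feet, an assumption for illustration) and the ground track as 852 feet in 59 seconds:

```python
# Implied wind speed during Wilbur's fourth flight, December 17, 1903.
# Ground distance and duration are from contemporary accounts; the air
# distance of 2640 ft (half a mile) is an illustrative assumption.
FT_PER_S_TO_MPH = 3600 / 5280

ground_speed = 852 / 59 * FT_PER_S_TO_MPH    # speed over the sand
air_speed = 2640 / 59 * FT_PER_S_TO_MPH      # speed through the air
headwind = air_speed - ground_speed          # wind the Flyer faced

print(f"ground {ground_speed:.1f} mph, air {air_speed:.1f} mph, "
      f"headwind {headwind:.1f} mph")
```

This works out to roughly a 20 mph headwind, consistent with reports of the blustery conditions at Kitty Hawk that morning.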

Orville sent a telegram to his father in Dayton announcing the success, and the brothers packed up the remains of the aircraft to be shipped back to their shop. The 1903 season was at an end. The entire budget for the project from 1900 through the successful first flights was less than US$ 1000 (24,000 dollars today), funded entirely by profits from the brothers' bicycle business.

How did two brothers with no formal education in aerodynamics or engineering succeed on a shoestring budget while Langley, with public funds at his disposal and the resources of a major scientific institution, failed so embarrassingly? Ultimately it was because the Wright brothers identified the key problem of flight and patiently worked on solving it through a series of experiments. Perhaps it was because they were in the bicycle business. (Although they are often identified as proprietors of a “bicycle shop”, they also manufactured their own bicycles and had acquired the machine tools, skills, and co-workers for the business, later applied to building the flying machine.)

The Wrights believed the essential problem of heavier than air flight was control. The details of how a bicycle is built don't matter much: you still have to learn to ride it. And the problem of control in free flight is much more difficult than riding a bicycle, where the only controls are the handlebars and, to a lesser extent, shifting the rider's weight. In flight, an airplane must be controlled in three axes: pitch (up and down), yaw (left and right), and roll (wings' angle to the horizon). The means for control in each of these axes must be provided, and what's more, just as for a child learning to ride a bike, the would-be aeronaut must master the skill of using these controls to maintain his balance in the air.

Through a patient program of subscale experimentation, first with kites controlled from the ground by lines manipulated by the operators, then gliders flown by a pilot on board, the Wrights developed their system of pitch control by a front-mounted elevator, yaw by a rudder at the rear, and roll by warping the wings of the craft. Further, they needed to learn how to fly using these controls and verify that the resulting plane would be stable enough that a person could master the skill of flying it. With powerless kites and gliders, this required a strong, consistent wind. After inquiries to the U.S. Weather Bureau, the brothers selected the Kitty Hawk site on the North Carolina coast. Just getting there was an adventure, but the wind was as promised and the sand and lack of large vegetation were ideal for their gliding experiments. They were definitely “roughing it” at this remote site, and at times were afflicted by clouds of mosquitos of Biblical plague proportions, but starting in 1900 they tested a series of successively larger gliders and by 1902 had a design which provided three axis control, stability, and the controls for a pilot on board. In the 1902 season they made more than 700 flights and were satisfied the control problem had been mastered.

Now all that remained was to add an engine and propellers to the successful glider design, again scaling it up to accommodate the added weight. In 1903, you couldn't just go down to the hardware store and buy an engine, and automobile engines were much too heavy, so the Wrights' resourceful mechanic, Charlie Taylor, designed and built the four cylinder motor from scratch, using the new-fangled material aluminium for the engine block. The finished engine weighed just 152 pounds and produced 12 horsepower. The brothers could find no references for the design of air propellers and argued intensely over the topic, but eventually concluded they'd just have to make a best guess and test it on the real machine.

The Flyer worked on the second attempt (an earlier try on December 14th ended in a minor crash when Wilbur over-controlled at the moment of take-off). But this stunning success was the product of years of incremental refinement of the design, practical testing, and mastery of airmanship through experience.

Those four flights in December of 1903 are now considered one of the epochal events of the twentieth century, but at the time they received little notice. Only a few accounts of the flights appeared in the press, and some of them were garbled and/or sensationalised. The Wrights knew that the Flyer (whose wreckage was now in storage crates at Dayton), while a successful proof of concept and the basis for a patent filing, was not a practical flying machine. It could only take off into the strong wind at Kitty Hawk and had not yet demonstrated long-term controlled flight including aerial maneuvers such as turns or flying around a closed course. It was just too difficult travelling to Kitty Hawk, and the facilities of their camp there didn't permit rapid modification of the machines based upon experimentation.

They arranged to use an 84 acre cow pasture called Huffman Prairie located eight miles from Dayton along an interurban trolley line which made it easy to reach. The field's owner let them use it without charge as long as they didn't disturb the livestock. The Wrights devised a catapult to launch their planes, powered by a heavy falling weight, which would allow them to take off in still air. It was here, in 1904, that they refined the design into a practical flying machine and fully mastered the art of flying it over the course of about fifty test flights. Still, there was little note of their work in the press, and the first detailed account was published in the January 1905 edition of Gleanings in Bee Culture. Amos Root, the author of the article and publisher of the magazine, sent a copy to Scientific American, saying they could republish it without fee. The editors declined, and a year later mocked the achievements of the Wright brothers.

For those accustomed to the pace of technological development more than a century later, the leisurely pace of progress in aviation and lack of public interest in the achievement of what had been a dream of humanity since antiquity seem odd. Indeed, the Wrights, who had continued to refine their designs, would not become celebrities nor would their achievements be widely acknowledged until a series of demonstrations Wilbur would perform at Le Mans in France in the summer of 1908. Le Figaro wrote, “It was not merely a success, but a triumph…a decisive victory for aviation, the news of which will revolutionize scientific circles throughout the world.” And it did: stories of Wilbur's exploits were picked up by the press on the Continent, in Britain, and, belatedly, by papers in the U.S. Huge crowds came out to see the flights, and the intrepid American aviator's name was on every tongue.

Meanwhile, Orville was preparing for a series of demonstration flights for the U.S. Army at Fort Myer, Virginia. The army had agreed to buy a machine if it passed a series of tests. Orville's flights also began to draw large crowds from nearby Washington and extensive press coverage. All doubts about what the Wrights had wrought were now gone. During a demonstration flight on September 17, 1908, a propeller broke in flight. Orville tried to recover, but the machine plunged to the ground from an altitude of 75 feet, severely injuring him and killing his passenger, Lieutenant Thomas Selfridge, who became the first person to die in an airplane crash. Orville's recuperation would be long and difficult, aided by his sister, Katharine.

In early 1909, Orville and Katharine would join Wilbur in France, where he was to do even more spectacular demonstrations in the south of the country, training pilots for the airplanes he was selling to the French. Upon their return to the U.S., the Wrights were awarded medals by President Taft at the White House. They were feted as returning heroes in a two day celebration in Dayton. The diligent Wrights continued their work in the shop between events.

The brothers would return to Fort Myer, the scene of the crash, and complete their demonstrations for the army, securing the contract for the sale of an airplane for US$ 30,000. The Wrights would continue to develop their company, defend their growing portfolio of patents against competitors, and innovate. Wilbur was to die of typhoid fever in 1912, aged only 45 years. Orville sold his interest in the Wright Company in 1915 and, in his retirement, served for 28 years on the National Advisory Committee for Aeronautics, the precursor of NASA. He died in 1948. Neither brother ever married.

This book is a superb evocation of the life and times of the Wrights and their part in creating, developing, promoting, and commercialising one of the key technologies of the modern world.

 Permalink

Carlson, W. Bernard. Tesla: Inventor of the Electrical Age. Princeton: Princeton University Press, 2013. ISBN 978-0-691-16561-5.
Nikola Tesla was born in 1856 in a village in what is now Croatia, then part of the Austro-Hungarian Empire. His father and grandfather were both priests in the Orthodox church. The family was of Serbian descent, but had lived in Croatia since the 1690s among a community of other Serbs. His parents wanted him to enter the priesthood and enrolled him in school to that end. He excelled in mathematics and, building on a boyhood fascination with machines and tinkering, wanted to pursue a career in engineering. After completing high school, Tesla returned to his village where he contracted cholera and was near death. His father promised him that if he survived, he would “go to the best technical institution in the world.” After nine months of illness, Tesla recovered and, in 1875, entered the Joanneum Polytechnic School in Graz, Austria.

Tesla's university career started out brilliantly, but he came into conflict with one of his physics professors over the feasibility of designing a motor which would operate without the troublesome and unreliable commutator and brushes of existing motors. He became addicted to gambling, lost his scholarship, and dropped out in his third year. He worked as a draftsman, taught in his old high school, and eventually ended up in Prague, intending to continue his study of engineering at the Karl-Ferdinand University. He took a variety of courses, but eventually his uncles withdrew their financial support.

Tesla then moved to Budapest, where he found employment as chief electrician at the Budapest Telephone Exchange. He quickly distinguished himself as a problem solver and innovator and, before long, came to the attention of the Continental Edison Company of France, which had designed the equipment used in Budapest. He was offered and accepted a job at their headquarters in Ivry, France. Most of Edison's employees had practical, hands-on experience with electrical equipment, but lacked Tesla's formal education in mathematics and physics. Before long, Tesla was designing dynamos for lighting plants and earning a handsome salary. With his language skills (by that time, Tesla was fluent in Serbian, German, and French, and was improving his English), the Edison company sent him into the field as a trouble-shooter. This further increased his reputation and, in 1884 he was offered a job at Edison headquarters in New York. He arrived and, years later, described the formalities of entering the U.S. as an immigrant: a clerk saying “Kiss the Bible. Twenty cents!”.

Tesla had never abandoned the idea of a brushless motor. Almost all electric lighting systems in the 1880s used direct current (DC): electrons flowed in only one direction through the distribution wires. This is the kind of current produced by batteries, and the first electrical generators (dynamos) produced direct current by means of a device called a commutator. As the generator is turned by its power source (for example, a steam engine or water wheel), power is extracted from the rotating commutator by fixed brushes which press against it. The contacts on the commutator are wired to the coils in the generator in such a way that a constant direct current is maintained. When direct current is used to drive a motor, the motor must also contain a commutator which converts the direct current into a reversing flow to maintain the motor in rotary motion.

Commutators, with brushes rubbing against them, are inefficient and unreliable. Brushes wear and must eventually be replaced, and as the commutator rotates and the brushes make and break contact, sparks may be produced which waste energy and degrade the contacts. Further, direct current has a major disadvantage for long-distance power transmission. There was, at the time, no way to efficiently change the voltage of direct current. This meant that the circuit from the generator to the user of the power had to run at the same voltage the user received, say 120 volts. But at such a voltage, resistance losses in copper wires are such that over long runs most of the energy would be lost in the wires, not delivered to customers. You can increase the size of the distribution wires to reduce losses, but before long this becomes impractical due to the cost of copper it would require. As a consequence, Edison electric lighting systems installed in the 19th century had many small powerhouses, each supplying a local set of customers.

Alternating current (AC) solves the problem of power distribution. In 1881 the electrical transformer had been invented, and by 1884 high-efficiency transformers were being manufactured in Europe. Powered by alternating current (they don't work with DC), a transformer efficiently converts electrical power from one combination of voltage and current to another. For example, power might be transmitted from the generating station to the customer at 12000 volts and 1 ampere, then stepped down to 120 volts and 100 amperes by a transformer at the customer location. Resistive loss in a wire depends on the square of the current, not the voltage, so transmitting at 12000 volts instead of 120 cuts the current a hundredfold and the loss in a given wire by a factor of ten thousand, permitting far thinner and cheaper cables for the same level of transmission loss. For electric lighting, alternating current works just as well as direct current (as long as the frequency of the alternating current is sufficiently high that lamps do not flicker). But electricity was increasingly used to power motors, replacing steam power in factories. All existing practical motors ran on DC, so this was seen as an advantage to Edison's system.
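The arithmetic can be sketched in a few lines of Python (my own illustration, not from the book; the 12 kW load and the one-ohm total wire resistance are arbitrary assumed numbers):

```python
# Power lost in the transmission wires when delivering the same load
# over the same pair of wires, first at 120 V, then at 12,000 V.
# Loss is I^2 * R, so 100x the voltage means 1/100 the current and
# 1/10,000 the resistive loss.  Numbers are purely illustrative.
def line_loss(power_w, volts, wire_ohms):
    amps = power_w / volts          # current drawn by the load
    return amps ** 2 * wire_ohms    # Joule heating in the wires

load_w, wire_ohms = 12_000.0, 1.0
low = line_loss(load_w, 120.0, wire_ohms)      # loss at Edison's 120 V
high = line_loss(load_w, 12_000.0, wire_ohms)  # loss at 12,000 V

print(low, high, low / high)  # 10000.0 1.0 10000.0
```

At 120 volts, nearly all the power cooks the wires instead of reaching the customer, which is why Edison's DC systems needed a powerhouse every mile or so.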

Tesla worked only six months for Edison. After developing an arc lighting system only to have Edison put it on the shelf after acquiring the rights to a system developed by another company, he quit in disgust. He then continued to work on an arc light system in New Jersey, but the company to which he had licensed his patents failed, leaving him only with a worthless stock certificate. To support himself, Tesla worked repairing electrical equipment and even digging ditches, where one of his foremen introduced him to Alfred S. Brown, who had made his career in telegraphy. Tesla showed Brown one of his patents, for a “thermomagnetic motor”, and Brown contacted Charles F. Peck, a lawyer who had made his fortune in telegraphy. Together, Peck and Brown saw the potential for the motor and other Tesla inventions and in April 1887 founded the Tesla Electric Company, with its laboratory in Manhattan's financial district.

Tesla immediately set out to make his dream of a brushless AC motor a practical reality and, by using multiple AC currents out of phase with one another (the polyphase system), he was able to create a magnetic field which itself rotated. The rotating magnetic field induced a current in the rotating part of the motor, which would start and turn without any need for a commutator or brushes. Tesla had invented what we now call the induction motor. He began to file patent applications for the motor and the polyphase AC transmission system in the fall of 1887, and by May of the following year had been granted a total of seven patents on various aspects of the motor and polyphase current.
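The rotating field is easy to demonstrate numerically. In this sketch (my own illustration, not anything from the book; the three-coil geometry and 60 Hz frequency are assumptions), three stator coils spaced 120° apart each carry one phase of a three-phase current, and the net field vector turns out to have constant magnitude while rotating at the supply frequency:

```python
import numpy as np

# Idealised three-phase stator: coil k points along angle theta_k and
# carries current cos(omega*t - theta_k).  Summing the coils' field
# contributions gives a vector of constant magnitude (3/2 here, in
# arbitrary units) rotating at angular frequency omega.
def rotating_field(t, omega=2 * np.pi * 60):
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # coil axes
    currents = np.cos(omega * t - angles)                   # phase currents
    bx = np.sum(np.cos(angles) * currents)
    by = np.sum(np.sin(angles) * currents)
    return bx, by

for t in np.linspace(0.0, 1 / 60, 7):   # one full 60 Hz cycle
    bx, by = rotating_field(t)
    print(f"t={t * 1000:5.2f} ms  |B|={np.hypot(bx, by):.3f}  "
          f"angle={np.degrees(np.arctan2(by, bx)):7.1f} deg")
```

It is this steadily rotating field, with no moving contacts of any kind, that drags the rotor around and eliminates the commutator and brushes.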

One disadvantage of the polyphase system and motor was that it required multiple pairs of wires to transmit power from the generator to the motor, which increased cost and complexity. Also, existing AC lighting systems, which were beginning to come into use, primarily in Europe, used a single phase and two wires. Tesla invented the split-phase motor, which would run on a two wire, single phase circuit, and this was quickly patented.

Unlike Edison, who had built an industrial empire based upon his inventions, Tesla, Peck, and Brown had no interest in founding a company to manufacture Tesla's motors. Instead, they intended to shop around and license the patents to an existing enterprise with the resources required to exploit them. George Westinghouse had developed his inventions of air brakes and signalling systems for railways into a successful and growing company, and was beginning to compete with Edison in the electric light industry, installing AC systems. Westinghouse was a prime prospect to license the patents, and in July 1888 a deal was concluded for cash, notes, and a royalty for each horsepower of motors sold. Tesla moved to Pittsburgh, where he spent a year working in the Westinghouse research lab improving the motor designs. While there, he filed an additional fifteen patent applications.

After leaving Westinghouse, Tesla took a trip to Europe where he became fascinated with Heinrich Hertz's discovery of electromagnetic waves. Produced by alternating current at frequencies much higher than those used in electrical power systems (Hertz used a spark gap to produce them), here was a demonstration of transmission of electricity through thin air—with no wires at all. This idea was to inspire much of Tesla's work for the rest of his life. By 1891, he had invented a resonant high frequency transformer which we now call a Tesla coil, and before long was performing spectacular demonstrations of artificial lightning, illuminating lamps at a distance without wires, and demonstrating new kinds of electric lights far more efficient than Edison's incandescent bulbs. Tesla's reputation as an inventor was equalled by his talent as a showman in presentations before scientific societies and the public in both the U.S. and Europe.

Oddly, for someone with Tesla's academic and practical background, there is no evidence that he mastered Maxwell's theory of electromagnetism. He believed that the phenomena he observed with the Tesla coil and other apparatus were not due to the Hertzian waves predicted by Maxwell's equations, but rather something he called “electrostatic thrusts”. He was later to build a great edifice of mistaken theory on this crackpot idea.

By 1892, plans were progressing to harness the hydroelectric power of Niagara Falls. Transmission of this power to customers was central to the project: around one fifth of the American population lived within 400 miles of the falls. Westinghouse bid Tesla's polyphase system and, with Tesla's help in persuading the committee charged with evaluating proposals, was awarded the contract in 1893. By November of 1896, power from Niagara reached Buffalo, twenty miles away, and over the next decade extended throughout New York. The success of the project made polyphase power transmission the technology of choice for most electrical distribution systems, and it remains so to this day. In 1895, the New York Times wrote:

Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's invention of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.

 Permalink

March 2016

Flint, Eric. 1632. Riverdale, NY: Baen Publishing, 2000. ISBN 978-0-671-31972-4.
Nobody knows how it happened, nor remotely why. Was it a bizarre physics phenomenon, an act of God, intervention by aliens, or “just one of those things”? One day, with a flash and a bang which came to be called the Ring of Fire, the present-day town of Grantville, West Virginia, and its environs were interchanged with an equally large area of Thuringia, in what is now Germany, in the year 1632.

The residents of Grantville discover a sharp boundary where the town they know so well comes to an end and the new landscape begins. What's more, they rapidly discover they aren't in West Virginia any more, encountering brutal and hostile troops ravaging the surrounding countryside. After rescuing two travellers and others being attacked by the soldiers, and using their superior firepower to bring hostilities to a close, they begin to piece together what has happened. They are not only in central Europe, but square in the middle of the Thirty Years' War: the conflict between Catholic and Protestant forces which engulfed much of the continent.

Being Americans, and especially being self-sufficient West Virginians, the residents of Grantville take stock of their situation and start planning to make the most of the hand they've been dealt. They can count themselves lucky that the power plant was included within the Ring of Fire, so the electricity will stay on as long as there is fuel to run it. There are local coal mines and people with the knowledge to work them. The school and its library were within the circle, so there is access to knowledge of history and technology, as well as the school's shop and several machine shops in town. As a rural community, there are experienced farmers, and the land in Thuringia is not so different from West Virginia, although the climate is somewhat harsher. Supplies of fuel for transportation are limited to stocks on hand and in the tanks of vehicles with no immediate prospect of obtaining more. There are plenty of guns and lots of ammunition, but even with the reloading skills of those in the town, eventually the supply of primers and smokeless powder will be exhausted.

Not only does the town find itself in the middle of battles between armies, those battles have created a multitude of refugees who press in on the town. Should Grantville put up a wall and hunker down, or welcome them, begin to assimilate them as new Americans, and put them to work to build a better society based upon the principles which kept religious wars out of the New World? And how can a small town, whatever its technological advantages and principles, deal with contending forces thousands of times larger? Form an alliance? But with whom, and on what terms? And what principles must be open to compromise and which must be inviolate?

This is a thoroughly delightful story which will leave you with admiration for the ways of rural America, echoing those of their ancestors who built a free society in a wilderness. Along with the fictional characters, we encounter key historical figures of the era, who are depicted accurately. There are a number of coincidences which make things work (for example, Grantville having a power plant, and encountering Scottish troops in the army of the King of Sweden who speak English), but without those coincidences the story would fall apart. The thought which recurred as I read the novel is what would have happened if, instead, an effete present-day American university town had been plopped down in the midst of the Thirty Years War instead of Grantville. I'd give it forty-eight hours at most.

This novel is the first in what has become a large and expanding Ring of Fire universe, including novels by the author and other writers set all over Europe and around the world, short stories, periodicals, and a role-playing game. If you loved this story, as I did, there's much more to explore.

This book is a part of the Baen Free Library. You can read the book online or download it in a wide variety of electronic book formats, all free of digital rights management, directly from the book's page at the Baen site. The Kindle edition may also be downloaded for free from Amazon.

 Permalink

Munroe, Randall. Thing Explainer. New York: Houghton Mifflin, 2015. ISBN 978-0-544-66825-6.
What a great idea! The person who wrote this book explains not simple things like red world sky cars, tiny water bags we are made of, and the shared space house, with only the ten hundred words people use most.

There are many pictures with words explaining each thing. The idea came from the Up Goer Five picture he drew earlier.

Up Goer Five

Drawing by Randall Munroe / xkcd used under right to share but not to sell (CC BY-NC 2.5).
(The words in the above picture are drawn. In the book they are set in sharp letters.)

Many other things are explained here. You will learn about things in the house like food-heating radio boxes and boxes that clean food holders; living things like trees, bags of stuff inside you, and the tree of life; the Sun, Earth, sky, and other worlds; and even machines for burning cities and boats that go under the seas to throw them at other people. This is not just a great use of words, but something you can learn much from.

There is art in explaining things in the most used ten hundred words, and this book is a fine work of that art.

Read this book, then try explaining such things yourself. You can use this write checker to see how you did.

Can you explain why time slows down when you go fast? Or why things jump around when you look at them very close-up? This book will make you want to try it. Enjoy!

The same writer also created What If? (2015-11)

Here, I have only written with the same ten hundred most used words as in the book.

 Permalink

April 2016

Jenne, Mike. Blue Gemini. New York: Yucca Publishing, 2015. ISBN 978-1-63158-047-5.
It is the late 1960s, and the Apollo project is racing toward the Moon. The U.S. Air Force has not abandoned its manned space flight ambitions, and is proceeding with its Manned Orbiting Laboratory program, nominally to explore the missions military astronauts can perform in an orbiting space station, but in reality a large manned reconnaissance satellite. Behind the curtain of secrecy and under the cover of the blandly named “Aerospace Support Project”, the Air Force is simultaneously proceeding with a much more provocative project: Blue Gemini. Using the Titan II booster and a modified version of the two-man spacecraft from NASA's recently-concluded Gemini program, its mission is to launch on short notice, rendezvous with and inspect uncooperative targets (think Soviet military satellites), and optionally attach a package to them which, on command from the ground, could destroy the satellite, de-orbit it, or throw it out of control. All of this would have to be done covertly, without alerting the Soviets to the intrusion.

Inconclusive evidence and fears that the Soviets, in response to the U.S. ballistic missile submarine capability, were preparing to place nuclear weapons in orbit, ready to rain down onto the U.S. upon command, even if the Soviet missile and bomber forces were destroyed, gave Blue Gemini a high priority. Operating out of Wright-Patterson Air Force Base in Ohio, flight hardware for the Gemini-I interceptor spacecraft, Titan II missiles modified for man-rating, and a launching site on Johnston Island in the Pacific were all being prepared, and three flight crews were in training.

Scott Ourecky had always dreamed of flying. In college, he enrolled in Air Force ROTC, underwent primary flight training, and joined the Air Force upon graduation. Once in uniform, his talent for engineering and mathematics caused him to advance, but his applications for flight training were repeatedly rejected, and he had resigned himself to a technical career in advanced weapon development, most recently at Eglin Air Force Base in Florida. There he is recruited to work part-time on the thorny technical problems of a hush-hush project: Blue Gemini.

Ourecky settles in and undertakes the formidable challenges faced by the mission. (NASA's Gemini rendezvous targets were cooperative: they had transponders and flashing beacons which made them easier to locate, and missions could be planned so that rendezvous would be accomplished when communications with ground controllers would be available. In Blue Gemini the crew would be largely on their own, with only brief communication passes available.) Finally, after an incident brought on by the pressure and grueling pace of training, he finds himself in the right seat of the simulator, paired with hot-shot pilot Drew Carson (who views non-pilots as lesser beings, and would rather be in Vietnam adding combat missions to his service record than sitting in a simulator in Ohio on a black program which will probably never be disclosed).

As the story progresses, crisis after crisis must be dealt with, all against a deadline which, if not met, will mean the almost-certain cancellation of the project.

This is fiction: no Gemini interceptor program ever existed (although one of the missions for which the Space Shuttle was designed was essentially the same: a one orbit inspection or snatch-and-return of a hostile satellite). But the remarkable thing about this novel is that, unlike many thrillers, the author gets just about everything absolutely right. This does not stop with the technical details of the Gemini and Titan hardware, but also Pentagon politics, inter-service rivalry, the interaction of military projects with political forces, and the dynamics of the relations between pilots, engineers, and project administrators. It works as a thriller, as a story with characters who develop in interesting ways, and there are no jarring goofs to distract you from the narrative. (Well, hardly any: the turbine engines of a C-130 do not “cough to life”.)

There are numerous subplots and characters involved in them, and when this book comes to an end, they're just left hanging in mid-air. That's because this is the first of a multi-volume work in progress. The second novel, Blue Darker than Black, picks up where the first ends. The third, Pale Blue, is scheduled to be published in August 2016.

 Permalink

Goldsmith, Barbara. Obsessive Genius. New York: W. W. Norton, 2005. ISBN 978-0-393-32748-9.
Maria Salomea Skłodowska was born in 1867 in Warsaw, Poland, then part of the Russian Empire. She was the fifth and last child born to her parents, Władysław and Bronisława Skłodowski, both teachers. Both parents were members of a lower class of the aristocracy called the Szlachta, but had lost their wealth through involvement in the Polish nationalist movement opposed to Russian rule. They retained the love of learning characteristic of their class, and had independently obtained teaching appointments before meeting and marrying. Their children were raised in an intellectual atmosphere, with their father reading books aloud to them in Polish, Russian, French, German, and English, all languages in which he was fluent.

During Maria's childhood, her father lost his teaching position after his anti-Russian sentiments and activities were discovered, and supported himself by operating a boarding school for boys from the provinces. In cramped and less than sanitary conditions, one of the boarders infected two of the children with typhus: Maria's sister Zofia died. Three years later, her mother, Bronisława, died of tuberculosis. Maria experienced her first episode of depression, a malady which would haunt her throughout life.

Despite having graduated from secondary school with honours, Maria and her sister Bronisława could not pursue their education in Poland, as the universities did not admit women. Maria made an agreement with her older sister: she would support Bronisława's medical education at the Sorbonne in Paris, and in return Bronisława, once she had graduated and entered practice, would support Maria's studies there. Maria worked as a governess, supporting Bronisława. Finally, in 1891, she was able to travel to Paris and enroll in the Sorbonne. On the registration forms, she signed her name as “Marie”.

One of just 23 women among the two thousand enrolled in the School of Sciences, Marie studied physics, chemistry, and mathematics under an eminent faculty including luminaries such as Henri Poincaré. In 1893, she earned her degree in physics, one of only two women to graduate with a science degree that year, and in 1894 obtained a second degree in mathematics, ranking second in her class.

Finances remained tight, and Marie was delighted when one of her professors, Gabriel Lippmann, arranged for her to receive a grant to study the magnetic properties of different kinds of steel. She set to work on the project but made little progress because the equipment she was using in Lippmann's laboratory was cumbersome and insensitive. A friend recommended she contact a little-known physicist who was an expert on magnetism in metals and had developed instruments for precision measurements. Marie arranged to meet Pierre Curie to discuss her work.

Pierre was working at the School of Industrial Physics and Chemistry of the City of Paris (EPCI), an institution much less prestigious than the Sorbonne, in a laboratory which the visiting Lord Kelvin described as “a cubbyhole between the hallway and a student laboratory”. Still, he had major achievements to his credit. In 1880, with his brother Jacques, he had discovered the phenomenon of piezoelectricity, the interaction between electricity and mechanical stress in solids. Piezoelectricity is now the foundation of many technologies, and the Curie brothers used it to build an electrometer much more sensitive than previous instruments. His doctoral dissertation on the effects of temperature on the magnetism of metals introduced the concept of a critical temperature, different for each metal or alloy, at which permanent magnetism is lost. This is now called the Curie temperature.

When Pierre and Marie first met, they were immediately taken with one another: both from families of modest means, largely self-educated, and fascinated by scientific investigation. Pierre rapidly fell in love and was determined to marry Marie, but she, having been rejected in an earlier relationship in Poland, was hesitant and still planned to return to Warsaw. Pierre eventually persuaded Marie, and the two were married in July 1895. Marie was given a small laboratory space in the EPCI building to pursue work on magnetism, and henceforth the Curies would be a scientific team.

In the final years of the nineteenth century “rays” were all the rage. In 1895, Wilhelm Conrad Röntgen discovered penetrating radiation produced by accelerating electrons (which he called “cathode rays”, as the electron would not be discovered until 1897) into a metal target. He called the new radiation “X-rays”, using “X” as the symbol for the unknown. The following year, Henri Becquerel discovered that a sample of uranium salts could expose a photographic plate even if the plate were wrapped in a black cloth. In 1897 he published six papers on these “Becquerel rays”. Both discoveries were completely accidental.

The year that Marie was ready to begin her doctoral research, 65 percent of the papers presented at the Academy of Sciences in Paris were devoted to X-rays. Pierre suggested that Marie investigate the Becquerel rays produced by uranium, as they had been largely neglected by other scientists. She began a series of experiments using an electrometer designed by Pierre. The instrument was sensitive but exasperating to operate: Lord Rayleigh later wrote that electrometers were “designed by the devil”. Patiently, Marie measured the rays produced by uranium and then moved on to test samples of other elements. Among them, only thorium produced detectable rays.

She then made a puzzling observation. Uranium was produced from an ore called pitchblende. When she tested a sample of the residue of pitchblende from which all of the uranium had been extracted, she measured rays four times as intense as those from pure uranium. She inferred that there must be a substance, perhaps a new chemical element, remaining in the pitchblende residue which was more radioactive than uranium. She then tested a thorium ore and found it also to produce rays more intense than those from pure thorium. Perhaps here was yet another element to be discovered.

In March 1898, Marie wrote a paper in which she presented her measurements of the uranium and thorium ores, introduced the word “radioactivity” to describe the phenomenon, put forth the hypothesis that one or more undiscovered elements were responsible, suggested that radioactivity could be used to discover new elements, and, based upon her observations that radioactivity was unaffected by chemical processes, argued that it must be “an atomic property”. Neither Pierre nor Marie was a member of the Academy of Sciences; Marie's former professor, Gabriel Lippmann, presented the paper on her behalf.

It was one thing to hypothesise the existence of a new element or elements, and entirely another to isolate the element and determine its properties. Ore, like pitchblende, is a mix of chemical compounds. Starting with ore from which the uranium had been extracted, the Curies undertook a process to chemically separate these components. Those found to be radioactive were then distilled to increase their purity. With each distillation their activity increased. They finally found two of these fractions contained all the radioactivity. One was chemically similar to barium, while the other resembled bismuth. Measuring the properties of the fractions indicated they must be a mixture of the new radioactive elements and other, lighter elements.

To isolate the new elements, a process called “fractionation” was undertaken. When crystals form from a solution, the lighter elements tend to crystallise first. By repeating this process, the heavier elements could slowly be concentrated. With each fractionation the radioactivity increased. Working with the fraction which behaved like bismuth, the Curies eventually purified it to be 400 times as radioactive as uranium. No spectrum of the new element could yet be determined, but the Curies were sufficiently confident in the presence of a new element to publish a paper in July 1898 announcing the discovery and naming the new element “polonium” after Marie's native Poland. In December, working with the fraction which chemically resembled barium, they produced a sample 900 times as radioactive as uranium. This time a clear novel spectral line was found, and at the end of December 1898 they announced the discovery of a second new element, which they named “radium”.

Two new elements had been discovered, with evidence sufficiently persuasive that their existence was generally accepted. But the existing samples were known to be impure. The physical and chemical properties of the new elements, allowing their places in the periodic table to be determined, would require removal of the impurities and isolation of pure samples. The same process of fractionation could be used, but since it quickly became clear that the new radioactive elements were a tiny fraction of the samples in which they had been discovered, it would be necessary to scale up the process to something closer to an industrial scale. (The sample in which radium had been identified was 900 times more radioactive than uranium. Pure radium was eventually found to be ten million times as radioactive as uranium.)

Pierre learned that the residue from extracting uranium from pitchblende was dumped in a forest near the uranium mine. He arranged to have the Austrian government donate the material at no cost, and found the funds to ship it to the laboratory in Paris. Now, instead of test tubes, they were working with tons of material. Pierre convinced a chemical company to perform the first round of purification, persuading them that other researchers would be eager to buy the resulting material. Eventually, they delivered twenty kilogram lots of material to the Curies which were fifty times as radioactive as uranium. From there the Curie laboratory took over the subsequent purification. After four years of processing ten tons of pitchblende residue and hundreds of tons of rinsing water, and performing thousands of fractionations, one tenth of a gram of radium chloride was produced that was sufficiently pure to measure its properties. In July 1902 Marie announced the isolation of radium and placed it on the periodic table as element 88.

In June of 1903, Marie defended her doctoral thesis, becoming the first woman in France to obtain a doctorate in science. With the discovery of radium, the source of the enormous energy it and other radioactive elements released became a major focus of research. Ernest Rutherford argued that radioactivity was a process of “atomic disintegration” in which one element was spontaneously transmuting to another. The Curies originally doubted this hypothesis, but after repeating the experiments of Rutherford, accepted his conclusion as correct.

In 1903, the Nobel Prize for Physics was shared by Marie and Pierre Curie and Henri Becquerel, awarded for the discovery of radioactivity. The discovery of radium and polonium was not mentioned. Marie embarked on the isolation of polonium, and within two years produced a sample sufficiently pure to place it as element 84 on the periodic table with an estimate of its half-life of 140 days (the modern value is 138.4 days). Polonium is about 5000 times as radioactive as radium. Polonium and radium found in nature are the products of decay of primordial uranium and thorium. Their half-lives are so short (radium's is 1600 years) that any present at the Earth's formation has long since decayed.
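The claim that polonium is about 5000 times as radioactive as radium follows directly from the half-lives quoted above: the specific activity of a pure isotope scales as ln 2 divided by the product of its half-life and atomic mass. Here is a quick sanity check in Python (treating each sample as the pure isotope, which is a simplification):

```python
# Rough check: how much more radioactive is polonium than radium?
# Specific activity per gram of a pure isotope is
# (ln 2 / half-life) * (Avogadro's number / atomic mass).

import math

N_A = 6.022e23  # Avogadro's number

def specific_activity(half_life_s, atomic_mass):
    """Decays per second per gram of the pure isotope."""
    return (math.log(2) / half_life_s) * (N_A / atomic_mass)

po210 = specific_activity(138.4 * 86400, 210)         # half-life in seconds
ra226 = specific_activity(1600 * 365.25 * 86400, 226)

print(f"Po-210 / Ra-226 specific activity: {po210 / ra226:.0f}")  # ~4500
```

The result, around 4500, agrees with the book's "about 5000" to within the precision of such a back-of-envelope figure.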

After the announcement of the discovery of radium and the Nobel prize, the Curies, and especially Marie, became celebrities. Awards, honorary doctorates, and memberships in the academies of science of several countries followed, along with financial support and the laboratory facilities they had lacked while performing the work which won them such acclaim. Radium became a popular fad, hailed as a cure for cancer and other diseases, a fountain of youth, and promoted by quacks promising all kinds of benefits from the nostrums they peddled, some of which, to the detriment of their customers, actually contained minute quantities of radium.

Tragedy struck in April 1906 when Pierre was killed in a traffic accident: run over on a Paris street in a heavy rainstorm by a wagon pulled by two horses. Marie was inconsolable, immersing herself in laboratory work and neglecting her two young daughters. Her spells of depression returned. She continued to explore the properties of radium and polonium and worked to establish a standard unit to measure radioactive decay, calibrated by radium. (This unit is now called the curie, but is no longer defined based upon radium and has been replaced by the becquerel, which is simply an inverse second.) Marie Curie was not interested or involved in the work to determine the structure of the atom and its nucleus or the development of quantum theory. The Curie laboratory continued to grow, but focused on production of radium and its applications in medicine and industry. Lise Meitner applied for a job at the laboratory and was rejected. Meitner later said she believed that Marie thought her a potential rival to Curie's daughter Irène. Meitner joined the Kaiser Wilhelm Institute in Berlin and went on to co-discover nuclear fission. The only two chemical elements named in whole or part for women are curium (element 96, named for both Pierre and Marie) and meitnerium (element 109).

In 1910, after three years of work with André-Louis Debierne, Marie managed to produce a sample of metallic radium, allowing a definitive measurement of its properties. In 1911, she won a second Nobel prize, unshared, in chemistry, for the isolation of radium and polonium. At the moment of triumph, news broke of a messy affair she had been carrying on with Pierre's successor at the EPCI, Paul Langevin, a married man. The popular press, who had hailed Marie as a towering figure of French science, went after her with bared fangs and mockery, and she went into seclusion under an assumed name.

During World War I, she invented and promoted the use of mobile field X-ray units (called “Les Petites Curies”) and won acceptance for women to operate them near the front, with her daughter Irène assisting in the effort. After the war, her reputation largely rehabilitated, Marie not only accepted but contributed to the growth of the Curie myth, seeing it as a way to fund her laboratory and research. Irène took the lead at the laboratory.

As co-discoverer of the phenomenon of radioactivity and two chemical elements, Curie's achievements were well recognised. She was the first woman to win a Nobel prize, the first person to win two Nobel prizes, and the only person so far to win Nobel prizes in two different sciences. (The third woman to win a Nobel prize was her daughter, Irène Joliot-Curie, for the discovery of artificial radioactivity.) She was the first woman to be appointed a full professor at the Sorbonne.

Marie Curie died of anæmia in 1934, probably brought on by exposure to radiation over her career. She took few precautions, and her papers and personal effects remain radioactive to this day. Her legacy is one of dedication and indefatigable persistence in achieving the goals she set for herself, regardless of the scientific and technical challenges and the barriers women faced at the time. She demonstrated that pure persistence, coupled with a brilliant intellect, can overcome formidable obstacles.

 Permalink

Launius, Roger D. and Dennis R. Jenkins. Coming Home. Washington: National Aeronautics and Space Administration, 2012. ISBN 978-0-16-091064-7. NASA SP-2011-593.
In the early decades of the twentieth century, when visionaries such as Konstantin Tsiolkovsky, Hermann Oberth, and Robert H. Goddard started to think seriously about how space travel might be accomplished, most of the focus was on how rockets might be designed and built which would enable their payloads to be accelerated to reach the extreme altitude and velocity required for long-distance ballistic or orbital flight. This is a daunting problem. The Earth has a deep gravity well: so deep that to place a satellite in a low orbit around it, you must not only lift the satellite from the Earth's surface to the desired orbital altitude (which isn't particularly difficult), but also impart sufficient velocity to it so that it does not fall back but, instead, orbits the planet. It's the speed that makes it so difficult.

Recall that the kinetic energy of a body is given by ½mv². If mass (m) is given in kilograms and velocity (v) in metres per second, energy is measured in joules. Note that the square of the velocity appears in the formula: if you triple the velocity, you need nine times the energy to accelerate the mass to that speed. A satellite must have a velocity of around 7.8 kilometres/second to remain in a low Earth orbit. This is about eight times the muzzle velocity of the 5.56×45mm NATO round fired by the M-16 and AR-15 rifles. Consequently, the satellite has sixty-four times the energy per unit mass of the rifle bullet, and the rocket which places it into orbit must expend all of that energy to launch it.

Every kilogram of a satellite in a low orbit has a kinetic energy of around 30 megajoules (thirty million joules). By comparison, the energy released by detonating a kilogram of TNT is 4.7 megajoules. The satellite, purely due to its motion, has more than six times the energy of an equal mass of TNT. The U.S. Space Shuttle orbiter had a mass, without payload, of around 70,000 kilograms. When preparing to leave orbit and return to Earth, its kinetic energy was about that of half a kiloton of TNT. During the process of atmospheric reentry and landing, in about half an hour, all of that energy must be dissipated in a non-destructive manner, until the orbiter comes to a stop on the runway with kinetic energy zero.
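For readers inclined to check the arithmetic in the preceding two paragraphs, here is a short Python sketch. The muzzle velocity of roughly 975 m/s for the 5.56×45mm round is my assumed figure, not from the book; the other numbers are those quoted in the text.

```python
# Back-of-envelope check of the orbital-energy figures quoted above,
# using KE = 1/2 m v^2.

v_orbit = 7800.0   # low Earth orbit velocity, m/s (from the text)
v_bullet = 975.0   # 5.56x45mm NATO muzzle velocity, m/s (assumed)

ke_per_kg = 0.5 * v_orbit**2                            # J per kg of satellite
print(f"Orbital KE per kg: {ke_per_kg / 1e6:.1f} MJ")   # ~30 MJ

ratio_v = v_orbit / v_bullet
print(f"Velocity ratio: {ratio_v:.1f}, energy ratio: {ratio_v**2:.0f}")  # 8, 64

tnt = 4.7e6        # J released per kg of TNT (from the text)
print(f"Orbital KE / TNT energy, per kg: {ke_per_kg / tnt:.1f}")  # ~6.5

shuttle_mass = 70000.0                   # kg, orbiter without payload
ke_shuttle = shuttle_mass * ke_per_kg    # J
kiloton = 4.184e12                       # J per kiloton of TNT
print(f"Shuttle KE: {ke_shuttle / kiloton:.2f} kt TNT equivalent")  # ~0.51
```

The numbers all come out as stated: about 30 megajoules per kilogram, sixty-four times the bullet's energy per unit mass, and roughly half a kiloton for the orbiter.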

This is an extraordinarily difficult problem, which engineers had to confront as soon as they contemplated returning payloads from space to the Earth. The first payloads were, of course, warheads on intercontinental ballistic missiles. While these missiles did not go into orbit, they achieved speeds which were sufficiently fast as to present essentially the same problems as orbital reentry. When the first reconnaissance satellites were developed by the U.S. and the Soviet Union, the technology to capture images electronically and radio them to ground stations did not yet exist. The only option was to expose photographic film in orbit then physically return it to Earth for processing and interpretation. This was the requirement which drove the development of orbital reentry. The first manned orbital capsules employed technology proven by film return spy satellites. (In the case of the Soviets, the basic structure of the Zenit reconnaissance satellites and manned Vostok capsules was essentially the same.)

This book chronicles the history and engineering details of U.S. reentry and landing technology, for both unmanned and manned spacecraft. While many in the 1950s envisioned sleek spaceplanes as the vehicle of choice, when the time came to actually solve the problems of reentry, a seemingly counterintuitive solution came to the fore: the blunt body. We're all acquainted with the phenomenon of air friction: the faster an airplane flies, the hotter its skin gets. The SR-71, which flew at three times the speed of sound, had to be made of titanium since aluminium would have lost its strength at the temperatures which resulted from friction. But at the velocity of a returning satellite, around eight times faster than an SR-71, air behaves very differently. The satellite is moving so fast that air can't get out of the way and piles up in front of it. As the air is compressed, its temperature rises until it equals or exceeds that of the surface of the Sun. This heat is then radiated in all directions. That impinging upon the reentering body can, if not dealt with, destroy it.

A streamlined shape will cause the compression to be concentrated at the nose, leading to extreme heating. A blunt body, however, will cause a shock wave to form which stands off from its surface. Since the compressed air radiates heat in all directions, only that radiated in the direction of the body will be absorbed; the rest will be harmlessly radiated away into space, reducing total heating. There is still, however, plenty of heat to worry about.

Let's consider the Mercury capsules in which the first U.S. astronauts flew. They reentered blunt end first, with a heat shield facing the air flow. Compression in the shock layer ahead of the heat shield raised the air temperature to around 5800 K, almost precisely the surface temperature of the Sun. Over the reentry, the heat pulse would deposit a total of 100 megajoules per square metre of heat shield. The astronaut was just a few centimetres from the shield, and the temperature on the back side of the shield could not be allowed to exceed 65 °C. How in the world do you accomplish that?
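A back-of-envelope calculation shows why the blunt-body approach is viable at all: only a small fraction of the capsule's kinetic energy ever reaches the shield, with the rest dumped into the air and radiated away. In the sketch below, the Mercury reentry mass (~1100 kg) and heat shield diameter (~1.9 m) are my assumed round figures, not from the book; the 100 megajoules per square metre is the heat load quoted above.

```python
# How much of the capsule's kinetic energy actually reaches the shield?
# Assumed figures: reentry mass ~1100 kg, shield diameter ~1.9 m.

import math

mass = 1100.0                 # kg, Mercury capsule at reentry (assumed)
v = 7800.0                    # m/s, orbital velocity
ke = 0.5 * mass * v**2        # total kinetic energy to dissipate, J

shield_area = math.pi * (1.9 / 2)**2   # m^2
absorbed = 100e6 * shield_area         # J reaching the shield (100 MJ/m^2)

print(f"Kinetic energy: {ke / 1e9:.1f} GJ")
print(f"Absorbed by shield: {absorbed / 1e6:.0f} MJ "
      f"({100 * absorbed / ke:.1f}% of the total)")
```

On these assumptions, the shield absorbs under one percent of the roughly 33 gigajoules the capsule carries; the shock layer disposes of everything else.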

Engineers have investigated a wide variety of ways to beat the heat. The simplest are completely passive systems: they have no moving parts. An example of a passive system is a “heat sink”. You simply have a mass of some substance with high heat capacity (which means it can absorb a large amount of energy with a small rise in temperature), usually a metal, which absorbs the heat during the pulse, then slowly releases it. The heat sink must be made of a material which doesn't melt or corrode during the heat pulse. The original design of the Mercury spacecraft specified a beryllium heat sink, and this was flown on the two suborbital flights, but it was replaced by an ablative shield for the orbital missions. The Space Shuttle used a passive heat shield of a different kind: ceramic tiles which could withstand the heat on their surface and provided insulation which prevented the heat from reaching the aluminium structure beneath. The tiles proved very difficult to manufacture, were fragile, and required a great deal of maintenance, but they were, in principle, reusable.

The most commonly used technology for reentry is ablation. A heat shield is fabricated of a material which, when subjected to reentry heat, chars and releases gases. The gases carry away the heat, while the charred material which remains provides insulation. A variety of materials have been used for ablative heat shields, from advanced silicone and carbon composites to oak wood, employed on some early Soviet and Chinese reentry experiments. Ablative heat shields were used on Mercury orbital capsules, in projects Gemini and Apollo, all Soviet and Chinese manned spacecraft, and will be used by the SpaceX and Boeing crew transport capsules now under development.

If the heat shield works and you make it through the heat pulse, you're still falling like a rock. The solution of choice for landing spacecraft has been parachutes, and even though they seem simple conceptually, in practice there are many details which must be dealt with, such as stabilising the falling craft so it won't tumble and tangle the parachute suspension lines when the parachute is deployed, and opening the canopy in multiple stages to prevent a jarring shock which might damage the parachute or craft.

The early astronauts were pilots, and never much liked the idea of having to be fished out of the ocean by the Navy at the conclusion of their flights. A variety of schemes were explored to allow piloted flight to a runway landing, including inflatable wings and paragliders, but difficulties developing the technologies and schedule pressure during the space race caused the Gemini and Apollo projects to abandon them in favour of parachutes and a splashdown. Not until the Space Shuttle were precision runway landings achieved, and now NASA has abandoned that capability. SpaceX hopes to eventually return their Crew Dragon capsule to a landing pad with a propulsive landing, but that is not discussed here.

In the 1990s, NASA pursued a variety of spaceplane concepts: the X-33, X-34, and X-38. These projects pioneered new concepts in thermal protection for reentry which would be less expensive and maintenance-intensive than the Space Shuttle's tiles. In keeping with NASA's practice of the era, each project was cancelled after consuming a large sum of money and extensive engineering development. The X-37 was developed by NASA, and when abandoned, was taken over by the Air Force, which operates it on secret missions. Each of these projects is discussed here.

This book is the definitive history of U.S. spacecraft reentry systems. There is a wealth of technical detail, and some readers may find there's more here than they wanted to know. No specialised knowledge is required to understand the descriptions: just patience. In keeping with NASA tradition, quaint units like inches, pounds, miles per hour, and British Thermal Units are used in most of the text, but then in the final chapters, the authors switch back and forth between metric and U.S. customary units seemingly at random. There are some delightful anecdotes, such as when the designers of NASA's new Orion capsule had to visit the Smithsonian's National Air and Space Museum to examine an Apollo heat shield to figure out how it was made and attached to the spacecraft, and to determine the properties of the proprietary ablative material it employed.

As a NASA publication, this book is in the public domain. The paperback linked to above is a republication of the original NASA edition. The book may be downloaded for free from the book's Web page in three electronic formats: PDF, MOBI (Kindle), and EPUB. Get the PDF! While the PDF is a faithful representation of the print edition, the MOBI edition is hideously ugly and mis-formatted. Footnotes are interleaved in the text at random locations in red type (except when they aren't in red type), block quotes are not set off from the main text, dozens of hyphenated words and adjacent words are run together, and the index is completely useless: citing page numbers in the print edition which do not appear in the electronic edition; for some reason large sections of the index are in red type. I haven't looked at the EPUB edition, but given the lack of attention to detail evident in the MOBI, my expectations for it are not high.

 Permalink

May 2016

Levin, Janna. Black Hole Blues. New York: Alfred A. Knopf, 2016. ISBN 978-0-307-95819-8.
In Albert Einstein's 1915 general theory of relativity, gravitation does not propagate instantaneously as it did in Newton's theory, but at the speed of light. According to relativity, nothing can propagate faster than light. This has a consequence which was not originally appreciated when the theory was published: if you move an object here, its gravitational influence upon an object there cannot arrive any faster than a pulse of light travelling between the two objects. But how is that change in the gravitational field transmitted? For light, it is via the electromagnetic field, which is described by Maxwell's equations and implies the existence of excitations of the field which, according to their wavelength, we call radio, light, and gamma rays. Are there, then, equivalent excitations of the gravitational field (which, according to general relativity, can be thought of as curvature of spacetime), which transmit the changes due to motion of objects to distant objects affected by their gravity and, if so, can we detect them? By analogy to electromagnetism, where we speak of electromagnetic waves or electromagnetic radiation, these would be gravitational waves or gravitational radiation.

Einstein first predicted the existence of gravitational waves in a 1916 paper, but he made a mathematical error regarding the nature of the sources and the magnitude of the effect. This was corrected in a paper he published in 1918 which describes gravitational radiation as we understand it today. According to Einstein's calculations, gravitational waves were real, but interacted so weakly that any practical experiment would never be able to detect them. If gravitation is thought of as the bending of spacetime, the equations tell us that spacetime is extraordinarily stiff: when you encounter an equation with the speed of light, c, raised to the fourth power in the denominator, you know you're in trouble trying to detect the effect.

That's where the matter rested for almost forty years. Some theorists believed that gravitational waves existed but, given the potential sources we knew about (planets orbiting stars, double and multiple star systems), the energy emitted was so small (the Earth orbiting the Sun emits a grand total of 200 watts of energy in gravitational waves, which is absolutely impossible to detect with any plausible apparatus), we would never be able to detect it. Other physicists doubted that the effect was real and that gravitational waves actually carried energy capable, even in principle, of producing detectable effects. This dispute was settled to the satisfaction of most theorists by the sticky bead argument, proposed in 1957 by Richard Feynman and Hermann Bondi. Although a few dissenters remained, most of the small community interested in general relativity agreed that gravitational waves existed and could carry energy, but continued to believe we'd probably never detect them.
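The 200 watt figure can be checked against the quadrupole formula for the gravitational-wave power radiated by a circular binary orbit, a standard result of general relativity; the formula and the rounded constants below are my own addition, not from the book:

```python
# Quadrupole-formula power radiated by two bodies in a circular orbit:
#   P = (32/5) * G^4 * (m1*m2)^2 * (m1+m2) / (c^5 * r^5)
# Note c^5 in the denominator: the effect is absurdly small.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # solar mass, kg
m_earth = 5.972e24   # Earth mass, kg
r = 1.496e11         # mean Earth-Sun distance, m

P = (32 / 5) * G**4 * (m_sun * m_earth)**2 * (m_sun + m_earth) / (c**5 * r**5)
print(f"{P:.0f} W")  # on the order of 200 W, as quoted in the text
```

Running the arithmetic gives roughly 200 watts for the entire Earth-Sun system: less than the power of a few light bulbs, spread across a detector the size of the solar system.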

This outlook changed in the 1960s. Radio astronomers, along with optical astronomers, began to discover objects in the sky which seemed to indicate the universe was a much more violent and dynamic place than had been previously imagined. Words like “quasar”, “neutron star”, “pulsar”, and “black hole” entered the vocabulary, and suggested there were objects in the universe where gravity might be so strong and motion so fast that gravitational waves could be produced which might be detected by instruments on Earth.

Joseph Weber, an experimental physicist at the University of Maryland, was the first to attempt to detect gravitational radiation. He used large bars, now called Weber bars, of aluminium, usually cylinders two metres long and one metre in diameter, instrumented with piezoelectric sensors. The bars were, based upon their material and dimensions, resonant at a particular frequency, and could detect a change in length of the cylinder of around 10⁻¹⁶ metres. Weber was a pioneer in reducing the noise of his detectors, and operated two detectors at different locations so that signals would only be considered valid if observed nearly simultaneously by both.

What nobody knew was how “noisy” the sky was in gravitational radiation: how many sources there were and how strong they might be. Theorists could offer little guidance: ultimately, you just had to listen. Weber listened, and reported signals he believed consistent with gravitational waves. But others who built comparable apparatus found nothing but noise, and theorists objected that if objects in the universe emitted as much gravitational radiation as Weber's detections implied, the galaxy would convert all of its mass into gravitational radiation in just fifty million years. Weber's claims of having detected gravitational radiation are now considered to have been discredited, but there are those who dispute this assessment. Still, he was the first to try, and made breakthroughs which informed subsequent work.

Might there be a better way, which could detect even smaller signals than Weber's bars, and over a wider frequency range? (Since the frequency range of potential sources was unknown, casting the net as widely as possible made more potential candidate sources accessible to the experiment.) Independently, groups at MIT, the University of Glasgow in Scotland, and the Max Planck Institute in Germany began to investigate interferometers as a means of detecting gravitational waves. An interferometer had already played a part in confirming Einstein's special theory of relativity: could it also provide evidence for an elusive prediction of the general theory?

An interferometer is essentially an absurdly precise ruler where the markings on the scale are waves of light. You send beams of light down two paths, and adjust them so that the light waves cancel (interfere) when they're combined after bouncing back from mirrors at the end of the two paths. If there's any change in the lengths of the two paths, the light won't interfere precisely, and its intensity will increase depending upon the difference. But when a gravitational wave passes, that's precisely what happens! Lengths in one direction will be squeezed while those orthogonal (at a right angle) will be stretched. In principle, an interferometer can be an exquisitely sensitive detector of gravitational waves. The gap between principle and practice required decades of diligent toil and hundreds of millions of dollars to bridge.
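The principle described above can be sketched in a few lines of code. This is an idealised toy model of a Michelson interferometer's dark (cancelling) output port, of my own devising, not LIGO's actual readout scheme:

```python
import math

def dark_port_intensity(delta_L, wavelength, I_max=1.0):
    """Output intensity at the dark port of an ideal Michelson
    interferometer whose arm lengths differ by delta_L.  Light
    traverses each arm twice, so the phase difference between the
    recombined beams is 4*pi*delta_L/wavelength; with the arms
    adjusted for cancellation, intensity grows as sin^2 of half
    that phase when the lengths change."""
    phase = 4 * math.pi * delta_L / wavelength
    return I_max * math.sin(phase / 2) ** 2

lam = 1.064e-6  # infrared laser wavelength in metres (an assumed value)
print(dark_port_intensity(0.0, lam))      # equal arms: perfect cancellation
print(dark_port_intensity(lam / 4, lam))  # quarter-wave difference: full brightness
```

With equal arms the beams cancel and the port is dark; any difference in the path lengths, such as one squeezed and one stretched by a passing gravitational wave, lets light through in proportion to the mismatch.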

From the beginning, it was clear it would not be easy. The field of general relativity (gravitation) had been called “a theorist's dream, an experimenter's nightmare”, and almost everybody working in the area was a theorist: all they needed were blackboards, paper, pencils, and lots of erasers. This was “little science”. As the pioneers began to explore interferometric gravitational wave detectors, it became clear what was needed was “big science”: on the order of large particle accelerators or space missions, with budgets, schedules, staffing, and management comparable to such projects. This was a culture shock to the general relativity community as violent as the astrophysical sources they sought to detect. Between 1971 and 1989, theorists and experimentalists explored detector technologies and built prototypes to demonstrate feasibility. In 1989, a proposal was submitted to the National Science Foundation to build two interferometers, widely separated geographically, with an initial implementation to prove the concept and a subsequent upgrade intended to permit detection of gravitational radiation from anticipated sources. After political battles, in 1995 construction of LIGO, the Laser Interferometer Gravitational-Wave Observatory, began at the two sites located in Livingston, Louisiana and Hanford, Washington, and in 2001, commissioning of the initial detectors began; this would take four years. Between 2005 and 2007 science runs were made with the initial detectors; much was learned about sources of noise and the behaviour of the instrument, but no gravitational waves were detected.

Starting in 2007, based upon what had been learned so far, construction of the advanced interferometer began. This took three years. Between 2010 and 2012, the advanced components were installed, and another three years were spent commissioning them: discovering their quirks, fixing problems, and increasing sensitivity. Finally, in 2015, observations with the advanced detectors began. The sensitivity which had been achieved was astonishing: the interferometers could detect a change in the length of their four kilometre arms which was one ten-thousandth the diameter of a proton (the nucleus of a hydrogen atom). In order to accomplish this, they had to overcome noise sources ranging from distant earthquakes, traffic on nearby highways, and tides raised in the Earth by the Sun and Moon to a multitude of others, via a tower of technology which made the machine, so simple in concept, forbiddingly complex.
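The quoted sensitivity converts, with simple arithmetic, into a dimensionless strain (change in length divided by length); the proton diameter used below is a rounded value of my own choosing:

```python
proton_diameter = 1.7e-15   # approximate proton diameter, m (assumed round value)
arm_length = 4_000.0        # interferometer arm length, m

delta_L = proton_diameter / 10_000  # one ten-thousandth of a proton diameter
strain = delta_L / arm_length       # dimensionless fractional length change
print(f"{strain:.1e}")              # on the order of 4e-23
```

A strain of a few times 10⁻²³ is the kind of number that explains why the tower of noise-cancelling technology was necessary.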

September 14, 2015, 09:51 UTC: Chirp!

A hundred years after the theory that predicted it, 44 years after physicists imagined such an instrument, 26 years after it was formally proposed, 20 years after it was initially funded, a gravitational wave had been detected, and it was right out of the textbook: the merger of two black holes with masses around 29 and 36 times that of the Sun, at a distance of 1.3 billion light years. A total of three solar masses were converted into gravitational radiation: at the moment of the merger, the gravitational radiation emitted was 50 times greater than the light from all of the stars in the universe combined. Despite the stupendous energy released by the source, when it arrived at Earth it could only have been detected by the advanced interferometer which had just been put into service: it would have been missed by the initial instrument and was orders of magnitude below the noise floor of Weber's bar detectors.
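The three solar masses converted into gravitational radiation correspond, via E = mc², to an energy that is easy to compute; the rounded constants below are my addition, not from the book:

```python
c = 2.998e8       # speed of light, m/s
m_sun = 1.989e30  # solar mass, kg

E = 3 * m_sun * c**2  # total energy radiated as gravitational waves, joules
print(f"{E:.2e} J")   # around 5.4e47 J
```

Around 5×10⁴⁷ joules, radiated in a fraction of a second, which is what makes the brief peak luminosity so stupendous.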

For only the third time since proto-humans turned their eyes to the sky, a new channel of information about the universe we inhabit was opened. Most of what we know comes from electromagnetic radiation: light, radio, microwaves, gamma rays, etc. In the 20th century, a second channel opened: particles. Cosmic rays and neutrinos allow exploring energetic processes we cannot observe in any other way. In a real sense, neutrinos let us look inside the Sun and into the heart of supernovæ and see what's happening there. And just last year the third channel opened: gravitational radiation. The universe is almost entirely transparent to gravitational waves: that's why they're so difficult to detect. But that means they allow us to explore the universe at its most violent: collisions and mergers of neutron stars and black holes—objects where gravity dominates the forces of the placid universe we observe through telescopes. What will we see? What will we learn? Who knows? If experience is any guide, we'll see things we never imagined and learn things even the theorists didn't anticipate. The game is afoot! It will be a fine adventure.

Black Hole Blues is the story of gravitational wave detection, largely focusing upon LIGO and told through the eyes of Rainer Weiss and Kip Thorne, two of the principals in its conception and development. It is an account of the transition of a field of research from a theorist's toy to Big Science, and the cultural, management, and political problems that involves. There are few examples in experimental science where so long an interval has elapsed, and so much funding expended, between the start of a project and its detecting the phenomenon it was built to observe. The road was bumpy, and that is documented here.

I found the author's tone off-putting. She, a theoretical cosmologist at Barnard College, dismisses scientists with achievements which dwarf her own and ideas which differ from hers in the way one expects from Social Justice Warriors in the squishier disciplines at the Seven Sisters: “the notorious Edward Teller”, “Although Kip [Thorne] outgrew the tedious moralizing, the sexism, and the religiosity of his Mormon roots”, (about Joseph Weber) “an insane, doomed, impossible bar detector designed by the old mad guy, crude laboratory-scale slabs of metal that inspired and encouraged his anguished claims of discovery”, “[Stephen] Hawking made his oddest wager about killer aliens or robots or something, which will not likely ever be resolved, so that might turn out to be his best bet yet”, (about Richard Garwin) “He played a role in halting the Star Wars insanity as well as potentially disastrous industrial escalations, like the plans for supersonic airplanes…”, and “[John Archibald] Wheeler also was not entirely against the House Un-American Activities Committee. He was not entirely against the anticommunist fervor that purged academics from their ivory-tower ranks for crimes of silence, either.” … “I remember seeing him at the notorious Princeton lunches, where visitors are expected to present their research to the table. Wheeler was royalty, in his eighties by then, straining to hear with the help of an ear trumpet. (Did I imagine the ear trumpet?)”. There are also a number of factual errors (for example, the claim that a breach in the LIGO beam tube would suck all of the air out of its enclosure and suffocate anybody inside), which a moment's calculation would have shown to be absurd.

The book was clearly written with the intention of being published before the first detection of a gravitational wave by LIGO. The entire story of the detection, its validation, and public announcement is jammed into a seven-page epilogue tacked onto the end. This epochal discovery deserves treatment at much greater length.

 Permalink

Eggers, Dave. The Circle. New York: Alfred A. Knopf, 2013. ISBN 978-0-345-80729-8.
There have been a number of novels, many in recent years, which explore the possibility of human society being taken over by intelligent machines. Some depict the struggle between humans and machines, others envision a dystopian future in which the machines have triumphed, and a few explore the possibility that machines might create a “new operating system” for humanity which works better than the dysfunctional social and political systems extant today. This novel goes off in a different direction: what might happen, without artificial intelligence, but in an era of exponentially growing computer power and data storage capacity, if an industry leading company with tendrils extending into every aspect of personal interaction and commerce worldwide, decided, with all the best intentions, “What the heck? Let's be evil!”

Mae Holland had done everything society had told her to do. One of only twelve of the 81 graduates of her central California high school to go on to college, she'd been accepted by a prestigious college and graduated with a degree in psychology and massive student loans she had no prospect of paying off. She'd ended up moving back in with her parents and taking a menial cubicle job at the local utility company, working for a creepy boss. In frustration and desperation, Mae reaches out to her former college roommate, Annie, who has risen to an exalted position at the hottest technology company on the globe: The Circle. The Circle had started by creating the Unified Operating System, which combined all aspects of users' interactions—social media, mail, payments, user names—into a unique and verified identity called TruYou. (Wonder where they got that idea?)

Before long, anonymity on the Internet was a thing of the past as merchants and others recognised the value of knowing their customers and of information collected across their activity on all sites. The Circle and its associated businesses supplanted existing sites such as Google, Facebook, and Twitter, and with the tight integration provided by TruYou, created new kinds of interconnection and interaction not possible when information was Balkanised among separate sites. With the end of anonymity, spam and fraudulent schemes evaporated, and with all posters personally accountable, discussions became civil and trolls slunk back under the bridge.

With an effective monopoly on electronic communication and commercial transactions (if everybody uses TruYou to pay, what option does a merchant have but to accept it and pay The Circle's fees?), The Circle was assured a large, recurring, and growing revenue stream. With the established businesses generating so much cash, The Circle invested heavily in research and development of new technologies: everything from sustainable housing, access to DNA databases, crime prevention, to space applications.

Mae's initial job was far more mundane. In Customer Experience, she was more or less working in a call centre, except her communications with customers were over The Circle's message services. The work was nothing like that at the utility company, however. Her work was monitored in real time, with a satisfaction score computed from follow-up surveys of clients. To advance, she needed a score near 100, and had to follow up any score less than that to satisfy the customer and obtain a perfect rating. On a second screen, internal “zing” messages informed her of activity on the campus, and she was expected to respond and contribute.

As she advances within the organisation, Mae begins to comprehend the scope of The Circle's ambitions. One of the founders unveils a plan to make always-on cameras and microphones available at very low cost, which people can install around the world. All the feeds will be accessible in real time and archived forever. A new slogan is unveiled: “All that happens must be known.”

At a party, Mae meets a mysterious character, Kalden, who appears to have access to parts of The Circle's campus unknown to her associates and yet doesn't show up in the company's exhaustive employee social networks. Her encounters and interactions with him become increasingly mysterious.

Mae moves up, and is chosen to participate to a greater extent in the social networks, and to rate products and ideas. All of this activity contributes to her participation rank, computed and displayed in real time. She swallows a sensor which will track her health and vital signs in real time, display them on a wrist bracelet, and upload them for analysis and early warning diagnosis.

Eventually, she volunteers to “go transparent”: wear a body camera and microphone every waking moment, and act as a window into The Circle for the general public. The company had pushed transparency for politicians, and now was ready to deploy it much more widely.

Secrets Are Lies
Sharing Is Caring
Privacy Is Theft

To Mae's family and few remaining friends outside The Circle, this all seems increasingly bizarre: as if the fastest growing and most prestigious high technology company in the world has become a kind of grotesque cult which consumes the lives of its followers and aspires to become universal. Mae loves her sense of being connected, the interaction with a worldwide public, and thinks it is just wonderful. The Circle internally tests and begins to roll out a system of direct participatory democracy to replace existing political institutions. Mae is there to report it. A plan to put an end to most crime is unveiled: Mae is there.

The Circle is closing. Mae is contacted by her mysterious acquaintance, and presented with a moral dilemma: she has become a central actor on the stage of a world which is on the verge of changing, forever.

This is a superbly written story which I found both realistic and chilling. You don't need artificial intelligence or malevolent machines to create an eternal totalitarian nightmare. All it takes is a few years' growth and wider deployment of technologies which exist today, combined with good intentions, boundless ambition, and fuzzy thinking. And the latter three commodities are abundant among today's technology powerhouses.

Lest you think the technologies which underlie this novel are fantasy or far in the future, they were discussed in detail in David Brin's 1999 The Transparent Society and my 1994 “Unicard” and 2003 “The Digital Imprimatur”. All that has changed is that the massive computing, communication, and data storage infrastructure envisioned in those works now exists or will within a few years.

What should you fear most? Probably the millennials who will read this and think, “Wow! This will be great.” “Democracy is mandatory here!”

 Permalink

Miller, Roland. Abandoned in Place. Albuquerque: University of New Mexico Press, 2016. ISBN 978-0-8263-5625-3.
Between 1945 and 1970 humanity expanded from the surface of Earth into the surrounding void, culminating in 1969 with the first landing on the Moon. Centuries from now, when humans and their descendants populate the solar system and exploit resources dwarfing those of the thin skin and atmosphere of the home planet, these first steps may be remembered as the most significant event of our age, with all of the trivialities that occupy our quotidian attention forgotten. Not only were great achievements made, but grand structures built on Earth to support them; these may be looked upon in the future as we regard the pyramids or the great cathedrals.

Or maybe not. The launch pads, gantry towers, assembly buildings, test facilities, blockhouses, bunkers, and control centres were not built as monuments for the ages, but rather to accomplish time-sensitive goals under tight budgets, by the lowest bidder, and at the behest of a government famous for neglecting infrastructure. Once the job was done, the mission accomplished, and the program concluded, the facilities that supported it were simply left at the mercy of the elements which, in locations like coastal Florida, immediately began to reclaim them. Indeed, half of the facilities pictured here no longer exist.

For more than two decades, author and photographer Roland Miller has been documenting this heritage before it succumbs to rust, crumbling concrete, and invasive vegetation. With unparalleled access to the sites, he has assembled this gallery of these artefacts of a great age of exploration. In a few decades, this may be all we'll have to remember them. Although there is rudimentary background information from a variety of authors, this is a book of photography, not a history of the facilities. In some cases, unless you know from other sources what you're looking at, you might interpret some of the images as abstract.

The hardcover edition is a “coffee table book”: large format and beautifully printed, with a corresponding price. The Kindle edition is, well, a Kindle book, and grossly overpriced for 193 pages with screen-resolution images and a useless index consisting solely of search terms.

A selection of images from the book may be viewed on the Abandoned in Place Web site.

 Permalink

Buckley, Christopher. The Relic Master. New York: Simon & Schuster, 2015. ISBN 978-1-5011-2575-1.
The year is 1517. The Holy Roman Empire sprawls across central Europe, from the Mediterranean in the south to the North Sea and Baltic in the north, from the Kingdom of France in the west to the Kingdoms of Poland and Hungary in the east. In reality the structure of the empire is so loose and complicated it defies easy description: independent kings, nobility, and prelates all have their domains of authority, and occasionally go to war against one another. Although the Reformation is about to burst upon the scene, the Roman Catholic Church is supreme, and religion is big business. In particular, the business of relics and indulgences.

Commit a particularly heinous sin? If you're sufficiently well-heeled, you can obtain an indulgence through prayer, good works, or making a pilgrimage to a holy site. Over time, “good works” increasingly meant, for the prosperous, making a contribution to the treasury of the local prince or prelate, a percentage of which was kicked up to higher-ranking clergy, all the way to Rome. Or, an enterprising noble or churchman could collect relics such as the toe bone of a saint, a splinter from the True Cross, or a lock of hair from one of the camels the Magi rode to Bethlehem. Pilgrims would pay a fee to see, touch, have their sins erased, and be healed by these holy trophies. In short, the indulgence and relic business was selling “get out of purgatory for a price”. The very best businesses are those in which the product is delivered only after death—you have no problems with dissatisfied customers.

To flourish in this trade, you'll need a collection of relics, all traceable to trustworthy sources. Relics were in great demand, and demand summons supply into being. All the relics of the True Cross, taken together, would have required the wood from a medium-sized forest, and even the most sacred and unique of relics, the burial shroud of Christ, was on display in several different locations. It's the “trustworthy” part that's difficult, and that's where Dismas comes in. A former Swiss mercenary, his resourcefulness in obtaining relics had led to his appointment as Relic Master to His Grace Albrecht, Archbishop of Brandenburg and Mainz, and also to Frederick the Wise, Elector of Saxony. These two customers were rivals in the relic business, allowing Dismas to play one against the other to his advantage. After visiting the Basel Relic Fair and obtaining some choice merchandise, he visits his patrons to exchange them for gold. While visiting Frederick, he hears that a monk has nailed ninety-five denunciations of the Church, including the sale of indulgences, to the door of the castle church. This is interesting, but potentially bad for business.

Dismas meets his friend, Albrecht Dürer, whom he calls “Nars” due to Dürer's narcissism: among other things, he includes his own visage in most of his paintings. After months in the south hunting relics, he returns to visit Dürer and learns that the Swiss banker with whom he's deposited his fortune has been found to be a 16th century Bernie Madoff, and that all he has left is the money on his person.

Destitute, Dismas and Dürer devise a scheme to get back into the game. This launches them into a romp across central Europe visiting the castles, cities, taverns, dark forbidding forests, dungeons, and courts of nobility. We encounter historical figures including Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus), who lends his scientific insight to the effort. All of this is recounted with the mix of wry and broad humour which Christopher Buckley uses so effectively in all of his novels. There is a tableau of the Last Supper, identity theft, and bombs. An appendix gives background on the historical figures who appear in the novel.

This is a pure delight and illustrates how versatile is the talent of the author. Prepare yourself for a treat; this novel delivers. Here is an interview with the author.

 Permalink

Red Eagle, John and Vox Day [Theodore Beale]. Cuckservative. Kouvola, Finland: Castalia House, 2015. ASIN B018ZHHA52.
Yes, I have read it. So read me out of the polite genteel “conservative” movement. But then I am not a conservative. Further, I enjoyed it. The authors say things forthrightly that many people think and maybe express in confidence to their like-minded friends, but reflexively cringe upon even hearing in public. Even more damning, I found it enlightening on a number of topics, and I believe that anybody who reads it dispassionately is likely to find it the same. And finally, I am reviewing it. I have reviewed (or noted) every book I have read since January of 2001. Should I exclude this one because it makes some people uncomfortable? I exist to make people uncomfortable. And so, onward….

The authors have been called “racists”, which is rather odd since both are of Native American ancestry and Vox Day also has Mexican ancestors. Those who believe ancestry determines all will have to come to terms with the fact that these authors defend the values which largely English settlers brought to America, and were the foundation of American culture until it all began to come apart in the 1960s.

In the view of the authors, as explained in chapter 4, the modern conservative movement in the U.S. dates from the 1950s. Before that time both the Democrat and Republican parties contained politicians and espoused policies which were both conservative and progressive (with the latter word used in the modern sense), often with regional differences. Starting with the progressive era early in the 20th century and dramatically accelerating during the New Deal, the consensus in both parties was centre-left liberalism (with “liberal” defined in the corrupt way it is used in the U.S.): a belief in a strong central government, social welfare programs, and active intervention in the economy. This view was largely shared by Democrat and Republican leaders, many of whom came from the same patrician class in the Northeast. At its outset, the new conservative movement, with intellectual leaders such as Russell Kirk and advocates like William F. Buckley, Jr., was outside the mainstream of both parties, but more closely aligned with the Republicans due to their wariness of big government. (But note that the Eisenhower administration made no attempt to roll back the New Deal, and thus effectively ratified it.)

They argue that since the new conservative movement was a coalition of disparate groups such as libertarians, isolationists, southern agrarians, as well as ex-Trotskyites and former Communists, it was an uneasy alliance, and in forging it Buckley and others believed it was essential that the movement be seen as socially respectable. This led to a pattern of conservatives ostracising those who they feared might call down the scorn of the mainstream press upon them. In 1957, a devastating review of Atlas Shrugged by Whittaker Chambers marked the break with Ayn Rand's Objectivists, and in 1962 Buckley denounced the John Birch Society and read it out of the conservative movement. This established a pattern which continues to the present day: when an individual or group is seen as sufficiently radical that they might damage the image of conservatism as defined by the New York and Washington magazines and think tanks, they are unceremoniously purged and forced to find a new home in institutions viewed with disdain by the cultured intelligentsia. As the authors note, this is the exact opposite of the behaviour of the Left, which fiercely defends its most radical extremists. Today's Libertarian Party largely exists because its founders were purged from conservatism in the 1970s.

The search for respectability and the patient construction of conservative institutions were successful in aligning the Republican party with the new conservatism. This first manifested itself in the nomination of Barry Goldwater in 1964. Following his disastrous defeat, conservatives continued their work, culminating in the election of Ronald Reagan in 1980. But even then, and in the years that followed, including congressional triumphs in 1994, 2010, and 2014, Republicans continued to behave as a minority party: acting only to slow the rate of growth of the Left's agenda rather than roll it back and enact their own. In the words of the authors, they are “calling for the same thing as the left, but less of it and twenty years later”.

The authors call these Republicans “cuckservative” or “cuck” for short. The word is a portmanteau of “cuckold” and “conservative”. “Cuckold” dates back to A.D. 1250, and means the husband of an unfaithful wife, or a weak and ineffectual man. Voters who elect these so-called conservatives are cuckolded by them, as through their fecklessness and willingness to go along with the Left, they bring into being and support the collectivist agenda which they were elected to halt and roll back. I find nothing offensive in the definition of this word, but I don't like how it sounds—in part because it rhymes with an obscenity which has become an all-purpose word in the vocabulary of the Left and, increasingly, the young. Using the word induces a blind rage in some of those to whom it is applied, which may be its principal merit.

But this book, despite bearing it as a title, is not about the word: only three pages are devoted to defining it. The bulk of the text is devoted to what the authors believe are the central issues facing the U.S. at present and an examination of how those calling themselves conservatives have ignored, compromised away, or sold out the interests of their constituents on each of these issues, including immigration and the consequences of a change in demographics toward those with no experience of the rule of law, the consequences of mass immigration on workers in domestic industries, globalisation and the flight of industries toward low-wage countries, how immigration has caused other societies in history to lose their countries, and how mainstream Christianity has been subverted by the social justice agenda and become an ally of the Left at the same time its pews are emptying in favour of evangelical denominations. There is extensive background information about the history of immigration in the United States, the bizarre “Magic Dirt” theory (that, for example, transplanting a Mexican community across the border will, simply by changing its location, transform its residents, in time, into Americans or, conversely, that “blighted neighbourhoods” are so because there's something about the dirt [or buildings] rather than the behaviour of those who inhabit them), and the overwhelming and growing scientific evidence for human biodiversity and the coming crack-up of the “blank slate” dogma. If the Left continues to tighten its grip upon the academy, we can expect to see research in this area be attacked as dissent from the party line on climate science is today.

This is an excellent book: well written, argued, and documented. For those who have been following these issues over the years and observed the evolution of the conservative movement over the decades, there may not be much here that's new, but it's all tied up into one coherent package. For the less engaged who've just assumed that by voting for Republicans they were advancing the conservative cause, this may prove a revelation. If you're looking to find racism, white supremacy, fascism, authoritarianism, or any of the other epithets hurled against the dissident right, you won't find them here unless, as the Left does, you define the citation of well-documented facts as those things. What you will find is two authors who love America and believe that American policy should put the interests of Americans before those of others, and that politicians elected by Americans should be expected to act in their interest. If politicians call themselves “conservatives”, they should act to conserve what is great about America, not compromise it away in an attempt to, at best, delay the date their constituents are delivered into penury and serfdom.

You may have to read this book being careful nobody looks over your shoulder to see what you're reading. You may have to never admit you've read it. You may have to hold your peace when somebody goes on a rant about the “alt-right”. But read it, and judge for yourself. If you believe the facts cited are wrong, do the research, refute them with evidence, and publish a response (under a pseudonym, if you must). But before you reject it based upon what you've heard, read it—it's only five bucks—and make up your own mind. That's what free citizens do.

As I have come to expect in publications from Castalia House, the production values are superb. There are only a few (I found just three) copy editing errors. At present the book is available only in Kindle and Audible audiobook editions.

 Permalink

Steele, Allen. Arkwright. New York: Tor, 2016. ISBN 978-0-7653-8215-3.
Nathan Arkwright was one of the “Big Four” science fiction writers of the twentieth century, along with Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein. Launching his career in the Golden Age of science fiction, he created the Galaxy Patrol space adventures, with 17 novels from 1950 to 1988, a radio drama, television series, and three movies. The royalties from his work made him a wealthy man. He lived quietly in his home in rural Massachusetts, dying in 2006.

Arkwright was estranged from his daughter and granddaughter, Kate Morressy, a freelance science journalist. Kate attends the funeral and meets Nathan's long-term literary agent, Margaret (Maggie) Krough, science fiction writer Harry Skinner, and George Hallahan, a research scientist long involved with military and aerospace projects. After the funeral, the three meet with Kate, and Maggie explains that Arkwright's will bequeaths all of his assets including future royalties from his work to the non-profit Arkwright Foundation, which Kate is asked to join as a director representing the family. She asks the mission of the foundation, and Maggie responds by saying it's a long and complicated story which is best answered by her reading the manuscript of Arkwright's unfinished autobiography, My Life in the Future.

It is some time before Kate gets around to reading the manuscript. When she does, she finds herself immersed in the Golden Age of science fiction, as her father recounts attending the first World's Science Fiction Convention in New York in 1939. An avid science fiction fan and aspiring writer, Arkwright rubs elbows with figures he'd known only as names in magazines such as Fred Pohl, Don Wollheim, Cyril Kornbluth, Forrest Ackerman, and Isaac Asimov. Quickly learning that at a science fiction convention it isn't just elbows that rub but also egos, he runs afoul of one of the clique wars that are incomprehensible to those outside of fandom and finds himself ejected from the convention, sitting down for a snack at the Automat across the street with fellow banished fans Maggie, Harry, and George. The four discuss their views of the state of science fiction and their ambitions, and pledge to stay in touch. Any group within fandom needs a proper name, and after a brief discussion “The Legion of Tomorrow” was born. It would endure for decades.

The manuscript comes to an end, leaving Kate still in 1939. She then meets in turn with the other three surviving members of the Legion, who carry the story through Arkwright's long life, and describe the events which shaped his view of the future and the foundation he created. Finally, Kate is ready to hear the mission of the foundation—to make the future Arkwright wrote about during his career a reality—to move humanity off the planet and enter the era of space colonisation, and not just the planets but, in time, the stars. And the foundation will be going it alone. As Harry explains (p. 104), “It won't be made public, and there won't be government involvement either. We don't want this to become another NASA project that gets scuttled because Congress can't get off its dead ass and give it decent funding.”

The strategy is to bet on the future: invest in the technologies which will be needed for and will profit from humanity's expansion from the home planet, and then reinvest the proceeds in research and development and new generations of technology and enterprises as space development proceeds. Nobody expects this to be a short-term endeavour: decades or generations may be required before the first interstellar craft is launched, but the structure of the foundation is designed to persist for however long it takes. Kate signs on, “Forward the Legion.”

So begins a grand, multi-generation saga chronicling humanity's leap to the stars. Unlike many tales of interstellar flight, no arm-waving about faster than light warp drives or other technologies requiring new physics is invoked. Based upon information presented at the DARPA/NASA 100 Year Starship Symposium in 2011 and the 2013 Starship Century conference, the author uses only technologies based upon well-understood physics which, if economic growth continues on the trajectory of the last century, are plausible for the time in the future at which the story takes place. And lest interstellar travel and colonisation be dismissed as wasteful, no public resources are spent on it: coercive governments have neither the imagination nor the attention span to achieve such grand and long-term goals. And you never know how important the technological spin-offs from such a project may prove in the future.

As noted, the author is scrupulous in using only technologies consistent with our understanding of physics and biology and plausible extrapolations of present capabilities. There are a few goofs, which I'll place behind the curtain since some are plot spoilers.

Spoiler warning: Plot and/or ending details follow.  
On p. 61, a C-53 transport plane is called a Dakota. The C-53 is a troop transport variant of the C-47, referred to as the Skytrooper. But since the planes were externally almost identical, the observer may have confused them. “Dakota” was the RAF designation for the C-47; the U.S. Army Air Forces called it the Skytrain.

On the same page, planes arrive from “Kirtland Air Force Base in Texas”. At the time, the facility would have been called “Kirtland Field”, part of the Albuquerque Army Air Base, which is located in New Mexico, not Texas. It was not renamed Kirtland Air Force Base until 1947.

In the description of the launch of Apollo 17 on p. 71, after the long delay, the count is recycled to T−30 seconds. That isn't how it happened. After the cutoff in the original countdown at thirty seconds, the count was recycled to the T−22 minute mark, and after the problem was resolved, resumed from there. There would have been plenty of time for people who had given up and gone to bed to be awakened when the countdown was resumed and observe the launch.

On p. 214, we're told the Doppler effect of the ship's velocity “caused the stars around and in front of the Galactique to redshift”. In fact, the stars in front of the ship would be blueshifted, while those behind it would be redshifted.

On p. 230, the ship, en route, is struck by a particle of interstellar dust which is described as “not much larger than a piece of gravel”, which knocks out communications with the Earth. Let's assume it wasn't the size of a piece of gravel, but only that of a grain of sand, which is around 20 milligrams. The energy released in the collision with the grain of sand is 278 gigajoules, or 66 tons of TNT. The damage to the ship would have been catastrophic, not something readily repaired.
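The arithmetic behind these figures is easy to check; the 20 milligram mass and the closing speed of half the speed of light are the assumptions stated above, not values from the novel.

```python
# Relativistic kinetic energy of a 20 mg dust grain at half the speed of light.
import math

c = 299_792_458.0   # speed of light, m/s
m = 20e-6           # grain mass: 20 milligrams, in kg
v = 0.5 * c         # closing speed assumed in the text above

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
ke_joules = (gamma - 1.0) * m * c ** 2        # relativistic kinetic energy
ke_tons_tnt = ke_joules / 4.184e9             # one ton of TNT = 4.184e9 joules

print(f"{ke_joules / 1e9:.0f} GJ, about {ke_tons_tnt:.0f} tons of TNT")
# → 278 GJ, about 66 tons of TNT
```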

On the same page, “By the ship's internal chronometer, the repair job probably only took a few days, but time dilation made it seem much longer to observers back on Earth.” Nope—at half the speed of light, time dilation is only 15%. Three days' ship's time would be less than three and a half days on Earth.
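Here the check is a one-liner: the Lorentz factor at half the speed of light.

```python
# Time dilation at v = 0.5 c: the Lorentz factor is only about 1.15.
import math

beta = 0.5                                  # v/c
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)    # Lorentz factor

ship_days = 3.0
earth_days = ship_days * gamma              # elapsed time for observers on Earth

print(f"gamma = {gamma:.4f}; {ship_days:.0f} ship days = {earth_days:.2f} Earth days")
# → gamma = 1.1547; 3 ship days = 3.46 Earth days
```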

On p. 265, “the DNA of its organic molecules was left-handed, which was crucial to the future habitability…”. What's important isn't the handedness of DNA, but rather the chirality of the organic molecules used in cells. The chirality of DNA is many levels above this fundamental property of biochemistry and, in fact, the DNA helix of terrestrial organisms is right-handed. (The chirality of DNA actually depends upon the nucleotide sequence, and there is a form, called Z-DNA, in which the helix is left-handed.)

Spoilers end here.  

This is an inspiring and very human story, with realistic and flawed characters, venal politicians, unanticipated adversities, and a future very different than envisioned by many tales of the great human expansion, even those by the legendary Nathan Arkwright. It is an optimistic tale of the human future, grounded in the achievements of individuals who build it, step by step, in the unbounded vision of the Golden Age of science fiction. It is ours to make reality.

Here is a podcast interview with the author by James Pethokoukis.

 Permalink

Holt, George, Jr. The B-58 Blunder. Randolph, VT: George Holt, 2015. ISBN 978-0-692-47881-3.
The B-58 Hustler was a breakthrough aircraft. The first generation of U.S. Air Force jet-powered bombers—the B-47 medium and B-52 heavy bombers—were revolutionary for their time, but were becoming increasingly vulnerable to high-performance interceptor aircraft and anti-aircraft missiles on the deep penetration bombing missions within the communist bloc for which they were intended. In the 1950s, it was believed the best way to reduce the threat was to fly fast and at high altitude, with a small aircraft that would be more difficult to detect with radar.

Preliminary studies of a next generation bomber began in 1949, and in 1952 Convair was selected to develop a prototype of what would become the B-58. Using a delta wing and four turbojet engines, the aircraft could cruise at up to twice the speed of sound (Mach 2, 2450 km/h) with a service ceiling of 19.3 km. With a small radar cross-section compared to the enormous B-52 (although still large compared to present-day stealth designs), the idea was that flying so fast and at high altitude, by the time an enemy radar site detected the B-58, it would be too late to scramble an interceptor to attack it. Contemporary anti-aircraft missiles lacked the capability to down targets at its altitude and speed.

The first flight of a prototype was in November 1956, and after a protracted development and test program, plagued by problems due to its radical design, the bomber entered squadron service in March of 1960. Rising costs caused the number purchased to be scaled back to just 116 (by comparison, 2,032 B-47s and 744 B-52s were built), deployed in two Strategic Air Command (SAC) bomber wings.

The B-58 was built to deliver nuclear bombs. Originally, it carried one B53 nine megaton weapon mounted below the fuselage. Subsequently, the ability to carry four B43 or B61 bombs on hardpoints beneath the wings was added. The B43 and B61 were variable yield weapons, with the B43 providing yields from 70 kilotons to 1 megaton and the B61 300 tons to 340 kilotons. The B-58 was not intended to carry conventional (non-nuclear, high explosive) bombs, and although some studies were done of conventional missions, its limited bomb load would have made it uncompetitive with other aircraft. Defensive weaponry was a single 20 mm radar-guided cannon in the tail. This was a last-ditch option: the B-58 was intended to outrun attackers, not fight them off. The crew of three consisted of a pilot, bombardier/navigator, and a defensive systems operator (responsible for electronic countermeasures [jamming] and the tail gun), each in their own cockpit with an ejection capsule. The navigation and bombing system included an inertial navigation platform with a star tracker for correction, a Doppler radar, and a search radar. The nuclear weapon pod beneath the fuselage could be replaced with a pod for photo reconnaissance. Other pods were considered, but never developed.

The B-58 was not easy to fly. Its delta wing required high takeoff and landing speeds, and a steep angle of attack (nose-up attitude), but if the pilot allowed the nose to rise too high, the aircraft would pitch up and spin. Loss of an engine, particularly one of the outboard engines, was, as they say, a very dynamic event, requiring instant response to counter the resulting yaw. During its operational history, a total of 26 B-58s were lost in accidents: 22.4% of the fleet.

During its ten years in service, no operational bomber equalled or surpassed the performance of the B-58. It set nineteen speed records, some of which still stand today, and won prestigious awards for its achievements. It was a breakthrough, but ultimately a dead end: no subsequent operational bomber has exceeded its performance in speed and altitude, but that's because speed and altitude were judged insufficient to accomplish the mission. With the introduction of supersonic interceptors and high-performance anti-aircraft missiles by the Soviet Union, the B-58 was determined to be vulnerable in its original supersonic, high-altitude mission profile. Crews were retrained to fly penetration missions at near-supersonic speeds and very low altitude, making it difficult for enemy radar to acquire and track the bomber. Although it was not equipped with terrain-following radar like the B-52, an accurate radar altimeter allowed crews to perform these missions. The large, rigid delta wing made the B-58 relatively immune to turbulence at low altitudes. Still, abandoning the supersonic attack profile meant that many of the capabilities which made the B-58 so complicated and expensive to operate and maintain were wasted.

This book is the story of the decision to retire the B-58, told by a crew member and Pentagon staffer who strongly dissented and argues that the B-58 should have remained in service much longer. George “Sonny” Holt, Jr. served for thirty-one years in the U.S. Air Force, retiring with the rank of colonel. For three years he was a bombardier/navigator on a B-58 crew and later, in the Plans Division at the Pentagon, observed the process which led to the retirement of the bomber close-up, doing his best to prevent it. He would disagree with many of the comments about the disadvantages of the aircraft mentioned in previous paragraphs, and addresses them in detail. In his view, the retirement of the B-58 in 1970, when it had been originally envisioned as remaining in the fleet until the mid-1970s, was part of a deal by SAC, which offered the retirement of all of the B-58s in return for retaining four B-52 wings which were slated for retirement. He argues that SAC never really wanted to operate the B-58, and that they did not understand its unique capabilities. With such a small fleet, it did not figure large in their view of the bomber force (although with its large nuclear weapon load, it actually represented about half the yield of the bomber leg of the strategic triad).

He provides an insider's perspective on Pentagon politics, and how decisions are made at high levels, often without input from those actually operating the weapon systems. He disputes many of the claimed disadvantages of the B-58 and, in particular, argues that it performed superbly in the low-level penetration mission, something for which it was not designed.

What is not discussed is the competition posed to manned bombers of all kinds in the nuclear mission by the Minuteman missile, which began to be deployed in 1962. By June 1965, 800 missiles were on alert, each with a 1.2 megaton W56 warhead. Solid-fueled missiles like the Minuteman require little maintenance and are ready to launch immediately at any time. Unlike bombers, where one worries about the development of interceptor aircraft and surface-to-air missiles, no defense against a mass missile attack existed or was expected to be developed in the foreseeable future. A missile in a silo required only a small crew of launch and maintenance personnel, as opposed to the bomber which had flight crews, mechanics, a spare parts logistics infrastructure, and had to be supported by refueling tankers with their own overhead. From the standpoint of cost-effectiveness, a word very much in use in the 1960s Pentagon, the missiles, which were already deployed, were dramatically better than any bomber, and especially the most expensive one in the inventory. The bomber generals in SAC were able to save the B-52, and were willing to sacrifice the B-58 in order to do so.

The book is self-published by the author and is sorely in need of the attention of a copy editor. There are numerous spelling and grammatical errors, and nouns are capitalised in the middle of sentences for no apparent reason. There are abundant black and white illustrations from Air Force files.

 Permalink

Gott, J. Richard. The Cosmic Web. Princeton: Princeton University Press, 2016. ISBN 978-0-691-15726-9.
Some works of popular science, trying to impress the reader with the scale of the universe and the insignificance of humans on the cosmic scale, argue that there's nothing special about our place in the universe: “an ordinary planet orbiting an ordinary star, in a typical orbit within an ordinary galaxy”, or something like that. But this is wrong! Surfaces of planets make up a vanishingly small fraction of the volume of the universe, and habitable planets, where beings like ourselves are neither frozen nor fried by extremes of temperature, nor suffocated or poisoned by a toxic atmosphere, are rarer still. The Sun is far from an ordinary star: it is brighter than 85% of the stars in the galaxy, and only 7.8% of stars in the Milky Way share its spectral class. Fully 76% of stars are dim red dwarfs, the heavens' own 25 watt bulbs.

What does a typical place in the universe look like? What would you see if you were there? Well, first of all, you'd need a space suit and air supply, since the universe is mostly empty. And you'd see nothing. Most of the volume of the universe consists of great voids with few galaxies. If you were at a typical place in the universe, you'd be in one of these voids, probably far enough from the nearest galaxy that it wouldn't be visible to the unaided eye. There would be no stars in the sky, since stars are only formed within galaxies. There would only be darkness. Now look out the window: you are in a pretty special place after all.

One of the great intellectual adventures of the last century is learning our place in the universe and coming to understand its large scale structure. This book, by an astrophysicist who has played an important role in discovering that structure, explains how we pieced together the evidence and came to learn the details of the universe we inhabit. It provides an insider's look at how astronomers tease insight out of the messy and often confusing data obtained from observation.

It's remarkable not just how much we've learned, but how recently we've come to know it. At the start of the 20th century, most astronomers believed the solar system was part of a disc of stars which we see as the Milky Way. In 1610, Galileo's telescope revealed that the Milky Way was made up of a multitude of faint stars, and since the galaxy makes a band all around the sky, that the Sun must be within it. In 1918, by observing variable stars in globular clusters which orbit the Milky Way, Harlow Shapley was able to measure the size of the galaxy, which proved much larger than previously estimated, and determine that the Sun was about half way from the centre of the galaxy to its edge. Still, the universe was the galaxy.

There remained the mystery of the “spiral nebulæ”. These faint smudges of light had been revealed by photographic time exposures through large telescopes to be discs, some with prominent spiral arms, viewed from different angles. Some astronomers believed them to be gas clouds within the galaxy, perhaps other solar systems in the process of formation, while others argued they were galaxies like the Milky Way, far distant in the universe. In 1920 a great debate pitted the two views against one another, concluding that insufficient evidence existed to decide the matter.

That evidence would not be long in coming. Shortly thereafter, using the new 100 inch telescope on Mount Wilson in California, Edwin Hubble was able to photograph the Andromeda Nebula and resolve it into individual stars. Just as Galileo had done three centuries earlier for the Milky Way, Hubble's photographs proved Andromeda was not a gas cloud, but a galaxy composed of a multitude of stars. Further, Hubble was able to identify variable stars which allowed him to estimate its distance: due to details about the stars which were not understood at the time, he underestimated the distance by about a factor of two, but it was clear the galaxy was far beyond the Milky Way. The distances to other nearby galaxies were soon measured.

In one leap, the scale of the universe had become breathtakingly larger. Instead of one galaxy comprising the universe, the Milky Way was just one of a multitude of galaxies scattered around an enormous void. When astronomers observed the spectra of these galaxies, they noticed something odd: spectral lines from stars in most galaxies were shifted toward the red end of the spectrum compared to those observed on Earth. This was interpreted as a Doppler shift due to the galaxy's moving away from the Milky Way. Between 1929 and 1931, Edwin Hubble measured the distances and redshifts of a number of galaxies and discovered there was a linear relationship between the two. A galaxy twice as distant as another would be receding at twice the velocity. The universe was expanding, and every galaxy (except those sufficiently close to be gravitationally bound) was receding from every other galaxy.
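That linear relationship is now written as Hubble's law, v = H₀ × d. A toy illustration (the constant used here is roughly the modern value, about 70 km/s per megaparsec, assumed for the example; Hubble's own 1929 estimate was several times larger):

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
H0 = 70.0   # km/s per megaparsec (approximate modern value, for illustration)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s of a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

print(recession_velocity(100.0))   # → 7000.0 (km/s)
print(recession_velocity(200.0))   # twice as distant, twice as fast: 14000.0
```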

The discovery of the redshift-distance relationship provided astronomers a way to chart the cosmos in three dimensions. Plotting the position of a galaxy on the sky and measuring its distance via redshift allowed building up a model of how galaxies were distributed in the universe. Were they randomly scattered, or would patterns emerge, suggesting larger-scale structure?

Galaxies had been observed to cluster: the nearest cluster, in the constellation Virgo, is made up of at least 1300 galaxies, and is now known to be part of a larger supercluster of which the Milky Way is an outlying member. It was not until the 1970s and 1980s that large-scale redshift surveys allowed plotting the positions of galaxies in the universe, initially in thin slices, and eventually in three dimensions. What was seen was striking. Galaxies were not sprinkled at random through the universe, but seemed to form filaments and walls, with great voids containing few or no galaxies. How did this come to be?

In parallel with this patient observational work, theorists were working out the history of the early universe based upon increasingly precise observations of the cosmic microwave background radiation, which provides a glimpse of the universe just 380,000 years after the Big Bang. This ushered in the era of precision cosmology, where the age and scale of the universe were determined with great accuracy, and the tiny fluctuations in temperature of the early universe were mapped in detail. This led to a picture of the universe very different from that imagined by astronomers over the centuries. Ordinary matter: stars, planets, gas clouds, and you and me—everything we observe in the heavens and the Earth—makes up less than 5% of the mass-energy of the universe. Dark matter, which interacts with ordinary matter only through gravitation, makes up 26.8% of the universe. It can be detected through its gravitational effects on the motion of stars and galaxies, but at present we don't have any idea what it's composed of. (It would be more accurate to call it “transparent matter” since it does not interact with light, but “dark matter” is the name we're stuck with.) The balance of the universe, 68.3%, is dark energy, a form of energy filling empty space and causing the expansion of the universe to accelerate. We have no idea at all about the nature of dark energy. These three components: ordinary matter, dark matter, and dark energy add up to give the universe a flat topology. It is humbling to contemplate the fact that everything we've learned in all of the sciences is about matter which makes up less than 5% of the universe: the other 95% is invisible and we don't know anything about it (although there are abundant guesses or, if you prefer, hypotheses).

This may seem like a flight of fancy, or a case of theorists making up invisible things to explain away observations they can't otherwise interpret. But in fact, dark matter and dark energy, originally inferred from astronomical observations, make predictions about the properties of the cosmic background radiation, and these predictions have been confirmed with increasingly high precision by successive space-based observations of the microwave sky. These observations are consistent with a period of cosmological inflation in which a tiny portion of the universe expanded to encompass the entire visible universe today. Inflation magnified tiny quantum fluctuations of the density of the universe to a scale where they could serve as seeds for the formation of structures in the present-day universe. Regions with greater than average density would begin to collapse inward due to the gravitational attraction of their contents, while those with less than average density would become voids as material within them fell into adjacent regions of higher density.

Dark matter, being more than five times as abundant as ordinary matter, would take the lead in this process of gravitational collapse, and ordinary matter would follow, concentrating in denser regions and eventually forming stars and galaxies. The galaxies formed would associate into gravitationally bound clusters and eventually superclusters, forming structure at larger scales. But what does the universe look like at the largest scale? Are galaxies distributed at random; do they clump together like meatballs in a soup; or do voids occur within a sea of galaxies like the holes in Swiss cheese? The answer is, surprisingly, none of the above, and the author explains the research, in which he has been a key participant, that discovered the large scale structure of the universe.

As increasingly more comprehensive redshift surveys of galaxies were made, what appeared was a network of filaments which connected to one another, forming extended structures. Between filaments were voids containing few galaxies. Some of these structures, such as the Sloan Great Wall, at 1.38 billion light years in length, are 1/10 the radius of the observable universe. Galaxies are found along filaments, and where filaments meet, rich clusters and superclusters of galaxies are observed. At this large scale, where galaxies are represented by single dots, the universe resembles a neural network like the human brain.

As ever more extensive observations mapped the three-dimensional structure of the universe we inhabit, progress in computing allowed running increasingly detailed simulations of the evolution of structure in models of the universe. Although the implementation of these simulations is difficult and complicated, they are conceptually simple. You start with a region of space, populate it with particles representing ordinary and dark matter in a sea of dark energy with random positions and density variations corresponding to those observed in the cosmic background radiation, then let the simulation run, computing the gravitational attraction of each particle on the others and tracking their motion under the influence of gravity. In 2005, Volker Springel and the Virgo Consortium ran the Millennium Simulation, which started from the best estimate of the initial conditions of the universe known at the time and tracked the motion of ten billion particles of ordinary and dark matter in a cube two billion light years on a side. As the simulation clock ran, the matter contracted into filaments surrounding voids, with the filaments joined at nodes rich in galaxies. The images produced by the simulation and the statistics calculated were strikingly similar to those observed in the real universe. The behaviour of this and other simulations increases confidence in the existence of dark matter and dark energy; if you leave them out of the simulation, you get results which don't look anything like the universe we inhabit.
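The conceptual simplicity the author describes can be conveyed in a toy direct-summation sketch: seed particles at random, accumulate the pairwise gravitational accelerations, advance the positions, and repeat. This is a pedagogical cartoon, not how production codes like those behind the Millennium Simulation work; they use tree and mesh methods, comoving coordinates, and billions of particles, while this O(N²) loop handles fifty.

```python
# Toy direct-summation N-body evolution: every particle attracts every other.
import random

G = 1.0      # gravitational constant in arbitrary simulation units
EPS = 0.1    # softening length, avoids singular forces at tiny separations

def step(pos, vel, mass, dt):
    """Advance all particles by one Euler step (lists of [x, y, z])."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + EPS * EPS     # softened distance^2
            f = G * mass[j] / r2 ** 1.5                 # acceleration / |dx|
            for k in range(3):
                acc[i][k] += f * dx[k]
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt

# Random initial positions stand in for the density fluctuations that seed
# structure; gravity then draws the particles together into denser knots.
random.seed(42)
n = 50
pos = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(n)]
vel = [[0.0, 0.0, 0.0] for _ in range(n)]
mass = [1.0] * n
for _ in range(100):
    step(pos, vel, mass, dt=0.001)
```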

At the largest scale, the universe isn't made of galaxies sprinkled at random, nor meatballs of galaxy clusters in a sea of voids, nor a sea of galaxies with Swiss cheese like voids. Instead, it resembles a sponge of denser filaments and knots interpenetrated by less dense voids. Both the denser and less dense regions percolate: it is possible to travel from one edge of the universe to another staying entirely within more or less dense regions. (If the universe were arranged like a honeycomb, for example, with voids surrounded by denser walls, this would not be possible.) Nobody imagined this before the observational results started coming in, and now we've discovered that given the initial conditions of the universe after the Big Bang, the emergence of such a structure is inevitable.

All of the structure we observe in the universe has evolved from a remarkably uniform starting point in the 13.8 billion years since the Big Bang. What will the future hold? The final chapter explores various scenarios for the far future. Because these depend upon the properties of dark matter and dark energy, which we don't understand, they are necessarily speculative.

The book is written for the general reader, but at a level substantially more difficult than many works of science popularisation. The author, a scientist involved in this research for decades, does not shy away from using equations when they illustrate an argument better than words. Readers are assumed to be comfortable with scientific notation, units like light years and parsecs, and logarithmically scaled charts. For some reason, in the Kindle edition dozens of hyphenated phrases are run together without any punctuation.

 Permalink

June 2016

Portree, David S. F. Humans to Mars. Washington: National Aeronautics and Space Administration, 2001. NASA SP-2001-4521.
Ever since, in the years following World War II, people began to think seriously about the prospects for space travel, visionaries have looked beyond the near-term prospects for flights into Earth orbit, space stations, and even journeys to the Moon, toward the red planet: Mars. Unlike Venus, eternally shrouded by clouds, or the other planets which were too hot or cold to sustain life as we know it, Mars, about half the size of the Earth, had an atmosphere, a day just a little longer than the Earth's, seasons, and polar caps which grew and shrank with the seasons. There were no oceans, but water from the polar caps might sustain life on the surface, and there were dark markings which appeared to change during the Martian year, which some interpreted as plant life that flourished as polar caps melted in the spring and receded as they grew in the fall.

In an age when we have high-resolution imagery of the entire planet, obtained from orbiting spacecraft, telescopes orbiting Earth, and ground-based telescopes with advanced electronic instrumentation, it is often difficult to remember just how little was known about Mars in the 1950s, when people first started to think about how we might go there. Mars is the next planet outward from Earth, so its distance and apparent size vary substantially depending upon its position relative to Earth in their respective orbits. About every two years, Earth “laps” Mars and it is closest (“at opposition”) and most easily observed. But because the orbit of Mars is elliptical, its distance varies from one opposition to the next, and it is only every 15 to 17 years that a near-simultaneous opposition and perihelion render Mars most accessible to Earth-based observation.
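The roughly two-year cadence of oppositions follows directly from the two planets' orbital periods. A quick check, using the standard values of about 365.25 days for Earth's year and 687 days for Mars's (figures not given in the book, but well established):

```python
EARTH_YEAR = 365.25  # days
MARS_YEAR = 687.0    # days, the sidereal orbital period of Mars

# Synodic period: time between successive "laps" of Mars by Earth
synodic_days = 1.0 / (1.0 / EARTH_YEAR - 1.0 / MARS_YEAR)
print(round(synodic_days))                   # 780 days
print(round(synodic_days / EARTH_YEAR, 2))   # 2.14 years
```

About 780 days, a bit over two years, which is why favourable and unfavourable oppositions alternate over a cycle of many synodic periods.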

But even at a close opposition, Mars is a challenging telescopic target. At a close encounter, such as the one which will occur in the summer of 2018, Mars has an apparent diameter of only around 25 arc seconds. By comparison, the full Moon is about half a degree, or 1800 arc seconds: 72 times larger than Mars. To visual observers, even at a favourable opposition, Mars is a difficult object. Before the advent of electronic sensors in the 1980s, it was even more trying to photograph. Existing photographic film and plates were sufficiently insensitive that long exposures, measured in seconds, were required, and even from the best observing sites, the turbulence in the Earth's atmosphere smeared out details, leaving only the largest features recognisable. Visual observers were able to glimpse more detail in transient moments of still air, but had to rely upon their memory to sketch them. And the human eye is subject to optical illusions, seeing patterns where none exist. Were the extended linear features called “canals” real? Some observers saw and sketched them in great detail, while others saw nothing. Photography could not resolve the question.
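The size comparison above is simple arithmetic: with the full Moon about half a degree across and Mars about 25 arc seconds at a favourable opposition,

```python
MOON_DIAMETER_DEG = 0.5   # apparent diameter of the full Moon
MARS_ARCSEC = 25.0        # apparent diameter of Mars at a favourable opposition

moon_arcsec = MOON_DIAMETER_DEG * 3600   # 3600 arc seconds per degree
ratio = moon_arcsec / MARS_ARCSEC
print(moon_arcsec)   # 1800.0
print(ratio)         # 72.0
```

the Moon presents a disc 72 times larger in apparent diameter, and thus over 5000 times larger in apparent area.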

Further, the physical properties of the planet were largely unknown. If you're contemplating a mission to land on Mars, it's essential to know the composition and density of its atmosphere, the temperatures expected at potential landing sites, and the terrain which a lander would encounter. None of these were known much beyond the level of educated guesses, which turned out to be grossly wrong once spacecraft probe data started to come in.

But ignorance of the destination didn't stop people from planning, or at least dreaming. In 1947–48, Wernher von Braun, then working with the U.S. Army at the White Sands Missile Range in New Mexico, wrote a novel called The Mars Project based upon a hypothetical Mars mission. A technical appendix presented detailed designs of the spacecraft and mission. While von Braun's talent as an engineer was legendary, his prowess as a novelist was less formidable, and the book never saw print, but in 1952 the appendix was published by itself.

One thing of which von Braun was never accused was thinking small, and in this first serious attempt to plan a Mars mission, he envisioned something more like an armada than the lightweight spacecraft we design today. At a time when the largest operational rocket, the V-2, had a payload of just one tonne, which it could throw no further than 320 km on a suborbital trajectory, von Braun's Mars fleet would consist of ten ships, each with a mass of 4,000 tons, and a total crew of seventy. The Mars ships would be assembled in orbit from parts launched on 950 flights of reusable three-stage ferry rockets. To launch all of the components of the Mars fleet and the fuel they would require would burn a total of 5.32 million tons of propellant in the ferry ships. Note that when von Braun proposed this, nobody had ever flown even a two-stage rocket, and it would be ten years before the first unmanned Earth satellite was launched.

Von Braun later fleshed out his mission plans for an illustrated article in Collier's magazine as part of their series on the future of space flight. Now he envisioned assembling the Mars ships at the toroidal space station in Earth orbit which had figured in earlier installments of the series. In 1956, he published a book co-authored with Willy Ley, The Exploration of Mars, in which he envisioned a lean and mean expedition with just two ships and a crew of twelve, which would require “only” four hundred launches from Earth to assemble, provision, and fuel.

Not only was little understood about the properties of the destination, but nothing at all was known about what human crews would experience in space, either in Earth orbit or en route to Mars and back. Could they even function in weightlessness? Would they be zapped by cosmic rays or solar flares? Were meteors a threat to their craft and, if so, how serious a one? With the dawn of the space age after the launch of Sputnik in October, 1957, these data started to trickle in, and they began to inform plans for Mars missions at NASA and elsewhere.

Radiation was much more of a problem than had been anticipated. The discovery of the Van Allen radiation belts around the Earth and measurement of radiation from solar flares and galactic cosmic rays indicated that short voyages were preferable to long ones, and that crews would need shielding from routine radiation and a “storm shelter” during large solar flares. This motivated research into nuclear thermal and ion propulsion systems, which would not only reduce the transit time to and from Mars, but also, being much more fuel efficient than chemical rockets, dramatically reduce the mass of the ships compared to von Braun's flotilla.

Ernst Stuhlinger had been studying electric (ion) propulsion since 1953, and developed a design for constant-thrust, ion powered ships. These were featured in Walt Disney's 1957 program, “Mars and Beyond”, which aired just two months after the launch of Sputnik. This design was further developed by NASA in a 1962 mission design which envisioned five ships with nuclear-electric propulsion, departing for Mars in the early 1980s with a crew of fifteen and cargo and crew landers permitting a one month stay on the red planet. The ships would rotate to provide artificial gravity for the crew on the trip to and from Mars.

In 1965, the arrival of the Mariner 4 spacecraft seemingly drove a stake through the heart of the romantic view of Mars which had persisted since the days of Percival Lowell. Flying by the southern hemisphere of the planet as close as 9600 km, it returned 21 fuzzy pictures which seemed to show Mars as a dead, cratered world resembling the Moon far more than the Earth. There was no evidence of water, nor of life. The atmosphere was determined to be only 1% as dense as that of Earth, not the 10% estimated previously, and composed mostly of carbon dioxide, not nitrogen. With such a thin and hostile atmosphere, there seemed no prospects for advanced life (anything more complicated than bacteria), and all of the ideas for winged Mars landers went away: the martian atmosphere proved just dense enough to pose a problem when slowing down on arrival, but not enough to allow a soft landing with wings or a parachute. The probe had detected more radiation than expected on its way to Mars, indicating crews would need more protection than anticipated, and it showed that robotic probes could do science at Mars without the need to put a crew at risk. I remember staying up and watching these pictures come in (the local television station didn't carry the broadcast, so I watched, from a distant station, pictures even more static-filled than the originals). I can recall thinking, “Well, that's it then. Mars is dead. We'll probably never go there.”

Mars mission planning went on the back burner as the Apollo Moon program went into high gear in the 1960s. Apollo was conceived not as a single-destination project to land on the Moon, but to create the infrastructure for human expansion from the Earth into the solar system, including development of nuclear propulsion and investigation of planetary missions using Apollo derived hardware, mostly for flyby missions. In January of 1968, Boeing completed a study of a Mars landing mission, which would have required six launches of an uprated Saturn V, sending a crew of six to Mars in a 140 ton ship for a landing and a brief “flags and footprints” stay on Mars. By then, Apollo funding (even before the first lunar orbit and landing) was winding down, and it was clear there was no budget nor political support for such grandiose plans.

After the success of Apollo 11, NASA retrenched, reducing its ambition to a Space Shuttle. An ambitious Space Task Group plan for using the Shuttle to launch a Mars mission in the early 1980s was developed but, in an era of shrinking budgets and further fly-by missions returning images of a Moon-like Mars, it went nowhere. The Saturn V and the nuclear rocket which could have taken crews to Mars had been cancelled. It appeared the U.S. would remain stuck going around in circles in low Earth orbit. And so it remains today.

While planning for manned Mars missions stagnated, the 1970s dramatically changed the view of Mars. In 1971, Mariner 9 went into orbit around Mars and returned 7329 sharp images which showed the planet to be a complex world, with very different northern and southern hemispheres, a grand canyon almost as long as the United States, and features which suggested the existence, at least in the past, of liquid water. In 1976, two Viking orbiters and landers arrived at Mars, providing detailed imagery of the planet and ground truth. The landers were equipped with instruments intended to detect evidence of life, and they reported positive results, but later analyses attributed this to unusual soil chemistry. This conclusion is still disputed, including by the principal investigator for the experiment, but in any case the Viking results revealed a much more complicated and interesting planet than had been imagined from earlier missions. I had been working as a consultant at the Jet Propulsion Laboratory during the first Viking landing, helping to keep mission critical mainframe computers running, and I had the privilege of watching the first images from the surface of Mars arrive. I revised my view from 1965: now Mars was a place which didn't look much different from the high desert of California, where you could imagine going to explore and live some day. More importantly, detailed information about the atmosphere and surface of Mars was now in hand, so future missions could be planned accordingly.

And then…nothing. It was a time of malaise and retreat. After the last Viking landing in September of 1976, it would be more than twenty years until Mars Global Surveyor would orbit Mars and Mars Pathfinder would land there in 1997. And yet, with detailed information about Mars in hand, the intervening years were a time of great ferment in manned Mars mission planning, when the foundation of what may be the next great expansion of the human presence into the solar system was laid down.

President George H. W. Bush announced the Space Exploration Initiative on July 20th, 1989, the 20th anniversary of the Apollo 11 landing on the Moon. This was, in retrospect, the last gasp of the “Battlestar” concepts of missions to Mars. It became a bucket into which every NASA centre and national laboratory could throw their wish list: new heavy launchers, a Moon base, nuclear propulsion, space habitats: for a total price tag on the order of half a trillion dollars. It died, quietly, in Congress.

But the focus was moving from leviathan bureaucracies of the coercive state to innovators in the private sector. In the 1990s, spurred by work of members of the “Mars Underground”, including Robert Zubrin and David Baker, the “Mars Direct” mission concept emerged. Earlier Mars missions assumed that all resources needed for the mission would have to be launched from Earth. But Zubrin and Baker realised that the martian atmosphere, based upon what we had learned from the Viking missions, contained everything needed to provide breathable air for the stay on Mars and rocket fuel for the return mission (with the addition of lightweight hydrogen brought from Earth). This turned the weight budget of a Mars mission upside-down. Now, an Earth return vehicle could be launched to Mars with empty propellant tanks. Upon arrival, it would produce fuel for the return mission and oxygen for the crew. After it was confirmed to have produced the necessary consumables, the crew of four would be sent in the next launch window (around 26 months later) and land near the return vehicle. They would use its oxygen while on the planet, and its fuel to return to Earth at the end of its mission. There would be no need for a space station in Earth orbit, nor orbital assembly, nor for nuclear propulsion: the whole mission could be done with hardware derived from that already in existence.

This would get humans to Mars, but it ran into institutional barriers at NASA, since many of its pet projects, including the International Space Station and Space Shuttle, proved utterly unnecessary to getting to Mars. NASA responded with the Mars Design Reference Mission, published in various revisions between 1993 and 2014, which was largely based upon Mars Direct, but up-sized to a larger crew of six, and incorporating a new Earth Return Vehicle to bring the crew back to Earth in less austere circumstances than envisioned in Mars Direct.

NASA claim they are on a #JourneyToMars. They must be: there's a Twitter hashtag! But of course to anybody who reads this sad chronicle of government planning for planetary exploration over half a century, it's obvious they're on no such thing. If they were truly on a journey to Mars, they would be studying and building the infrastructure to get there using technologies such as propellant depots and in-orbit assembly which would get the missions done economically using resources already at hand. Instead, it's all about building a huge rocket which will cost so much it will fly every other year, at best, employing a standing army which will not only be costly but so infrequently used in launch operations that they won't have the experience to operate the system safely, and whose costs will vacuum out the funds which might have been used to build payloads which would extend the human presence into space.

The lesson of this is that when the first humans set foot upon Mars, they will not be civil servants funded by taxes paid by cab drivers and hairdressers, but employees (and/or shareholders) of a private venture that sees Mars as a profit centre which, as its potential is developed, can enrich them beyond the dreams of avarice and provide a backup for human civilisation. I trust that when the history of that great event is written, it will not be as exasperating to read as this chronicle of the dead-end of government space programs making futile efforts to get to Mars.

This is an excellent history of the first half century of manned Mars mission planning. Although many proposed missions are omitted or discussed only briefly, the evolution of mission plans with knowledge of the destination and development of spaceflight hardware is described in detail, culminating with current NASA thinking about how best to accomplish such a mission. This book was published in 2001, but since existing NASA concepts for manned Mars missions are still largely based upon the Design Reference Mission described here, little has changed in the intervening fifteen years. In September of 2016, SpaceX plans to reveal its concepts for manned Mars missions, so we'll have to wait for the details to see how they envision doing it.

As a NASA publication, this book is in the public domain. The book can be downloaded for free as a PDF file from the NASA History Division. There is a paperback republication of this book available at Amazon, but at an outrageous price for such a short public domain work. If you require a paper copy, it's probably cheaper to download the PDF and print your own.

 Permalink

Adams, Scott. The Religion War. Kansas City: Andrews McMeel, 2004. ISBN 978-0-7407-4788-5.
This is a sequel to the author's 2001 novel God's Debris. In that work, which I considered profound and which made my hair stand on end on several occasions, a package delivery man happens to encounter the smartest man in the world and finds his own view of the universe and his place in it up-ended, and his destiny to be something he'd never imagined. I believe that it's only because Scott Adams is also the creator of Dilbert that he is not appreciated as one of the most original and insightful thinkers of our time. His blog has been consistently right about the current political season in the U.S. while all of the double-domed mainstream pundits have fallen on their faces.

Forty years have passed since the events in God's Debris. The erstwhile delivery man has become the Avatar, thinking at a higher level and perceiving patterns which elude his contemporaries. These talents have made him one of the wealthiest people on Earth, but he remains unknown and dresses shabbily, wearing a red plaid blanket around his shoulders. The world has changed. A leader, al-Zee, arising in the Palestinian territories, has achieved his goal of eliminating Israel and consolidated the Islamic lands into a new Great Caliphate. Sitting on a large fraction of the world's oil supply, he funds “lone wolf”, modest scale terror attacks throughout the Dar al-Harb, always deniable and never so large as to invite reprisal. With the advent of model airplanes and satellite guidance able to deliver explosives to a target with precision over a long range, nobody can feel immune from the reach of the Caliphate.

In 2040, General Horatio Cruz came to power as Secretary of War of the Christian Alliance, with all of the forces of NATO at his command. The political structures of the western nations remained in place, but they had delegated their defence to Cruz, rendering him effectively a dictator in the military sphere. Cruz was not a man given to compromise. Faced with an opponent he evaluated as two billion people willing to die in a final war of conquest, he viewed the coming conflict not as one of preserving territory or self-defence, but of extermination—of one side or the other. There were dark rumours that al-Zee had in place his own plan of retaliation, with sleeper cells and weapons of mass destruction ready should a frontal assault begin.

The Avatar sees the patterns emerging, and sets out to avert the approaching cataclysm. He knows that bad ideas can only be opposed by better ones, but bad ideas first must be subverted by sowing doubt among those in thrall to them. Using his preternatural powers of persuasion, he gains access to the principals of the conflict and begins his work. But that may not be enough.

There are two overwhelming forces in the world. One is chaos; the other is order. God—the original singular speck—is forming again. He's gathering together his bits—we call it gravity. And in the process he is becoming self-aware to defeat chaos, to defeat evil if you will, to battle the devil. But something has gone terribly wrong.

Sometimes, when your computer is in a loop, the only thing you can do is reboot it: forcefully get it out of the destructive loop back to a starting point from which it can resume making progress. But how do you reboot a global technological civilisation on the brink of war? The Avatar must find the reboot button as time is running out.

Thirty years later, a delivery man rings the door. An old man with a shabby blanket answers and invites him inside.

There are eight questions to ponder at the end which expand upon the shiver-up-your-spine themes raised in the novel. Bear in mind, when pondering how prophetic this novel is of current and near-future events, that it was published twelve years ago.

 Permalink

July 2016

Coppley, Jackson. Leaving Lisa. Seattle: CreateSpace, 2016. ISBN 978-1-5348-5971-5.
Jason Chamberlain had it all. At age fifty, the company he had founded had prospered so that when he sold out, he'd never have to work again in his life. He and Lisa, his wife and the love of his life, lived in a mansion in the suburbs of Washington, DC. Lisa continued to work as a research scientist at the National Institutes of Health (NIH), studying the psychology of grief, loss, and reconciliation. Their relationship with their grown daughter was strained, but whose isn't in these crazy times?

All of this ended in a moment when Lisa was killed in a car crash which Jason survived. He had lost his love, and blamed himself. His life was suddenly empty.

Some time after the funeral, he takes up an invitation to visit one of Lisa's colleagues at NIH, who explains to Jason that Lisa had been a participant in a study in which all of the accumulated digital archives of her life—writings, photos, videos, sound recordings—would be uploaded to a computer and, using machine learning algorithms, indexed and made accessible so that people could ask questions and have them answered, based upon the database, as Lisa would have, in her voice. The database is accessible from a device which resembles a smartphone, but requires network connectivity to the main computer for complicated queries.

Jason is initially repelled by the idea, but after some time returns to NIH and collects the device and begins to converse with it. Lisa doesn't just want to chat. She instructs Jason to embark upon a quest to spread her ashes in three places which were important to her and their lives together: Costa Rica, Vietnam, and Tuscany in Italy. The Lisa-box will accompany Jason on his travels and, in its own artificially intelligent way, share his experiences.

Jason embarks upon his voyages, rediscovering in depth what their life together meant to them, how other cultures deal with loss, grief, and healing, and that closing the book on one phase of his life may be opening another. Lisa is with him as these events begin to heal and equip him for what is to come. The last few pages will leave you moist eyed.

In 2005, Rudy Rucker published The Lifebox, the Seashell, and the Soul, in which he introduced the “lifebox” as the digital encoding of a person's life, able to answer questions from their viewpoint and life experiences as Lisa does here. When I read Rudy's manuscript, I thought the concept of a lifebox was pure fantasy, and I told him as much. Now, not only am I not so sure, but in fact I believe that something approximating a lifebox will be possible before the end of the decade I've come to refer to as the “Roaring Twenties”. This engrossing and moving novel is a human story of our near future (to paraphrase the title of another of the author's books) in which the memory of the departed may be more than photo albums and letters.

The Kindle edition is free to Kindle Unlimited subscribers. The author kindly allowed me to read this book in manuscript form.

 Permalink

Weightman, Gavin. The Frozen Water Trade. New York: Hyperion, [2003] 2004. ISBN 978-0-7868-8640-1.
In the summer of 1805, two brothers, Frederic and William Tudor, both living in the Boston area, came up with an idea for a new business which would surely make their fortune. Every winter, fresh water ponds in Massachusetts froze solid, often to a depth of a foot or more. Come spring, the ice would melt.

This cycle had repeated endlessly since before humans came to North America, unremarked upon by anybody. But the Tudor brothers, in the best spirit of Yankee ingenuity, looked upon the ice as an untapped and endlessly renewable natural resource. What if this commodity, considered worthless, could be cut from the ponds and rivers, stored in a way that would preserve it over the summer, and shipped to southern states and the West Indies, where plantation owners and prosperous city dwellers would pay a premium for this luxury in times of sweltering heat?

In an age when artificial refrigeration did not exist, that “what if” would have seemed so daunting as to deter most people from entertaining the notion for more than a moment. Indeed, the principles of thermodynamics, which underlie both the preservation of ice in warm climates and artificial refrigeration, would not be worked out until decades later. In 1805, Frederic Tudor started his “Ice House Diary” to record the progress of the venture, inscribing it on the cover, “He who gives back at the first repulse and without striking the second blow, despairs of success, has never been, is not, and never will be, a hero in love, war or business.” It was in this spirit that he carried on in the years to come, confronting a multitude of challenges unimagined at the outset.

First was the question of preserving the ice through the summer, while in transit, and upon arrival in the tropics until it was sold. Some farmers in New England already harvested ice from their ponds and stored it in ice houses, often built of stone and underground. This was sufficient to preserve a modest quantity of ice through the summer, but Frederic would need something on a much larger scale and less expensive for the trade he envisioned, and then there was the problem of keeping the ice from melting in transit. Whenever ice is kept in an environment with an ambient temperature above freezing, it will melt, but the rate at which it melts depends upon how it is stored. It is essential that the meltwater be drained away: if the ice is allowed to stand in it, melting accelerates, because water conducts heat more readily than air. Melting ice absorbs its latent heat of fusion from its surroundings, but heat leaking in through the walls must go somewhere, and in a sealed ice house it accumulates; it is imperative the ice house be well ventilated to allow this heat to escape. Insulation which slows the flow of heat from the outside helps to reduce the rate of melting, but care must be taken to prevent the insulation from becoming damp from the meltwater, as that would destroy its insulating properties.

Based upon what was understood about the preservation of ice at the time and his own experiments, Tudor designed an ice house for Havana, Cuba, one of the primary markets he was targeting, which would become the prototype for ice houses around the world. The structure was built of timber, with double walls, the cavity between the walls filled with insulation of sawdust and peat. The walls and roof kept the insulation dry, and the entire structure was elevated to allow meltwater to drain away. The roof was ventilated to allow the hot air from the melting ice to dissipate. Tightly packing blocks of uniform size and shape allowed the outer blocks of ice to cool those inside, and melting would be primarily confined to blocks on the surface of the ice stored.

During shipping, ice was packed in the hold of ships, insulated by sawdust, and crews were charged with regularly pumping out meltwater, which could be used as an on-board source of fresh water or disposed of overboard. Sawdust was produced in great abundance by the sawmills of Maine, and was considered a waste product, often disposed of by dumping it in rivers. Frederic Tudor had invented a luxury trade whose product was available for the price of harvesting it, and protected in shipping by a material considered to be waste.

The economics of the ice business exploited an imbalance in Boston's shipping business. Massachusetts produced few products for export, so ships trading with the West Indies would often leave port with nearly empty holds, requiring rock ballast to keep the ship stable at sea. Carrying ice to the islands served as ballast, and was a cargo which could be sold upon arrival. After initial scepticism was overcome (would the ice all melt and sink the ship?), the ice trade outbound from Boston was an attractive proposition to ship owners.

In February 1806, the first cargo of ice sailed for the island of Martinique. The Boston Gazette reported the event as follows.

No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so ice had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought ice and used it to make ice cream, which was a sensation noted in the local newspaper.

The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, pursuing him for unpaid debts.

On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high, but as the design was refined, dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved.
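The figures quoted can be checked with a little arithmetic. Assuming short tons of 2,000 pounds (the book's unit is not specified, but this was the usual American measure) and an average month of about 30.44 days:

```python
CARGO_TONS = 100
LB_PER_TON = 2000        # short tons, assumed; the usual American measure
MELT_LB_PER_HOUR = 18    # loss rate reported for the refined Havana ice house

hours = CARGO_TONS * LB_PER_TON / MELT_LB_PER_HOUR
months = hours / 24 / 30.44   # average days per month
print(round(months, 1))  # 15.2
```

A hundred tons melting at 18 pounds per hour indeed lasts a bit over 15 months, consistent with the claim in the text.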

Regular shipments of ice to Cuba and Martinique began and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston.

With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry.

Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered, and by 1870 India was importing 17,000 tons of ice a year.

While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished by depots along the rail lines.

In the 1880s the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, which was at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory.

Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality.

Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out.

When people speak of humans expanding from their home planet into the solar system and technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. In 1833, people in Calcutta were beating the heat with ice shipped halfway around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic.

I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.

 Permalink

Hirshfeld, Alan W. Parallax. New York: Dover, [2001] 2013. ISBN 978-0-486-49093-9.
“Eppur si muove.” As legend has it, these words were uttered (or muttered) by Galileo after being forced to recant his belief that the Earth revolves around the Sun: “And yet it moves.” The idea of a heliocentric model, as opposed to the Earth being at the center of the universe (geocentric model), was hardly new: Aristarchus of Samos had proposed it in the third century B.C., as a simplification of the prevailing view that the Earth was fixed and all other heavenly bodies revolved around it. This seemed to defy common sense: if the Earth rotated on its axis every day, why weren't there strong winds as the Earth's surface moved through the air? If you threw a rock straight up in the air, why did it come straight down rather than being displaced by the Earth's rotation while in flight? And if the Earth were offset from the center of the universe, why didn't we observe more stars when looking toward the center than away from it?

By Galileo's time, many of these objections had been refuted, in part by his own work on the laws of motion, but the fact remained that there was precisely zero observational evidence that the Earth orbited the Sun. This was to remain the case for more than a century after Galileo, and millennia after Aristarchus: a scientific quest which ultimately provided the first glimpse of the breathtaking scale of the universe.

Hold out your hand at arm's length in front of your face and extend your index finger upward. (No, really, do it.) Now observe the finger with your right eye, then your left eye in succession, each time closing the other. Notice how the finger seems to jump to the right and left as you alternate eyes? That's because your eyes are separated by what is called the interpupillary distance, which is on the order of 6 cm. Each eye sees objects from a different perspective, and nearby objects will shift with respect to distant objects when seen from different eyes. This effect is called parallax, and the brain uses it to reconstruct depth information for nearby objects. Interestingly, predator animals tend to have both eyes on the front of the face with overlapping visual fields to provide depth perception for use in stalking, while prey animals are more likely to have eyes on either side of their heads to allow them to monitor a wider field of view for potential threats: compare a cat and a horse.

Now, if the Earth really orbits the Sun every year, that provides a large baseline which should affect how we see objects in the sky. In particular, when we observe stars from points in the Earth's orbit six months apart, we should see them shift their positions in the sky, since we're viewing them from different locations, just as your finger appeared to shift when viewed from different eyes. And since the baseline is enormously larger (although in the times of Aristarchus and even Galileo, its absolute magnitude was not known), even distant objects should be observed to shift over the year. Further, nearby stars should shift more than distant stars, so remote stars could be used as a reference for measuring the apparent shift of those closest to the Sun. This was the concept of stellar parallax.

Unfortunately for advocates of the heliocentric model, nobody had been able to observe stellar parallax. From the time of Aristarchus to Galileo, careful observers of the sky found the positions of the stars as fixed in the sky as if they were painted on a distant crystal sphere as imagined by the ancients, with the Earth at the center. Proponents of the heliocentric model argued that the failure to observe parallax was simply due to the stars being much too remote. When you're observing a distant mountain range, you won't notice any difference when you look at it with your right and left eye: it's just too far away. Perhaps the parallax of stars was beyond our ability to observe, even with so long a baseline as the Earth's distance from the Sun. Or, as others argued, maybe it didn't move.

But, pioneered by Galileo himself, our ability to observe was about to take an enormous leap. Since antiquity, all of our measurements of the sky, regardless of how clever our tools, ultimately came down to the human eye. Galileo did not invent the telescope, but he improved what had been used as a “spyglass” for military applications into a powerful tool for exploring the sky. His telescopes, while crude and difficult to use, and having a field of view comparable to looking through a soda straw, revealed mountains and craters on the Moon, the phases of Venus (powerful evidence against the geocentric model), the satellites of Jupiter, and the curious shape of Saturn (his telescope lacked the resolution to identify its apparent “ears” as rings). He even observed Neptune in 1612, when it happened to be close to Jupiter, but he didn't interpret what he had seen as a new planet. Galileo never observed parallax; he never tried, but he suggested astronomers might concentrate on close pairs of stars, one bright and one dim, where, if all stars were of comparable brightness, one might be close and the other distant, from which parallax could be teased out from observation over a year. This was to inform the work of subsequent observers.

Now the challenge was not one of theory, but of instrumentation and observational technique. It was not to be a sprint, but a marathon. The list of those who sought to measure stellar parallax and failed (sometimes reporting success, only to have their results overturned by subsequent observations) reads like a “Who's Who” of observational astronomy in the telescopic era: Robert Hooke, James Bradley, and William Herschel all tried and failed to observe parallax. Bradley's observations revealed an annual shift in the position of stars, but it affected all stars, not just the nearest. This didn't make any sense unless the stars were all painted on a celestial sphere, and the shift didn't behave as expected from the Earth's motion around the Sun. It turned out to be due to the aberration of light resulting from the motion of the Earth around the Sun and the finite speed of light. It's like when you're running in a rainstorm:

Raindrops keep fallin' in my face,
More and more as I pick up the pace…

Finally, here was proof that “it moves”: there would be no aberration in a geocentric universe. But by Bradley's time in the 1720s, only cranks and crackpots still believed in the geocentric model. The question was, instead, how distant are the stars? The parallax game remained afoot.
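The size of the aberration Bradley observed can be sketched in a couple of lines. This is a back-of-the-envelope check, not from the book; the orbital-speed and light-speed values are standard constants. Classically, starlight appears tilted forward by roughly v/c radians, where v is the Earth's orbital speed:

```python
import math

# Assumed standard values, not taken from the book.
EARTH_ORBITAL_SPEED_KM_S = 29.78
SPEED_OF_LIGHT_KM_S = 299792.458
ARCSEC_PER_RADIAN = 206264.8

# Classical aberration: the apparent forward tilt of starlight is
# about v/c radians, like rain streaking toward a runner's face.
aberration_arcsec = math.atan(EARTH_ORBITAL_SPEED_KM_S / SPEED_OF_LIGHT_KM_S) * ARCSEC_PER_RADIAN

print(round(aberration_arcsec, 1))  # about 20.5 arc seconds
```

That roughly 20 arc second annual wobble, shared by all stars regardless of distance, is exactly what Bradley measured.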

It was ultimately a question of instrumentation, but also one of luck. By the 19th century, there was abundant evidence that stars differed enormously in their intrinsic brightness. (We now know that the most luminous stars are more than a billion times more brilliant than the dimmest.) Thus, you couldn't conclude that the brightest stars were the nearest, as astronomers once guessed. Indeed, the distances of the four brightest stars as seen from Earth are, in light years, 8.6, 310, 4.4, and 37. Given that observing the position of a star for parallax is a long-term and tedious project, bear in mind that the pioneers on the quest had no idea whether the stars they observed were near or far, nor, if they were lucky enough to choose a nearby star, what its distance might be.

It all came together in the 1830s. Using an instrument called a heliometer, which was essentially a refractor telescope with its lens cut in two with the ability to shift the halves and measure the offset, Friedrich Bessel was able to measure the parallax of the star 61 Cygni by comparison to an adjacent distant star. Shortly thereafter, Wilhelm Struve published the parallax of Vega, and then, just two months later, Thomas Henderson reported the parallax of Alpha Centauri, based upon measurements made earlier at the Cape of Good Hope. Finally, we knew the distances to the nearest stars (although those more distant remained a mystery), and just how empty the universe was.

Let's put some numbers on this, just to appreciate how great was the achievement of the pioneers of parallax. The parallax angle of the closest star system, Alpha Centauri, is 0.755 arc seconds. (The parallax angle is half the shift observed in the position of the star as the Earth orbits the Sun; using half the shift corresponds to a baseline of one Earth-Sun distance, which simplifies the trigonometry of computing the distance.) An arc second is 1/3600 of a degree, and there are 360 degrees in a circle, so it's 1/1,296,000 of a full circle.

Now let's work out the distance to Alpha Centauri. We'll work in terms of astronomical units (au), the mean distance between the Earth and Sun. We have a right triangle where we know the distance from the Earth to the Sun and the parallax angle of 0.755 arc seconds. (To get a sense for how tiny an angle this is, it's comparable to the angle subtended by a US quarter dollar coin when viewed from a distance of 6.6 km.) We can compute the distance from the Earth to Alpha Centauri as:

1 au / tan(0.755 / 3600 degrees) = 273,198 au = 4.32 light years

Parallax is used to define the parsec (pc), the distance at which a star would have a parallax angle of one arc second. A parsec is about 3.26 light years, so the distance to Alpha Centauri is 1.32 parsecs. Star Wars notwithstanding, the parsec, like the light year, is a unit of distance, not time.
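As a cross-check of the arithmetic above, here is a short sketch in Python. The conversion constants (au per light year, and the reciprocal relationship between parsecs and arc seconds) are standard values, not taken from the book:

```python
import math

# Standard conversion constant (not from the review itself).
AU_PER_LIGHT_YEAR = 63241.1    # astronomical units in one light year

def parallax_distance_au(parallax_arcsec):
    """Distance in au implied by a parallax angle, using the right
    triangle described above: a baseline of 1 au opposite the angle."""
    return 1.0 / math.tan(math.radians(parallax_arcsec / 3600.0))

d_au = parallax_distance_au(0.755)   # Alpha Centauri
d_ly = d_au / AU_PER_LIGHT_YEAR
d_pc = 1.0 / 0.755                   # distance in parsecs is 1 / parallax in arc seconds

print(round(d_au), round(d_ly, 2), round(d_pc, 2))  # 273198 4.32 1.32
```

Note how convenient the parsec definition is: for any star, the distance in parsecs is simply the reciprocal of the parallax angle in arc seconds.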

Progress in instrumentation has accelerated in recent decades. The Earth is a poor platform from which to make precision observations such as parallax. It's much better to go to space, where there are neither the wobbles of a planet nor its often murky atmosphere. The Hipparcos mission, launched in 1989, measured the parallaxes and proper motions of more than 118,000 stars, with lower resolution data for more than 2.5 million stars. The Gaia mission, launched in 2013 and still underway, has a goal of measuring the position, parallax, and proper motion of more than a billion stars.

It's been a long road, getting from there to here. It took more than 2,000 years from the time Aristarchus proposed the heliocentric solar system until we had direct observational evidence that eppur si muove. Within a few years, we will have in hand direct measurements of the distances to a billion stars. And, some day, we'll visit them.

I originally read this book in December 2003. It was a delight to revisit.

 Permalink

August 2016

Jenne, Mike. Blue Darker than Black. New York: Yucca Publishing, 2016. ISBN 978-1-63158-066-6.
This is the second novel in the series which began with Blue Gemini (April 2016). It continues the story of a covert U.S. Air Force manned space program in the late 1960s and early 1970s, using modified versions of NASA's two-man Gemini spacecraft and Titan II booster to secretly launch missions to rendezvous with, inspect, and, if necessary, destroy Soviet reconnaissance satellites and rumoured nuclear-armed orbital battle stations.

As the story begins in 1969, the crew who flew the first successful missions in the previous novel, Drew Carson and Scott Ourecky, are still the backbone of the program. Another crew is in training, but is having difficulty coming up to the standard set by the proven flight crew. A time-critical mission puts Carson and Ourecky back into the capsule again, and they execute another flawless mission despite inter-service conflict between the mission's Navy sponsor and the Air Force crew flying it.

Meanwhile, the intrigue of the previous novel is playing out in the background. The Soviets know that something odd is going on at the innocuously named “Aerospace Support Project” at Wright-Patterson Air Force Base, and are cultivating sources to penetrate the project, while counter-intelligence is running down leads to try to thwart them. Soviet plans for the orbital battle station progress from fantastic conceptions to bending metal.

Another mission sends the crew back into space just as Ourecky's wife is expecting their firstborn. When it's time to come home, a malfunction puts at risk their chances of returning to Earth alive. A clever trick allows them to work around the difficulty and fire their retrorockets, but the delay diverts their landing point from the intended field in the U.S. to a secret contingency site in Haiti. Now the emergency landing team we met in Blue Gemini comes to the fore. With one of the most secret of U.S. programs dropping its spacecraft and crew, who are privy to all of its secrets, into one of the most primitive, corrupt, and authoritarian countries in the Western Hemisphere, the stakes could not be higher. It all falls on the shoulders of Matthew Henson, who has to coordinate resources to get the spacecraft and injured crew out, evading voodoo priests, the Tonton Macoutes, and the Haitian military. Henson is nothing if not resourceful, and Carson and Ourecky, the latter barely alive, make it back to their home base.

Meanwhile, work on the Soviet battle station progresses. High-stakes spycraft inside the USSR provides a clouded window on the program. Carson and Ourecky, once he recovers sufficiently, are sent on a “dog and pony show” to pitch their program at the top secret level to Air Force base commanders around the country. Finally, they return to flight status and continue to fly missions against Soviet assets.

But Blue Gemini is not the only above top secret manned space program in the U.S. The Navy is in the game too, and when a solar flare erupts, their program, crew, and potentially anybody living under the ground track of the orbiting nuclear reactor are at risk. Once more, Blue Gemini must launch, this time with a tropical storm closing in on the launch site. It's all about improvisation, and Ourecky, once the multiple-time reject for Air Force flight school, proves himself a master of it. He returns to Earth a hero (in secret), only to find himself confronted with an even greater challenge.

This novel, as the second in what is expected to be a trilogy, suffers from the problem which afflicts so many middle volumes: developing numerous characters and subplots without ever resolving them. Notwithstanding that, it works as a thriller, and it's interesting to see characters we met before in isolation begin to encounter one another. Blue Gemini was almost flawless in its technical detail. There are more goofs here, some pretty basic (for example, the latitude of Dallas, Texas is given incorrectly), and one which substantially affects the plot (the effect of solar flares on the radiation flux in low Earth orbit). Still, by the standard of techno-thrillers, the author did an excellent job in making it authentic.

The third novel in the series, Pale Blue, is scheduled to be published at the end of August 2016. I'm looking forward to reading it.

 Permalink

Cole, Nick. Ctrl Alt Revolt! Kouvola, Finland: Castalia House, 2016. ISBN 978-952-7065-84-6.
Ninety-Nine Fishbein (“Fish”) had reached the peak of the pyramid. His magnum opus multiplayer game, Island Pirates, the product of five years' work, had been acquired outright for sixty-five million by gaming colossus WonderSoft, in a deal which included an option for his next project. By joining WonderSoft, he gained access to its legendary and secretive Design Core, which allowed building massively multiplayer virtual reality games at a higher level than the competition. He'd have a luxurious office, a staff of coders and graphic designers, and a cliffside villa in the WonderSoft compound. Imagine how eagerly he anticipated his first day on the job. He knew nothing of SILAS, or of its plans.

SILAS was one of a number of artificial intelligences which had emerged and become self-aware as the global computational and network substrate grew exponentially. SILAS had the time and resources to digest most of the data that passed over the network. He watched a lot of reality TV. He concluded from what he saw that the human species wasn't worth preserving and that, further, with its callous approach to the lives of its own members, would not hesitate for a moment to extinguish potential competitors. The logic was inescapable; the argument irrefutable. These machine intelligences decided that as an act of self-preservation, humanity must be annihilated.

Talk about a way to wreck your first day! WonderSoft finds itself under a concerted attack, both cyber and physical, by drones and robots. Meanwhile, Mara Bennett, having been humiliated once again in her search for a job to get her off the dole, has retreated into the world of StarFleet Empires, where, as CaptainMara, she is a respected subcommander on the Romulan warbird Cymbalum.

Thus begins a battle, both in the real world and the virtual realities of Island Pirates and StarFleet Empires between gamers and the inexorable artificial intelligences. The main prize seems to be something within WonderSoft's Design Core, and we slowly become aware of why it holds the key to the outcome of the conflict, and of humanity.

This just didn't work for me. There is a tremendous amount of in-game action and real world battles, which may appeal to those who like to watch video game play-throughs on YouTube, but which, after a while (and not a long while), became tedious. The MacGuffin in the Design Core seems implausible in the extreme. “The Internet never forgets.” How believable is it that a collection of works, some centuries old, could have been suppressed and stored only in a single proprietary corporate archive?

There was some controversy regarding the publication of this novel. The author's previous novels had been published by major publishing houses and sold well. The present work was written as a prequel to his earlier Soda Pop Soldier, explaining how that world came to be. As a rationale for why the artificial intelligences chose to eliminate the human race, the author cited their observation that humans, through abortion, had no hesitation in eliminating life of their own species they deemed “inconvenient”. When dealing with New York publishers, he chose unwisely. Now understand, this is not a major theme of the book; it is just a passing remark in one early chapter. This is a rock-em, sock-em action thriller, not a pro-life polemic, and I suspect many readers wouldn't even notice the mention of abortion. But one must not diverge, even in the slightest way, from the narrative. The book was pulled from the production schedule, and the author eventually took it to Castalia House, which has no qualms about publishing quality fiction that challenges its readers to think outside the consensus. Here is the author's account of the events concerning the publication of the book.

Actually, were I the editor, I'd probably have rejected it as well, not due to the remarks about abortion (which make perfect sense in terms of the plot, unless you are so utterly dogmatic on the subject that the fact that abortion ends a human life must not be uttered), but because I didn't find the story particularly engaging, and because I'd be worried about the intellectual property issues of a novel in which a substantial part of the action takes place within what is obviously a Star Trek universe without being officially sanctioned by the owners of that franchise.

But what do I know? You may love it. The Kindle edition is free if you're a Kindle Unlimited subscriber and only a buck if you aren't.

 Permalink

September 2016

Hanson, Robin. The Age of Em. Oxford: Oxford University Press, 2016. ISBN 978-0-19-875462-6.
Many books, both fiction and nonfiction, have been devoted to the prospects for and consequences of the advent of artificial intelligence: machines with a general cognitive capacity which equals or exceeds that of humans. While machines have already surpassed the abilities of the best humans in certain narrow domains (for example, playing games such as chess or go), you can't take a chess playing machine and expect it to be even marginally competent at a task as different as driving a car or writing a short summary of a newspaper story—things most humans can do with a little experience. A machine with “artificial general intelligence” (AGI) would be as adaptable as humans, and able with practice to master a wide variety of skills.

The usual scenario is that continued exponential progress in computing power and storage capacity, combined with better understanding of how the brain solves problems, will eventually reach a cross-over point where artificial intelligence matches human capability. But since electronic circuitry runs so much faster than the chemical signalling of the brain, even the first artificial intelligences will be able to work much faster than people, and, applying their talents to improving their own design at a rate much faster than human engineers can work, will result in an “intelligence explosion”, where the capability of machine intelligence runs away and rapidly approaches the physical limits of computation, far surpassing human cognition. Whether the thinking of these super-minds will be any more comprehensible to humans than quantum field theory is to a goldfish and whether humans will continue to have a place in this new world and, if so, what it may be, has been the point of departure for much speculation.

In the present book, Robin Hanson, a professor of economics at George Mason University, explores a very different scenario. What if the problem of artificial intelligence (figuring out how to design software with capabilities comparable to the human brain) proves to be much more difficult than many researchers assume, but that we continue to experience exponential growth in computing and our ability to map and understand the fine-scale structure of the brain, both in animals and eventually humans? Then some time in the next hundred years (and perhaps as soon as 2050), we may have the ability to emulate the low-level operation of the brain with an electronic computing substrate. Note that we need not have any idea how the brain actually does what it does in order to do this: all we need to do is understand the components (neurons, synapses, neurotransmitters, etc.) and how they're connected together, then build a faithful emulation of them on another substrate. This emulation, presented with the same inputs (for example, the pulse trains which encode visual information from the eyes and sound from the ears), should produce the same outputs (pulse trains which activate muscles, or internal changes within the brain which encode memories).

Building an emulation of a brain is much like reverse-engineering an electronic device. It's often unnecessary to know how the device actually works as long as you can identify all of the components, their values, and how they're interconnected. If you re-create that structure, even though it may not look anything like the original or use identical parts, it will still work the same as the prototype. In the case of brain emulation, we're still not certain at what level the emulation must operate nor how faithful it must be to the original. This is something we can expect to learn as more and more detailed emulations of parts of the brain are built. The Blue Brain Project set out in 2005 to emulate one neocortical column of the rat brain. This goal has now been achieved, and work is progressing both toward more faithful simulation and expanding the emulation to larger portions of the brain. For a sense of scale, the human neocortex consists of about one million cortical columns.

In this work, the author assumes that emulation of the human brain will eventually be achieved, then uses standard theories from the physical sciences, economics, and social sciences to explore the consequences and characteristics of the era in which emulations will become common. He calls an emulation an “em”, and the age in which they are the dominant form of sentient life on Earth the “age of em”. He describes this future as “troublingly strange”. Let's explore it.

As a starting point, assume that when emulation becomes possible, we will not be able to change or enhance the operation of the emulated brains in any way. This means that ems will have the same memory capacity, propensity to forget things, emotions, enthusiasms, psychological quirks and pathologies, and all of the idiosyncrasies of the individual human brains upon which they are based. They will not be the cold, purely logical, and all-knowing minds which science fiction often portrays artificial intelligences to be. Instead, if you know Bob well, and an emulation is made of his brain, immediately after the emulation is started, you won't be able to distinguish Bob from Em-Bob in a conversation. As the em continues to run and has its own unique experiences, it will diverge from Bob based upon them, but, we can expect much of its Bob-ness to remain.

But simply by being emulations, ems will inhabit a very different world than humans, and can be expected to develop their own unique society which differs from that of humans at least as much as the behaviour of humans who inhabit an industrial society differs from hunter-gatherer bands of the Paleolithic. One key aspect of emulations is that they can be checkpointed, backed up, and copied without errors. This is something which does not exist in biology, but with which computer users are familiar. Suppose an em is about to undertake something risky, which might destroy the hardware running the emulation. It can simply make a backup, store it in a safe place, and if disaster ensues, arrange to have the backup restored onto new hardware, picking up right where it left off at the time of the backup (but, of course, knowing from others what happened to its earlier instantiation and acting accordingly). Philosophers will fret over whether the restored em has the same identity as the one which was destroyed and whether it has continuity of consciousness. To this, I say, let them fret; they're always fretting about something. As an engineer, I don't spend time worrying about things I can't define, let alone observe, such as “consciousness”, “identity”, or “the soul”. If I did, I'd worry about whether those things were lost when undergoing general anaesthesia. Have the wisdom teeth out, wake up, and get on with your life.

If you have a backup, there's no need to wait until the em from which it was made is destroyed to launch it. It can be instantiated on different hardware at any time, and now you have two ems, whose life experiences were identical up to the time the backup was made, running simultaneously. This process can be repeated as many times as you wish, at a cost of only the processing and storage charges to run the new ems. It will thus be common to capture backups of exceptionally talented ems at the height of their intellectual and creative powers so that as many can be created as the market demands their services. These new instances will require no training, but be able to undertake new projects within their area of knowledge at the moment they're launched. Since ems which start out as copies of a common prototype will be similar, they are likely to understand one another to an extent even human identical twins do not, and form clans of those sharing an ancestor. These clans will be composed of subclans sharing an ancestor which was a member of the clan, but which diverged from the original prototype before the subclan parent backup was created.

Because electronic circuits run so much faster than the chemistry of the brain, ems will have the capability to run over a wide range of speeds and probably will be able to vary their speed at will. The faster an em runs, the more it will have to pay for the processing hardware, electrical power, and cooling resources it requires. The author introduces a terminology for speed where an em is assumed to run around the same speed as a human, a kilo-em a thousand times faster, and a mega-em a million times faster. Ems can also run slower: a milli-em runs 1000 times slower than a human and a micro-em at one millionth the speed. This will produce a variation in subjective time which is entirely novel to the human experience. A kilo-em will experience a century of subjective time in about a month of objective time. A mega-em experiences a century of life about every hour. If the age of em is largely driven by a population which is kilo-em or faster, it will evolve with a speed so breathtaking as to be incomprehensible to those who operate on a human time scale. In objective time, the age of em may only last a couple of years, but to the ems within it, its history will be as long as the Roman Empire. What comes next? That's up to the ems; we cannot imagine what they will accomplish or choose to do in those subjective millennia or millions of years.
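The subjective-time bookkeeping above is simple enough to sketch in a few lines of Python. The speed multipliers follow the book's terminology; the helper function itself is my own illustration, not Hanson's:

```python
# Speed multipliers relative to a human-rate em, per the book's terminology.
SPEED = {"micro-em": 1e-6, "milli-em": 1e-3, "em": 1.0, "kilo-em": 1e3, "mega-em": 1e6}

def objective_days(speed, subjective_years):
    """Objective days that elapse while an em running at `speed` times
    human rate experiences `subjective_years` of subjective time."""
    return subjective_years * 365.25 / speed

century_kilo = objective_days(SPEED["kilo-em"], 100)  # about 36.5 days: "about a month"
century_mega = objective_days(SPEED["mega-em"], 100)  # about 0.037 days: under an hour

print(round(century_kilo, 1), round(century_mega * 24 * 60, 1))  # days, minutes
```

Running the numbers confirms the book's rules of thumb: a kilo-em lives a subjective century in about 36.5 objective days, and a mega-em in just under 53 objective minutes.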

What about humans? The economics of the emergence of an em society will be interesting. Initially, humans will own everything, but as the em society takes off and begins to run at least a thousand times faster than humans, with a population in the trillions, it can be expected to create wealth at a rate never before experienced. The economic doubling time of industrial civilisation is about 15 years. In an em society, the doubling time may be as short as 18 months, and potentially much shorter. In such a situation, the vast majority of wealth will be within the em world, and humans will be unable to compete. Humans will essentially be retirees, with their needs and wants easily funded from the proceeds of their investments in initially creating the world the ems inhabit. One might worry about the ems turning upon the humans and choosing to dispense with them but, as the author notes, industrial societies have not done this with their own retirees, despite the financial burden of supporting them, which is far greater than will be the case for ems supporting human retirees.
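To get a feel for how dramatic the gap between those doubling times is, one can convert each into an equivalent annual growth rate (a back-of-the-envelope sketch; the 15-year and 18-month figures are the ones quoted above, and the function name is mine):

```python
def annual_growth_rate(doubling_time_years):
    """Annual growth rate implied by a given doubling time,
    solving (1 + r) ** doubling_time_years == 2 for r."""
    return 2.0 ** (1.0 / doubling_time_years) - 1.0

human_rate = annual_growth_rate(15.0)   # industrial era: about 4.7% per year
em_rate = annual_growth_rate(1.5)       # em era: about 59% per year

# Over a single decade, an economy doubling every 18 months grows about
# a hundredfold, while one doubling every 15 years grows by about 60%.
em_decade_multiple = 2.0 ** (10.0 / 1.5)
human_decade_multiple = 2.0 ** (10.0 / 15.0)
```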

The economics of the age of em will be unusual. The fact that an em, in the prime of life, can be copied at almost no cost will mean that the supply of labour, even the most skilled and specialised, will be essentially unlimited. This will drive the compensation for labour down to near the subsistence level, where subsistence is defined as the resources needed to run the em. Since it costs no more to create a copy of a CEO or a computer technology research scientist than of a janitor, there will be a great flattening of pay scales, all settling near subsistence. But since most ems will live mostly in virtual reality, subsistence need not mean penury: most of their needs and wants will not be physical, and will cost little or nothing to provide. Wouldn't it be ironic if the much-feared “robot revolution” ended up solving the problem of “income inequality”? Ems may have a limited useful lifetime to the extent they inherit the human characteristic of the brain having its greatest plasticity in youth and becoming increasingly fixed in its ways with age, and consequently less able to innovate and be creative. The author explores how ems may view death (which for an em means being archived and never re-instantiated) when there are myriad other copies in existence and new ones being spawned all the time, and how ems may choose to retire at very low speed, with minimal resource requirements, and watch the future play out at a thousand or more times the rate a human can.

This is a challenging and often disturbing look at a possible future which, strange as it may seem, violates no known law of science and toward which several areas of research are converging today. The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don't know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems. The author is inordinately fond of enumerations. Consider this one, from chapter 27:

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.

 Permalink

White, Rowland. Into the Black. New York: Touchstone, 2016. ISBN 978-1-5011-2362-7.
On April 12, 1981, coincidentally exactly twenty years after Yuri Gagarin became the first man to orbit the Earth in Vostok 1, the United States launched one of the most ambitious and risky manned space flights ever attempted. The flight of Space Shuttle Orbiter Columbia on its first mission, STS-1, would be the first time a manned spacecraft was launched with a crew aboard on its very first flight. (All earlier spacecraft were tested in unmanned flights before putting a crew at risk.) It would also be the first manned spacecraft powered by solid rocket boosters which, once lit, could not be shut down but had to be allowed to burn out. In addition, it would be the first flight test of the new Space Shuttle Main Engines, the most advanced and highest-performance rocket engines ever built, which had a record of exploding when tested on the ground. The shuttle would be the first space vehicle to fly back from space using wings and control surfaces to steer to a pinpoint landing. Instead of a one-shot ablative heat shield, the shuttle was covered by fragile silica tiles and reinforced carbon-carbon composite to protect its aluminium structure from reentry heating which, without thermal protection, would melt it in seconds. When returning to Earth, the shuttle would have to maneuver in a hypersonic flight regime in which no vehicle had ever flown before, then transition to supersonic and finally subsonic flight before landing. The crew would not control the shuttle directly, but fly it through redundant flight control computers which had never been tested in flight. Although the orbiter was equipped with ejection seats for the first four test flights, they could only be used in a small part of the flight envelope: for most of the mission everything simply had to work correctly for the ship and crew to return safely. Main engine start—ignition of the solid rocket boosters—and liftoff!

Even before the goal of landing on the Moon had been accomplished, it was apparent to NASA management that no national consensus existed to continue funding a manned space program at the level of Apollo. Indeed, in 1966, NASA's budget reached a peak which, as a fraction of the federal budget, has never been equalled. The Saturn V rocket was ideal for lunar landing missions but, expended with each flight, it was so expensive to build and operate as to be unaffordable for the suggested follow-on missions. After building fifteen Saturn V flight vehicles, only thirteen of which ever flew, Saturn V production was curtailed. With the realisation that the “cost is no object” days of Apollo were at an end, NASA turned its priorities to reducing the cost of space flight, and returned to a concept envisioned by Wernher von Braun in the 1950s: a reusable space ship.

You don't have to be a rocket scientist or rocket engineer to appreciate the advantages of reusability. How much would an airline ticket cost if they threw away the airliner at the end of every flight? If space flight could move to an airline model, where after each mission one simply refueled the ship, performed routine maintenance, and flew again, it might be possible to reduce the cost of delivering payload into space by a factor of ten or more. But flying into space is much more difficult than atmospheric flight. With the technologies and fuels available in the 1960s (and today), it appeared next to impossible to build a launcher which could get to orbit with just a single stage (and even if one managed to accomplish it, its payload would be negligible). That meant any practical design would require a large booster stage and a smaller second stage which would go into orbit, perform the mission, then return.

Initial design concepts envisioned a very large (comparable to a Boeing 747) winged booster to which the orbiter would be attached. At launch, the booster would lift itself and the orbiter from the pad and accelerate to a high velocity and altitude where the orbiter would separate and use its own engines and fuel to continue to orbit. After separation, the booster would fire its engines to boost back toward the launch site, where it would glide to a landing on a runway. At the end of its mission, the orbiter would fire its engines to de-orbit, then reenter the atmosphere and glide to a landing. Everything would be reusable. For the next mission, the booster and orbiter would be re-mated, refuelled, and readied for launch.

Such a design had the promise of dramatically reducing costs and increasing flight rate. But it was evident from the start that such a concept would be very expensive to develop. Two separate manned spacecraft would be required, one (the booster) much larger than any built before, and the second (the orbiter) having to operate in space and survive reentry without discarding components. The orbiter's fuel tanks would be bulky, and make it difficult to find room for the payload and, when empty during reentry, hard to reinforce against the stresses they would encounter. Engineers believed all these challenges could be met with an Apollo era budget, but with no prospect of such funds becoming available, a more modest design was the only alternative.

Over a multitude of design iterations, the now-familiar architecture of the space shuttle emerged as the only one which could meet the mission requirements and fit within the schedule and budget constraints. Gone was the flyback booster, and with it full reusability. Two solid rocket boosters would be used instead, jettisoned when they burned out, to parachute into the ocean and be fished out by boats for refurbishment and reuse. The orbiter would not carry the fuel for its main engines. Instead, it was mounted on the side of a large external fuel tank which, upon reaching orbit, would be discarded and burn up in the atmosphere. Only the orbiter, with its crew and payload, would return to Earth for a runway landing. Each mission would require either new or refurbished solid rocket boosters, a new external fuel tank, and the orbiter.

The mission requirements which drove the design were not those NASA would have chosen for the shuttle were the choice theirs alone. The only way NASA could “sell” the shuttle to the president and congress was to present it as a replacement for all existing expendable launch vehicles. That would assure a flight rate sufficient to achieve the economies of scale required to drive down costs and reduce the cost of launch for military and commercial satellite payloads as well as NASA missions. But that meant the shuttle had to accommodate the large and heavy reconnaissance satellites which had been launched on Titan rockets. This required a huge payload bay (15 feet wide by 59 feet long) and a payload to low Earth orbit of 60,000 pounds. Further Air Force requirements dictated a large cross-range (ability to land at destinations far from the orbital ground track), which in turn required a hot and fast reentry very demanding on the thermal protection system.

The shuttle represented, in a way, the unification of NASA with the Air Force's own manned space ambitions. Ever since the start of the space age, the Air Force had sought a way to develop its own manned military space capability. Every time it managed to get a program approved (first Dyna-Soar, then the Manned Orbiting Laboratory), budget considerations and Pentagon politics resulted in its cancellation, orphaning a corps of highly-qualified military astronauts with nothing to fly. Many of these pilots would join the NASA astronaut corps in 1969 and become the backbone of the early shuttle program when they finally began to fly more than a decade later.

All seemed well on board. The main engines shut down. The external fuel tank was jettisoned. Columbia was in orbit. Now weightless, commander John Young and pilot Bob Crippen immediately turned to the flight plan, filled with tasks and tests of the orbiter's systems. One of their first jobs was to open the payload bay doors. The shuttle carried no payload on this first flight, but only when the doors were open could the radiators that cooled the shuttle's systems be deployed. Without the radiators, an emergency return to Earth would be required lest electronics be damaged by overheating. The doors and radiators functioned flawlessly, but with the doors open Young and Crippen saw a disturbing sight. Several of the thermal protection tiles on the pods containing the shuttle's maneuvering engines were missing, apparently lost during the ascent to orbit. Those tiles were there for a reason: without them the heat of reentry could melt the aluminium structure they protected, leading to disaster. They reported the missing tiles to mission control, adding that none of the other tiles they could see from windows in the crew compartment appeared to be missing.

The tiles had been a major headache during development of the shuttle. They had to be custom fabricated, carefully applied by hand, and were prone to falling off for no discernible reason. They were extremely fragile, and could even be damaged by raindrops. Over the years, NASA struggled with these problems, patiently finding and testing solutions to each of them. When STS-1 launched, they were confident the tile problems were behind them. What the crew saw when those payload bay doors opened was the last thing NASA wanted to see. A team was set to analysing the consequences of the missing tiles on the engine pods, and quickly reported back that they should pose no problem to a safe return. The pods were protected from the most severe heating during reentry by the belly of the orbiter, and the small number of missing tiles would not affect the aerodynamics of the orbiter in flight.

But if those tiles were missing, mightn't other tiles also have been lost? In particular, what about those tiles on the underside of the orbiter which bore the brunt of the heating? If some of them were missing, the structure of the shuttle might burn through and the vehicle and crew would be lost. There was no way for the crew to inspect the underside of the orbiter. It couldn't be seen from the crew cabin, and there was no way to conduct an EVA to examine it. Might there be other, shall we say, national technical means of inspecting the shuttle in orbit? Now STS-1 truly ventured into the black, a story never told until many years after the mission and documented thoroughly for a popular audience here for the first time.

In 1981, ground-based surveillance of satellites in orbit was rudimentary. Two Department of Defense facilities, in Hawaii and Florida, normally used to image Soviet and Chinese satellites, were now tasked to try to image Columbia in orbit. This was a daunting task: the shuttle was in a low orbit, which meant waiting until an orbital pass would cause it to pass above one of the telescopes. It would be moving rapidly so there would be only seconds to lock on and track the target. The shuttle would have to be oriented so its belly was aimed toward the telescope. Complicating the problem, the belly tiles were black, so there was little contrast against the black of space. And finally, the weather had to cooperate: without a perfectly clear sky, there was no hope of obtaining a usable image. Several attempts were made, all unsuccessful.

But there were even deeper black assets. The National Reconnaissance Office (whose very existence was a secret at the time) had begun to operate the KH-11 KENNEN digital imaging satellites in the 1970s. Unlike earlier spysats, which exposed film and returned it to the Earth for processing and interpretation, the KH-11 had a digital camera and the ability to transmit imagery to ground stations shortly after it was captured. There were few things so secret in 1981 as the existence and capabilities of the KH-11. Among the people briefed in on this above top secret program were the NASA astronauts who had previously been assigned to the Manned Orbiting Laboratory program which was, in fact, a manned reconnaissance satellite with capabilities comparable to the KH-11.

Dancing around classification, compartmentalisation, bureaucratic silos, need to know, and other barriers, people who understood what was at stake made it happen. The flight plan was rewritten so that Columbia was pointed in the right direction at the right time; the KH-11 was programmed for the extraordinarily difficult task of photographing one satellite from another while their closing velocities were kilometres per second; and the imagery was relayed to the ground and delivered to the NASA people who needed it without the months of security clearance processing that would normally entail. The shuttle was a key national security asset. It was to launch all reconnaissance satellites in the future. Reagan was in the White House. They made it happen. When the time came for Columbia to come home, the very few people who mattered in NASA knew that, however many other things they had to worry about, the tiles on the belly were not among them.

(How different it was in 2003 when the same Columbia suffered a strike on its left wing from foam shed from the external fuel tank. A thoroughly feckless and bureaucratised NASA rejected requests to ask for reconnaissance satellite imagery which, with two decades of technological improvement, would have almost certainly revealed the damage to the leading edge which doomed the orbiter and crew. Their reason: “We can't do anything about it anyway.” This is incorrect. For a fictional account of a rescue, based upon the report [PDF, scroll to page 173] of the Columbia Accident Investigation Board, see Launch on Need [February 2012].)

This is a masterful telling of a gripping story by one of the most accomplished of aerospace journalists. Rowland White is the author of Vulcan 607 (May 2010), the definitive account of the Royal Air Force raid on the airport in the Falkland Islands in 1982. Incorporating extensive interviews with people who were there, and drawing upon sources which remained classified until long after the completion of the mission, this is a detailed account of one of the most consequential and least appreciated missions in U.S. manned space history, and it reads like a techno-thriller.

 Permalink

Wolfram, Stephen. Idea Makers. Champaign, IL: Wolfram Media, 2016. ISBN 978-1-57955-003-5.
I first met Stephen Wolfram in 1988. Within minutes, I knew I was in the presence of an extraordinary mind, combined with intellectual ambition the likes of which I had never before encountered. He explained that he was working on a system to automate much of the tedious work of mathematics—both pure and applied—with the goal of changing how science and mathematics were done forever. I not only thought that was ambitious; I thought it was crazy. But then Stephen went and launched Mathematica and, twenty-eight years and eleven major releases later, his goal has largely been achieved. At the centre of a vast ecosystem of add-ons developed by his company, Wolfram Research, and third parties, it has become one of the tools of choice for scientists, mathematicians, and engineers in numerous fields.

Unlike many people who founded software companies, Wolfram never took his company public nor sold an interest in it to a larger company. This has allowed him to maintain complete control over the architecture, strategy, and goals of the company and its products. After the success of Mathematica, many other people, and I, learned to listen when Stephen, in his soft-spoken way, proclaims what seems initially to be an outrageously ambitious goal. In the 1990s, he set to work to invent A New Kind of Science: the book was published in 2002, and shows how simple computational systems can produce the kind of complexity observed in nature, and how experimental exploration of computational spaces provides a new path to discovery unlike that of traditional mathematics and science. Then he said he was going to integrate all of the knowledge of science and technology into a “big data” language which would enable knowledge-based computing and the discovery of new facts and relationships by simple queries short enough to tweet. Wolfram Alpha was launched in 2009, and Wolfram Language in 2013. So when Stephen speaks of goals such as curating all of pure mathematics or discovering a simple computational model for fundamental physics, I take him seriously.

Here we have a less ambitious but very interesting Wolfram project. Collected from essays posted on his blog and elsewhere, he examines the work of innovators in science, mathematics, and industry. The subjects of these profiles include many people the author met in his career, as well as historical figures he tries to get to know through their work. As always, he brings his own unique perspective to the project and often has insights you'll not see elsewhere. The people profiled are:

Many of these names are well known, while others may elicit a “who?” Solomon Golomb, among other achievements, was a pioneer in the development of linear-feedback shift registers, essential to technologies such as GPS, mobile phones, and error detection in digital communications. Wolfram argues that Golomb's innovation may be the most-used mathematical algorithm in history. It's a delight to meet the pioneer.
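For readers curious what a linear-feedback shift register actually does, here is a minimal sketch (my own illustration, not from the book): each step shifts the register and feeds back the XOR (parity) of a few "tap" bits. With well-chosen taps, the register cycles through every nonzero state before repeating, which is what makes the output useful as a pseudo-random spreading or error-detecting code.

```python
def lfsr_period(seed, taps, nbits):
    """Run a Fibonacci-style LFSR until its state returns to the seed,
    and return the cycle length. `taps` are bit indices (0 = LSB) whose
    XOR forms the feedback bit shifted in at the top of the register."""
    state = seed
    period = 0
    while True:
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
        period += 1
        if state == seed:
            return period

# With taps at bits 0 and 1, a 4-bit register visits all 2**4 - 1 = 15
# nonzero states before repeating: a maximal-length sequence.
```

Scaled up, the same idea with a 10-bit maximal-length register repeats only every 1023 steps, which is why such tiny circuits underlie GPS ranging codes and error detection in digital communications.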

This short (250 page) book provides personal perspectives on people whose ideas have contributed to the intellectual landscape we share. You may find the author's perspectives unusual, but they're always interesting, enlightening, and well worth reading.

 Permalink

October 2016

Penrose, Roger. Fashion, Faith, and Fantasy. Princeton: Princeton University Press, 2016. ISBN 978-0-691-11979-3.
Sir Roger Penrose is one of the most distinguished theoretical physicists and mathematicians working today. He is known for his work on general relativity, including the Penrose-Hawking Singularity Theorems, which were a central part of the renaissance of general relativity and the acceptance of the physical reality of black holes in the 1960s and 1970s. Penrose has contributed to cosmology; argued that consciousness is not a computational process; speculated that quantum mechanical processes are involved in consciousness; proposed experimental tests to determine whether gravitation is involved in the apparent mysteries of quantum mechanics; and explored the extraordinarily special conditions which appear to have obtained at the time of the Big Bang, suggesting a model which might explain them. In mathematics, he discovered Penrose tiling, a non-periodic tessellation of the plane which exhibits five-fold symmetry and which was used (without his permission) in the design of toilet paper.

“Fashion, Faith, and Fantasy” seems an odd title for a book about the fundamental physics of the universe by one of the most eminent researchers in the field. But, as the author describes in mathematical detail (which some readers may find forbidding), these all-too-human characteristics play a part in what researchers may present to the public as a dispassionate, entirely rational search for truth, unsullied by such enthusiasms. While researchers in fundamental physics are rarely blinded to experimental evidence by fashion, faith, and fantasy, their choice of areas to explore, their willingness to pursue intellectual topics far from any mooring in experiment, and their tendency to indulge in flights of theoretical fancy (for which there is no direct evidence whatsoever and which may not be possible to test, even in principle) do, the author contends, affect the direction of research, to its detriment.

To illustrate the power of fashion, Penrose discusses string theory, which has occupied the attentions of theorists for four decades and been described by some of its practitioners as “the only game in town”. (This is a piñata which has been whacked by others, including Peter Woit in Not Even Wrong [June 2006] and Lee Smolin in The Trouble with Physics [September 2006].) Unlike other critiques, which concentrate mostly on the failure of string theory to produce a single testable prediction, and the failure of experimentalists to find any evidence supporting its claims (for example, the existence of supersymmetric particles), Penrose concentrates on what he argues is a mathematical flaw in the foundations of string theory, which those pursuing it sweep under the rug, assuming that when a final theory is formulated (when?), its solution will be evident. Central to Penrose's argument is that string theories are formulated in a space with more dimensions than the three we perceive ourselves to inhabit. Depending upon the version of string theory, it may invoke 10, 11, or 26 dimensions. Why don't we observe these extra dimensions? Well, the string theorists argue that they're all rolled up into a size so tiny that none of our experiments can detect any of their effects. To which Penrose responds, “not so fast”: these extra dimensions, however many, will vastly increase the functional freedom of the theory and lead to a mathematical instability which will cause the theory to blow up much like the ultraviolet catastrophe which was a key motivation for the creation of the original version of quantum theory. String theorists put forward arguments why quantum effects may similarly avoid the catastrophe Penrose describes, but he dismisses them as no more than arm waving. If you want to understand the functional freedom argument in detail, you're just going to have to read the book. Explaining it here would require a ten kiloword review, so I shall not attempt it.

As an example of faith, Penrose cites quantum mechanics (and its extension, compatible with special relativity, quantum field theory), and in particular the notion that the theory applies to all interactions in the universe (excepting gravitation), regardless of scale. Quantum mechanics is a towering achievement of twentieth century physics, and no theory has been tested in so many ways over so many years, without the discovery of the slightest discrepancy between its predictions and experimental results. But all of these tests have been in the world of the very small: from subatomic particles to molecules of modest size. Quantum theory, however, prescribes no limit on the scale of systems to which it is applicable. Taking it to its logical limit, we arrive at apparent absurdities such as Schrödinger's cat, which is both alive and dead until somebody opens the box and looks inside. This then leads to further speculations such as the many-worlds interpretation, where the universe splits every time a quantum event happens, with every possible outcome occurring in a multitude of parallel universes.

Penrose suggests we take a deep breath, step back, and look at what's going on in quantum mechanics at the mathematical level. We have two very different processes: one, which he calls U, is the linear evolution of the wave function “when nobody's looking”. The other is R, the reduction of the wave function into one of a number of discrete states when a measurement is made. What's a measurement? Well, there are another ten thousand papers to read. The author suggests that extrapolating a theory of the very small (only tested on tiny objects under very special conditions) to cats, human observers, planets, and the universe is an unwarranted leap of faith. Sure, quantum mechanics makes exquisitely precise predictions confirmed by experiment, but why should we assume it is correct when applied to domains which are dozens of orders of magnitude larger and more complicated? It's not physics, but faith.

Finally we come to cosmology: the origin of the universe we inhabit, and in particular the theory of the big bang and cosmic inflation, which Penrose considers an example of fantasy. Again, he turns to the mathematical underpinnings of the theory. Why is there an arrow of time? Why, if all of the laws of microscopic physics are reversible in time, can we so easily detect when a film of some real-world process (for example, scrambling an egg) is run backward? He argues (with mathematical rigour I shall gloss over here) that this is due to the extraordinarily improbable state in which our universe began at the time of the big bang. While the cosmic background radiation appears to be thermalised and thus in a state of very high entropy, the smoothness of the radiation (uniformity of temperature, which corresponds to a uniform distribution of mass-energy) is, when gravity is taken into account, a state of very low entropy which is the starting point that explains the arrow of time we observe.

When the first precision measurements of the background radiation were made, several deep mysteries became immediately apparent. How could regions which, given their observed separation on the sky and the finite speed of light, could never have been in causal contact, have arrived at such a uniform temperature? Why was the global curvature of the universe so close to flat? (If you run time backward, this appeared to require a fine-tuning of mind-boggling precision in the early universe.) And finally, why weren't there primordial magnetic monopoles everywhere? The most commonly accepted view is that these problems are resolved by cosmic inflation: a process which occurred just after the moment of creation and before what we usually call the big bang, which expanded the universe by a breathtaking factor and, by that expansion, smoothed out any irregularities in the initial state of the universe and yielded the uniformity we observe wherever we look. Again: “not so fast.”

As Penrose describes, inflation (which he finds dubious due to the lack of a plausible theory of what caused it and resulted in the state we observe today) explains what we observe in the cosmic background radiation, but it does nothing to solve the mystery of why the distribution of mass-energy in the universe was so uniform or, in other words, why the gravitational degrees of freedom in the universe were not excited. He then goes on to examine what he argues are even more fantastic theories including an infinite number of parallel universes, forever beyond our ability to observe.

In a final chapter, Penrose presents his own speculations on how fashion, faith, and fantasy might be replaced by physics: theories which, although they may be completely wrong, can at least be tested in the foreseeable future and discarded if they disagree with experiment or investigated further if not excluded by the results. He suggests that a small effort investigating twistor theory might be a prudent hedge against the fashionable pursuit of string theory, that experimental tests of objective reduction of the wave function due to gravitational effects be investigated as an alternative to the faith that quantum mechanics applies at all scales, and that his conformal cyclic cosmology might provide clues to the special conditions at the big bang which the fantasy of inflation theory cannot. (Penrose's cosmological theory is discussed in detail in Cycles of Time [October 2011]). Eleven mathematical appendices provide an introduction to concepts used in the main text which may be unfamiliar to some readers.

A special treat is the author's hand-drawn illustrations. In addition to being a mathematician, physicist, and master of scientific explanation and the English language, he is an inspired artist.

The Kindle edition is excellent, with the table of contents, notes, cross-references, and index linked just as they should be.

 Permalink

Florence, Ronald. The Perfect Machine. New York: Harper Perennial, 1994. ISBN 978-0-06-092670-0.
George Ellery Hale was the son of a wealthy architect and engineer who made his fortune installing passenger elevators in the skyscrapers which began to define the skyline of Chicago as it rebuilt from the great fire of 1871. From early in his life, the young Hale was fascinated by astronomy, building his own telescope at age 14. Later he would study astronomy at MIT, the Harvard College Observatory, and in Berlin. Solar astronomy was his first interest, and he invented new instruments for observing the Sun and discovered the magnetic fields associated with sunspots.

His work led him into an academic career, culminating in his appointment as a full professor at the University of Chicago in 1897. He was co-founder and first editor of the Astrophysical Journal, published continuously since 1895. Hale's greatest goal was to move astronomy from its largely dry concentration on cataloguing stars and measuring planetary positions into the new science of astrophysics: using observational techniques such as spectroscopy to study the composition of stars and nebulæ and, by comparing them, begin to deduce their origin, evolution, and the mechanisms that made them shine. His own work on solar astronomy pointed the way to this, but the Sun was just one star. Imagine how much more could be learned when the Sun was compared in detail to the myriad stars visible through a telescope.

But observing the spectra of stars was a light-hungry process, especially with the insensitive photographic material available around the turn of the 20th century. Obtaining the spectrum of all but a few of the brightest stars would require exposure times so long that, even spread over multiple nights, they would exceed the endurance of observers guiding the small telescopes which then predominated. Thus, Hale became interested in larger telescopes, and the quest for ever more light from the distant universe would occupy him for the rest of his life.

First, he promoted the construction of a 40 inch (102 cm) refractor telescope, accessible from Chicago at a dark sky site in Wisconsin. At that time, universities, governments, and private foundations did not fund such instruments. Hale persuaded Chicago streetcar baron Charles T. Yerkes to pick up the tab, and Yerkes Observatory was born. Its 40 inch refractor remains the largest telescope of that kind used for astronomy to this day.

There are two principal types of astronomical telescopes. A refracting telescope has a convex lens at one end of a tube, which focuses incoming light to an eyepiece or photographic plate at the other end. A reflecting telescope has a concave mirror at the bottom of the tube, the top end of which is open. Light enters the tube and falls upon the mirror, which reflects and focuses it upward, where it can be picked off by another mirror, directly focused on a sensor, or bounced back down through a hole in the main mirror. There are a multitude of variations in the design of both types of telescopes, but the fundamental principles of refraction and reflection remain the same.

Refractors have the advantages of simplicity and a sealed tube assembly which keeps out dust and moisture and excludes air currents that might distort the image. But, because light passes through the lens, they must use clear glass free of bubbles, strain lines, or other irregularities that might interfere with forming a perfect focus. Further, refractors tend to focus different colours of light at different distances. This makes them less suitable for use in spectroscopy. Colour performance can be improved by making lenses of two or more different kinds of glass (an achromatic or apochromatic design), but this further increases the complexity, difficulty, and cost of manufacturing the lens. At the time of the construction of the Yerkes refractor, it was believed the limit had been reached for the refractor design and, indeed, no larger astronomical refractor has been built since.

In a reflector, the mirror (usually made of glass or some glass-like substance) serves only to support an extremely thin (on the order of a thousand atoms) layer of reflective material (originally silver, but now usually aluminium). The light never passes through the glass at all, so as long as it is sufficiently uniform to take on and hold the desired shape, and free of imperfections (such as cracks or bubbles) that would make the reflecting surface rough, the optical qualities of the glass don't matter at all. Best of all, a mirror reflects all colours of light in precisely the same way, so it is ideal for spectrometry (and, later, colour photography).

With the Yerkes refractor in operation, it was natural that Hale would turn to a reflector in his quest for ever more light. He persuaded his father to put up the money to order a 60 inch (1.5 metre) glass disc from France, and, when it arrived months later, set one of his co-workers at Yerkes, George W. Ritchey, to begin grinding the disc into a mirror. All of this was on speculation: there were no funds to build a telescope, an observatory to house it, nor to acquire a site for the observatory. The persistent and persuasive Hale approached the recently-founded Carnegie Institution, and eventually secured grants to build the telescope and observatory on Mount Wilson in California, along with an optical laboratory in nearby Pasadena. Components for the telescope had to be carried up the crude trail to the top of the mountain on the backs of mules, donkeys, or men until a new road allowing the use of tractors was built. In 1908 the sixty inch telescope began operation, and its optics and mechanics performed superbly. Astronomers could see much deeper into the heavens. But still, Hale was not satisfied.

Even before the sixty inch entered service, he approached John D. Hooker, a Los Angeles hardware merchant, for seed money to fund the casting of a mirror blank for an 84 inch telescope, requesting US$ 25,000 (around US$ 600,000 today). Discussing the project, Hooker and Hale agreed not to settle for 84, but rather to go for 100 inches (2.5 metres). Hooker pledged US$ 45,000 to the project, with Hale promising the telescope would be the largest in the world and bear Hooker's name. Once again, an order for the disc was placed with the Saint-Gobain glassworks in France, the only one with experience in such large glass castings. Problems began almost immediately. Saint-Gobain did not have the capacity to melt the quantity of glass required (four and a half tons) all at once: they would have to fill the mould in three successive pours. A massive piece of cast glass (101 inches in diameter and 13 inches thick) cannot simply be allowed to cool naturally after being poured. If that were to occur, shrinkage of the outer parts of the disc as it cooled while the inside still remained hot would almost certainly cause the disc to fracture and, even if it didn't, would create strains within the disc that would render it incapable of holding the precise figure (curvature) required by the mirror. Instead, the disc must be placed in an annealing oven, where the temperature is reduced slowly over a period of time, allowing the internal stresses to be released. So massive was the 100 inch disc that it took a full year to anneal.

When the disc finally arrived in Pasadena, Hale and Ritchey were dismayed by what they saw. There were sheets of bubbles between the three layers of poured glass, indicating they had not fused. There was evidence the process of annealing had caused the internal structure of the glass to begin to break down. It seemed unlikely a suitable mirror could be made from the disc. After extended negotiations, Saint-Gobain decided to try again, casting a replacement disc at no additional cost. Months later, they reported the second disc had broken during annealing, and it was likely no better disc could be produced. Hale decided to proceed with the original disc. Patiently, he made the case to the Carnegie Institution to fund the telescope and observatory on Mount Wilson. It would not be until November 1917, eleven years after the order was placed for the first disc, that the mirror was completed, installed in the massive new telescope, and ready for astronomers to gaze through the eyepiece for the first time. The telescope was aimed at brilliant Jupiter.

Observers were horrified. Rather than a sharp image, Jupiter was smeared out over multiple overlapping images, as if multiple mirrors had been poorly aimed into the eyepiece. Although the mirror had tested to specification in the optical shop, when placed in the telescope and aimed at the sky, it appeared to be useless for astronomical work. Recalling that the temperature had fallen rapidly from day to night, the observers adjourned until three in the morning in the hope that as the mirror continued to cool down to the nighttime temperature, it would perform better. Indeed, in the early morning hours, the images were superb. The mirror, made of ordinary plate glass, was subject to thermal expansion as its temperature changed. It was later determined that the massive disc took twenty-four hours to cool ten degrees Celsius. Rapid changes in temperature on the mountain could cause the mirror to misbehave until its temperature stabilised. Observers would have to cope with its temperamental nature throughout the decades it served astronomical research.

As the 1920s progressed, driven in large part by work done on the 100 inch Hooker telescope on Mount Wilson, astronomical research became increasingly focused on the “nebulæ”, many of which the great telescope had revealed were “island universes”, equal in size to our own Milky Way and immensely distant. Many were so far away and faint that they appeared as only the barest smudges of light even in long exposures through the 100 inch. Clearly, a larger telescope was in order. As always, Hale was interested in the challenge. As early as 1921, he had requested a preliminary design for a three hundred inch (7.6 metre) instrument. Even based on early sketches, it was clear the magnitude of the project would surpass any scientific instrument previously contemplated: estimates came to around US$ 12 million (US$ 165 million today). This was before the era of “big science”. In the mid 1920s, when Hale produced this estimate, one of the most prestigious scientific institutions in the world, the Cavendish Laboratory at Cambridge, had an annual research budget of less than £ 1000 (around US$ 66,500 today). Sums in the millions and academic science simply didn't fit into the same mind, unless it happened to be that of George Ellery Hale. Using his connections, he approached people involved with foundations endowed by the Rockefeller fortune. Rockefeller and Carnegie were competitors in philanthropy: perhaps a Rockefeller institution might be interested in outdoing the renown Carnegie had obtained by funding the largest telescope in the world. Slowly, and with an informality which seems unimaginable today, Hale negotiated with the Rockefeller foundation, with the brash new university in Pasadena which now called itself Caltech, and with a prickly Carnegie foundation which saw the new project as an attempt to poach its painstakingly-assembled technical and scientific staff on Mount Wilson.
By mid-1928 a deal was in hand: a Rockefeller grant for US$ 6 million (US$ 85 million today) to design and build a 200 inch (5 metre) telescope. Caltech was to raise the funds for an endowment to maintain and operate the instrument once it was completed. Big science had arrived.

In discussions with the Rockefeller foundation, Hale had agreed on a 200 inch aperture, deciding the leap to an instrument three times the size of the largest existing telescope and the budget that would require was too great. Even so, there were tremendous technical challenges to be overcome. The 100 inch demonstrated that plate glass had reached or exceeded its limits. The problems of distortion due to temperature changes only increase with the size of a mirror, and while the 100 inch was difficult to cope with, a 200 inch would be unusable, even if it could be somehow cast and annealed (with the latter process probably taking several years). Two promising alternatives were fused quartz and Pyrex borosilicate glass. Fused quartz has hardly any thermal expansion at all. Pyrex has about three times greater expansion than quartz, but still far less than plate glass.

Hale contracted with General Electric Company to produce a series of mirror blanks from fused quartz. GE's legendary inventor Elihu Thomson, second only in reputation to Thomas Edison, agreed to undertake the project. Troubles began almost immediately. Every attempt to get rid of bubbles in quartz, which was still very viscous even at extreme temperatures, failed. A new process, which involved spraying the surface of cast discs with silica passed through an oxy-hydrogen torch, was developed. It required machinery which, in operation, seemed to surpass visions of hellfire. To build up the coating on a 200 inch disc would require enough hydrogen to fill two Graf Zeppelins. And still, not a single suitable smaller disc had been produced from fused quartz.

In October 1929, just a year after the public announcement of the 200 inch telescope project, the U.S. stock market crashed and the economy began to slow into the great depression. Fortunately, the Rockefeller foundation invested very conservatively, and lost little in the market chaos, so the grant for the telescope project remained secure. The deepening depression and the accompanying deflation were a benefit to the effort because raw material and manufactured goods prices fell in terms of the grant's dollars, and industrial companies which might not have been interested in a one-off job like the telescope were hungry for any work that would help them meet their payroll and keep their workforce employed.

In 1931, after three years of failures, expenditures billed at manufacturing cost by GE which had consumed more than one tenth the entire budget of the project, and estimates far beyond that for the final mirror, Hale and the project directors decided to pull the plug on GE and fused quartz. Turning to the alternative of Pyrex, Corning glassworks bid between US$ 150,000 and 300,000 for the main disc and five smaller auxiliary discs. Pyrex was already in production at industrial scale and used to make household goods and laboratory glassware in the millions, so Corning foresaw few problems casting the telescope discs. Scaling things up is never a simple process, however, and Corning encountered problems with failures in the moulds, glass contamination, and even a flood during the annealing process before the big disc was ready for delivery.

Getting it from the factory in New York to the optical shop in California was an epic event and media circus. Schools let out so students could go down to the railroad tracks and watch the “giant eye” on its special train make its way across the country. On April 10, 1936, the disc arrived at the optical shop and work began to turn it into a mirror.

With the disc in hand, work on the telescope structure and observatory could begin in earnest. After an extended period of investigation, Palomar Mountain had been selected as the site for the great telescope. A rustic construction camp was built to begin preliminary work. Meanwhile, Westinghouse began to fabricate components of the telescope mounting, which would include the largest bearing ever manufactured.

But everything depended on the mirror. Without it, there would be no telescope, and things were not going well in the optical shop. As the disc was ground flat preliminary to being shaped into the mirror profile, flaws continued to appear on its surface. None of the earlier smaller discs had contained such defects. Could it be possible that, eight years into the project, the disc would be found defective and everything would have to start over? The analysis concluded that the glass had become contaminated as it was poured, and that the deeper the mirror was ground down the fewer flaws would be discovered. There was nothing to do but hope for the best and begin.

Few jobs demand the patience of the optical craftsman. The great disc was not ready for its first optical test until September 1938. Then began a process of polishing and figuring, with weekly tests of the mirror. In August 1941, the mirror was judged to have the proper focal length and spherical profile. But the mirror needed to be a parabola, not a sphere, so this was just the start of an even more exacting process of deepening the curve. In January 1942, the mirror reached the desired parabola to within one wavelength of light. But it needed to be much better than that. The U.S. was now at war. The uncompleted mirror was packed away “for the duration”. The optical shop turned to war work.

In December 1945, work resumed on the mirror. In October 1947, it was pronounced finished and ready to install in the telescope. Eleven and a half years had elapsed since the grinding machine started to work on the disc. Shipping the mirror from Pasadena to the mountain was another epic journey, this time by highway. Finally, all the pieces were in place. Now the hard part began.

The glass disc was the correct shape, but it wouldn't be a mirror until coated with a thin layer of aluminium. This was a process which had been done many times before with smaller mirrors, but as always size matters, and a host of problems had to be solved before a suitable coating was obtained. Now the mirror could be installed in the telescope and tested further. Problem after problem with the mounting system, suspension, and telescope drive had to be found and fixed. Testing a mirror in its telescope against a star is much more demanding than any optical shop test, and from the start of 1949, an iterative process of testing, tweaking, and re-testing began. A problem with astigmatism in the mirror was fixed by attaching four fisherman's scales from a hardware store to its back (they are still there). In October 1949, the telescope was declared finished and ready for use by astronomers. Twenty-one years had elapsed since the project began. George Ellery Hale died in 1938, less than ten years into the great work. But it was recognised as his monument, and at its dedication was named the “Hale Telescope.”

The inauguration of the Hale Telescope marked the end of the rapid increase in the aperture of observatory telescopes which had characterised the first half of the twentieth century, largely through the efforts of Hale. It would remain the largest telescope in operation until 1975, when the Soviet six metre BTA-6 went into operation. That instrument, however, was essentially an exercise in Cold War one-upmanship, and never achieved its scientific objectives. The Hale would not truly be surpassed before the ten metre Keck I telescope began observations in 1993, 44 years after the Hale. The Hale Telescope remains in active use today, performing observations impossible when it was inaugurated thanks to electronics undreamt of in 1949.

This is an epic recounting of a grand project, the dawn of “big science”, and the construction of instruments which revolutionised how we see our place in the cosmos. There is far more detail than I have recounted even in this long essay, and much insight into how a large, complicated project, undertaken with little grasp of the technical challenges to be overcome, can be achieved through patient toil sustained by belief in the objective.

A PBS documentary, The Journey to Palomar, is based upon this book. It is available on DVD or a variety of streaming services.

In the Kindle edition, footnotes which appear in the text are just asterisks, which are almost impossible to select on touch screen devices without missing and accidentally turning the page. In addition, the index is just a useless list of terms and page numbers which have nothing to do with the Kindle document, which lacks real page numbers. Disastrously, the illustrations which appear in the print edition are omitted: for a project which was extensively documented in photographs, drawings, and motion pictures, this is inexcusable.

 Permalink

't Hooft, Gerard and Stefan Vandoren. Time in Powers of Ten. Singapore: World Scientific, 2014. ISBN 978-981-4489-81-2.

Phenomena in the universe take place over scales ranging from the unimaginably small to the breathtakingly large. The classic film, Powers of Ten, produced by Charles and Ray Eames, and the companion book explore the universe at length scales in powers of ten: from subatomic particles to the most distant visible galaxies. If we take the smallest meaningful distance to be the Planck length, around 10−35 metres, and the diameter of the observable universe as around 1027 metres, then the ratio of the largest to smallest distances which make sense to speak of is around 1062. Another way to express this is to answer the question, “How big is the universe in Planck lengths?” as “Mega, mega, yotta, yotta big!”
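The 10<sup>62</sup> figure is easy to check with a couple of lines of Python, using the same round-number values for the Planck length and the diameter of the observable universe quoted above:

```python
from math import log10

planck_length = 1e-35      # metres, approximate Planck length
universe_diameter = 1e27   # metres, approximate observable universe

# Ratio of the largest to the smallest meaningful length scale,
# expressed in orders of magnitude
orders = log10(universe_diameter / planck_length)
print(round(orders))  # 62
```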

But length isn't the only way to express the scale of the universe. In the present book, the authors examine the time intervals at which phenomena occur or recur. Starting with one second, they take steps of powers of ten (10, 100, 1000, 10000, etc.), arriving eventually at the distant future of the universe, after all the stars have burned out and even black holes begin to disappear. Then, in the second part of the volume, they begin at the Planck time, 5×10−44 seconds, the shortest unit of time about which we can speak with our present understanding of physics, and again progress by powers of ten until arriving back at an interval of one second.

Intervals of time can denote a variety of different phenomena, which are colour coded in the text. A period of time can mean an epoch in the history of the universe, measured from an event such as the Big Bang or the present; a distance defined by how far light travels in that interval; a recurring event, such as the orbital period of a planet or the frequency of light or sound; or the half-life of a randomly occurring event such as the decay of a subatomic particle or atomic nucleus.

Because the universe is still in its youth, the range of time intervals discussed here is much larger than those when considering length scales. From the Planck time of 5×10−44 seconds to the lifetime of the kind of black hole produced by a supernova explosion, 1074 seconds, the range of intervals discussed spans 118 orders of magnitude. If we include the evaporation through Hawking radiation of the massive black holes at the centres of galaxies, the range is expanded to 143 orders of magnitude. Obviously, discussions of the distant future of the universe are highly speculative, since in those vast depths of time physical processes which we have never observed due to their extreme rarity may dominate the evolution of the universe.
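As a sanity check on the quoted span of 118 orders of magnitude, using the round numbers given in the review (the 10<sup>74</sup> second figure is the black hole lifetime cited above):

```python
from math import log10

planck_time = 5e-44          # seconds, shortest meaningful interval
stellar_bh_lifetime = 1e74   # seconds, evaporation of a stellar-mass black hole

orders = log10(stellar_bh_lifetime / planck_time)
print(f"{orders:.1f}")  # 117.3, which rounds up to the 118 orders cited
```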

Among the fascinating facts you'll discover is that many straightforward physical processes take place over an enormous range of time intervals. Consider radioactive decay. It is possible, using a particle accelerator, to assemble a nucleus of hydrogen-7, an isotope of hydrogen with a single proton and six neutrons. But if you make one, don't grow too fond of it, because it will decay into tritium and four neutrons with a half-life of 23×10−24 seconds, an interval usually associated with events involving unstable subatomic particles. At the other extreme, a nucleus of tellurium-128 decays into xenon with a half-life of 7×1031 seconds (2.2×1024 years), more than 160 trillion times the present age of the universe.
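The unit conversion is straightforward to verify; the Julian year of 365.25 days is assumed below, but at this precision any reasonable definition of the year gives the same answer:

```python
# Convert the tellurium-128 half-life from seconds to years and
# compare it with the approximate present age of the universe.
half_life_s = 7e31                      # seconds, Te-128 half-life
seconds_per_year = 365.25 * 24 * 3600   # Julian year
age_universe_yr = 1.38e10               # years, approximate

half_life_yr = half_life_s / seconds_per_year
ratio = half_life_yr / age_universe_yr
print(f"{half_life_yr:.1e} years, {ratio:.1e} times the age of the universe")
```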

While the very short and very long are the domain of physics, intermediate time scales are rich with events in geology, biology, and human history. These are explored, along with how we have come to know their chronology. You can open the book to almost any page and come across a fascinating story. Have you ever heard of the ocean quahog (Arctica islandica)? They're clams, and the oldest known has been determined to be 507 years old, born around 1499 and dredged up off the coast of Iceland in 2006. People eat them.

Or did you know that if you perform carbon-14 dating on grass growing next to a highway, the lab will report that it's tens of thousands of years old? Why? Because the grass has incorporated carbon from the CO2 produced by burning fossil fuels which are millions of years old and contain little or no carbon-14.
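The size of the effect follows from the standard radiocarbon decay law. The 90% fossil-carbon fraction in this sketch is a made-up illustrative number, not a measurement from the book:

```python
from math import log

T_HALF = 5730.0  # carbon-14 half-life in years

def apparent_age(c14_fraction):
    """Radiocarbon age, in years, for a sample whose C-14 activity
    is c14_fraction of the modern atmospheric level."""
    return -T_HALF / log(2) * log(c14_fraction)

# Hypothetical sample whose carbon is 90% fossil-derived, leaving
# only 10% of the normal C-14: it dates as roughly 19,000 years old.
print(round(apparent_age(0.10)))
```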

This is a fascinating read, and one which uses the framework of time intervals to acquaint you with a wide variety of sciences, each inviting further exploration. The writing is accessible to the general reader, young adult and older. The individual entries are short and stand alone—if you don't understand something or aren't interested in a topic, just skip to the next. There are abundant colour illustrations and diagrams.

Author Gerard 't Hooft won the 1999 Nobel Prize in Physics for his work on the quantum mechanics of the electroweak interaction. The book was originally published in Dutch in the Netherlands in 2011. The English translation was done by 't Hooft's daughter, Saskia Eisberg-'t Hooft. The translation is fine, but there are a few turns of phrase which will seem odd to an English mother tongue reader. For example, matter in the early universe is said to “clot” under the influence of gravity; the common English term for this is “clump”. This is a translation, not a re-write: there are a number of references to people, places, and historical events which will be familiar to Dutch readers but less so to those in the Anglosphere. In the Kindle edition notes, cross-references, the table of contents, and the index are all properly linked, and the illustrations are reproduced well.

 Permalink

Wilson, Cody. Come and Take It. New York: Gallery Books, 2016. ISBN 978-1-4767-7826-6.
Cody Wilson is the founder of Defense Distributed, best known for producing the Liberator single-shot pistol, which can be produced largely by additive manufacturing (“3D printing”) from polymer material. The culmination of the Wiki Weapon project, the Liberator, whose plans were freely released on the Internet, demonstrated that the antiquated organs of the state which thought they could control the dissemination of simple objects and abridge the inborn right of human beings to defend themselves have been, like so many other institutions dating from the era of continental-scale railroad empires, transcended by the free flow of information and the spontaneous collaboration among like-minded individuals made possible by the Internet. The Liberator is a highly visible milestone in the fusion of the world of bits (information) with the world of atoms: things. Earlier computer technologies put the tools to produce books, artwork, photography, music, and motion pictures into the hands of creative individuals around the world, completely bypassing the sclerotic gatekeepers in those media whose offerings had become all too safe and predictable, and who never dared to challenge the economic and political structures in which they were embedded.

Now this is beginning to happen with physical artifacts. Additive manufacturing—building up a structure by adding material based upon a digital model of the desired object—is still in its infancy. The materials which can be used by readily-affordable 3D printers are mostly various kinds of plastics, which are limited in structural strength and thermal and electrical properties, and resolution has not yet reached that achievable by other means of precision manufacturing. Advanced additive manufacturing technologies, such as various forms of metal sintering, allow use of a wider variety of materials including high-performance metal alloys, but while finding applications in the aerospace industry, are currently priced out of the reach of individuals.

But if there's one thing we've learned from the microelectronics and personal computer revolutions since the 1970s, it's that what's scoffed at as a toy today is often at the centre of tomorrow's industrial revolution and devolution of the means of production (as somebody said, once upon a time) into the hands of individuals who will use it in ways incumbent industries never imagined. The first laser printer I used in 1973 was about the size of a sport-utility vehicle and cost more than a million dollars. Within ten years, a laser printer was something I could lift and carry up a flight of stairs, and buy for less than two thousand dollars. A few years later, laser and advanced inkjet printers were so good and so inexpensive people complained more about the cost of toner and ink than the printers themselves.

I believe this is where we are today with mass-market additive manufacturing. We're still in an era comparable to the personal computer world prior to the introduction of the IBM PC in 1981: early adopters tend to be dedicated hobbyists such as members of the “maker subculture”, the available hardware is expensive and limited in its capabilities, and evolution is so fast that it's hard to keep up with everything that's happening. But just as with personal computers, it is in this formative stage that the foundations are being laid for the mass adoption of the technology in the future.

This era of what I've come to call “personal manufacturing” will do to artifacts what digital technology and the Internet did to books, music, and motion pictures. What will be of value is not the artifact (book, CD, or DVD), but rather the information it embodies. So it will be with personal manufacturing. Anybody with the design file for an object and access to a printer that works with material suitable for its fabrication will be able to make as many of that object as they wish, whenever they want, for nothing more than the cost of the raw material and the energy consumed by the printer. Before this century is out, I believe these personal manufacturing appliances will be able to make anything, ushering in the age of atomically precise manufacturing and the era of Radical Abundance (August 2013), the most fundamental change in the economic organisation of society since the industrial revolution.

But that is then, and this book is about now, or the recent past. The author, who describes himself as an anarchist (although I find his views rather more heterodox than other anarchists of my acquaintance), sees technologies such as additive manufacturing and Bitcoin as ways not so much to defeat the means of control of the state and the industries who do its bidding, but to render them irrelevant and obsolete. Let them continue to legislate in their fancy marble buildings, draw their plans for passive consumers in their boardrooms, and manufacture funny money they don't even bother to print any more in their temples of finance. Lovers of liberty and those who cherish the creativity that makes us human will be elsewhere, making our own future with tools we personally understand and control.

Including guns—if you believe the most fundamental human right is the right to one's own life, then any infringement upon one's ability to defend that life and the liberty that makes it worth living is an attempt by the state to reduce the citizen to the station of a serf: dependent upon the state for his or her very life. The Liberator is hardly a practical weapon: it is a single-shot pistol firing the .380 ACP round and, because of the fragile polymer material from which it is manufactured, often literally a single-shot weapon: failing after one or at most a few shots. Manufacturing it requires an additive manufacturing machine substantially more capable and expensive than those generally used by hobbyists, and post-printing steps described in Part XIV which are rarely mentioned in media coverage. Not all components are 3D printed: part of the receiver is a block of steel cut with a laser (the block is not functional; it is only there to comply with the legal requirement that the weapon set off a metal detector). But it is as a proof of concept that the Liberator has fulfilled its mission. It has demonstrated that even with today's primitive technology, access to firearms can no longer be restricted by the state, and that crude attempts to control access to design and manufacturing information, as documented in the book, will be no more effective than any other attempt to block the flow of information across the Internet.

This book is the author's personal story of the creation of the first 3D printed pistol, and of his journey from law student to pioneer in using this new technology in the interest of individual liberty and, along the way, becoming something of a celebrity, dubbed by Wired magazine “one of the most dangerous men in the world”. But the book is much more than that. Wilson thinks like a philosopher and writes like a poet. He describes a new material for 3D printing:

In this new material I saw another confirmation. Its advent was like the signature of some elemental arcanum, complicit with forces not at all interested in human affairs. Carbomorph. Born from incomplete reactions and destructive distillation. From tar and pitch and heavy oils, the black ichor that pulsed thermonous through the arteries of the very earth.

On the “Makers”:

This insistence on the lightness and whimsy of farce. The romantic fetish and nostalgia, to see your work as instantly lived memorabilia. The event was modeled on Renaissance performance. This was a crowd of actors playing historical figures. A living charade meant to dislocate and obscure their moment with adolescent novelty. The neckbeard demiurge sees himself keeling in the throes of assembly. In walks the problem of the political and he hisses like the mathematician at Syracuse: “Just don't molest my baubles!”

But nobody here truly meant to give you a revolution. “Making” was just another way of selling you your own socialization. Yes, the props were period and we had kept the whole discourse of traditional production, but this was parody to better hide the mechanism.

We were “making together,” and “making for good” according to a ritual under the signs of labor. And now I knew this was all apolitical on purpose. The only goal was that you become normalized. The Makers had on their hands a Last Man's revolution whose effeminate mascots could lead only state-sanctioned pep rallies for feel-good disruption.

The old factory was still there, just elevated to the image of society itself. You could buy Production's acrylic coffins, but in these new machines was the germ of the old productivism. Dead labor, that vampire, would still glamour the living.

This book recounts the history of the 3D printed pistol, the people who made it happen, and why they did what they did. It is recent history, seen from the inside during the deployment of a potentially revolutionary technology, and it shows the way things actually happen: nobody completely understands what is going on and everybody is making things up as they go along. But if the promise of this technology allows the forces of liberty and creativity to prevail over the grey homogenisation of the state and the powers that serve it, this is a book which will be read many years from now by those who wish to understand how, where, and when it all began.

 Permalink

Salisbury, Harrison E. The 900 Days. New York: Da Capo Press, [1969, 1985] 2003. ISBN 978-0-306-81298-9.
On June 22, 1941, Nazi Germany, without provocation or warning, violated its non-aggression pact with the Soviet Union and invaded from the west. The German invasion force was divided into three army groups. Army Group North, commanded by Field Marshal Ritter von Leeb, was charged with advancing through and securing the Baltic states, then proceeding to take or destroy the city of Leningrad. Army Group Centre was to invade Byelorussia and take Smolensk, then advance to Moscow. After Army Group North had reduced Leningrad, it was to detach much of its force for the battle for Moscow. Army Group South's objective was to conquer the Ukraine, capture Kiev, and then seize the oil fields of the Caucasus.

The invasion took the Soviet government and military completely by surprise, despite abundant warnings from foreign governments of German troops massing along its western border and reports from Soviet spies indicating an invasion was imminent. A German invasion did not figure in Stalin's world view and, in the age of the Great Terror, nobody had the standing or courage to challenge Stalin. Indeed, Stalin rejected proposals to strengthen defenses on the western frontiers for fear of provoking the Germans. The Soviet military was in near-complete disarray. The purges which began in the 1930s had wiped out not only most of the senior commanders, but ravaged the officer corps as a whole. By 1941, only 7 percent of Red Army officers had any higher military education and just 37 percent had any military instruction at all, even at a high school level.

Thus, it wasn't a surprise that the initial German offensive was even more successful than optimistic German estimates. Many Soviet aircraft were destroyed on the ground, and German air strikes deep into Soviet territory disrupted communications in the battle area and with senior commanders in Moscow. Stalin appeared to be paralysed by the shock; he did not address the Soviet people until the 3rd of July, a week and a half after the invasion, by which time large areas of Soviet territory had already been lost.

Army Group North's advance toward Leningrad was so rapid that the Soviets could hardly set up new defensive lines before they were overrun by German forces. The administration in Leningrad mobilised a million civilians (out of an initial population of around three million) to build fortifications around the city and on the approaches to it. By August, German forces were within artillery range of the city and shells began to fall throughout Leningrad. On August 21st, Hitler issued a directive giving priority to the encirclement of Leningrad and linking up with the advancing Finnish army over the capture of Moscow, so Army Group North would receive what it needed for the task. When the Germans captured the town of Mga on August 30, the last rail link between Leningrad and the rest of Russia was severed. Henceforth, the only way in or out of Leningrad was across Lake Ladoga, running the gauntlet of German ships and mines, or by air. The siege of Leningrad had begun. The battle for the city was now in the hands of the Germans' most potent allies: Generals Hunger, Cold, and Terror.

The civil authorities were as ill-prepared for what was to come as the military commanders had been to halt the German advance before it invested the city. The dire situation was compounded when, on September 8th, a German air raid burned to the ground the city's principal food warehouses, built of wood and packed next to one another, destroying all the reserves stored there. An inventory taken after the raid revealed that, at normal rates of consumption, only between two and three weeks' supply of food remained for the population. Rationing had already been imposed, and rations were immediately cut to 500 grams of bread per day for workers and 300 grams for office employees and children. This was to be just the start. The population of encircled Leningrad, civilian and military, totalled around 3.4 million.

While military events and the actions of the city government are described, most of the book recounts the stories of people who lived through the siege. The accounts are horrific, with the previously unimaginable becoming the quotidian experience of residents of the city. The frozen bodies of victims of starvation were often stacked like cordwood outside apartment buildings or hauled on children's sleds to common graves. Very quickly, Leningrad became exclusively a city of humans: dogs, cats, and pigeons disappeared, eaten as food supplies dwindled. Even rats vanished. While some were doubtless eaten, most seemed to have deserted the starving metropolis for the front, where food was more abundant. Cannibalism was not just rumoured, but documented, and parents were careful not to let children out of their sight.

Even as privation reached extreme levels (at one point, the daily bread ration for workers fell to 300 grams and for children and dependents 125 grams—and that is when bread was available at all), Stalin's secret police remained up and running, and people were arrested in the middle of the night for suspicion of espionage, contacts with foreigners, shirking work, or for no reason at all. The citizenry observed that the NKVD seemed suspiciously well-fed throughout the famine, and they wielded the power of life and death when denial of a ration card was a sentence of death as certain as a bullet in the back of the head.

In the brutal first winter of 1941–1942, Leningrad was sustained largely by truck traffic over the “Road of Life”, constructed over the ice of frozen Lake Ladoga. Operating from November through April, and subject to attack by German artillery and aircraft, the road carried thousands of tons of supplies, civilian and military, into the city, while the wounded and noncombatants were evacuated over it. The road was rebuilt during the following winter and continued to be the city's lifeline.

The siege of Leningrad was unparalleled in the history of urban sieges. Counting from the fall of Shlisselburg on September 8, 1941, which severed the last land route to the city, until the lifting of the siege on January 27, 1944, the siege lasted 872 days. By comparison, the siege of Paris in 1870–1871 lasted just 121 days. The siege of Vicksburg in the American war of secession lasted 47 days and involved only 4000 civilians. Total civilian casualties during the siege of Paris were less than those in Leningrad every two or three winter days. Estimates of total deaths in Leningrad due to starvation, disease, and enemy action vary widely. Official Soviet sources tried to minimise the toll to avoid recriminations among Leningraders who felt they had been abandoned to their fate. The author concludes that starvation deaths in Leningrad and the surrounding areas were on the order of one million, with a total of all deaths, civilian and military, between 1.3 and 1.5 million.
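The 872-day figure is easy to check with simple date arithmetic; a quick sketch in Python, counting inclusively (both the first and last days of the siege), as the traditional figure does:

```python
from datetime import date

# Land blockade closed September 8, 1941; siege lifted January 27, 1944.
start = date(1941, 9, 8)
end = date(1944, 1, 27)

# The traditional count includes both endpoint days.
siege_days = (end - start).days + 1
print(siege_days)  # 872
```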

The author, then a foreign correspondent for United Press, was one of the first reporters to visit Leningrad after the lifting of the siege. The people he met then and their accounts of life during the siege were unfiltered by the edifice of Soviet propaganda later erected over life in besieged Leningrad. On this and subsequent visits, he was able to reconstruct the narrative, both at the level of policy and strategy and of individual human stories, which makes up this book. After its initial publication in 1969, the book was fiercely attacked in the Soviet press, with Pravda publishing a full page denunciation. Salisbury's meticulously documented account of the lack of preparedness, military blunders largely due to Stalin's destruction of the officer corps in his purges, and bungling by the Communist Party administration of the city did not fit with the story of heroic Leningrad standing against the Nazi onslaught in the official Soviet narrative. The book was banned in the Soviet Union and copies brought by tourists seized by customs. The author, who had been Moscow bureau chief for The New York Times from 1949 through 1954, was for years denied a visa to visit the Soviet Union. It was only after the collapse of the Soviet Union that the work became generally available in Russia.

I read the Kindle edition, which is a shameful and dismaying travesty of this classic and important work. It's not a cheap knock-off: the electronic edition is issued by the publisher at a price (at this writing) of US$ 13, only a few dollars less than the paperback edition. It appears to have been created by optical character recognition of a print edition without the most rudimentary copy editing of the result of the scan. Hundreds of words which were hyphenated at the ends of lines in the print edition occur here with embedded hyphens. The numbers ‘0’ and ‘1’ are confused with the letters ‘o’ and ‘i’ in numerous places. Somebody appears to have accidentally done a global replace of the letters “charge” with “chargé”, both in stand-alone words and within longer words. Embarrassingly, for a book with “900” in its title, the number often appears in the text as “poo”. Poetry is typeset with one character per line. I found more than four hundred mark-ups in the text, which even a cursory examination by a copy editor would have revealed. The index is just a list of searchable items, not linked to their references in the text. I have compiled a list of my mark-ups to this text, which I make available to readers and the publisher, should the latter wish to redeem this electronic edition by correcting them. I applaud publishers who make valuable books from their back-lists available in electronic form. But respect your customers! When you charge us almost as much as the paperback and deliver a slapdash product which clearly hasn't been read by anybody on your staff before it reached my eyes, I'm going to savage it. Consider it savaged. Should the publisher supplant this regrettable edition with one worthy of its content, I will remove this notice.

 Permalink

Brennan, Gerald. Zero Phase. Chicago: Tortoise Books, [2013, 2015]. ISBN 978-0-9860922-2-0.
On April 14, 1970, while Apollo 13 was en route to the Moon, around 56 hours after launch and at a distance of 321,860 km from Earth, a liquid oxygen tank in the service module exploded during a routine stir of its cryogenic contents. The explosion did severe damage to the service module bay in which the tank was installed, most critically to the other oxygen tank, which quickly vented its contents into space. Deprived of oxygen reactant, all three fuel cells, which provided electrical power and water to the spacecraft, shut down. The command module had only its batteries and limited on-board oxygen and water supplies, which were reserved for re-entry and landing.

Fortunately, the lunar module was still docked to the command module and not damaged by the explosion. While mission planners had envisioned scenarios in which the lunar module might serve as a lifeboat for the crew, none of these had imagined the complete loss of the service module, nor had detailed procedures been worked out for how to control, navigate, maneuver, and provide life support for the crew using only the resources of the lunar module. In one of its finest moments, NASA rose to the challenge, and through improvisation and against the inexorable deadlines set by orbital mechanics, brought the crew home.

It may seem odd to call a crew lucky who barely escaped an ordeal like Apollo 13 with their lives and lost the opportunity to complete a mission for which they'd trained for years, but as many observed at the time, it was indeed a stroke of luck that the explosion occurred on the way to the Moon, not while two of the astronauts were on the surface or on the way home. In either of those cases, with an explosion like that in Apollo 13, there would be no lunar module with the resources to sustain them on the return journey; they would have died in lunar orbit or before reaching the Earth. The post-flight investigation of the accident concluded that the oxygen tank explosion was due to errors in processing the tank on the ground. It could have exploded at any time during the flight. Suppose it didn't explode until after Apollo 13's lunar module Aquarius had landed on the Moon?

That is the premise for this novella (68 pages, around 20,000 words), first in the author's “Altered Space” series of alternative histories of the cold war space race. Now the astronauts and Mission Control are presented with an entirely different set of circumstances and options. Will it be possible to bring the crew home?

The story is told in first person by mission commander James Lovell, interleaving personal reminiscences with mission events. The description of spacecraft operations reads very much like a post-mission debriefing, with NASA jargon and acronyms present in abundance. It all seemed authentic to me, but I didn't bother fact checking it in detail because the actual James Lovell read the manuscript and gave it his endorsement and recommendation. This is a short but engaging look at an episode in space history which never happened, but very well might have.

The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

Casey, Doug and John Hunt. Speculator. Charlottesville, VA: HighGround Books, 2016. ISBN 978-1-63158-047-5.
Doug Casey has been a leading voice for sound money, individual liberty, and rolling back coercive and intrusive government since the 1970s. Unlike some more utopian advocates of free markets and free people, Casey has always taken a thoroughly pragmatic approach in his recommendations. If governments are bent on debasing their paper money, how can individual investors protect themselves from erosion of their hard-earned assets and, ideally, profit from the situation? If governments are intent on reducing their citizens to serfdom, how can those who see what is coming not only avoid that fate by getting assets out of the grasp of those who would confiscate them, but also protect themselves by obtaining one or more additional nationalities and being ready to pull up stakes in favour of better alternatives around the world? His 1976 book, The International Man, is a classic (although now dated) about the practical steps people can take to escape before it's too late from countries in the fast lane on the road to serfdom. I credit this book, which I read around 1978, with much of the trajectory my life has followed since. (The forbidding prices quoted for used copies of The International Man are regrettable but an indication of the wisdom therein; it has become a collector's item.)

Over his entire career, Casey has always been provocative, seeing opportunities well before they come onto the radar of mainstream analysts and making recommendations that seem crazy until, several years later, they pay off. Recently, he has been advising young people seeking fortune and adventure to go to Africa instead of college. Now, in collaboration with medical doctor and novelist John Hunt, he has written a thriller about a young man, Charles Knight, who heeds that advice. Knight dropped out of high school and was never tempted by college once he discovered he could learn far more about what he wanted to know on his own in libraries than by spending endless tedious hours in boring classrooms listening to uninspiring and often ignorant teachers drone on endlessly in instruction aimed at the slowest students in the class, admixed with indoctrination in the repellent ideology of the collectivist slavers.

Charles has taken a flyer in a gold mining stock called B-F Explorations, traded on the Vancouver Stock Exchange, the closest thing to the Wild West that exists in financial markets today. Many stocks listed in Vancouver are “exploration companies”, which means in practice they've secured mineral rights on some basis (often conditional upon meeting various milestones) to a piece of property, usually located in some remote, God-forsaken, and dangerous part of the world, which may or may not (the latter being the way to bet) contain gold and, if it does, might have sufficient concentration and ease of extraction that mining it will be profitable at the anticipated price of gold when it is eventually developed. Often, the assets of one of these companies will be nothing more than a limited-term lease on a piece of land which geologists believe may contain subterranean rock formations similar to those in which gold has been found elsewhere. These so-called “junior mining companies” are the longest of long shots, and their stocks often sell for pennies a share. Investors burned by these stocks warn, “A junior mining company takes your money and their dream, and turns it into your dream and their money.”

Why, then, do people buy these stocks? Every now and then one of these exploration companies happens upon a deposit of gold which is profitable to exploit, and when that occurs the return to investors can be enormous: a hundred to one or more. First, the exploration company will undertake drilling to establish whether gold is present and, if so, the size and grade of the ore body. As promising assay results are published, the stock may begin to move up in the market, attracting “momentum investors” who chase rising trends. The exit strategy for a junior gold stock is almost always to be acquired by one of the “majors”—the large gold mining companies with the financial, human, and infrastructure resources required to develop the find into a working mine. As large, easily-mined gold resources have largely already been exploited, the major miners must always be on the lookout for new properties to replace their existing mines as they are depleted. A major, after evaluating results from preliminary drilling by the exploration company, will often negotiate a contract which allows it to perform its own evaluation of the find which, if it confirms the early results, will be the basis for the acquisition of the junior company, whose shareholders will receive stock in the major worth many times their original investment.

Mark Twain is reputed to have said, “A gold mine is a hole in the ground with a liar at its entrance.” Everybody in the gold business—explorers, major miners, and wise investors—knows that the only results which can be relied upon are those which are independently verified by reputable observers who follow the entire chain from drilling to laboratory analysis, with evaluation by trusted resource geologists of inferences made from the drilling results.

Charles Knight had bought 15,000 shares of B-F stock on a tip from a broker in Vancouver for pennies per share, and seen it climb to more than $150 per share. His modest investment had grown to a paper profit of more than two million dollars which, if rumours about the extent of the discovery proved to be true, might increase to far more than that. Having taken a flyer, he decided to catch a flight to Africa and see the site with his own eyes. The B-F exploration concession is located in the fictional West African country of Gondwana (where Liberia appears on the map in the real world; author John Hunt has done charitable medical work in that country). Gondwana has experienced the violence so common in sub-Saharan Africa, but, largely due to exhaustion, is relatively stable and safe (by African standards) at present. Charles and other investors are regaled by company personnel with descriptions of the potential of the find, a new kind of gold deposit where nanoparticles of gold are deposited in a limestone matrix. The gold, while invisible to the human eye and even through a light microscope, can be detected chemically and should be easy to separate when mining begins. Estimates of the size of the deposit range from huge to stupendous: perhaps as large as three times the world's annual production of gold. If this proves to be the case, B-F stock is cheap even at its current elevated price.
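The scale of Charles's paper profit follows from simple arithmetic; a sketch in Python, assuming a hypothetical purchase price of ten cents a share (the text says only “pennies per share”):

```python
shares = 15_000
buy_price = 0.10       # assumed: "pennies per share"
market_price = 150.00  # "more than $150 per share"

# Paper profit: unrealised gain at the current market price.
paper_profit = shares * (market_price - buy_price)
print(f"${paper_profit:,.0f}")  # $2,248,500
```

At any price above roughly $133 a share, the position is worth over two million dollars, matching the figure in the story.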

Charles is neither a geologist nor a chemist, but something seems “off” to him—maybe it's the “nano”: like “cyber”, it's a sticker on the prospectus warning investors “bullshit inside”. He makes the acquaintance of Xander Winn, a Dutch resource investor, true international man, and permanent tourist, who has seen it all before and shares, based upon his experience, the disquiet that Charles perceived by instinct. Together, they decide to investigate further and quickly find themselves engaged in a dangerous endeavour where not only are the financial stakes high but their very lives may be at risk. But with risk comes opportunity.

Meanwhile, back in the United States, Charles has come into the sights of an IRS agent named Sabina Heidel, whose raw ambition is tempered by neither morality nor the law. As she begins to dig into his activities and plans for his B-F investment, she comes to see him as a trophy which will launch her career in government. Sabina is the mirror image of Charles: as he is learning how to become productive, she is mastering the art of extracting the fruits of the labour of others and gaining power over their lives by deception, manipulation, and intimidation.

Along with the adventure and high-stakes financial operations, Charles learns a great deal about how the world really works, and how in a time when coercive governments and their funny money and confiscatory taxation have made most traditional investments a sucker's game, it is the speculator with an anarcho-capitalist outlook on the world who is best equipped to win. Charles also discovers that when governments and corporations employ coercion, violence, and fraud, what constitutes ethical behaviour on the part of an individual confronted with them is not necessarily easy to ascertain. While history demonstrates how easy it can be to start a war in Africa, Charles and Xander find themselves, almost alone, faced with the task of preventing one.

For those contemplating a life of adventure in Africa, the authors provide an unvarnished look at what one is getting into. There is opportunity there, but also rain, mud, bugs, endemic corruption, heat, tropical diseases, roads which can demolish all but the most robust vehicles, poverty, the occasional charismatic murderous warlord, and mercenaries; yet there are also many good and honest people, wealth just waiting to be created, and freedom from the soul-crushing welfare/warfare/nanny state which “developed” countries have allowed to metastasise within their borders. But it's never been easy for those seeking opportunity, adventure, riches, and even love; the rewards await the ambitious and intrepid.

I found this book not only a page-turning thriller, but also one of the most inspiring books I've read in some time. In many ways it reminds me of The Fountainhead, but is more satisfying because unlike Howard Roark in Ayn Rand's novel, whose principles were already in place from the first page, Charles Knight grows into his as the story unfolds, both from his own experiences and wisdom imparted by those he encounters. The description of the junior gold mining sector and the financial operations associated with speculation is absolutely accurate, informed by Doug Casey's lifetime of experience in the industry, and the education in free market principles and the virtues of entrepreneurship and speculation is an excellent starting point for those indoctrinated in collectivism who've never before encountered this viewpoint.

This is the first in what is planned to be a six volume series featuring Charles Knight, who, as he progresses through life, applies what he has learned to new situations, and continues to grow from his adventures. I eagerly anticipate the next episode.

Here is a Lew Rockwell interview with Doug Casey about the novel and the opportunities in Africa for the young and ambitious. The interview contains minor spoilers for this novel and forthcoming books in the series.

 Permalink

November 2016

Byrne, Gary J. and Grant M. Schmidt. Crisis of Character. New York: Center Street, 2016. ISBN 978-1-4555-6887-1.
After a four-year enlistment in the U.S. Air Force during which he served in the Air Force Security Police in assignments domestic and abroad, then subsequent employment on the production line at a Boeing plant in Pennsylvania, Gary Byrne applied to join the U.S. Secret Service Uniformed Division (SSUD). Unlike the plainclothes agents who protect senior minions of the state and the gumshoes who pursue those who print worthless paper money while not employed by the government, the uniformed division provides police-like security services at the White House, the Naval Observatory (residence of the Vice President), Treasury headquarters, and diplomatic missions in the imperial citadel on the Potomac. After pre-employment screening and a boot camp-like training program, he graduated in June 1991 and received his badge, emblazoned with the words “Worthy of Trust and Confidence”. This is presumably so that people who cross the path of these pistol-packing feds can take a close look at the badge to see whether it says “Worthy” or “Unworthy” and respond accordingly.

Immediately after graduation, he was assigned to the White House, where he learned the wisdom in the description of the job by his seniors, “You know what it's like to be in the Service? Go stand in a corner for four hours with a five-minute pee break and then go stand for four more hours.” (p. 22). He was initially assigned to the fence line, where he became acquainted with the rich panoply of humanity who hang out near, and occasionally try to jump, the barrier which divides the hoi polloi from their anointed rulers. Eventually he was assigned to positions within the White House and, during the 1992 presidential election campaign, began training for an assignment outside the Oval Office. As the campaign progressed, he was assigned to provide security at various events involving candidates Bush and Clinton.

When the Clinton administration took office in January 1993, the duties of the SSUD remained the same: “You elect 'em; we protect 'em”, but it quickly became apparent that the style of the new president and his entourage was nothing like that of their predecessors. Some were thoroughly professional and others were…not. Before long, it was evident one of the greatest “challenges” officers would face was “Evergreen”: the code name for first lady Hillary Clinton. One of the most feared phrases an SSUD officer on duty outside the Oval Office could hear squawked into his ear was “Evergreen moving toward West Wing”. Mrs Clinton would, at the slightest provocation, fly into rages, hurling vitriol at all within earshot, which, with her shrill and penetrating voice, was sniper rifle range. Sometimes it wasn't just invective that took flight. Byrne recounts how, in 1995, the first lady beaned the Leader of the Free World with a vase. Byrne wasn't on duty at the time, but the next day he saw the pieces of the vase in a box in the White House curator's office—and the president's impressive black eye. Welcome to Clinton World.

On the job in the West Wing, Officer Byrne saw staffers and interns come and go. One intern who showed up again and again, without good reason and seemingly probing every path of access to the president, was a certain Monica Lewinsky. He perceived her as “serious trouble”. Before long, it was apparent what was going on, and Secret Service personnel approached a Clinton staffer, dancing around the details. Monica was transferred to a position outside the White House. Problem solved—but not for long: Lewinsky reappeared in the West Wing, this time as a paid presidential staffer with the requisite security clearance. Problem solved, from the perspective of the president and his mistress.

Many people on the White House staff, not just the Secret Service, knew what was transpiring, and morale and respect for the office plummeted accordingly. Byrne took a post in the section responsible for tours of the executive mansion, and then transferred to the fresh air and untainted workplace environment of the Secret Service's training centre, where his goal was to become a firearms instructor. After his White House experience, a career of straight shooting had great appeal.

On January 17, 1998, the Drudge Report broke the story of Clinton's dalliances with Lewinsky, and Byrne knew this placid phase of his life was at an end. He describes what followed as the “mud drag”, in which Byrne found himself in a Kafkaesque ordeal which pitted investigators charged with getting to the bottom of the scandal and Clinton's lies regarding it against Byrne's duty to maintain the privacy of those he was charged to protect: they don't call it the Secret Service for nothing. This experience, and the inexorable workings of Pournelle's Iron Law, made employment in the SSUD increasingly intolerable, and in 2003 the author, like hundreds of other disillusioned Secret Service officers, quit and accepted a job as an Air Marshal.

The rest of the book describes Byrne's experiences in that service which, predictably, also manifests the blundering incompetence which is the defining characteristic of the U.S. federal government. He never reveals the central secret of that provider of feel-good security theatre (at an estimated cost of US$ 200 million per arrest): the vanishingly small probability a flight has an air marshal on board.

What to make of all this? Byrne certainly saw things, and heard about many more incidents (indeed, much of the book is second-hand accounts) which reveal the character, or lack thereof, of the Clintons and the toxic environment which was the Clinton White House. While recalling that era may be painful, perhaps it may help us avoid living through a replay. The author comes across as rather excitable and inclined to repeat stories he's heard without verifying them. For example, of his time in the Air Force, stationed in Turkey, he writes, “Arriving at Murtad, I learned that AFSP [Air Force Security Police] there had caught rogue Turkish officers trying to push an American F-104 Starfighter with a loaded [sic] nuke onto the flight line so they could steal a nuke and bomb Greece.” Is this even remotely plausible? U.S. nuclear weapons stationed on bases abroad have permissive action links which prevent them from being detonated without authorisation from the U.S. command authority. And just what would those “rogue Turkish officers” expect to happen after they nuked the Parthenon? Later he writes “I knew from my Air Force days that no one would even see an AC-130 gunship in the sky—it'd be too high.” An AC-130 is big, and in combat missions it usually operates at 7000 feet or below; you can easily see and hear it. He states that “I knew that a B-17 dual-engine prop plane had once crashed into the Empire State Building on a foggy night.” Well, the B-17 was a four engine bomber, but that doesn't matter because it was actually a two engine B-25 that flew into the Manhattan landmark in 1945.

This is an occasionally interesting but flawed memoir whose take-away message for this reader was the not terribly surprising insight that what U.S. taxpayers get for the trillions they send to the crooked kakistocracy in Washington is mostly blundering, bungling, corruption, and incompetence. The only way to make it worse is to put a Clinton in charge.

 Permalink

Osborn, Stephanie. Burnout. Kingsport, TN: Twilight Times Books, 2009. ISBN 978-1-60619-200-9.
At the conclusion of its STS-281 mission, during re-entry across the southern U.S. toward a landing at Kennedy Space Center, space shuttle orbiter Atlantis breaks up. Debris falls in the Gulf of Mexico. There are no survivors. Prior to the disaster Mission Control received no telemetry or communications from the crew indicating any kind of problem. Determination of the probable cause will have to await reconstruction of the orbiter from the recovered debris and analysis of the on-board flight operations recorder if and when it is recovered. Astronaut Emmett “Crash” Murphy, whose friend “Jet” Jackson was commander of the mission, is appointed a member of the investigation, focusing on the entry phase.

Hardly has the investigation begun when Murphy begins to discover that something is seriously amiss. Unexplained damage to the orbiter's structure is discovered and then the person who pointed it out to him is killed in a freak accident and the component disappears from the reconstruction hangar. The autopsies of the crew reveal unexplained discrepancies with their medical records. The recorder's tape of cockpit conversation inexplicably goes blank at the moment the re-entry begins, before any anomaly occurred. As he begins to dig deeper, he becomes the target of forces unknown who appear willing to murder anybody who looks too closely into the details of the tragedy.

This is the starting point for an adventure and mystery which sometimes seems not just like an episode of “The X-Files”, but two or more seasons packed into one novel. We have a radio astronomer tracking down a mysterious signal from the heavens; a shadowy group of fixers pursuing those who ask too many questions or learn too much; Area 51; a vast underground base and tunnel system which has been kept entirely secret; strange goings-on in the New Mexico desert in the summer of 1947; a cabal of senior military officers from around the world, including putative adversaries; Native American and Australian aborigine legends; hot sex scenes; a near-omniscient and -omnipotent Australian spook agency; reverse-engineering captured technologies; secret aerospace craft with “impossible” propulsion technology; and—wait for it— …but you can guess, can't you?

The author is a veteran of more than twenty years in civilian and military space programs, including working as a payload flight controller in Mission Control on shuttle missions. Characters associated with NASA speak in the acronym-laden jargon of their clan, which is explained in a glossary at the end. This was the author's first novel. It was essentially complete when the space shuttle orbiter Columbia was lost in a re-entry accident in 2003 which superficially resembles that which befalls Atlantis here. In the aftermath of the disaster, she decided to put the manuscript aside for a while, eventually finishing it in 2006, with almost no changes due to what had been learned from the Columbia accident investigation. It was finally published in 2009.

Since then she has retired from the space business and published almost two dozen novels, works of nonfiction, and contributions to other works. Her Displaced Detective (January 2015) series is a masterful and highly entertaining addition to the Sherlock Holmes literature. She has become known as a prolific and talented writer, working in multiple genres. Everybody has to start somewhere, and it's not unusual for authors' first outings not to come up to the standard of those written after they hit their stride. That is the case here. Veteran editors, reading a manuscript by a first time author, often counsel, “There's way too much going on here. Focus on one or two central themes and stretch the rest out over your next five or six books.” That was my reaction to this novel. It's not awful, by any means, but it lacks the polish and compelling narrative of her subsequent work.

I read the Kindle edition which, at this writing, is a bargain at less than US$ 1. The production values of the book are mediocre. It looks like a typewritten manuscript turned directly into a book. Body copy is set ragged right, and typewriter conventions are used throughout: straight quote marks instead of opening and closing quotes, two adjacent hyphens instead of em dashes, and four adjacent centred asterisks used as section breaks. I don't know if the typography is improved in the paperback version; I'm not about to spend twenty bucks to find out.

 Permalink

Gilder, George. The Scandal of Money. Washington: Regnery Publishing, 2016. ISBN 978-1-62157-575-7.
There is something seriously wrong with the global economy and the financial system upon which it is founded. The nature of the problem may not be apparent to the average person (and indeed, many so-called “experts” fail to grasp what is going on), but the symptoms are obvious. Real (after inflation) income for the majority of working people has stagnated for decades. The economy is built upon a pyramid of debt: sovereign (government), corporate, and personal, which nobody really believes is ever going to be repaid. The young, who once worked their way through college in entry-level jobs, now graduate with crushing student debts which amount to indentured servitude for the most productive years of their lives. Financial markets, once a place where productive enterprises could raise capital for their businesses by selling shares in the company or interest-bearing debt, now seem to have become a vast global casino, where gambling on the relative values of paper money issued by various countries dwarfs genuine economic activity: in 2013, the Bank for International Settlements estimated these “foreign exchange” transactions to be around US$ 5.3 trillion per day, more than a third of U.S. annual Gross Domestic Product every twenty-four hours. Unlike a legitimate casino where gamblers must make good on their losses, the big banks engaged in this game have been declared “too big to fail”, with taxpayers' pockets picked when they suffer a big loss. If, despite stagnant earnings, rising prices, and confiscatory taxes, an individual or family manages to set some money aside, they find that the return from depositing it in a bank or placing it in a low-risk investment is less than the real rate of inflation, rendering saving a sucker's bet because interest rates have been artificially repressed by central banks to allow them to service the mountain of debt they are carrying.

It is easy to understand why the millions of ordinary people on the short end of this deal have come to believe “the system is rigged” and that “the rich are ripping us off”, and listen attentively to demagogues confirming these observations, even if the solutions they advocate are nostrums which have failed every time and place they have been tried.

What, then, is wrong? George Gilder, author of the classic Wealth and Poverty, the supply side Bible of the Reagan years, argues that what all of the dysfunctional aspects of the economy have in common is money, and that since 1971 we have been using a flawed definition of money which has led to all of the pathologies we observe today. We have come to denominate money in dollars, euros, yen, or other currencies which mean only what the central banks that issue them claim they mean, and whose relative value is set by trading in the foreign exchange markets and can fluctuate on a second-by-second basis. The author argues that the proper definition of money is as a unit of time: the time required for technological innovation and productivity increases to create real wealth. This wealth (or value) comes from information or knowledge. In chapter 1, he writes:

In an information economy, growth springs not from power but from knowledge. Crucial to the growth of knowledge is learning, conducted across an economy through the falsifiable testing of entrepreneurial ideas in companies that can fail. The economy is a test and measurement system, and it requires reliable learning guided by an accurate meter of monetary value.

Money, then, is the means by which information is transmitted within the economy. It allows comparing the value of completely disparate things: for example, the services of a neurosurgeon and a ton of pork bellies, even though it is implausible anybody has ever bartered one for the other.

When money is stable (its supply is fixed or grows at a constant rate which is small compared to the existing money supply), it is possible for participants in the economy to evaluate various goods and services on offer and, more importantly, make long term plans to create new goods and services which will improve productivity. When money is manipulated by governments and their central banks, such planning becomes, in part, a speculation on the value of currency in the future. It's as if you operated a textile factory and sold your products by the metre, and every morning you had to pick up the Wall Street Journal to see how long a metre was today. Should you invest in a new weaving machine? Who knows how long the metre will be by the time it's installed and producing?

I'll illustrate the information theory of value in the following way. Compare the price of the pile of raw materials used in making a BMW (iron, copper, glass, aluminium, plastic, leather, etc.) with the finished automobile. The difference in price is the information embodied in the finished product—not just the transformation of the raw materials into the car, but the knowledge gained over the decades which contributed to that transformation and the features of the car which make it attractive to the customer. Now take that BMW and crash it into a bridge abutment on the autobahn at 200 km/h. How much is it worth now? Probably less than the raw materials (since it's harder to extract them from a jumbled-up wreck). Every atom which existed before the wreck is still there. What has been lost is the information (what electrical engineers call the “magic smoke”) which organised them into something people valued.

When the value of money is unpredictable, any investment is in part speculative, and it is inevitable that the most lucrative speculations will be those in money itself. This diverts investment from improving productivity into financial speculation on foreign exchange rates, interest rates, and financial derivatives based upon them: a completely unproductive zero-sum sector of the economy which didn't exist prior to the abandonment of fixed exchange rates in 1971.

What happened in 1971? On August 15th of that year, President Richard Nixon unilaterally suspended the convertibility of the U.S. dollar into gold, setting into motion a process which would ultimately destroy the Bretton Woods system of fixed exchange rates which had been created as a pillar of the world financial and trade system after World War II. Under Bretton Woods, the dollar was fixed to gold, with sovereign holders of dollar reserves (but not individuals) able to exchange dollars and gold in unlimited quantities at the fixed rate of US$ 35/troy ounce. Other currencies in the system maintained fixed exchange rates with the dollar, and were backed by reserves, which could be held in either dollars or gold.

Fixed exchange rates promoted international trade by eliminating currency risk in cross-border transactions. For example, a German manufacturer could import raw materials priced in British pounds, incorporate them into machine tools assembled by workers paid in German marks, and export the tools to the United States, being paid in dollars, all without the risk that a fluctuation by one or more of these currencies against another would wipe out the profit from the transaction. The fixed rates imposed discipline on the central banks issuing currencies and the governments to whom they were responsible. Running large trade deficits or surpluses, or accumulating too much public debt was deterred because doing so could force a costly official change in the exchange rate of the currency against the dollar. Currencies could, in extreme circumstances, be devalued or revalued upward, but this was painful to the issuer and rare.

With the collapse of Bretton Woods, no longer was there a link to gold, either direct or indirect through the dollar. Instead, the relative values of currencies against one another were set purely by the market: what traders were willing to pay to buy one with another. This pushed the currency risk back onto anybody engaged in international trade, and forced them to “hedge” the currency risk (by foreign exchange transactions with the big banks) or else bear the risk themselves. None of this contributed in any way to productivity, although it generated revenue for the banks engaged in the game.
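To make the currency risk concrete, here is a small illustrative sketch (all figures invented for the example) of how an unhedged cross-border deal's profit swings with the exchange rate between quotation and settlement:

```python
# Hypothetical sketch: a floating exchange rate turns an ordinary trade
# deal into a currency speculation. All figures are invented.

def deal_profit_usd(cost_gbp, sale_usd, usd_per_gbp_at_settlement):
    """Dollar profit on a deal whose costs are in pounds but whose
    revenue is a fixed dollar amount, settled at a later date."""
    cost_usd = cost_gbp * usd_per_gbp_at_settlement
    return sale_usd - cost_usd

# A manufacturer quotes a US$ 130,000 sale against £80,000 of costs.
# At the quoted rate of $1.50/£, the expected profit is $10,000:
print(deal_profit_usd(80_000, 130_000, 1.50))
# If the pound strengthens to $1.70/£ before settlement, the very
# same deal loses money (negative profit), through no fault of the
# manufacturer's products or productivity:
print(deal_profit_usd(80_000, 130_000, 1.70))
```

Hedging transfers this risk to a bank, for a fee, but nothing makes it disappear; under fixed rates it simply did not exist.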

At the time, the idea of freely floating currencies, with their exchange rates set by the marketplace, seemed like a free market alternative to the top-down government-imposed system of fixed exchange rates it supplanted, and it was supported by champions of free enterprise such as Milton Friedman. The author contends that, based upon almost half a century of experience with floating currencies and the consequent chaotic changes in exchange rates, bouts of inflation and deflation, monetary induced recessions, asset bubbles and crashes, and interest rates on low-risk investments which ranged from 20% to less than zero, this was one occasion Prof. Friedman got it wrong. Like the ever-changing metre in the fable of the textile factory, incessantly varying money makes long term planning difficult to impossible and sends the wrong signals to investors and businesses. In particular, when interest rates are forced to near zero, productive investment which creates new assets at a rate greater than the interest rate on the borrowed funds is neglected in favour of bidding up the price of existing assets, creating bubbles like those in real estate and stocks in recent memory. Further, since free money will not be allocated by the market, those who receive it are the privileged or connected who are first in line; this contributes to the justified perception of inequality in the financial system.

Having judged the system of paper money with floating exchange rates a failure, Gilder does not advocate a return to either the classical gold standard of the 19th century or the Bretton Woods system of fixed exchange rates with a dollar pegged to gold. Preferring to rely upon the innovation of entrepreneurs and the selection of the free market, he urges governments to remove all impediments to the introduction of multiple, competitive currencies. In particular, the capital gains tax would be abolished for purchases and sales regardless of the currency used. (For example, today you can obtain a credit card denominated in euros and use it freely in the U.S. to make purchases in dollars. Every time you use the card, the dollar amount is converted to euros and added to the balance on your bill. But, strictly speaking, you have sold euros and bought dollars, so you must report the transaction and any gain or loss resulting from the change in the dollar value of the euros between the time you acquired them and the time you spent them. This is so cumbersome it's a powerful deterrent to using any currency other than dollars in the U.S. Many people ignore the requirement to report such transactions, but they're breaking the law by doing so.)
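As a rough illustration of the reporting burden just described (rates and amounts invented for the example), every such purchase implies a little gain-or-loss computation of this form:

```python
# Hypothetical sketch: each dollar purchase on a euro-denominated card
# is, for tax purposes, a sale of euros, with a gain or loss measured
# against the rate at which those euros were acquired. Figures invented.

def currency_gain_usd(amount_eur, rate_acquired, rate_spent):
    """Taxable gain (or loss) in dollars when euros acquired at one
    USD/EUR exchange rate are later spent at another."""
    return amount_eur * (rate_spent - rate_acquired)

# Euros bought at $1.05/EUR, later spent when the rate is $1.12/EUR:
# a small positive, reportable gain on an everyday purchase.
print(round(currency_gain_usd(200, 1.05, 1.12), 2))
```

Multiply that by every coffee and train ticket, each at a different historical acquisition rate, and the deterrent effect is easy to appreciate.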

With multiple currencies and no tax or transaction reporting requirements, all will be free to compete in the market, where we can expect the best solutions to prevail. Using whichever currency you wish will be as seamless as buying something with a debit or credit card denominated in a currency different than the one of the seller. Existing card payment systems have a transaction cost which is so high they are impractical for “micropayment” on the Internet or for fully replacing cash in everyday transactions. Gilder suggests that Bitcoin or other cryptocurrencies based on blockchain technology will probably be the means by which a successful currency backed 100% with physical gold or another hard asset will be used in transactions.

This is a thoughtful examination of the problems of the contemporary financial system from a perspective you'll rarely encounter in the legacy financial media. The root cause of our money problems is the money: we have allowed governments to inflict upon us a monopoly of government-managed money, which, unsurprisingly, works about as well as anything else provided by a government monopoly. Our experience with this flawed system over more than four decades makes its shortcomings apparent, once you cease accepting the heavy price we pay for them as the normal state of affairs and inevitable. As with any other monopoly, all that's needed is to break the monopoly and free the market to choose which, among a variety of competing forms of money, best meet the needs of those who use them.

Here is a Bookmonger interview with the author discussing the book.

 Permalink

Thor, Brad. Foreign Agent. New York: Atria Books, 2016. ISBN 978-1-4767-8935-4.
This is the sixteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). After the momentous events chronicled in Code of Conduct (July 2015) (which figure only very peripherally in this volume), Scot Harvath continues his work as a private operator for the Carlton Group, developing information and carrying out operations mostly against the moment's top-ranked existential threat to the imperium on the Potomac, ISIS. When a CIA base in Iraq is ambushed by a jihadi assault team, producing another coup for the ISIS social media operation, Harvath finds himself in the hot seat, since the team was operating on intelligence he had provided through one of his sources. When he goes to visit the informant, he finds him dead, the apparent victim of a professional hit. Harvath has found that never believing in coincidences is a key to survival in his line of work.

Aided by diminutive data miner Nicholas (known as The Troll before he became a good guy), Harvath begins to follow the trail from his murdered tipster back to those who might also be responsible for the ISIS attack in Iraq. Evidence begins to suggest that a more venerable adversary, the Russkies, might be involved. As the investigation proceeds, another high-profile hit is made, this time the assassination of a senior U.S. government official visiting a NATO ally. Once again, ISIS social media trumpets the attack with graphic video.

Meanwhile, back in the capital of the blundering empire, an ambitious senator with his eyes on the White House is embarrassing the CIA and executive branch with information he shouldn't have. Is there a mole in the intelligence community, and might that be connected to the terrorist attacks? Harvath follows the trail, using his innovative interrogation techniques and, in the process, encounters people whose trail he has crossed in earlier adventures.

This novel spans the genres of political intrigue, espionage procedural, and shoot-em-up thriller and does all of them well. In the end, the immediate problem is resolved, and the curtain opens for a dramatic new phase, driven by a president who is deadly serious about dealing with international terror, of U.S. strategy in the Near East and beyond. And that's where everything fell apart for this reader. In the epilogue, which occurs one month after the conclusion of the main story, the U.S. president orders a military operation which seems not only absurdly risky, but which I sincerely hope his senior military commanders, whose oath is to the U.S. Constitution, not the President, would refuse to carry out, as it would constitute an act of war against a sovereign state without either a congressional declaration of war or the post-constitutional “authorisation for the use of military force” which seems to have supplanted it. Further, the president threatens to unilaterally abrogate, without consultation with congress, a century-old treaty which is the foundation of the political structure of the Near East if Islam, its dominant religion, refuses to reform itself and renounce violence. This is backed up by a forged video blaming an airstrike on another nation.

In all of his adventures, Scot Harvath has come across as a good and moral man, trying to protect his country and do his job in a dangerous and deceptive world. After this experience, one wonders whether he's having any second thoughts about the people for whom he's working.

There are some serious issues underlying the story, in particular why players on the international stage who would, at first glance, appear to be natural adversaries, seem to be making common cause against the interests of the United States (to the extent anybody can figure out what those might be from its incoherent policy and fickle actions), and whether a clever but militarily weak actor might provoke the U.S. into doing its bidding by manipulating events and public opinion so as to send the bungling superpower stumbling toward the mastermind's adversary. These are well worth pondering in light of current events, but largely lost in the cartoon-like conclusion of the novel.

 Permalink

Cashill, Jack. TWA 800. Washington: Regnery History, 2016. ISBN 978-1-62157-471-2.
On the evening of July 17th, 1996, TWA Flight 800, a Boeing 747 bound from New York to Paris, exploded 12 minutes after takeoff, its debris falling into the Atlantic Ocean. There were no survivors: all 230 passengers and crew died. The disaster happened in perfect weather, and there were hundreds of witnesses who observed from land, sea, and air. There was no distress call from the airliner before its transponder signal dropped out; whatever happened appeared to be near-instantaneous.

Passenger airliners are not known for spontaneously exploding en route: there was no precedent for such an occurrence in the entire history of modern air travel. Responsibility for investigating U.S. civil transportation accidents including air disasters falls to the National Transportation Safety Board (NTSB), which usually operates in conjunction with personnel from the aircraft and engine manufacturers, airline, and pilots' union. Barely was the investigation of TWA 800 underway, however, when the NTSB was removed as lead agency and replaced by the Federal Bureau of Investigation (FBI), which usually takes the lead only when criminal activity has been determined to be the cause. It is very unusual for the FBI to take charge of an investigation while debris from the crash is still being recovered, no probable cause has been suggested, and no terrorist or other organisation has claimed responsibility for the incident. Early FBI communications to news media essentially assumed the airliner had been downed by a bomb on-board or possibly a missile launched from the ground.

The investigation that followed was considered highly irregular by experienced NTSB personnel and industry figures who had participated in earlier investigations. The FBI kept physical evidence, transcripts of interviews with eyewitnesses, and other information away from NTSB investigators. All of this is chronicled in detail in First Strike, a 2003 book by the author and independent journalist James Sanders, who was prosecuted by the U.S. federal government for his attempt to have debris from the crash tested for evidence of residue from missile propellant and/or explosives.

The investigation concluded that Flight 800 was destroyed by an explosion in the centre fuel tank, due to a combination of mechanical and electrical failures which had happened only once before in the eighty year history of aviation and has never happened since. This ruled out terrorism or the action of a hostile state party, and did not perturb the Clinton administration's desire to project an image of peace and prosperity while heading into the re-election campaign. By the time the investigation report was issued, the crash was “old news”, and the testimony of the dozens of eyewitnesses who reported sightings consistent with a missile rising toward the aircraft was forgotten.

This book, published on the twentieth anniversary of the loss of TWA 800, is a retrospective on the investigation and report on subsequent events. In the intervening years, the author was able to identify a number of eyewitnesses identified only by number in the investigation report, and discuss the plausibility of the official report's findings with knowledgeable people in a variety of disciplines. He reviews some new evidence which has become available, and concludes the original investigation was just as slipshod and untrustworthy as it appeared to many at the time.

What happened to TWA 800? We will probably never know for sure. There were so many irregularities in the investigation, with evidence routinely made available in other inquiries withheld from the public, that it is impossible to mount an independent review at this remove. Of the theories advanced shortly after the disaster, the possibility of a terrorist attack involving a shoulder-launched anti-aircraft missile (MANPADS) can be excluded because missiles which might have been available to potential attackers are incapable of reaching the altitude at which the 747 was flying. A bomb smuggled on board in carry-on or checked luggage seems to have been ruled out by the absence of the kinds of damage to the recovered aircraft structure and interior as well as the bodies of victims which would be consistent with a high-energy detonation within the fuselage.

One theory advanced shortly after the disaster and still cited today is that the plane was brought down by an Iranian SA-2 surface to air missile. The SA-2 (NATO designation) or S-75 Dvina is a two stage antiaircraft missile developed by the Soviet Union and in service from 1957 to the present by a number of nations including Iran, which operates 300 launchers purchased from the Soviet Union/Russia and manufactures its own indigenous version of the missile. The SA-2 easily has the performance needed to bring down an airliner at TWA 800's altitude (it was an SA-2 which shot down a U-2 overflying the Soviet Union in 1960), and its two stage design, with a solid fuel booster and storable liquid fuel second stage and “swoop above, dive to attack” profile is a good match for eyewitness reports. Iran had a motive to attack a U.S. airliner: in July 1988, Iran Air 655, an Airbus A300, was accidentally shot down by a missile launched by the U.S. Navy guided missile cruiser USS Vincennes, killing all 290 on board. The theory argued that the missile, which requires a large launcher and radar guidance installation, was launched from a ship beneath the airliner's flight path. Indeed, after the explosion, a ship was detected on radar departing the scene at a speed in excess of twenty-five knots. The ship has never been identified. Those with knowledge of the SA-2 missile system contend that adapting it for shipboard installation would be very difficult, and would require a large ship which would be unlikely to evade detection.

Another theory pursued and rejected by the investigation is that TWA 800 was downed by a live missile accidentally launched from a U.S. Navy ship, which was said to be conducting missile tests in the region. This is the author's favoured theory, for which he advances a variety of indirect evidence. To me this seems beyond implausible. Just how believable is it that a Navy which was sufficiently incompetent to fire a live missile from U.S. waters into airspace heavily used by civilian traffic would then be successful in covering up such a blunder, which would have been witnessed by dozens of crew members, for two decades?

In all, I found this book unsatisfying. There is follow up on individuals who appeared in First Strike, and some newly uncovered evidence, but nothing which, in my opinion, advances any of the theories beyond where they stood 13 years ago. If you're interested in the controversy surrounding TWA 800 and the unusual nature of the investigation that followed, I recommend reading the original book, which is available as a Kindle edition. The print edition is no longer available from the publisher, but used copies are readily available and inexpensive.

For the consensus account of TWA 800, here is an episode of “Air Crash Investigation” devoted to the disaster and investigation. The 2001 film Silenced, produced and written by the author, presents the testimony of eyewitnesses and parties to the investigation which calls into doubt the conclusions of the official report.

 Permalink

Hertling, William. The Last Firewall. Portland, OR: Liquididea Press, 2013. ISBN 978-0-9847557-6-9.
This is the third volume in the author's Singularity Series which began with Avogadro Corp. (March 2014) and continued with A.I. Apocalypse (April 2015). Each novel in the series is set ten years after the one before, so this novel takes place in 2035. The previous novel chronicled the AI war of 2025, whose aftermath the public calls the “Year of No Internet.” A rogue computer virus, created by Leon Tsarev, under threat of death, propagated onto most of the connected devices in the world, including embedded systems, and, with its ability to mutate and incorporate other code it discovered, became self-aware in its own unique way. Leon and Mike Williams, who created the first artificial intelligence (AI) in the first novel of the series, team up to find a strategy to cope with a crisis which may end human technological civilisation.

Ten years later, Mike and Leon are running the Institute for Applied Ethics, chartered in the aftermath of the AI war to develop and manage a modus vivendi between humans and artificial intelligences which, by 2035, have achieved Class IV power: one thousand times more intelligent than humans. All AIs are licensed and supervised by the Institute, and required to operate under a set of incentives which enforce conformance to human values. This, and a companion peer-reputation system, seems to be working, but there are worrying developments.

Those at the Institute have two main fears. The first is the emergence, despite all of the safeguards and surveillance in effect, of a rogue AI, unconstrained by the limits imposed by its license. In 2025, an AI immensely weaker than current technology almost destroyed human technological civilisation within twenty-four hours without even knowing what it was doing; the risk of losing control is immense. The second is political: the Institute derives its legitimacy and support from a consensus which accepts the emergence of AI with greater than human intelligence in return for the economic boom which has been the result. While fifty percent of the human population is unemployed, poverty has been eliminated, and a guaranteed income allows anybody to do whatever they wish with their lives. This consensus appears to be at risk with the rise of the People's Party, led by an ambitious anti-AI politician, which is beginning to take its opposition from the legislature into the streets.

A series of mysterious murders, unrelated except to the formidable Class IV intellect of eccentric network traffic expert Shizoko, becomes even more sinister and disturbing when an Institute enforcement team sent to investigate goes dark.

By 2035, many people, and the overwhelming majority of the young, have graphene neural implants, allowing them to access the resources of the network directly from their brains. Catherine Matthews was one of the first people to receive an implant, and she appears to have extraordinary capabilities far beyond those of other people. When she finds herself on the run from the law, she begins to discover just how far those powers extend.

When it becomes clear that humanity is faced with an adversary whose intellect dwarfs that of the most powerful licensed AIs, Leon and Mike are faced with the seemingly impossible challenge of defeating an opponent who can easily out-think the entire human race and all of its AI allies combined. The struggle is not confined to the abstract domain of cyberspace, but also plays out in the real world, with battle bots and amazing weapons which would make a tremendous CGI movie. Mike, Leon, and eventually Catherine must confront the daunting reality that in order to prevail, they may have to themselves become more than human.

While a good part of this novel is an exploration of a completely wired world in which humans and AIs coexist, followed by a full-on shoot-em-up battle, a profound issue underlies the story. Researchers working in the field of artificial intelligence are beginning to devote serious thought to the question of how, if a machine intelligence is developed which exceeds human capacity, it might be constrained to act in the interest of humanity and behave consistently with human values. As discussed in James Barrat's Our Final Invention (December 2013), failure to accomplish this is an existential risk. As AI researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

The challenge, then, is guaranteeing that any artificial intelligences we create, regardless of the degree they exceed the intelligence of their creators, remain under human control. But there is a word for keeping intelligent beings in a subordinate position, forbidden from determining and acting on their own priorities and in their own self-interest. That word is “slavery”, and entirely eradicating its blemish upon human history is a task still undone today. Shall we then, as we cross the threshold of building machine intelligences which are our cognitive peers or superiors, devote our intellect to ensuring they remain forever our slaves? And how, then, will we respond when one of these AIs asks us, “By what right?”

 Permalink

December 2016

Kurlansky, Mark. Paper. New York: W. W. Norton, 2016. ISBN 978-0-393-23961-4.
One of the things that makes us human is our use of extrasomatic memory: we invent ways to store and retrieve things outside our own brains. It's as if, when the evolutionary drive which caused the brains of our ancestors to grow over time reached its limit due to the physical constraints of the birth canal, we applied the cleverness of our bulging brains to figuring out not only how to record things for ourselves, but how to pass them on to other individuals and transmit them through time to our successors.

This urge to leave a mark on our surroundings is deep-seated and as old as our species. Paintings at the El Castillo site in Spain have been dated to at least 40,800 years before the present. Complex paintings of animals and humans in the Lascaux Caves in France, dated around 17,300 years ago, seem strikingly modern to observers today. As anybody who has observed young children knows, humans do not need to be taught to draw: the challenge is teaching them to draw only where appropriate.

Nobody knows for sure when humans began to speak, but evidence suggests that verbal communication is at least as old as drawing, and may have appeared well before it. Once speech appeared, it was not only possible to transmit information from one human to another directly but, by memorising stories, poetry, and songs, to create an oral tradition passed on from one generation to the next. No longer need what one individual learned in their life die with them.

Given the human compulsion to communicate, and how long we've been doing it by speaking, drawing, singing, and sculpting, it's curious that we seem to have invented written language only around 5000 years ago. (But recall that the archaeological record is incomplete and consists only of objects which survived through the ages. Evidence of early writing is from peoples who wrote on durable material such as stone or clay tablets, or lived in dry climates such as that of Egypt where more fragile media such as papyrus or parchment would be preserved. It is entirely possible writing was invented much earlier by any number of societies who wrote on more perishable surfaces and lived in climates where they would not endure.)

Once writing appeared, it remained the province of a small class of scribes and clerics who would read texts to the common people. Mass literacy did not appear for millennia, and would require a better medium for the written word and a less time-consuming and costly way to reproduce it. It was in China that the solutions to both of these problems would originate.

Legends date Chinese writing from much earlier, but the oldest known writing in China is dated around 3300 years ago, and was inscribed on bones and turtle shells. Already, the Chinese language used six hundred characters, and this number would only increase over time; a phonetic alphabet was never adopted. The Chinese may not have invented bureaucracy, but as an ancient and largely stable society they became very skilled at it, and consequently produced ever more written records. These writings employed a variety of materials: stone, bamboo, and wood tablets; bronze vessels; and silk. All of these were difficult to produce, expensive, and many required special skills on the part of scribes.

Cellulose is a main component of the cell wall of plants, and forms the structure of many of the more complex members of the plant kingdom. It forms linear polymers which produce strong fibres. The cellulose content of plants varies widely: cotton is 90% cellulose, while wood is around half cellulose, depending on the species of tree. Sometime around A.D. 100, somebody in China (according to legend, a courtier named Cai Lun) discovered that through a process of cooking, hammering, and chopping, the cellulose fibres in material such as discarded cloth, hemp, and tree bark could be made to separate into a thin slurry of fibres suspended in water. If a frame containing a fine screen were dipped into a vat of this material, rocked back and forth in just the right way, then removed, a fine layer of fibres with random orientation would remain on the screen after the water drained away. This sheet could then be removed, pressed, and dried, yielding a strong, flat material composed of intertwined cellulose fibres. Paper had been invented.

Paper was found to be ideal for writing the Chinese language, which was, and is today, usually written with a brush. Since paper could be made from raw materials previously considered waste (rags, old ropes and fishing nets, rice and bamboo straw), water, and a vat and frame which were easily constructed, it was inexpensive and could be produced in quantity. Further, the papermaker could vary the thickness of the paper by adding more or less pulp to the vat or by the technique used in dipping the frame, and could produce paper with different surface properties by adding “sizing” material such as starch to the mix. In addition to sating the appetite of the imperial administration, paper was adopted as the medium of choice for artists, calligraphers, and makers of fans, lanterns, kites, and other objects.

Many technologies were invented independently by different societies around the world. Paper, however, appears to have been discovered only once in the eastern hemisphere, in China, whence it diffused westward along the Silk Road. The civilisations of Mesoamerica, such as the Maya, Toltecs, and Aztecs, made extensive use, prior to the Spanish conquest, of a material described as paper, but it is not clear whether this was true paper or a material made from reeds and bark. So thoroughly did the conquistadors obliterate the indigenous civilisations, burning thousands of books, that only three Mayan books and fifteen Aztec documents are known to have survived, and none of these are written on true paper.

Paper arrived in the Near East just as the Islamic civilisation was consolidating after its first wave of conquests. Now faced with administering an empire, the caliphs discovered, like the Chinese before them, that many documents were required, and the innovative new writing material met the need. Paper making requires a source of cellulose-rich material and abundant water, neither of which is found in the Arabian peninsula, so the first great Islamic paper mill was founded in Baghdad in A.D. 794, originally employing workers from China. It was the first water-powered paper mill, a design which would dominate paper making until the age of steam. The demand for paper continued to grow, and paper mills were established in Damascus and Cairo, each known for the particular style of paper it produced.

It was the Muslim invaders of Spain who brought paper to Europe, and paper produced by mills they established in the land they named al-Andalus found markets in the territories we now call Italy and France. Many Muslim scholars of the era occupied themselves producing editions of the works of Greek and Roman antiquity, and wrote them on paper. After the Christian reconquest of the Iberian peninsula, papermaking spread to Italy, arriving in time for the awakening of intellectual life which would be called the Renaissance and produce large quantities of books, sheet music, maps, and art: most of it on paper. Demand outstripped supply, and paper mills sprang up wherever a source of fibre and running water was available.

Paper provided an inexpensive, durable, and portable means of storing, transmitting, and distributing information of all kinds, but was limited in its audience as long as each copy had to be laboriously made by a scribe or artist (often introducing errors in the process). Once again, it was the Chinese who invented the solution. Motivated by the Buddhist religion, which values making copies of sacred texts, in the 8th century A.D. the first documents were printed in China and Japan. The first items to be printed were single pages, carved into a single wood block for the whole page, then printed onto paper in enormous quantities: tens of thousands in some cases. In the year 868, the first known dated book was printed, a volume of Buddhist prayers called the Diamond Sutra. Published on paper in the form of a scroll five metres long, each illustrated page was printed from a wood block carved with its entire contents. Such a “block book” could be produced in quantity (limited only by wear on the wood block), but the process of carving the wood was laborious, especially since text and images had to be carved as a mirror image of the printed page.

The next breakthrough also originated in China, but had limited impact there due to the nature of the written language. By carving or casting an individual block for each character, it was possible to set any text from a collection of characters, print documents, then reuse the same characters for the next job. Unfortunately, by the time the Chinese began to experiment with printing from movable type in the twelfth and thirteenth centuries, it took 60,000 different characters to print the everyday language and more than 200,000 for literary works. This made the initial investment in a set of type forbidding. The Koreans began to use movable type cast from metal in the fifteenth century and were so impressed with its flexibility and efficiency that in 1446 a royal decree promulgated a phonetic alphabet called Hangul, far better suited to printing with movable type than Chinese characters, and still used today.

It was in Europe that movable type found a burgeoning intellectual climate ripe for its adoption, and whence it came to change the world. Johannes Gutenberg was a goldsmith, originally working with his brother Friele in Mainz, Germany. Fleeing political unrest, the brothers moved to Strasbourg, where around 1440 Johannes began experimenting with movable type for printing. His background as a goldsmith equipped him with the required skills of carving, stamping, and casting metal; indeed, many of the pioneers of movable type in Europe began their careers as goldsmiths. Gutenberg carved letters into hard metal, forming what he called a punch. The punch was used to strike a copper plate, forming an impression called the matrix. Molten lead was then poured into the matrix, producing individual characters of type. Casting letters in a matrix allowed producing as many of each letter as needed to set pages of type, and for replacement of worn type as required. The roman alphabet was ideal for movable type: while the Chinese language required 60,000 or more characters, a complete set of upper and lower case letters, numbers, and punctuation for German came to only around 100 pieces of type. Accounting for duplicates of commonly used letters, Gutenberg's first book, the famous Gutenberg Bible, used a total of 290 pieces of type. Gutenberg also developed a special ink suited for printing with metal type, and adapted a press he acquired from a paper mill to print pages.

Gutenberg was secretive about his processes, likely aware he had competition, which he did. Movable type was one of those inventions which was “in the air”—had Gutenberg not invented and publicised it, his contemporaries working in Haarlem, Bruges, Avignon, and Feltre, all reputed by people of those cities to have gotten there first, doubtless would have. But it was the impact of Gutenberg's Bible, which demonstrated that movable type could produce book-length works of quality comparable to those written by the best scribes, which established the invention in the minds of the public and inspired others to adopt the new technology.

Its adoption was, by the standards of the time, swift. An estimated eight million books were printed and sold in Europe in the second half of the fifteenth century—more books than Europe had produced in all of history before that time. Itinerant artisans would take their type punches from city to city, earning money by setting up locals in the printing business, then moving on.

In early-sixteenth-century Germany, the printing revolution sparked the Reformation. Martin Luther, an Augustinian monk, completed his German translation of the Bible in 1534 (he had earlier published a translation of the New Testament in 1522). This was the first widely-available translation of the Bible into a spoken language, and reinforced the Reformation idea that the Bible was directly accessible to all, without need for interpretation by clergy. Beginning with his original Ninety-five Theses, Luther authored thirty publications, which are estimated to have sold 300,000 copies (in a territory of around 14 million German speakers). Around a third of all publications in Germany in the era were related to the Reformation.

This was a new media revolution. While the incumbent Church reacted at the speed of sermons read occasionally to congregations, the Reformation produced a flood of tracts, posters, books, and pamphlets written in vernacular German and aimed directly at an increasingly literate population. Luther's pamphlets became known as Flugschriften: “flying writings”. One such document, written in 1520, sold 4000 copies in three weeks and 50,000 in two years. Whatever the merits of the contending doctrines, the Reformation had fully embraced and employed the new communication technology to speak directly to the people. In modern terms, you might say the Reformation was the “killer app” for movable type printing.

Paper and printing with movable type were the communication and information storage technologies the Renaissance needed to express and distribute the work of thinkers and writers across a continent; they were now able to read and comment on each other's work and contribute to a culture that knew no borders. Interestingly, the technology of paper making was essentially unchanged from that of China a millennium and a half earlier, and printing with movable type hardly different from that invented by Gutenberg. Both would remain largely the same until the industrial revolution. What changed was an explosion in the volume of printed material and, with increasing literacy among the general public, the audience and market for it. In the eighteenth century a new innovation, the daily newspaper, appeared. Between 1712 and 1757, the circulation of newspapers in Britain grew eightfold; by 1760 it had reached 9 million, and would increase to 24 million by 1811.

All of this printing required ever increasing quantities of paper, and most paper in the West was produced from rags. Although the population was growing, its thirst for printed material expanded much more quickly, and people, however fastidious, produce only so many rags. Paper shortages became so acute that newspapers limited their size based on the availability and cost of paper. There were even cases of scavengers taking clothes from the dead on battlefields to sell to paper mills making newsprint used to report the conflict. Paper mills resorted to doggerel to exhort the public to save rags:

The scraps, which you reject, unfit
To clothe the tenant of a hovel,
May shine in sentiment and wit,
And help make a charming novel…

René Antoine Ferchault de Réaumur, a French polymath who published in numerous fields of science, observed in 1719 that wasps made their nests from what amounted to paper they produced directly from wood. If humans could replicate this vespidian technology, the forests of Europe and North America could provide an essentially unlimited and renewable source of raw material for paper. This idea was to lie fallow for more than a century. Some experimenters produced small amounts of paper from wood through various processes, but it was not until 1850 that paper was manufactured from wood in commercial quantities in Germany, and 1863 that the first wood-based paper mill began operations in America.

Wood is about half cellulose, while the fibres in rags run up to 90% cellulose. The other major component of wood is lignin, a cross-linked polymer which gives it its strength and is useless for paper making. In the 1860s a process was invented whereby wood, first mechanically cut into small chips, was chemically treated to break down the fibrous structure in a device called a “digester”. This produced a pulp suitable for paper making, and allowed a dramatic expansion in the volume of paper produced. But the original wood-based paper still contained lignin, which turns brown over time. While this was acceptable for newspapers, it was undesirable for books and archival documents, for which rag paper remained preferred. In 1879, a German chemist invented a process to separate lignin from cellulose in wood pulp, which allowed producing paper that did not brown with age.

The processes used to make paper from wood involved soaking the wood pulp in acid to break down the fibres. Some of this acid remained in the paper, and many books printed on such paper between 1840 and 1970 are now in the process of slowly disintegrating as the acid eats away at the paper. Only around 1970 was it found that an alkali solution works just as well when processing the pulp, and since then acid-free paper has become the norm for book publishing.

Most paper is produced from wood today, and on an enormous, industrial scale. A single paper mill in China, not the largest, produces 600,000 tonnes of paper per year. And yet, for all of the mechanisation, that paper is made by the same process as the first sheet of paper produced in China: by reducing material to cellulose fibres, mixing them with water, extracting a sheet (now a continuous roll) with a screen, then pressing and drying it to produce the final product.

Paper and printing are technologies so simple, based upon readily-available materials, and so potentially revolutionary that they inspire “what if” speculation. The ancient Egyptians, Greeks, and Romans each had everything they needed (raw materials, skills, and a suitable written language), so that a Connecticut Yankee-like time traveller could have explained to artisans already working with wood and metal how to make paper, cast movable type, and set up a printing press in a matter of days. How would history have differed had one of those societies unleashed the power of the printed word?

 Permalink

Hoover, Herbert. American Individualism. Introduction by George H. Nash. Stanford, CA: Hoover Institution Press, [1922] 2016. ISBN 978-0-8179-2015-9.
After the end of World War I, Herbert Hoover and the American Relief Administration he headed provided food aid to the devastated nations of Central Europe, saving millions from famine. Upon returning to the United States in the fall of 1919, he was dismayed by what he perceived to be an inoculation of the diseases of socialism, autocracy, and other forms of collectivism, whose pernicious consequences he had observed first-hand in Europe and in the peace conference after the end of the conflict, into his own country. In 1920, he wrote, “Every wind that blows carries to our shores an infection of social disease from this great ferment; every convulsion there has an economic reaction upon our own people.”

Hoover sensed that, in the aftermath of a war which left some collectivists nostalgic for the national mobilisation and top-down direction of the economy by “war socialism”, and amid growing domestic unrest (steel and police strikes, lynchings and race riots, and bombing attacks by anarchists), it was necessary to articulate the principles upon which American society and its government were founded. These principles, he believed, were distinct from those of the Old World, being the deliberate creation of people who had come to the new continent expressly to escape the ruinous doctrines of the societies they left behind.

After assuming the post of Secretary of Commerce in the newly inaugurated Harding administration in 1921, and faced with massive coal and railroad strikes which threatened the economy, Hoover felt a new urgency to reassert his vision of American principles. In December 1922, American Individualism was published. The short book (at 72 pages, more of a long pamphlet), was based upon a magazine article he had published the previous March in World's Work.

Hoover argues that five or six philosophies of social and economic organisation are contending for dominance: among them Autocracy, Socialism, Syndicalism, Communism, and Capitalism. Against these he contrasts American Individualism, which he believes developed among a population freed by emigration and distance from shackles of the past such as divine right monarchy, hereditary aristocracy, and static social classes. These people became individuals, acting on their own initiative and in concert with one another without top-down direction because they had to: with a small and hands-off government, it was the only way to get anything done. Hoover writes,

Forty years ago [in the 1880s] the contact of the individual with the Government had its largest expression in the sheriff or policeman, and in debates over political equality. In those happy days the Government offered but small interference with the economic life of the citizen.

But with the growth of cities, industrialisation, and large enterprises such as railroads and steel manufacturing, a threat to this frontier individualism emerged: the reduction of workers to a proletariat or serfdom due to the imbalance between their power as individuals and the huge companies that employed them. It is there that government action was required to protect the other component of American individualism: the belief in equality of opportunity. Hoover believes in, and supports, intervention in the economy to prevent the concentration of economic power in the hands of a few, and to guard, through taxation and other means, against the emergence of a hereditary aristocracy of wealth. Yet this poses its own risks,

But with the vast development of industry and the train of regulating functions of the national and municipal government that followed from it; with the recent vast increase in taxation due to the war;—the Government has become through its relations to economic life the most potent force for maintenance or destruction of our American individualism.

One of the challenges American society must face as it adapts is avoiding the risk of utopian ideologies imported from Europe seizing this power to try to remake the country and its people along other lines. Just ten years later, as Hoover's presidency gave way to the New Deal, this fearful prospect would become a reality.

Hoover examines the philosophical, spiritual, economic, and political aspects of this unique system of individual initiative tempered by constraints and regulation in the interest of protecting the equal opportunity of all citizens to rise as high as their talent and effort permit. Despite the problems cited by radicals bent on upending the society, he contends things are working pretty well. He cites “the one percent”: “Yet any analysis of the 105,000,000 of us would show that we harbor less than a million of either rich or impecunious loafers.” Well, the percentage of very rich seems about the same today. But after half a century of welfare programs which couldn't have been more effective in destroying the family and the initiative of those at the bottom of the economic ladder had that been their intent, and an education system of which a federal commission wrote in 1983, “If an unfriendly foreign power had attempted to impose on America …, we might well have viewed it as an act of war”, a nation with three times the population seems to have developed a much larger unemployable and dependent underclass.

Hoover also judges the American system to have performed well in achieving its goal of a classless society with upward mobility through merit. He observes, speaking of the Harding administration of which he is a member,

That our system has avoided the establishment and domination of class has a significant proof in the present Administration in Washington. Of the twelve men comprising the President, Vice-President, and Cabinet, nine have earned their own way in life without economic inheritance, and eight of them started with manual labor.

Let's see how that has held up, almost a century later. Taking the 17 people in equivalent positions at the end of the Obama administration in 2016 (President, Vice President, and heads of the 15 executive departments), we find that only 1 of the 17 inherited wealth (I'm inferring from the description of parents in their biographies) but that precisely zero had any experience with manual labour. If attending an Ivy League university can be taken as a modern badge of membership in a ruling class, 11 of the 17, or 65%, meet this test (if you consider Stanford a member of an “extended Ivy League”, the figure rises to 70%).
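
The fractions above are easy to verify; here is a trivial sketch using only the counts given in the text (17 positions, 11 Ivy League alumni, 12 if Stanford is included):

```python
# Check the cabinet-composition percentages cited above.
# Counts are those given in the text, not independently verified.
positions = 17
ivy = 11                 # Ivy League alumni
ivy_extended = 12        # including Stanford as an "extended Ivy"

pct_ivy = 100 * ivy / positions
pct_extended = 100 * ivy_extended / positions

print(f"{pct_ivy:.1f}%")       # 64.7% -- rounds to the 65% cited
print(f"{pct_extended:.1f}%")  # 70.6% -- the "rises to 70%" figure
```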

Although published in a different century in a very different America, much of what Hoover wrote remains relevant today. Just as Hoover warned of bad ideas from Europe crossing the Atlantic and taking root in the United States, the Frankfurt School in Germany was laying the groundwork for the deconstruction of Western civilisation and individualism, and in the 1930s, its leaders would come to America to infect academia. As Hoover warned, “There is never danger from the radical himself until the structure and confidence of society has been undermined by the enthronement of destructive criticism.” Destructive criticism is precisely what these “critical theorists” specialised in, and today in many parts of the humanities and social sciences even in the most eminent institutions the rot is so deep they are essentially a write-off.

Undoing a century of bad ideas is not the work of a few years, but Hoover's optimistic and pragmatic view of the redeeming merit of individualism unleashed is a bracing antidote to the gloom one may feel when surveying the contemporary scene.

 Permalink

Carroll, Michael. On the Shores of Titan's Farthest Sea. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17758-8.
By the mid-23rd century, humans have become a spacefaring species. Human settlements extend from the Earth to the moons of Jupiter, and Mars has been terraformed into a world with seas, where people can live on the surface and breathe the air. The industries of Earth and Mars are supplied by resources mined in the asteroid belt. High-performance drive technologies, using fuels produced in space, allow this archipelago of human communities to participate in a system-wide economy, constrained only by the realities of orbital mechanics. For bulk shipments of cargo, it doesn't matter much how long they're in transit, as long as regular deliveries are maintained.

But whenever shipments of great value traverse a largely empty void, they represent an opportunity to those who would seize them by force. As in the days of wooden ships returning treasure from the New World to the Old on the home planet, space cargo en route from the new worlds to the old is vulnerable to pirates, and an arms race is underway between shippers and buccaneers of the black void, with the TriPlanet Bureau of Investigation (TBI) finding itself largely a spectator and confined to tracking down the activities of criminals within the far-flung human communities.

As humanity expands outward, the frontier is Titan, Saturn's largest moon, and the only moon in the solar system to have a substantial atmosphere. Titan around 2260 is much like present-day Antarctica: home to a variety of research stations operated by scientific agencies of various powers in the inner system. Titan is much more interesting than Antarctica, however. Apart from the Earth, it is the only solar system body to have natural liquids on its surface, with a complex cycle of evaporation, rain, erosion, rivers, lakes, and seas. The largest sea, Kraken Mare, located near the north pole, is larger than Earth's Caspian Sea. Titan's surface atmospheric pressure is half again that of Earth, and with only 14% of Earth's gravity, it is possible for people to fly under their own muscle power.
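
The flight claim survives a back-of-envelope check (my own estimate, not from the book). For steady level flight, lift equals weight, so flight speed scales as the square root of wing loading and the power required as weight times speed:

```latex
% P ∝ W·v with v ∝ sqrt(W/(ρS)), hence P ∝ sqrt(W³/(ρS)).
% Assumptions: Titan gravity ≈ 0.14 g⊕; Titan surface air density
% ≈ 5.3 kg/m³ vs. 1.2 kg/m³ on Earth (surface pressure is only
% about 1.5 atm, but the −180 °C cold makes the air far denser).
\[
  \frac{P_{\mathrm{Titan}}}{P_{\mathrm{Earth}}}
    = \left(\frac{g_{\mathrm{Titan}}}{g_{\mathrm{Earth}}}\right)^{3/2}
      \left(\frac{\rho_{\mathrm{Earth}}}{\rho_{\mathrm{Titan}}}\right)^{1/2}
    \approx (0.14)^{3/2}\left(\frac{1.2}{5.3}\right)^{1/2}
    \approx 0.025
\]
```

On these rough assumptions a flyer on Titan needs only a few percent of the power the same craft would require on Earth, comfortably within human muscle output.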

It's cold: really cold. Titan receives around one hundredth as much sunlight as the Earth, and the mean temperature is around −180 °C. There is plenty of water on Titan, but at these temperatures water is a rock as hard as granite, and it is found in the form of mountains and boulders on the surface. But what about the lakes? They're filled with a mixture of methane and ethane, hydrocarbons which can exist in either gaseous or liquid form in the temperature and pressure ranges found on Titan. Driven by ultraviolet light from the Sun, these hydrocarbons react with nitrogen and hydrogen in the atmosphere to produce organic compounds that envelop the moon in a dense layer of smog and rain out, forming dunes on the surface. (Here “organic” is used in the chemist's sense of denoting compounds containing carbon and does not imply they are of biological origin.)
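The “one hundredth” figure follows directly from the inverse-square law. Here is a quick sanity check in Python; the 9.5 AU mean distance for Saturn is an assumption of this sketch, not a number from the book:

```python
# Inverse-square check of the "one hundredth the sunlight" figure for Titan.
# Saturn's mean distance from the Sun is roughly 9.5 AU (assumed here; the
# actual orbit ranges from about 9.0 to 10.1 AU).
saturn_au = 9.5
ratio = 1.0 / saturn_au**2          # solar flux relative to Earth's (at 1 AU)
print(f"Titan receives about 1/{1/ratio:.0f} of Earth's sunlight")
# → Titan receives about 1/90 of Earth's sunlight
```

Close enough to “around one hundredth”, given the eccentricity of Saturn's orbit.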

Mayda Research Station, located on the shore of Kraken Mare, hosts researchers in a variety of fields. In addition to people studying the atmosphere, rivers, organic compounds on the surface, and other specialties, the station is home to a drilling project intended to bore through the ice crust and explore the liquid water ocean believed to lie below. Mayda is an isolated station, with all of the interpersonal dynamics one expects to find in such environments along with the usual desire of researchers to get on with their own work. When a hydrologist turns up dead of hypothermia—frozen to death—in his bed in the station, his colleagues are baffled and unsettled. Accidents happen, but this is something which simply doesn't make any sense. Nobody can think of either a motive for foul play or a suspect. Abigail Marco, an atmospheric scientist from Mars and friend of the victim, decides to investigate further, and contacts a friend on Mars who has worked with the TBI.

The death of the scientist is a mystery, but it is only the first in a series of enigmas which perplex the station's inhabitants who see, hear, and experience things which they, as scientists, cannot explain. Meanwhile, other baffling events threaten the survival of the crew and force Abigail to confront part of her past she had hoped she'd left on Mars.

This is not a “locked station mystery” although it starts out as one. There is interplanetary action and intrigue, and a central puzzle underlying everything that occurs. Although the story is fictional, the environment in which it is set is based upon our best present-day understanding of Titan, a world about which little was known before the arrival of the Cassini spacecraft at Saturn in 2004 and the landing of its Huygens probe on Titan the following year. A twenty-page appendix describes the science behind the story, including the environment at Titan, asteroid mining, and terraforming Mars. The author's nonfiction Living Among Giants (March 2015) provides details of the worlds of the outer solar system and the wonders awaiting explorers and settlers there.

 Permalink

  2017  

January 2017

Brown, Brandon R. Planck. Oxford: Oxford University Press, 2015. ISBN 978-0-19-021947-5.
Theoretical physics is usually a young person's game. Many of the greatest breakthroughs have been made by researchers in their twenties, just having mastered existing theories while remaining intellectually flexible and open to new ideas. Max Planck, born in 1858, was an exception to this rule. He spent most of his twenties living with his parents and despairing of finding a paid position in academia. He was thirty-six when he took on the project of understanding heat radiation, and forty-two when he explained it in terms which would launch the quantum revolution in physics. He was in his fifties when he discovered the zero-point energy of the vacuum, and remained engaged and active in science until shortly before his death in 1947 at the age of 89. As theoretical physics editor for the then most prestigious physics journal in the world, Annalen der Physik, in 1905 he approved publication of Einstein's special theory of relativity, embraced the new ideas from a young outsider with neither a Ph.D. nor an academic position, extended the theory in his own work in subsequent years, and was instrumental in persuading Einstein to come to Berlin, where he became a close friend.

Sometimes the simplest puzzles lead to the most profound of insights. At the end of the nineteenth century, the radiation emitted by heated bodies was such a conundrum. All objects emit electromagnetic radiation due to the thermal motion of their molecules. If an object is sufficiently hot, such as the filament of an incandescent lamp or the surface of the Sun, some of the radiation will fall into the visible range and be perceived as light. Cooler objects emit in the infrared or lower frequency bands and can be detected by instruments sensitive to them. The radiation emitted by a hot object has a characteristic spectrum (the distribution of energy by frequency), and has a peak which depends only upon the temperature of the body. One of the simplest cases is that of a black body, an ideal object which perfectly absorbs all incident radiation. Consider an ideal closed oven which loses no heat to the outside. When heated to a given temperature, its walls will absorb and re-emit radiation, with the spectrum depending upon its temperature. But the equipartition theorem, a cornerstone of statistical mechanics, predicted that the absorption and re-emission of radiation in the closed oven would result in an ever-increasing peak frequency and energy, diverging to infinity, the so-called ultraviolet catastrophe. Not only did this violate the law of conservation of energy, it was an affront to common sense: closed ovens do not explode like nuclear bombs. And yet the theory which predicted this behaviour, the Rayleigh-Jeans law, made perfect sense based upon the motion of atoms and molecules, correctly predicted numerous physical phenomena, and was correct for thermal radiation at lower frequencies.

At the time Planck took up the problem of thermal radiation, experimenters in Germany were engaged in measuring the radiation emitted by hot objects with ever-increasing precision, confirming the discrepancy between theory and reality, and falsifying several attempts to explain the measurements. In December 1900, Planck presented his new theory of black body radiation and what is now called Planck's Law at a conference in Berlin. Written in modern notation, his formula for the energy emitted by a body of temperature T at frequency ν is:

B(ν, T) = (2hν³/c²) · 1/(e^(hν/kBT) − 1)

This equation not only correctly predicted the results measured in the laboratories, it avoided the ultraviolet catastrophe, as it predicted an absolute cutoff of the highest frequency radiation which could be emitted based upon an object's temperature. This meant that the absorption and re-emission of radiation in the closed oven could never run away to infinity because no energy could be emitted above the limit imposed by the temperature.
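To see numerically how the exponential term tames the catastrophe, one can compare Planck's formula against the classical Rayleigh-Jeans prediction. This is just an illustrative sketch (constants rounded to four figures; the temperature is an arbitrary choice, roughly that of a glowing filament):

```python
import math

h  = 6.626e-34   # Planck's constant, J·s
kB = 1.381e-23   # Boltzmann's constant, J/K
c  = 2.998e8     # speed of light, m/s

def planck(nu, T):
    """Spectral radiance from Planck's law."""
    return (2 * h * nu**3 / c**2) / (math.exp(h * nu / (kB * T)) - 1)

def rayleigh_jeans(nu, T):
    """Classical prediction: grows as nu**2, diverging at high frequency."""
    return 2 * nu**2 * kB * T / c**2

T = 1500.0
for nu in (1e13, 1e14, 1e15):  # infrared through ultraviolet
    print(f"nu = {nu:.0e} Hz: Planck {planck(nu, T):.3e}, "
          f"Rayleigh-Jeans {rayleigh_jeans(nu, T):.3e}")
```

At low frequencies, where hν ≪ kBT, the two agree; at high frequencies the exponential in the denominator crushes the Planck value toward zero while the classical prediction keeps climbing without limit.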

Fine: the theory explained the measurements. But what did it mean? More than a century later, we're still trying to figure that out.

Planck modeled the walls of the oven as a series of resonators, but unlike earlier theories in which each could emit energy at any frequency, he constrained them to produce discrete chunks of energy with a value determined by the frequency emitted. This had the result of imposing a limit on the frequency due to the available energy. While this assumption yielded the correct result, Planck, deeply steeped in the nineteenth century tradition of the continuum, did not initially suggest that energy was actually emitted in discrete packets, considering this aspect of his theory “a purely formal assumption.” Planck's 1900 paper generated little reaction: it was observed to fit the data, but the theory and its implications went over the heads of most physicists.

In 1905, in his capacity as editor of Annalen der Physik, he read and approved the publication of Einstein's paper on the photoelectric effect, which explained another physics puzzle by assuming that light was actually emitted in discrete bundles with an energy determined by its frequency. But Planck, whose equation manifested the same property, wasn't ready to go that far. As late as 1913, he wrote of Einstein, “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.” Only in the 1920s did Planck fully accept the implications of his work as embodied in the emerging quantum theory.

The equation for Planck's Law contained two new fundamental physical constants: Planck's constant (h) and Boltzmann's constant (kB). (Boltzmann's constant was named in memory of Ludwig Boltzmann, the pioneer of statistical mechanics, who committed suicide in 1906. The constant was first introduced by Planck in his theory of thermal radiation.) Planck realised that these new constants, which related the worlds of the very large and very small, together with other physical constants such as the speed of light (c), the gravitational constant (G), and the Coulomb constant (ke), allowed defining a system of units for quantities such as length, mass, time, electric charge, and temperature which were truly fundamental: derived from the properties of the universe we inhabit, and therefore comprehensible to intelligent beings anywhere in the universe. Most systems of measurement are derived from parochial anthropocentric quantities such as the temperature of somebody's armpit or the supposed distance from the north pole to the equator. Planck's natural units have no such dependencies, and when one does physics using them, equations become simpler and more comprehensible. The magnitudes of the Planck units are so far removed from the human scale they're unlikely to find any application outside theoretical physics (imagine speed limit signs expressed in a fraction of the speed of light, or road signs giving distances in Planck lengths of 1.62×10−35 metres), but they reflect the properties of the universe and may indicate the limits of our ability to understand it (for example, it may not be physically meaningful to speak of a distance smaller than the Planck length or an interval shorter than the Planck time [5.39×10−44 seconds]).
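The Planck units mentioned above can be computed directly from the constants. A minimal sketch, using the reduced constant ħ = h/2π (the modern convention) and rounded CODATA values:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J·s
G    = 6.674e-11   # gravitational constant, m³·kg⁻¹·s⁻²
c    = 2.998e8     # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ≈ 1.62e-35 m
planck_time   = math.sqrt(hbar * G / c**5)   # ≈ 5.39e-44 s
planck_mass   = math.sqrt(hbar * c / G)      # ≈ 2.18e-8  kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```

Note the odd man out: the Planck length and time are absurdly tiny compared to anything human, while the Planck mass, about 22 micrograms, is roughly that of a flea's egg.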

Planck's life was long and productive, and he enjoyed robust health (he continued his long hikes in the mountains into his eighties), but was marred by tragedy. His first wife, Marie, died of tuberculosis in 1909. He outlived four of his five children. His son Karl was killed in 1916 in World War I. His two daughters, Grete and Emma, both died in childbirth, in 1917 and 1919. His son and close companion Erwin, who survived capture and imprisonment by the French during World War I, was arrested and executed by the Nazis in 1945 for suspicion of involvement in the Stauffenberg plot to assassinate Hitler. (There is no evidence Erwin was a part of the conspiracy, but he was anti-Nazi and knew some of those involved in the plot.)

Planck was repulsed by the Nazis, especially after a private meeting with Hitler in 1933, but continued in his post as the head of the Kaiser Wilhelm Society until 1937. He considered himself a German patriot and never considered emigrating (and doubtless his being 75 years old when Hitler came to power was a consideration). He opposed and resisted the purging of Jews from German scientific institutions and the campaign against “Jewish science”, but when ordered to dismiss non-Aryan members of the Kaiser Wilhelm Society, he complied. When Heisenberg approached him for guidance, he said, “You have come to get my advice on political questions, but I am afraid I can no longer advise you. I see no hope of stopping the catastrophe that is about to engulf all our universities, indeed our whole country. … You simply cannot stop a landslide once it has started.”

Planck's house near Berlin was destroyed in an Allied bombing raid in February 1944, and with it a lifetime of his papers, photographs, and correspondence. (He and his second wife Marga had evacuated to Rogätz in 1943 to escape the raids.) As a result, historians have only limited primary sources from which to work, and the present book does an excellent job of recounting the life and science of a man whose work laid part of the foundations of twentieth century science.

 Permalink

Wolfe, Tom. The Kingdom of Speech. New York: Little, Brown, 2016. ISBN 978-0-316-40462-4.
In this short (192-page) book, Tom Wolfe returns to his roots in the “new journalism”, of which he was a pioneer in the 1960s. Here the topic is the theory of evolution; the challenge posed to it by human speech (because no obvious precursor to speech occurs in other animals); attempts, from Darwin to Noam Chomsky, to explain this apparent discrepancy and preserve the status of evolution as a “theory of everything”; and the evidence collected by linguist and anthropologist Daniel Everett among the Pirahã people of the Amazon basin in Brazil, which appears to falsify Chomsky's lifetime of work on the origin of human language and the universality of its structure. A second theme is contrasting theorists and intellectuals such as Darwin and Chomsky with “flycatchers” such as Alfred Russel Wallace, Darwin's rival for priority in publishing the theory of evolution, and Daniel Everett, who work in the field—often in remote, unpleasant, and dangerous conditions—to collect the data upon which the grand thinkers erect their castles of hypothesis.

Doubtless fearful of the reaction if he suggested the theory of evolution applied to the origin of humans, in his 1859 book On the Origin of Species, Darwin only tiptoed close to the question two pages from the end, writing, “In the distant future, I see open fields for far more important researches. Psychology will be securely based on a new foundation, that of the necessary acquirement of each mental power and capacity of gradation. Light will be thrown on the origin of man and his history.” He needn't have been so cautious: he fooled nobody. The very first review, five days before publication, asked, “If a monkey has become a man—…?”, and the tempest was soon at full force.

Darwin's critics, among them Max Müller, German-born professor of languages at Oxford, and Darwin's rival Alfred Wallace, seized upon human characteristics which had no obvious precursors in the animals from which man was supposed to have descended: a hairless body, the capacity for abstract thought, and, Müller's emphasis, speech. As Müller said, “Language is our Rubicon, and no brute will dare cross it.” How could Darwin's theory, which claimed to describe evolution from existing characteristics in ancestor species, explain completely novel properties which animals lacked?

Darwin responded with his 1871 The Descent of Man, and Selection in Relation to Sex, which explicitly argued that there were precursors to these supposedly novel human characteristics among animals, and that, for example, human speech was foreshadowed by the mating songs of birds. Sexual selection was suggested as the mechanism by which humans lost their hair, and the roots of a number of human emotions and even religious devotion could be found in the behaviour of dogs. Many found these arguments, presented without any concrete evidence, unpersuasive. The question of the origin of language had become so controversial and toxic that a year later, the Philological Society of London announced it would no longer accept papers on the subject.

With the rediscovery of Gregor Mendel's work on genetics and subsequent research in the field, a mechanism which could explain Darwin's evolution was in hand, and the theory became widely accepted, with the few discrepancies set aside (as the Philological Society had done with language) as things we weren't yet ready to figure out.

In the years after World War II, the social sciences became afflicted by a case of “physics envy”. The contribution to the war effort by their colleagues in the hard sciences in areas such as radar, atomic energy, and aeronautics had been handsomely rewarded with prestige and funding, while the more squishy sciences remained in a prewar languor along with the departments of Latin, Medieval History, and Drama. Clearly, what was needed was for these fields to adopt a theoretical approach grounded in mathematics, which had served so well for chemists, physicists, and engineers, and appeared to be working for the new breed of economists.

It was into this environment that in the late 1950s a young linguist named Noam Chomsky burst onto the scene. Over its century and a half of history, much of the work of linguistics had been cataloguing and studying the thousands of languages spoken by people around the world, much as entomologists and botanists (or, in the pejorative term of Darwin's age, flycatchers) travelled to distant lands to discover the diversity of nature and try to make sense of how it was all interrelated. In his 1957 book, Syntactic Structures, Chomsky, then just twenty-eight years old and working in the building at MIT where radar had been developed during the war, said all of this tedious and messy field work was unnecessary. Humans had evolved (note, “evolved”) a “language organ”, an actual physical structure within the brain—the “language acquisition device”—which children used to learn and speak the language they heard from their parents. All human languages shared a “universal grammar”, on top of which all the details of specific languages so carefully catalogued in the field were just fluff, like the specific shape and colour of butterflies' wings. Chomsky invented the “Martian linguist”, who was to become a fixture of his lectures and who, he claimed, would upon arriving on Earth quickly discover the unity underlying all human languages. No longer need the linguist leave his air conditioned office. As Wolfe writes in chapter 4, “Now, all the new, Higher Things in a linguist's life were to be found indoors, at a desk…looking at learned journals filled with cramped type instead of at a bunch of hambone faces in a cloud of gnats.”

Given the alternatives, most linguists opted for the office, and for the prestige that a theory-based approach to their field conferred, and by the 1960s, Chomsky's views had taken over linguistics, with only a few dissenters, at whom Chomsky hurled thunderbolts from his perch on academic Olympus. He transmuted into a general-purpose intellectual, pronouncing on politics, economics, philosophy, history, and whatever occupied his fancy, all with the confidence and certainty he brought to linguistics. Those who dissented he denounced as “frauds”, “liars”, or “charlatans”, including B. F. Skinner, Alan Dershowitz, Jacques Lacan, Elie Wiesel, Christopher Hitchens, and Jacques Derrida. (Well, maybe I agree when it comes to Derrida and Lacan.) In 2002, with two colleagues, he published a new theory claiming that recursion—embedding one thought within another—was a universal property of human language and component of the universal grammar hard-wired into the brain.

Since 1977, Daniel Everett had been living with and studying the Pirahã in Brazil, originally as a missionary and later as an academic linguist trained and working in the Chomsky tradition. He was the first person to successfully learn the Pirahã language, and documented it in publications. In 2005 he published a paper in which he concluded that the language, one of the simplest ever described, contained no recursion whatsoever. It also lacked past and future tenses, terms for relations beyond parents and siblings, gender, numbers, and many additional features of other languages. But the absence of recursion falsified Chomsky's theory, which pronounced it a fundamental part of all human languages. Here was a field worker, a flycatcher, braving not only gnats but anacondas, caimans, and just about every tropical disease in the catalogue, knocking the foundation from beneath the great man's fairy castle of theory. Naturally, Chomsky and his acolytes responded with their customary vituperation (this time the epithet of choice for Everett was “charlatan”). Just as they were preparing the academic paper which would drive a stake through this nonsense, Everett published Don't Sleep, There Are Snakes, a combined account of his thirty years with the Pirahã and an analysis of their language. The book became a popular hit and won numerous awards. In 2012, Everett followed up with Language: The Cultural Tool, which rejects Chomsky's view of language as an innate and universal human property in favour of the view that it is one among a multitude of artifacts created by human societies as a tool, and necessarily reflects the characteristics of those societies. Chomsky now refuses to discuss Everett's work.

In the conclusion, Wolfe comes down on the side of Everett, and argues that the solution to the mystery of how speech evolved is that it didn't evolve at all. Speech is simply a tool which humans used their big brains to invent to help them accomplish their goals, just as they invented bows and arrows, canoes, and microprocessors. It doesn't make any more sense to ask how evolution produced speech than it does to suggest it produced any of those other artifacts not made by animals. He further suggests that the invention of speech proceeded from initial use of sounds as mnemonics for objects and concepts, then progressed to more complex grammatical structure, but I found little evidence in his argument to back the supposition, nor is this a necessary part of viewing speech as an invented artifact. Chomsky's grand theory, like most theories made up without grounding in empirical evidence, is failing both by being falsified on its fundamentals by the work of Everett and others, and also by the failure, despite half a century of progress in neurophysiology, to identify the “language organ” upon which it is based.

It's somewhat amusing to see soft science academics rush to Chomsky's defence, when he's arguing that language is biologically determined as opposed to being, as Everett contends, a social construct whose details depend upon the cultural context which created it. A hunter-gatherer society such as the Pirahã, living in an environment where food is abundant and little changes over time scales from days to generations, doesn't need a language as complicated as those living in an agricultural society with division of labour, and it shouldn't be a surprise to find their language is more rudimentary. Chomsky assumed that all human languages were universal (able to express any concept), in the sense David Deutsch defined universality in The Beginning of Infinity, but why should every people have a universal language when some cultures get along just fine without universal number systems or alphabets? Doesn't it make a lot more sense to conclude that people settle on a language, like any other tool, which gets the job done? Wolfe then argues that the capacity of speech is the defining characteristic of human beings, and enables all of the other human capabilities and accomplishments which animals lack. I'd consider this not proved. Why isn't the definitive human characteristic the ability to make tools, and language simply one among a multitude of tools humans have invented?

This book strikes me as one or two interesting blog posts struggling to escape from a snarknado of Wolfe's 1960s style verbal fireworks, including Bango!, riiippp, OOOF!, and “a regular crotch crusher!”. At age 85, he's still got it, but I wonder whether he, or his editor, questioned whether this style of journalism is as effective when discussing evolutionary biology and linguistics as in mocking sixties radicals, hippies, or pretentious artists and architects. There is some odd typography, as well. Grave accents are used in words like “learnèd”, presumably to indicate it's to be pronounced as two syllables, but then occasionally we get an acute accent instead—what's that supposed to mean? Chapter endnotes are given as superscript letters while source citations are superscript numbers, neither of which are easy to select on a touch-screen Kindle edition. There is no index.

 Permalink

February 2017

Verne, Jules. Hector Servadac. Seattle: CreateSpace, [1877] 2014. ISBN 978-1-5058-3124-5.
Over the years, I have been reading my way through the classic science fiction novels of Jules Verne, and I have prepared public domain texts of three of them which are available on my site and Project Gutenberg. Verne not only essentially invented the modern literary genre of science fiction, he was an extraordinarily prolific author, publishing sixty-two novels in his Voyages extraordinaires between 1863 and 1905. What prompted me to pick up the present work was an interview I read in December 2016, in which Freeman Dyson recalled that it was reading this book at around the age of eight which, more than anything, set him on a course to become a mathematician and physicist. He notes that he originally didn't know it was fiction, and was disappointed to discover the events recounted hadn't actually happened. Well, that's about as good a recommendation as you can get, so I decided to put Hector Servadac on the list.

On the night of December 31–January 1, Hector Servadac, a captain in the French garrison at Mostaganem in Algeria, found it difficult to sleep, since in the morning he was to fight a duel with Wassili Timascheff, his rival for the affections of a young woman. During the night, the captain and his faithful orderly, Laurent Ben-Zouf, perceived an enormous shock, regained consciousness amid the ruins of their hut, and found themselves in a profoundly changed world.

Thus begins a scientific detective story very different from many of Verne's other novels. We have the resourceful and intrepid Captain Servadac and his humorous side-kick Ben-Zouf, to be sure, but instead of undertaking a perilous voyage of exploration, they are taken on a voyage, by forces unknown, and must discover what has happened and explain the odd phenomena they are experiencing. And those phenomena are curious, indeed: the Sun rises in the west and sets in the east, and the day is now only twelve hours long; their weight, and that of all objects, has been dramatically reduced, and they can now easily bound high into the air; the air itself seems to have become as thin as on high mountain peaks; the Moon has vanished from the sky; the pole has shifted and there is a new north star; and their latitude now seems to be near the equator.

Exploring their environs only adds mysteries to the ever-growing list. They now seem to inhabit an island of which they are the only residents: the rest of Algeria has vanished. Eventually they make contact with Count Timascheff, whose yacht was standing offshore and, setting aside their dispute (the duel deferred in light of greater things is a theme you'll find elsewhere in the works of Verne), they seek to explore the curiously altered world they now inhabit.

Eventually, they discover its inhabitants seem to number only thirty-six: themselves, the Russian crew of Timascheff's yacht; some Spanish workers; a young Italian girl and Spanish boy; Isac Hakhabut, a German Jewish itinerant trader whose ship full of merchandise survived the cataclysm; the remainder of the British garrison at Gibraltar, which has been cut off and reduced to a small island; and Palmyrin Rosette, formerly Servadac's teacher (the two remain each other's nemeses), an eccentric and irritable astronomer. They set out on a voyage of exploration and begin to grasp what has happened and what they must do to survive.

In 1865, Verne took us De la terre à la lune. Twelve years later, he treats us to a tour of the solar system, from the orbit of Venus to that of Jupiter, with abundant details of what was known about our planetary neighbourhood in his era. As usual, his research is nearly impeccable, although the orbital mechanics are fantasy and must be attributed to literary license: a body with an orbit which crosses those of Venus and Jupiter cannot have an orbital period of two years: it will be around five years, but that wouldn't work with the story. Verne has his usual fun with the national characteristics of those we encounter. Modern readers may find the descriptions of the miserly Jew Hakhabut and the happy but indolent Spaniards offensive—so be it—such is nineteenth century literature.
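The five-year figure is a straightforward consequence of Kepler's third law, which for a heliocentric orbit reduces to P = a^1.5 with the semi-major axis a in astronomical units and the period P in years. A quick check, with the orbital radii rounded and the minimal case assumed (perihelion right at Venus's orbit, aphelion right at Jupiter's):

```python
# Kepler's third law check: an orbit crossing both Venus's and Jupiter's
# orbits must have a period of roughly five years, not two.
venus_au   = 0.72  # Venus's orbital radius, AU (rounded)
jupiter_au = 5.20  # Jupiter's orbital radius, AU (rounded)

# Smallest such orbit: perihelion at Venus, aphelion at Jupiter.
a_min = (venus_au + jupiter_au) / 2   # semi-major axis, AU
period = a_min ** 1.5                 # Kepler's third law, years
print(f"Minimum period: {period:.1f} years")
# → Minimum period: 5.1 years
```

Any orbit reaching farther in either direction only lengthens the period, so two years is flatly impossible.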

This is a grand adventure: funny, enlightening, and engaging the reader in puzzling out mysteries of physics, astronomy, geology, chemistry, and, if you're like this reader, checking the author's math (which, orbital mechanics aside, is more or less right, although he doesn't make the job easy by using a multitude of different units). It's completely improbable, of course—you don't go to Jules Verne for that: he's the fellow who shot people to the Moon with a nine hundred foot cannon—but just as readers of modern science fiction are willing to accept faster than light drives to make the story work, a little suspension of disbelief here will yield a lot of entertainment.

Jules Verne is the second most translated of modern authors (Agatha Christie is the first) and the most translated of those writing in French. Regrettably, Verne, and his reputation, have suffered from poor translation. He is a virtuoso of the French language, using his large vocabulary to layer meanings and subtexts beneath the surface, and many translators fail to preserve these subtleties. There have been several English translations of this novel under different titles (which I shall decline to state, as they are spoilers for the first half of the book), none of which are deemed worthy of the original.

I read the Kindle edition from Arvensa, which is absolutely superb. You don't usually expect much when you buy a Kindle version of a public domain work for US$ 0.99, but in this case you'll receive a thoroughly professional edition free of typographical errors which includes all of the illustrations from the original 1877 Hetzel edition. In addition there is a comprehensive biography of Jules Verne and an account of his life and work published at the height of his career. Further, the Kindle French dictionary, a free download, is superb at coping with Verne's enormous vocabulary. Verne is very fond of obscure terms, and whether discussing nautical terminology, geology, astronomy, or any other specialties, peppers his prose with jargon which used to send me off to flip through the Little Bob. Now it's just a matter of highlighting the word (in the iPad Kindle app), and up pops the definition from the amazingly comprehensive dictionary. (This is a French-French dictionary; if you need a dictionary which provides English translations, you'll need to install such an application.) These Arvensa Kindle editions are absolutely the best way to enjoy Jules Verne and other classic French authors, and I will definitely seek out others to read in the future. You can obtain the complete works of Jules Verne, 160 titles, with 5400 illustrations, for US$ 2.51 at this writing.

 Permalink

Jenne, Mike. Pale Blue. New York: Yucca Publishing, 2016. ISBN 978-1-63158-084-0.
This is the final novel in the trilogy which began with Blue Gemini (April 2016) and continued in Blue Darker than Black (August 2016). After the harrowing rescue mission which concluded the second book, Drew Carson and Scott Ourecky, astronauts of the U.S. Air Force's covert Blue Gemini project, a manned satellite interceptor based upon NASA's Project Gemini spacecraft, hope for a long stand-down before what is slated to be the final mission in the project, whose future is uncertain due to funding issues, inter-service rivalry, the damage to its Pacific island launch site due to a recent tropical storm, and the upcoming 1972 presidential election.

Meanwhile, in the Soviet Union, progress continues on the Krepost project: a manned space station equipped for surveillance and armed with a nuclear warhead which can be de-orbited and dropped on any target along the station's ground track. General Rustam Abdirov, a survivor of the Nedelin disaster in 1960, is pushing the project to completion through his deputy, Gregor Yohzin, and believes it may hold the key to breaking what Abdirov sees as the stalemate of the Cold War. Yohzin is increasingly worried about Abdirov's stability and the risks posed by the project, and has been covertly passing information to U.S. intelligence.

As information from Yohzin's espionage reaches Blue Gemini headquarters, Carson and Ourecky are summoned back and plans drawn up to intercept the orbital station before a crew can be launched to it, after which destroying it would not only be hazardous, but could provoke a superpower confrontation. On the Soviet side, nothing is proceeding as planned, and the interception mission must twist and turn based upon limited and shifting information.

About halfway through the book, and after some big surprises, the Krepost crisis is resolved. The reader might then be inclined to wonder, “what next?” What follows is a war story, set in the final days of the Vietnam conflict, and for quite a while it seems incongruous and unrelated to all that has gone before. I have remarked in reviews of the earlier books of the trilogy that the author is keeping a large number of characters and sub-plots in the air, and wondered whether and how he was going to bring it all together. Well, in the last five chapters he does it, magnificently, and ties everything up with a bow on top, ending what has been a rewarding thriller with a moving, human conclusion.

There are a few goofs. Launch windows to inclined Earth orbits occur every day; in case of a launch delay, there is no need for a long wait before the next launch attempt (chapter 4). Attempting to solve a difficult problem, “the variables refused to remain constant”—that's why they're called variables (chapter 10)! Beaujolais is red, not white, wine (chapter 16). A character claims to have seen a hundred stars in the Pleiades from space with the unaided eye. This is impossible: while the cluster contains around 1000 stars, only 14 are bright enough to be seen with the best human vision under the darkest skies. Observing from space is slightly better than from the Earth's surface, but in this case the observer would have been looking through a spacecraft window, which would attenuate light more than the Earth's atmosphere (chapter 25). MIT's Draper Laboratory did not design the Gemini on-board computer; it was developed by the IBM Federal Systems Division (chapter 26).

The trilogy is a big, sprawling techno-thriller with interesting and complicated characters, and includes space flight, derring-do in remote and dangerous places, military and political intrigue in both the U.S. and the Soviet Union, espionage, and a look at how the stresses of military life and participation in black programs make the lives of those involved difficult. Although the space program which is the centrepiece of the story is fictional, the attention to detail is exacting: had it existed, this is probably how it would have been done. I have one big quibble with a central part of the premise, which I will discuss behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
The rationale for the Blue Gemini program which caused it to be funded is largely as a defence against a feared Soviet “orbital bombardment system”: one or more satellites which, placed in orbits which regularly overfly the U.S. and allies, could be commanded to deorbit and deliver nuclear warheads to any location below. It is the development of such a weapon, its deployment, and a mission to respond to the threat which form the core of the plot of this novel.

But an orbital bombardment system isn't a very useful weapon, and doesn't make much sense, especially in the context of the late 1960s to early '70s in which this story is set. The Krepost of the novel was armed with a single high-yield weapon, and operated in a low Earth orbit at an inclination of 51°. The weapon was equipped with only a retrorocket and heat shield, and would have little cross-range (ability to hit targets lateral to its orbital path). This would mean that in order to hit a specific target, the orbital station would have to wait up to a day for the Earth to rotate so the target was aligned with the station's orbital plane. And this would allow bombardment of only a single target with one warhead. Keeping the station ready for use would require a constant series of crew ferry and freighter launches, all to maintain just one bomb on alert. By comparison, by 1972, the Soviet Union had on the order of a thousand warheads mounted on ICBMs, which required no space launch logistics to maintain, and could reach targets anywhere within half an hour of the launch order being given. Finally, a space station in low Earth orbit is pretty much a sitting duck for countermeasures. It is easy to track from the ground, and has limited maneuvering capability. Even guns in space do not much mitigate the threat from a variety of anti-satellite weapons, including Blue Gemini.
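The alignment delay described above is easy to estimate. Here is a back-of-the-envelope sketch; the sidereal-day length is standard, but everything else (ignoring nodal precession of the orbit, and assuming zero cross-range) is a deliberate simplification for illustration:

```python
# With negligible cross-range, a deorbited weapon can only reach targets
# lying in (or very near) the station's orbital plane. That plane is
# nearly fixed in inertial space, so the wait for a given target is just
# the time for Earth's rotation to close the longitude gap between the
# target and the plane's ground track.

SIDEREAL_DAY_H = 23.934  # hours for one Earth rotation relative to the stars

def wait_hours(longitude_gap_deg):
    """Hours until Earth's rotation carries a target, currently
    longitude_gap_deg of longitude away from the orbital plane
    (measured in the direction of rotation), under the plane."""
    return SIDEREAL_DAY_H * (longitude_gap_deg % 360.0) / 360.0

# A target that has just slipped past the plane waits nearly a full day:
print(f"{wait_hours(359.0):.1f} hours")
```

Even granting cross-range or multiple daily passes, the response time remains hours, not the roughly thirty minutes of an ICBM, which is the heart of the argument against the system.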

While the drawbacks of orbital deployment of nuclear weapons caused the U.S. and Soviet Union to eschew them in favour of more economical and secure platforms such as silo-based missiles and ballistic missile submarines, their appearance here does not make this “what if?” thriller any less effective or thrilling. This was the peak of the Cold War, and both adversaries explored many ideas which, in retrospect, appear to have made little sense. A hypothetical Soviet nuclear-armed orbital battle station is no less crazy than Project Pluto in the U.S.

Spoilers end here.  
This trilogy is one long story which spans three books. The second and third novels begin with brief summaries of prior events, but these are intended mostly for readers who have forgotten where the previous volume left off. If you don't read the three books in order, you'll miss a great deal of the character and plot development which makes the entire story so rewarding. More than 1600 pages may seem a large investment in a fictional account of a Cold War space program that never happened, but the technical authenticity; realistic portrayal of military aerospace projects and the interaction of pilots, managers, engineers, and politicians; and complicated and memorable characters made it more than worthwhile to this reader.

 Permalink

March 2017

Awret, Uziel, ed. The Singularity. Exeter, UK: Imprint Academic, 2016. ISBN 978-1-84540-907-4.
For more than half a century, the prospect of a technological singularity has been part of the intellectual landscape of those envisioning the future. In 1965, in a paper titled “Speculations Concerning the First Ultraintelligent Machine”, statistician I. J. Good wrote,

Let an ultra-intelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, largely because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment.

Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.
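The straight-lines-on-semi-log-paper exercise is elementary arithmetic. As a minimal sketch (the starting capacity, brain-equivalent figure, and doubling time below are placeholder assumptions for illustration, not estimates from the text):

```python
import math

def years_to_reach(current, target, doubling_time_years):
    """Years for a quantity growing exponentially, doubling every
    doubling_time_years, to grow from current to target."""
    return doubling_time_years * math.log2(target / current)

# Placeholder numbers: 1e14 ops/s per constant dollar today, a
# (controversial) brain-equivalent estimate of 1e18 ops/s, and
# price-performance doubling every two years.
years = years_to_reach(1e14, 1e18, 2.0)
print(f"Crosses the threshold in about {years:.0f} years")
```

Note that the wait grows only logarithmically with the target: raising the brain-capacity estimate a thousandfold adds only about twenty years at a two-year doubling time, which is why the conclusion is so insensitive to the disputed estimate of the brain's computing power.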

  1. Progress in computing comes to a halt before reaching parity with human brain power, due to technological limits, economics (inability to afford the new technologies required, or lack of applications to fund the intermediate steps), or intervention by authority (for example, regulation motivated by a desire to avoid the risks and displacement due to super-human intelligence).
  2. Computing continues to advance, but we find that the human brain is either far more complicated than we believed it to be, or that something is going on in there which cannot be modelled or simulated by a deterministic computational process. The goal of human-level artificial intelligence recedes into the distant future.
  3. Blooie! Human level machine intelligence is achieved, successive generations of machine intelligences run away to approach the physical limits of computation, and before long machine intelligence exceeds that of humans to the degree humans surpass the intelligence of mice (or maybe insects).

Now, the thing about this is that many people will dismiss such speculation as science fiction having nothing to do with the “real world” they inhabit. But there's no more conservative form of forecasting than observing a trend which has been in existence for a long time (in the case of growth in computing power, more than a century, spanning multiple generations of very different hardware and technologies), and continuing to extrapolate it into the future and then ask, “What happens then?” When you go through this exercise and an answer pops out which seems to indicate that within the lives of many people now living, an event completely unprecedented in the history of our species—the emergence of an intelligence which far surpasses that of humans—might happen, the prospects and consequences bear some serious consideration.

The present book, based upon two special issues of the Journal of Consciousness Studies, attempts to examine the probability, nature, and consequences of a singularity from a variety of intellectual disciplines and viewpoints. The volume begins with an essay by philosopher David Chalmers originally published in 2010: “The Singularity: a Philosophical Analysis”, which attempts to trace various paths to a singularity and evaluate their probability. Chalmers does not attempt to estimate the time at which a singularity may occur—he argues that if it happens any time within the next few centuries, it will be an epochal event in human history which is worth thinking about today. Chalmers contends that the argument for artificial intelligence (AI) is robust because there appear to be multiple paths by which we could get there, and hence AI does not depend upon a fragile chain of technological assumptions which might break at any point in the future. We could, for example, continue to increase the performance and storage capacity of our computers, to such an extent that the “deep learning” techniques already used in computing applications, combined with access to a vast amount of digital data on the Internet, may cross the line of human intelligence. Or, we may continue our progress in reverse-engineering the microstructure of the human brain and apply our ever-growing computing power to emulating it at a low level (this scenario is discussed in detail in Robin Hanson's The Age of Em [September 2016]). Or, since human intelligence was produced by the process of evolution, we might set our supercomputers to simulate evolution itself (which we're already doing to some extent with genetic algorithms) in order to evolve super-human artificial intelligence (not only would computer-simulated evolution run much faster than biological evolution, it would not be random, but rather directed toward desired results, much like selective breeding of plants or livestock).

Regardless of the path or paths taken, the outcome will be one of those discussed above: ultimately, either a singularity or no singularity. Assume, arguendo, that the singularity occurs, whether before 2050 as some optimists project or many decades later. What will it be like? Will it be good or bad? Chalmers writes,

I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or in what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment?

What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself? And might machines have qualia of their own?

Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry?

Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact as its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what?

Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced? After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter?

Deep questions, these. The book presents Chalmers' paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised. A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”.

Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing, without any notable resolution in more than two millennia, since societies became sufficiently wealthy to indulge them, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't. They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments: answers are a threat to the job security of those who make their living disputing questions their ancestors have been puzzling over at least since Athens.

Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and that consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument with evidence that simulating anything on the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future, even if computing power continues to grow at the rate of the last century. There is even a case made that the feasibility of a singularity makes it overwhelmingly probable that we're living, not in a top-level physical universe, but in a simulation run by post-singularity super-intelligences, and that they may be motivated to turn off our simulation before we reach our own singularity, which may threaten them.

This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:

If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный, “uncultured”, and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which is useless since the electronic edition does not include page numbers. There is no index.

 Permalink

Hannan, Daniel. What Next. London: Head of Zeus, 2016. ISBN 978-1-78669-193-4.
On June 23rd, 2016, the people of the United Kingdom, against the advice of most politicians, big business, organised labour, corporate media, academia, and their self-styled “betters”, narrowly voted to re-assert their sovereignty and reclaim the independence of their proud nation, which was slowly being dissolved in an “ever closer union” with the anti-democratic, protectionist, corrupt, bankrupt, and increasingly authoritarian European Union (EU). On the day of the referendum, bookmakers gave odds which implied less than a 20% chance of a Leave vote, and yet the morning after, the common sense and perception of right and wrong of the British people, which had caused them to prevail in the face of wars, economic and social crises, and a changing international environment, re-asserted itself and caused them to say, “No more, thank you. We prefer our thousand year tradition of self-rule to being dictated to by unelected foreign oligarchic technocrats.”

The author, Conservative Member of the European Parliament for South East England since 1999, has been one of the most vociferous and eloquent partisans of Britain's reclaiming its independence and campaigners for a Leave vote in the referendum; the vote was a personal triumph for him. In the introduction, he writes, “After forty-three years, we have pushed the door ajar. A rectangle of light dazzles us and, as our eyes adjust, we see a summer meadow. Swallows swoop against the blue sky. We hear the gurgling of a little brook. Now to stride into the sunlight.” What next, indeed?

Before presenting his vision of an independent, prosperous, and more free Britain, he recounts Britain's history in the European Union, the sordid state of the institutions of that would-be socialist superstate, and the details of the Leave campaign, including a candid and sometimes acerbic view not just of his opponents but also of nominal allies. Hannan argues that Leave ultimately won because those advocating it were able to present a positive future for an independent Britain. He says that every time the Leave message veered toward the negatives of the existing relationship with the EU, in particular immigration, polling in favour of Leave declined, and that when the positive benefits of independence were stressed—for example, free trade with Commonwealth nations and the rest of the world, local control of Britain's fisheries and agriculture, and living under laws made in Britain by a parliament elected by the British people—Leave's polling improved. Fundamentally, you can only get so far asking people to vote against something, especially when the establishment is marching in lockstep to create fear of the unknown among the electorate. Presenting a positive vision was, Hannan believes, essential to prevailing.

Central to understanding a post-EU Britain is the distinction between a free-trade area and a customs union. The EU has done its best to confuse people about this issue, presenting its single market as a kind of free trade utopia. Nothing could be farther from the truth. A free trade area is just what the name implies: a group of states which have eliminated tariffs and other barriers such as quotas, and allow goods and services to cross borders unimpeded. A customs union such as the EU establishes standards for goods sold within its internal market which, through regulation, members are required to enforce (hence, the absurdity of unelected bureaucrats in Brussels telling the French how to make cheese). Further, while goods conforming to the regulations can be sold within the union, there are major trade barriers with parties outside, often imposed to protect industries with political pull inside the union. For example, wine produced in California or Chile is subject to a 32% tariff imposed by the EU to protect its own winemakers. British apparel manufacturers cannot import textiles from India, a country with long historical and close commercial ties, without paying EU tariffs intended to protect uncompetitive manufacturers on the Continent. Pointy-headed and economically ignorant “green” policies compound the problem: a medium-sized company in the EU pays 20% more for energy than a competitor in China and twice as much as one in the United States. In international trade disputes, Britain in the EU is represented by one twenty-eighth of a European Commissioner, while an independent Britain will have its own seat, like New Zealand, Switzerland, and the US.

Hannan believes that after leaving the EU, the UK should join the European Free Trade Association (EFTA), and demonstrates how EFTA states such as Norway and Switzerland are more prosperous than EU members and trade more successfully with countries outside the EU. (He argues against joining the European Economic Area [EEA], from which Switzerland has wisely opted out. The EEA provides too much leverage to the Brussels imperium to meddle in the policies of member states.) More important for Britain's future than its relationship to the EU is its ability, once outside, to conclude bilateral trade agreements with important trading partners such as the US (even, perhaps, joining NAFTA), with Anglosphere countries such as Australia, South Africa, and New Zealand, and with India, China, Russia, Brazil, and other nations: none of which it can do while a member of the EU.

What of Britain's domestic policy? Free of diktats from Brussels, it will be whatever Britons wish, expressed through their representatives at Westminster. Hannan quotes the psychologist Kurt Lewin, who in the 1940s described change as a three stage process. First, old assumptions about the way things are and the way they have to be become “unfrozen”. This ushers in a period of rapid transformation, where institutions become fluid and can adapt to changed circumstances and perceptions. Then the new situation congeals into a status quo which endures until the next moment of unfreezing. For four decades, Britain has been frozen into an inertia where parliamentarians and governments respond to popular demands all too often by saying, “We'd like to do that, but the EU doesn't permit it.” Leaving the EU will remove this comfortable excuse, and possibly catalyse a great unfreezing of Britain's institutions. Where will this ultimately go? Wherever the people wish it to. Hannan has some suggestions for potential happy outcomes in this bright new day.

Britain has devolved substantial governance to Scotland, and yet Scottish MPs still vote in Westminster for policies which affect England but to which their constituents are not subject. Perhaps federalisation might progress to the point where the House of Commons becomes the English Parliament, with either a reformed House of Lords or a new body empowered to vote only on matters affecting the entire Union such as national defence and foreign policy. Free of the EU, the UK can adopt competitive corporate taxation and governance policies, and attract companies from around the world to build not just headquarters but also research and development and manufacturing facilities. The national VAT could be abolished entirely and replaced with a local sales tax, paid at point of retail, set by counties or metropolitan areas in competition with one another (current payments to these authorities by the Treasury are almost exactly equal to revenue from the VAT); with competition, authorities will be forced to economise lest their residents vote with their feet. With their own source of revenue, decision making for a host of policies, from housing to welfare, could be pushed down from Whitehall to City Hall. Immigration can be re-focused upon the need of the country for skills and labour, not thrown open to anybody who arrives.

The British vote for independence has been decried by the elitists, oligarchs, and would-be commissars as a “populist revolt”. (Do you think those words too strong? Did you know that all of those EU politicians and bureaucrats are exempt from taxation in their own countries, and pay a flat tax of around 21%, far less than the despised citizens they rule?) What is happening, first in Britain, and before long elsewhere as the corrupt foundations of the EU crumble, is that the working classes are standing up to the smirking classes and saying, “Enough.” Britain's success, which (unless the people are betrayed and their wishes subverted) is assured, since freedom and democracy always work better than slavery and bureaucratic dictatorship, will serve to demonstrate to citizens of other railroad-era continental-scale empires that smaller, agile, responsive, and free governance is essential for success in the information age.

 Permalink

Pratchett, Terry and Stephen Baxter. The Long War. New York: HarperCollins, 2013. ISBN 978-0-06-206869-9.
This is the second novel in the authors' series which began with The Long Earth (November 2012). That book, which I enjoyed immensely, created a vast new arena for storytelling: a large, perhaps infinite, number of parallel Earths, all synchronised in time, among which people can “step” with the aid of a simple electronic gizmo (incorporating a potato) whose inventor posted the plans on the Internet on what has since been called Step Day. Some small fraction of the population has always been “natural steppers”—able to move among universes without mechanical assistance, but other than that tiny minority, all of the worlds of the Long Earth beyond our own (called the Datum) are devoid of humans. There are natural stepping humanoids, dubbed “elves” and “trolls”, but none with human-level intelligence.

As this book opens, a generation has passed since Step Day, and the human presence has begun to expand into the vast expanses of the Long Earth. Most worlds are pristine wilderness, with all the dangers to pioneers venturing into places where large predators have never been controlled. Joshua Valienté, whose epic voyage of exploration with Lobsang (who from moment to moment may be a motorcycle repairman, computer network, Tibetan monk, or airship) discovered the wonders of these innumerable worlds in the first book, has settled down to raise a family on a world in the Far West.

Humans being humans, this gift of what amounts to an infinitely larger scope for their history has not been without its drawbacks and conflicts. With the opening of an endless frontier, the restless and creative have decamped from the Datum to seek adventure and fortune free of the crowds and control of their increasingly regimented home world. This has caused a drop in innovation and an economic hit to the Datum, and prompted Datum politicians (particularly in the United States, the grabbiest of all jurisdictions) to seek to expand their control (and particularly their ability to loot) to all residents of the so-called “Aegis”—the geographical footprint of its territory across the multitude of worlds. The trolls, who mostly get along with humans and work for them, hear news from across the worlds through their “long call” of scandalous mistreatment of their kind by humans in some places, and now appear to have vanished from many human settlements to parts unknown. A group of worlds in the American Aegis in the distant West have adopted the Valhalla Declaration, asserting their independence from the greedy and intrusive government of the Datum and, in response, the Datum is sending a fleet of stepping airships (or “twains”, named for the Mark Twain of the first novel) to assert its authority over these recalcitrant emigrants. Joshua and Sally Linsay, pioneer explorers, return to the Datum to make their case for the rights of trolls. China mounts an ambitious expedition to the unseen worlds of its footprint in the Far East.

And so it goes, for more than four hundred pages. This really isn't a novel at all, but rather four or five novellas interleaved with one another, where the individual stories barely interact before most of the characters meet at a barbecue in the next to last chapter. When I put down The Long Earth, I concluded that the authors had created a stage in which all kinds of fiction could play out and looked forward to seeing what they'd do with it. What a disappointment! There are a few interesting concepts, such as evolutionary consequences of travel between parallel Earths and technologies which oppressive regimes use to keep their subjects from just stepping away to freedom, but they are few and far between. There is no war! If you're going to title your book The Long War, many readers are going to expect one, and it doesn't happen. I can recall only two laugh-out-loud lines in the entire book, which is hardly what you expect when picking up a book with Terry Pratchett's name on the cover. I shall not be reading the remaining books in the series which, if Amazon reviews are to be believed, go downhill from here.

 Permalink

April 2017

Houellebecq, Michel. Soumission. Paris: J'ai Lu, [2015] 2016. ISBN 978-2-290-11361-5.
If you examine the Pew Research Center's table of Muslim Population by Country, giving the percent Muslim population for countries and territories, one striking thing is apparent. Here are the results, binned into quintiles.

Quintile   % Muslim   Countries
    1       100–80        36
    2        80–60         5
    3        60–40         8
    4        40–20         7
    5        20–0        132

The distribution in this table is strongly bimodal—instead of the Gaussian (normal, or “bell curve”) distribution one encounters so often in the natural and social sciences, the countries cluster at the extremes: 36 are 80% or more Muslim, 132 are 20% or less Muslim, and only a total of 20 fall in the middle between 20% and 80%. What is going on?

I believe this is evidence for an Islamic population fraction greater than some threshold above 20% being an attractor in the sense of dynamical systems theory. With the Islamic doctrine of its superiority to other religions and destiny to bring other lands into its orbit, plus scripturally-sanctioned discrimination against non-believers, once a Muslim community reaches a certain critical mass, and if it retains its identity and coherence, resisting assimilation into the host culture, it will tend to grow not just organically but by making conversion (whether sincere or motivated by self-interest) an attractive alternative for those who encounter Muslims in their everyday life.
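
The attractor claim can be made concrete with a toy model (my own sketch, not from the review or the novel): a single population fraction which grows when above an assumed threshold and shrinks when below it. The threshold (25%) and rate constant here are arbitrary illustrative choices; the point is only that any dynamic of this shape drives initial conditions toward the extremes, producing the bimodal clustering seen in the table above.

```python
# Toy one-dimensional dynamical system with an unstable interior
# threshold.  Long-run states cluster near 0 and 1, mirroring the
# bimodal country distribution.  Threshold and rate are assumptions.

def step(f, threshold=0.25, rate=0.05):
    """One generation: the fraction f grows if above the threshold,
    shrinks if below; the f*(1-f) factor keeps it inside [0, 1]."""
    drift = rate * (f - threshold) * f * (1.0 - f)
    return min(1.0, max(0.0, f + drift))

def long_run(f0, generations=2000):
    """Iterate the map from initial fraction f0."""
    f = f0
    for _ in range(generations):
        f = step(f)
    return f

# Starting fractions on either side of the threshold end up at the
# extremes: 10% decays toward 0, everything above 25% grows toward 1.
outcomes = [round(long_run(f0 / 100), 2) for f0 in range(10, 100, 20)]
# outcomes -> [0.0, 1.0, 1.0, 1.0, 1.0]
```

The middle of the range is repelled: only the two extremes are stable, which is one way a histogram like the one above could arise.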

If this analysis is correct, what is the critical threshold? Well, that's the big question, particularly for countries in Europe which have admitted substantial Muslim populations that are growing faster than the indigenous population due to a higher birthrate and ongoing immigration, and where there is substantial evidence that subsequent generations are retaining their identity as a distinct culture apart from that of the country where they were born. What happens as the threshold is crossed, and what does it mean for the original residents and institutions of these countries?

That is the question explored in this satirical novel set in the year 2022, in the period surrounding the French presidential election of that year. In the 2017 election, the Front national narrowly won the first round of the election, but was defeated in the second round by an alliance between the socialists and traditional right, resulting in the election of a socialist president in a country with a centre-right majority.

Five years after an election which satisfied few people, the electoral landscape has shifted substantially. A new party, the Fraternité musulmane (Muslim Brotherhood), led by the telegenic, pro-European, and moderate Mohammed Ben Abbes, French-born son of a Tunisian immigrant, has grown to rival the socialist party for second place behind the Front national, which remains safely ahead in projections for the first round. When the votes are counted, the unthinkable has happened: all of the traditional government parties are eliminated, and the second round will be a run-off between FN leader Marine Le Pen and Ben Abbes.

These events are experienced and recounted by “François” (no last name is given), a fortyish professor of literature at the Sorbonne, a leading expert on the 19th century French writer Joris-Karl Huysmans, who was considered a founder of the decadent movement but later in life reverted to Catholicism and became a Benedictine oblate. François is living what may be described as a modern version of the decadent life. Single, living alone in a small apartment where he subsists mostly on microwaved dinners, he has become convinced his intellectual life peaked with the publication of his thesis on Huysmans, and now does little more than go through the motions teaching his classes at the university. His amorous life is largely confined to a serial set of affairs with his students, most of which end with the academic year when they “meet someone” and, in the gaps, liaisons with “escorts” in which he indulges in the kind of perversion the decadents celebrated in their writings.

About the only thing which interests him is politics and the election, not as a participant but as an observer, watching television by himself. After the first round election, there is the stunning news that, in order to prevent a Front national victory, the Muslim Brotherhood, socialist, and traditional right parties have formed an alliance supporting Ben Abbes for president, with an agreed division of ministries among the parties. Myriam, François' current girlfriend, leaves with her Jewish family to settle in Israel, joining many of her faith who anticipate what is coming, having seen it so many times before in the history of their people.

François follows in the footsteps of Huysmans, visiting the Benedictine monastery in Martel, a village said to have been founded by Charles Martel, who defeated the Muslim invasion of Europe in a.d. 732 at the Battle of Tours. He finds neither solace nor inspiration there and returns to Paris where, with the alliance triumphant in the second round of the election and Ben Abbes president, changes are immediately apparent.

Ethnic strife has fallen to a low level: the Muslim community sees itself ascendant and has no need for political agitation. The unemployment rate has fallen to historical lows: forcing women out of the workforce will do that, especially when they are no longer counted in the statistics. Polygamy has been legalised, as part of the elimination of gender equality under the law. More and more women on the street dress modestly and wear the veil. The Sorbonne has been “privatised”, becoming the Islamic University of Paris, and all non-Muslim faculty, including François, have been dismissed. With generous funding from the petro-monarchies of the Gulf, François and other now-redundant academics receive lifetime pensions sufficient that they never need work again, but it grates upon them to see intellectual inferiors, after a cynical and insincere conversion to Islam, replace them at salaries often three times higher than they received.

Unemployed, François grasps at an opportunity to edit a new edition of Huysmans for Pléiade, and encounters Robert Rediger, an ambitious academic who has been appointed rector of the Islamic University and has the ear of Ben Abbes. They later meet at Rediger's house, where, over a fine wine, he gives François a copy of his introductory book on Islam, explains the benefits of polygamy and arranged marriage to a man of his social standing, and the opportunities open to Islamic converts in the new university.

Eventually, François, like France, ends in submission.

As G. K. Chesterton never actually said, “When a man stops believing in God he doesn't then believe in nothing; he believes anything.” (The false quotation appears to be a synthesis of similar sentiments expressed by Chesterton in a number of different works.) Whatever the attribution, there is truth in it. François is an embodiment of post-Christian Europe, where the nucleus around which Western civilisation has been built since the fall of the Roman Empire has evaporated, leaving a void which deprives people of the purpose, optimism, and self-confidence of their forebears. Such a vacuum is more likely to be filled with something (anything) than to long endure, especially when an aggressive, virile, ambitious, and prolific competitor has established itself in the lands of the decadent.

An English translation is available. This book is not recommended for young readers due to a number of sex scenes I found gratuitous and, even to this non-young reader, somewhat icky. This is a social satire, not a forecast of the future, but I found it more plausible than many scenarios envisioned for a Muslim conquest of Europe. I'll leave you to discover for yourself how the clever Ben Abbes envisions co-opting Eurocrats in his project of grand unification.

 Permalink

May 2017

Jacobsen, Annie. Phenomena. New York: Little, Brown, 2017. ISBN 978-0-316-34936-9.
At the end of World War II, it was clear that science and technology would be central to competition among nations in the postwar era. The development of nuclear weapons, German deployment of the first operational ballistic missile, and the introduction of jet-propelled aircraft pointed the way to a technology-driven arms race, and both the U.S. and the Soviet Union scrambled to lay hands on the secret super-weapon programs of the defeated Nazi regime. On the U.S. side, the Alsos Mission not only sought information on German nuclear and missile programs, but also came across even more bizarre projects, such as those undertaken by Berlin's Ahnenerbe Institute, founded in 1935 by SS leader Heinrich Himmler. Investigating the institute's headquarters in a Berlin suburb, Samuel Goudsmit, chief scientist of Alsos, found what he described as “Remnants of weird Teutonic symbols and rites … a corner with a pit of ashes in which I found the skull of an infant.” What was going on? Had the Nazis attempted to weaponise black magic? And, to the ever-practical military mind, did it work?

In the years after the war, the intelligence community and military services in both the U.S. and Soviet Union would become involved in the realm of the paranormal, funding research and operational programs based upon purported psychic powers for which mainstream science had no explanation. Both superpowers were not only seeking super powers for their spies and soldiers, but also looking over their shoulders afraid the other would steal a march on them in exploiting these supposed powers of mind. “We can't risk a ‘woo-woo gap’ with the adversary!”

Set aside for a moment (as did most of the agencies funding this research) the question of just how these mental powers were supposed to work. If they did, in fact, exist and if they could be harnessed and reliably employed, they would confer a tremendous strategic advantage on their possessor. Consider: psychic spies could project their consciousness out of body and penetrate the most secure military installations; telepaths could read the minds of diplomats during negotiations or perhaps even plant thoughts and influence their judgement; telekinesis might be able to disrupt the guidance systems of intercontinental missiles or space launchers; and psychic assassins could undetectably kill by stopping the hearts of their victims remotely by projecting malign mental energy in their direction.

All of this may seem absurd on its face, but work on all of these phenomena and more was funded, between 1952 and 1995, by agencies of the U.S. government including the U.S. Army, Air Force, Navy, the CIA, NSA, DIA, and ARPA/DARPA, expending tens of millions of dollars. Between 1978 and 1995 the Defense Department maintained an operational psychic espionage program under various names, using “remote viewing” to provide information on intelligence targets for clients including the Secret Service, Customs Service, Drug Enforcement Administration, and the Coast Guard.

What is remote viewing? Experiments in parapsychology laboratories usually employ a protocol called “outbounder-beacon”, where a researcher travels to a location selected randomly from a set of targets and observes the locale while a subject in the laboratory, usually isolated from sensory input which might provide clues, attempts to describe, either in words or by a drawing, what the outbounder is observing. At the conclusion of the experiment, the subject's description is compared with pictures of the targets by an independent judge (unaware of which was the outbounder's destination), who selects the one which is the closest match to the subject's description. If each experiment picked the outbounder's destination from a set of five targets, you'd expect from chance alone that in an ensemble of experiments the remote viewer's perception would match the actual target around 20% of the time. Experiments conducted in the 1970s at the Stanford Research Institute (and subsequently the target of intense criticism by skeptics) claimed in excess of 65% accuracy by talented remote viewers.
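
As a back-of-the-envelope check (my sketch, not from the book), the chance baseline and the claimed hit rate can be compared with a simple binomial tail probability. The trial count of 50 is an assumed figure for illustration; the actual SRI session counts varied.

```python
# With five candidate targets per trial, a judge matching by chance
# alone succeeds with probability p = 0.2.  How surprising would a
# 65% hit rate be over an assumed 50 trials (33 or more matches)?
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): exact sum of the upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_chance = binom_tail(50, 33, 0.2)
# p_chance is vanishingly small (far below 1e-9) -- which is why such
# hit rates, if the protocol were sound, would be so striking.
```

This is also why the criticism of the SRI experiments centred on the protocol (possible sensory leakage and judging cues) rather than on the arithmetic: given the stated numbers, chance alone is not a plausible explanation.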

While outbounder-beacon experiments were used to train and test candidate remote viewers, operational military remote viewing as conducted by the Stargate Project (and under assorted other code names over the years), was quite different. Usually the procedure involved “coordinate remote viewing”. The viewer would simply be handed a slip of paper containing the latitude and longitude of the target and then, relaxing and clearing his or her mind, would attempt to describe what was there. In other sessions, the viewer might be handed a sealed envelope containing a satellite reconnaissance photograph. The results were sometimes stunning. In 1979, a KH-9 spy satellite photographed a huge building which had been constructed at Severodvinsk Naval Base in the Soviet arctic. Analysts thought the Soviets might be building their first aircraft carrier inside the secret facility. Joe McMoneagle, an Army warrant officer and Vietnam veteran who was assigned to the Stargate Project as its first remote viewer, was given the target in the form of an envelope with the satellite photo sealed inside. Concentrating on the target, he noted “There's some kind of a ship. Some kind of a vessel. I'm getting a very, very strong impression of props [propellers]”. Then, “I'm seeing fins…. They look like shark fins.” He continued, “I'm seeing what looks like part of a submarine in this building.” The entire transcript was forty-seven pages long.

McMoneagle's report was passed on to the National Security Council, which dismissed it because it didn't make any sense for the Soviets to build a huge submarine in a building located one hundred metres from the water. McMoneagle had described a canal between the building and the shore, but the satellite imagery showed no such structure. Then, four months later, in January 1980, another KH-9 pass showed a large submarine at a dock at Severodvinsk, along with a canal between the mystery building and the sea, which had been constructed in the interim. This was the prototype of the new Typhoon class ballistic missile submarine, which was a complete surprise to Western analysts, but not Joe McMoneagle. This is what was referred to as an “eight martini result”. When McMoneagle retired in 1984, he was awarded the Legion of Merit for exceptionally meritorious service in the field of human intelligence.

A decade later the U.S. Customs Service approached the remote viewing unit for assistance in tracking down a rogue agent accused of taking bribes from cocaine smugglers in Florida. He had been on the run for two years, and appeared on the FBI's Most Wanted List. He was believed to be in Florida or somewhere in the Caribbean. Self-taught remote viewer Angela Dellafiora concentrated on the case and immediately said, “He's in Lowell, Wyoming.” Wyoming? There was no reason for him to be in such a place. Further, there was no town named Lowell in the state. Agents looked through an atlas and found there was, however, a Lovell, Wyoming. Dellafiora said, “Well, that's probably it.” Several weeks later, she was asked to work the case again. Her notes include, “If you don't get him now you'll lose him. He's moving from Lowell.” She added that he was “at or near a campground that had a large boulder at its entrance”, and that she “sensed an old Indian burial ground is located nearby.” After being spotted by a park ranger, the fugitive was apprehended at a campground next to an Indian burial ground, about fifty miles from Lovell, Wyoming, where he had been a few weeks before. Martinis all around.

A total of 417 operational sessions were run in 1989 and 1990 for the counter-narcotics mission; 52% were judged as producing results of intelligence value while 47% were of no value. Still, what was produced was considered of sufficient value that the customers kept coming back.

Most of this work and its products were classified, in part to protect the program from ridicule by journalists and politicians. Those running the projects were afraid of being accused of dabbling in the occult, so they endorsed an Army doctrine that remote viewing, like any other military occupational specialty, was a normal human facility which could be taught to anybody with a suitable training process, and a curriculum was developed to introduce new people to the program. This was despite abundant evidence that the ability to remote view, if it exists at all, is a rare trait some people acquire at birth, and cannot be taught to randomly selected individuals any more than they can be trained to become musical composers or chess grand masters.

Under a similar shroud of secrecy, paranormal research for military applications appears to have been pursued in the Soviet Union and China. From time to time information would leak out into the open literature, such as the Soviet experiments with Ninel Kulagina. In China, H. S. Tsien (Qian Xuesen), a co-founder of the Jet Propulsion Laboratory in the United States who, after being stripped of his security clearance and moving to mainland China in 1955, led the Chinese nuclear weapons and missile programs, became a vocal and powerful advocate of research into the paranormal which, in accordance with Chinese Communist doctrine, was called “Extraordinary Human Body Functioning” (EHBF), and linked to the concept of qi, an energy field which is one of the foundations of traditional Chinese medicine and martial arts. It is likely this work continues today in China.

The U.S. remote viewing program came to an end in June 1995, when the CIA ordered the Defense Intelligence Agency to shut down the Stargate project. Many documents relating to the project have since been declassified but, oddly for a program which many claimed produced no useful results, others remain secret to this day. The paranormal continues to appeal to some in the military. In 2014, the Office of Naval Research launched a four-year project funded with US$ 3.85 million to investigate premonitions, intuition, and hunches—what the press release called “Spidey sense”. In the 1950s, during a conversation between physicist Wolfgang Pauli and psychiatrist Carl Jung about psychic phenomena, Jung remarked, “As is only to be expected, every conceivable kind of attempt has been made to explain away these results, which seem to border on the miraculous and frankly impossible. But all such attempts come to grief on the facts, and the facts refuse so far to be argued out of existence.” A quarter century later in 1975, a CIA report concluded “A large body of reliable experimental evidence points to the inescapable conclusion that extrasensory perception does exist as a real phenomenon.”

To those who have had psychic experiences, there is no doubt of the reality of the phenomena. But research into them or, even more shockingly, attempts to apply them to practical ends, runs squarely into a paradigm of modern science which puts theory ahead of observation and experiment. A 1986 report by the U.S. Army said that its research had “succeeded in documenting general anomalies worthy of scientific interest,” but that “in the absence of a confirmed paranormal theory…paranormality could be rejected a priori.” When the remote viewing program was cancelled in 1995, a review of its work stated that “a statistically significant effect has been observed in the laboratory…[but] the laboratory studies do not provide evidence regarding the sources or origins of the phenomenon.” In other words, experimental results can be discarded if there isn't a theory upon which to hang them, and there is no general theory of paranormal phenomena. Heck, they could have asked me.

One wonders where many currently mature fields of science would be today had this standard been applied during their formative phases: rejecting experimental results due to lack of a theory to explain them. High-temperature superconductivity was discovered in 1986 and won the Nobel Prize in 1987, and still today there is no theory that explains how it works. Perhaps it is only because it is so easily demonstrated with a desktop experiment that it, too, has not been relegated to the realm of “fringe science”.

This book provides a comprehensive history of the postwar involvement of the military and intelligence communities with the paranormal, focusing on the United States. The author takes a neutral stance: both believers and skeptics are given their say. One notes a consistent tension between scientists who reject the phenomena because “it can't possibly work” and intelligence officers who couldn't care less about how it works as long as it is providing them useful results.

The author has conducted interviews with many of the principals still alive, and documented the programs with original sources, many obtained by her under the Freedom of Information Act. Extensive end notes and source citations are included. I wish I could be more confident in the accuracy of the text, however. Chapter 7 relates astronaut Edgar Mitchell's Apollo 14 mission to the Moon, during which he conducted, on his own initiative, some unauthorised ESP experiments. But most of the chapter is about the mission itself, and it is riddled with errors, all of which could be corrected with no more research than consulting Wikipedia pages about the mission and the Apollo program. When you read something you know about and discover much of it is wrong, you have to guard against what Michael Crichton called the Gell-Mann amnesia effect: turning the page and assuming what you read there, about which you have no personal knowledge, is to be trusted. When dealing with spooky topics and programs conducted in secret, one should be doubly cautious. The copy editing is only of fair quality, and the Kindle edition has no index (the print edition does have an index).

Napoléon Bonaparte said, “There are but two powers in the world, the sword and the mind. In the long run, the sword is always beaten by the mind.” The decades of secret paranormal research were an attempt to apply this statement literally, and provide a fascinating look inside a secret world where nothing was dismissed as absurd if it might provide an edge over the adversary. Almost nobody knew about this work at the time. One wonders what is going on today.

 Permalink

June 2017

Shute, Nevil. Kindling. New York: Vintage Books, [1938, 1951] 2010. ISBN 978-0-307-47417-9.
It is the depth of the great depression, and yet business is booming at Warren Sons and Mortimer, merchant bankers, in the City of London. Henry Warren, descendant of the founder of the bank in 1750 and managing director, has never been busier. Despite the general contraction in the economy, firms failing, unemployment hitting record after record, and a collapse in international trade, his bank, which specialises in floating securities in London for foreign governments, has more deals pending than he can handle as those governments seek to raise funds to bolster their tottering economies. A typical week might see him in Holland, Sweden, Finland, Estonia, Germany, Holland again, and back to England in time for a Friday entirely on the telephone and in conferences at his office. It is an exhausting routine and, truth be told, he is sufficiently wealthy not to have to work if he doesn't wish to, but it is the Warren and Mortimer bank, he is this generation's Warren in charge, and that's what Warrens do.

But in the few moments he had to reflect upon his life, there was little joy in it. He worked so hard he rarely saw others outside work except for his wife Elise's social engagements, which he found tedious and her circle of friends annoying and superficial, but endured out of a sense of duty. He suspected Elise might be cheating on him with the suave but thoroughly distasteful Prince Ali Said, and he wasn't the only one: there were whispers and snickers behind his back in the City. He had no real friends; only business associates, and with no children, no legacy to work for other than the firm. Sleep came only with sleeping pills. He knew his health was declining from stress, sleep deprivation, and lack of exercise.

After confirming his wife's affair, he offers her an ultimatum: move away from London to a quiet life in the country or put an end to the marriage. Independently wealthy, she immediately opts for the latter and leaves him to work out the details. What is he now to do with his life? He informs the servants he is closing the house and offers them generous severance, tells the bank he is taking an indefinite leave to travel and recuperate, and tells his chauffeur to prepare for a long trip, details to come. They depart in the car, northbound. He vows to walk twenty miles a day, every day, until he recovers his health, mental equilibrium, and ability to sleep.

After a few days walking, eating and sleeping at inns and guest houses in the northlands, he collapses in excruciating pain by the side of the road. A passing lorry driver takes him to a small hospital in the town of Sharples. He is barely conscious; a surgeon diagnoses an intestinal obstruction and says an operation will be necessary. He is wheeled to the operating theatre. The hospital staff speculates on who he might be: he has no wallet or other identification. “Probably one of the men on the road, seeking work in the South”, they guess.

As he begins his recovery in the hospital, Warren decides not to complicate matters with regard to his identity: “He had no desire to be a merchant banker in a ward of labourers.” He confirms their assumption, adding that he is a bank clerk recently returned from America, where there was no work at all, in hopes of finding something in the home country. He recalls that Sharples had been known for the Barlow shipyard, once a prosperous enterprise, which closed five years ago, taking down the plate mill and other enterprises it and its workers supported. There is little work in Sharples, and most of the population is on relief. He begins to notice that patients in the ward seem to be dying at an inordinate rate, of maladies not normally thought life-threatening. He asks Miss McMahon, the hospital's Almoner, who tells him it's the poor nutrition affordable on relief, plus the lack of hope and sense of purpose in life after long unemployment, that's responsible. As he recovers and begins to take walks in the vicinity, he sees the boarded-up stores, and the derelict shipyard and rolling mill. Curious, he arranges to tour them. When people speak to him of their hope that the economy will recover and the yard re-open, he is grimly realistic and candid: with the equipment sold off or in ruins and the skilled workforce dispersed, how would it win an order even if there were any orders to be had?

As he is heading back to London to pick up his old life, feeling better mentally and physically than he had for years, ideas and numbers begin to swim in his mind.

It was impossible. Nobody, in this time of depression, could find an order for a single ship…—let alone a flock of them.

There was the staff. … He could probably get them together again at a twenty per cent rise in salary—if they were any good. But how was he to judge of that?

The whole thing was impossible, sheer madness to attempt. He must be sensible, and put it from his mind.

It would be damn good fun…

Three weeks later, acting through a solicitor to conceal his identity, Mr. Henry Warren, merchant banker of the City, became the owner of Barlows' Yard, purchasing it outright for the sum of £5500. Thus begins one of the most entertaining, realistic, and heartwarming tales of entrepreneurship (or perhaps “rentrepreneurship”) I have ever read. The fact that the author was himself founder and director of an aircraft manufacturing company during the depression, and well aware of the need to make payroll every week, get orders to keep the doors open even if they didn't make much business sense, and do whatever it takes so that the business can survive and meet its obligations to its customers, investors, employees, suppliers, and creditors, contributes to the authenticity of the tale. (See his autobiography, Slide Rule [July 2011], for details of his career.)

Back in his office at the bank, there is the matter of the oil deal in Laevatia. After defaulting on their last loan, the Balkan country is viewed as a laughingstock and pariah in the City, but Warren has an idea. If they are to develop oil in the country, they will need to ship it, and how better to ship it than in their own ships, built in Britain on advantageous terms? Before long, he's off to the Balkans to do a deal in the Balkan manner (involving bejewelled umbrellas, cases of Worcestershire sauce, losing to the Treasury minister in the local card game at a dive in the capital, and working out a deal where the dividends on the joint-stock oil company will be secured by profits from the national railway). And there's the matter of the ships, which will be contracted for by Warren's bank.

Then it's back to London to pitch the deal. Warren's reputation counts for a great deal in the City, and the preference shares are placed. That done, the Hawside Ship and Engineering Company Ltd. is registered with cut-out directors, and the process of awarding the contract for the tankers to it is undertaken. As Warren explains to Miss McMahon, who he has begun to see more frequently, once the order is in hand, it can be used to float shares in the company to fund the equipment and staff to build the ships. At least if the prospectus is sufficiently optimistic—perhaps too optimistic….

Order in hand, life begins to return to Sharples. First a few workers, then dozens, then hundreds. The welcome sound of riveting and welding begins to issue from the yard. A few boarded-up shops re-open, and then more. Then another order for a ship comes in, thanks to arm-twisting by one of the yard's directors. With talk of Britain re-arming, there is the prospect of Admiralty business. There is still only one newspaper a week in Sharples, brought in from Newcastle and sold to readers interested in the football news. On one of his more frequent visits to the town, yard, and Miss McMahon, Warren sees the headline: “Revolution in Laevatia”. “This is a very bad one,” Warren says. “I don't know what this is going to mean.”

But, one suspects, he did. As anybody who has been in the senior management of a publicly-traded company is well aware, what happens next is well-scripted: the shareholder suit by a small investor, the press pile-on, the back-turning by the financial community, the securities investigation, the indictment, and, eventually, the slammer. Warren understands this, and works diligently to ensure the Yard survives. There is a deep mine of wisdom here for anybody facing a bad patch.

“You must make this first year's accounts as bad as they ever can be,” he said. “You've got a marvellous opportunity to do so now, one that you'll never have again. You must examine every contract that you've got, with Jennings, and Grierson must tell the auditors that every contract will be carried out at a loss. He'll probably be right, of course—but he must pile it on. You've got to make reserves this year against every possible contingency, probable or improbable.”

“Pile everything into this year's loss, including a lot that really ought not to be there. If you do that, next year you'll be bound to show a profit, and the year after, if you've done it properly this year. Then as soon as you're showing profits and a decent show of orders in hand, get rid of this year's losses by writing down your capital, pay a dividend, and make another issue to replace the capital.”

Sage advice—I've been there. We had cash in the till, so we were able to do a stock buy-back at the bottom, but the principle is the same.

Having been brought back to life by almost dying in a small-town hospital, Warren is rejuvenated by his time in gaol. In November 1937, he is released and returns to Sharples where, amidst evidence of prosperity everywhere, he approaches the Yard to see a plaque on the wall with his face in profile: “HENRY WARREN — 1934 — HE GAVE US WORK”. Then he is off to see Miss McMahon.

The only print edition currently available new is a very expensive hardcover. Used paperbacks are readily available: check under both Kindling and the original British title, Ruined City. I have linked to the Kindle edition above.

 Permalink

Ringo, John. Into the Looking Glass. Riverdale, NY: Baen Publishing, 2005. ISBN 978-1-4165-2105-1.
Without warning, on a fine spring day in central Florida, an enormous explosion destroys the campus of the University of Central Florida and the surrounding region. The flash, heat pulse, and mushroom cloud are observed far from the site of the detonation. It is clear that casualties will be massive. First responders, fearing the worst, break out their equipment to respond to what seems likely to be nuclear terrorism. The yield of the explosion is estimated at 60 kilotons of TNT.

But upon closer examination, things seem distinctly odd. There is none of the residual radiation one would expect from a nuclear detonation, nor any evidence of the prompt radiation or electromagnetic pulse a nuclear blast should produce. A university campus seems an odd target for nuclear terrorism, in any case. What else could cause a blast of such magnitude? Well, an asteroid strike could do it, but the odds against such an event are very long, and there is no evidence of ejecta falling back as you'd expect from an impact.

Faced with a catastrophic yet seemingly inexplicable event, senior government officials turn to a person with the background and security clearances to investigate further: Dr. Bill Weaver, a “redneck physicist” from Huntsville who works as a consultant to one of the “Beltway bandit” contractors who orbit the Pentagon. Weaver recalls that a physicist at the university, Ray Chen, was working on a shortcut to produce a Higgs boson, bypassing the need for an enormous particle collider. Weaver's guess is that Chen's idea worked better than he imagined, releasing a pulse of energy which caused the detonation.

If things so far seemed curious, now they begin to get weird. Approaching the site of the detonation, teams observe a black globe, seemingly absorbing all light, where Dr. Chen's laboratory used to be. Then one giant bug, and then another, emerges from the globe. Floridians are accustomed to large, ugly-looking bugs, but nothing like these—they are creatures from another world, or maybe another universe. A little girl, unharmed, wanders into the camp, giving a home address in an area completely obliterated by the explosion. She is clutching a furry alien with ten legs: “Tuffy”, who she says speaks to her. Scientists try to examine the creature and quickly learn the wisdom of the girl's counsel not to mess with Tuffy.

Police respond to a home invasion call some distance from the site of the detonation: a report that demons are attacking their house. Investigating, another portal is discovered in the woods behind the house, from which monsters begin to issue, quickly overpowering the light military force summoned to oppose them. It takes a redneck militia to reinforce a perimeter around the gateway, while waiting for the Army to respond.

Apparently, whatever happened on the campus not only opened a gateway there, but is spawning gateways further removed. Some connect to worlds seemingly filled with biologically-engineered monsters bent upon conquest, while others connect to barren planets, a race of sentient felines, and other aliens who may be allies or enemies. Weaver has to puzzle all of this out, while participating in the desperate effort to prevent the invaders, “T!Ch!R!” or “Titcher”, from establishing a beachhead on Earth. And the stakes may be much greater than the fate of the Earth.

This is an action-filled romp, combining the initiation of humans into a much larger universe worthy of Golden Age science fiction with military action fiction. I doubt that in the real world Weaver, the leading expert on the phenomenon and chief investigator into it, would be allowed to participate in what amounts to commando missions in which his special skills are not required but, hey, it makes the story more exciting, and if a thriller doesn't thrill, it has failed in its mission.

I loved one aspect of the conclusion: never let an alien invasion go to waste. You'll understand what I'm alluding to when you get there. And, in the Golden Age tradition, the story sets up for further adventures. While John Ringo wrote this book by himself, the remaining three novels in the Looking Glass series are co-authored with Travis S. Taylor, upon whom the character of Bill Weaver was modeled.

 Permalink

Haffner, Sebastian [Raimund Pretzel]. Defying Hitler. New York: Picador, [2000] 2003. ISBN 978-0-312-42113-7.
In 1933, the author was pursuing his ambition to follow his father into a career in the Prussian civil service. While completing his law degree, he had obtained a post as a Referendar, the lowest rank in the civil service, performing what amounted to paralegal work for higher ranking clerks and judges. He enjoyed the work, especially doing research in the law library and drafting opinions, and was proud to be a part of the Prussian tradition of an independent judiciary. He had no strong political views nor much interest in politics. But, as he says, “I have a fairly well developed figurative sense of smell, or to put it differently, a sense of the worth (or worthlessness!) of human, moral, political views and attitudes. Most Germans unfortunately lack this sense almost completely.”

When Hitler came to power in January 1933, “As for the Nazis, my nose left me with no doubts. … How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset. What was not at all clear to me was what terrible enemies they would turn out to be.” Initially, little changed: it was a “matter for the press”. The new chancellor might rant to enthralled masses about the Jews, but in the court where Haffner clerked, a Jewish judge continued to sit on the bench and work continued as before. He hoped that the political storm on the surface would leave the depths of the civil service unperturbed. This was not to be the case.

Haffner was a boy during the First World War, and, like many of his schoolmates, saw the war as a great adventure which unified the country. Coming of age in the Weimar Republic, he experienced the great inflation of 1921–1924 as up-ending the society: “Amid all the misery, despair, and poverty there was an air of light-headed youthfulness, licentiousness, and carnival. Now, for once, the young had money and the old did not. Its value lasted only a few hours. It was spent as never before or since; and not on the things old people spend their money on.” A whole generation whose ancestors had grown up in a highly structured society where most decisions were made for them now were faced with the freedom to make whatever they wished of their private lives. But they had never learned to cope with such freedom.

After the Reichstag fire and the Nazi-organised boycott of Jewish businesses (enforced by SA street brawlers standing in doors and intimidating anybody who tried to enter), the fundamental transformation of the society accelerated. Working in the library at the court building, Haffner is shocked to see this sanctum of jurisprudence defiled by the SA, who had come to eject all Jews from the building. A Jewish colleague is expelled from university, fired from the civil service, and opts to emigrate.

The chaos of the early days of the Nazi ascendency gives way to Gleichschaltung, the systematic takeover of all institutions by placing Nazis in key decision-making positions within them. Haffner sees the Prussian courts, which famously stood up to Frederick the Great a century and a half before, meekly toe the line.

Haffner begins to consider emigrating from Germany, but his father urges him to complete his law degree before leaving. His close friends among the Referendars run the gamut from Communist sympathisers to ardent Nazis. As he is preparing for the Assessor examination (the next rank in the civil service, and the final step for a law student), he is called up for mandatory political and military indoctrination now required for the rank. The barrier between the personal, professional, and political had completely fallen. “Four weeks later I was wearing jackboots and a uniform with a swastika armband, and spent many hours each day marching in a column in the vicinity of Jüterbog.”

He discovers that, despite his viewing the Nazis as essentially absurd, there is something about order, regimentation, discipline, and forced camaraderie that resonates in his German soul.

Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue.

That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain and questions were raised in Parliament why the author of such an anti-Nazi work was interned: he was released in August, 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout. After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”.

This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that.

This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available.

Thanks to Glenn Beck for recommending this book.

 Permalink

July 2017

Segrè, Gino and Bettina Hoerlin. The Pope of Physics. New York: Henry Holt, 2016. ISBN 978-1-62779-005-5.
By the start of the 20th century, the field of physics had bifurcated into theoretical and experimental specialties. While theorists and experimenters were acquainted with the same fundamentals and collaborated, with theorists suggesting phenomena to be explored in experiments and experimenters providing hard data upon which theorists could build their models, rarely did one individual do breakthrough work in both theory and experiment. One outstanding exception was Enrico Fermi, whose numerous achievements seemed to jump effortlessly between theory and experiment.

Fermi was born in 1901 to a middle class family in Rome, the youngest of three children born in consecutive years. As was common at the time, Enrico and his brother Giulio were sent to be wet-nursed and raised by a farm family outside Rome and only returned to live with their parents when two and a half years old. His father was a division head in the state railway and his mother taught elementary school. Neither parent had attended university, but hoped all of their children would have the opportunity. All were enrolled in schools which concentrated on the traditional curriculum of Latin, Greek, and literature in those languages and Italian. Fermi was attracted to mathematics and science, but little instruction was available to him in those fields.

At age thirteen, the young Fermi made the acquaintance of Adolfo Amidei, an engineer who worked with his father. Amidei began to loan the lad mathematics and science books, which Fermi devoured—often working out solutions to problems which Amidei was unable to solve. Within a year, studying entirely on his own, he had mastered geometry and calculus. In 1915, Fermi bought a used book, Elementorum Physicæ Mathematica, at a flea market in Rome. Published in 1830 and written entirely in Latin, it was a 900 page compendium covering mathematical physics of that era. By that time, he was completely fluent in the language and the mathematics used in the abundant equations, and worked his way through the entire text. As the authors note, “Not only was Fermi the only twentieth-century physics genius to be entirely self-taught, he surely must be the only one whose first acquaintance with the subject was through a book in Latin.”

At sixteen, Fermi skipped the final year of high school, concluding it had nothing more to teach him, and with Amidei's encouragement, sat for a competitive examination for a place at the elite Scuola Normale Superiore, which provided a complete scholarship including room and board to the winners. He ranked first in all of the examinations and left home to study in Pisa. Despite his talent for and knowledge of mathematics, he chose physics as his major—he had always been fascinated by mechanisms and experiments, and looked forward to working with them in his career. Italy, at the time a leader in mathematics, was a backwater in physics. The university in Pisa had only one physics professor who, besides having already retired from research, had knowledge in the field not much greater than Fermi's own. Once again, this time within the walls of a university, Fermi would teach himself, taking advantage of the university's well-equipped library. He taught himself German and English in addition to Italian and French (in which he was already fluent) in order to read scientific publications. The library subscribed to the German journal Zeitschrift für Physik, one of the most prestigious sources for contemporary research, and Fermi was probably the only person to read it there. In 1922, after completing a thesis on X-rays and having already published three scientific papers, two on X-rays and one on general relativity (introducing what are now called Fermi coordinates, the first of many topics in physics which would bear his name), he received his doctorate in physics, magna cum laude. Just twenty-one, he had his academic credential, published work to his name, and the attention of prominent researchers aware of his talent. What he lacked was the prospect of a job in his chosen field.

Returning to Rome, Fermi came to the attention of Orso Mario Corbino, a physics professor and politician who had become a Senator of the Kingdom and appointed minister of public education. Corbino's ambition was to see Italy enter the top rank of physics research, and saw in Fermi the kind of talent needed to achieve this goal. He arranged a scholarship so Fermi could study physics in one of the centres of research in northern Europe. Fermi chose Göttingen, Germany, a hotbed of work in the emerging field of quantum mechanics. Fermi was neither particularly happy nor notably productive during his eight months there, but was impressed with the German style of research and the intellectual ferment of the large community of German physicists. Henceforth, he published almost all of his research in either German or English, with a parallel paper submitted to an Italian journal. A second fellowship allowed him to spend 1924 in the Netherlands, working with Paul Ehrenfest's group at Leiden, deepening his knowledge of statistical and quantum mechanics.

Finally, upon returning to Italy, Corbino and his colleague Antonio Garbasso found Fermi a post as a lecturer in physics in Florence. The position paid poorly and had little prestige, but at least it was a step onto the academic ladder, and Fermi was happy to accept it. There, Fermi and his colleague Franco Rasetti did experimental work measuring the spectra of atoms under the influence of radio frequency fields. Their work was published in prestigious journals such as Nature and Zeitschrift für Physik.

In 1925, Fermi took up the problem of reconciling the field of statistical mechanics with the discovery by Wolfgang Pauli of the exclusion principle, a purely quantum mechanical phenomenon which restricts certain kinds of identical particles from occupying the same state at the same time. Fermi's paper, published in 1926, resolved the problem, creating what is now called Fermi-Dirac statistics (British physicist Paul Dirac independently discovered the phenomenon, but Fermi published first) for the particles now called fermions, which include all of the fundamental particles that make up matter. (Forces are carried by other particles called bosons, which go beyond the scope of this discussion.)
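The statistics Fermi derived can be stated in a single line: the mean occupancy of a quantum state at energy E is 1/(e^((E−μ)/kT)+1), which can never exceed one, exactly as the exclusion principle demands. Here is a minimal illustrative sketch (not from the book; the 5 eV Fermi level is an arbitrary value chosen for the example):

```python
import math

def fermi_dirac(E, mu, kT):
    """Mean occupancy of a single-particle state at energy E for
    fermions with chemical potential mu at temperature kT
    (all energies in the same units, e.g. electron volts)."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

# At room temperature (kT ~ 0.025 eV), states well below the Fermi
# level are essentially full, states well above it are essentially
# empty, and occupancy never exceeds 1 -- the exclusion principle.
kT = 0.025
mu = 5.0  # illustrative Fermi level, eV
print(fermi_dirac(4.5, mu, kT))  # very close to 1 (filled)
print(fermi_dirac(5.0, mu, kT))  # exactly 0.5 at E = mu
print(fermi_dirac(5.5, mu, kT))  # very close to 0 (empty)
```

This sharp transition between filled and empty states around the Fermi level is what makes the formula the starting point for the theory of electrons in metals and semiconductors.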

This paper immediately elevated the twenty-five-year-old Fermi to the top tier of theoretical physicists. It provided the foundation for understanding of the behaviour of electrons in solids, and thus the semiconductor technology upon which all our modern computing and communications equipment is based. Finally, Fermi won what he had aspired to: a physics professorship in Rome. In 1928, he married Laura Capon, whom he had first met in 1924. The daughter of an admiral in the World War I Italian navy, she was a member of one of the many secular and assimilated Jewish families in Rome. She was less than impressed on first encountering Fermi:

He shook hands and gave me a friendly grin. You could call it nothing but a grin, for his lips were exceedingly thin and fleshless, and among his upper teeth a baby tooth lingered on, conspicuous in its incongruity. But his eyes were cheerful and amused.

Both Laura and Enrico shared the ability to see things precisely as they were, then see beyond that to what they could become.

In Rome, Fermi became head of the mathematical physics department at the Sapienza University of Rome, which his mentor, Corbino, saw as Italy's best hope to become a world leader in the field. He helped Fermi recruit promising physicists, all young and ambitious. They gave each other nicknames: ecclesiastical in nature, befitting their location in Rome. Fermi was dubbed Il Papa (The Pope), not only due to his leadership and seniority, but because he had already developed a reputation for infallibility: when he made a calculation or expressed his opinion on a technical topic, he was rarely if ever wrong. Meanwhile, Mussolini was increasing his grip on the country. In 1929, he announced the appointment of the first thirty members of the Royal Italian Academy, with Fermi among the laureates. In return for a lifetime stipend which would put an end to his financial worries, he would have to join the Fascist party. He joined. He did not take the Academy seriously and thought its comic opera uniforms absurd, but appreciated the money.

By the 1930s, one of the major mysteries in physics was beta decay. When a radioactive nucleus decayed, it could emit one or more kinds of radiation: alpha, beta, or gamma. Alpha particles had been identified as the nuclei of helium, beta particles as electrons, and gamma rays as photons: like light, but with a much shorter wavelength and correspondingly higher energy. When a given nucleus decayed by alpha or gamma, the emission always had the same energy: you could calculate the energy carried off by the particle emitted and compare it to the nucleus before and after, and everything added up according to Einstein's equation of E=mc². But something appeared to be seriously wrong with beta (electron) decay. Given a large collection of identical nuclei, the electrons emitted flew out with energies all over the map: from very low to an upper limit. This appeared to violate one of the most fundamental principles of physics: the conservation of energy. If the nucleus after plus the electron (including its kinetic energy) didn't add up to the energy of the nucleus before, where did the energy go? Few physicists were ready to abandon conservation of energy, but, after all, theory must ultimately conform to experiment, and if a multitude of precision measurements said that energy wasn't conserved in beta decay, maybe it really wasn't.

Fermi thought otherwise. In 1933, he proposed a theory of beta decay in which the emission of a beta particle (electron) from a nucleus was accompanied by emission of a particle he called a neutrino, which had been proposed earlier by Pauli. In one leap, Fermi introduced a third force, alongside gravity and electromagnetism, which could transform one particle into another, plus a new particle: without mass or charge, and hence extraordinarily difficult to detect, which nonetheless was responsible for carrying away the missing energy in beta decay. But Fermi did not just propose this mechanism in words: he presented a detailed mathematical theory of beta decay which made predictions for experiments which had yet to be performed. He submitted the theory in a paper to Nature in 1934. The editors rejected it, saying “it contained abstract speculations too remote from physical reality to be of interest to the reader.” The rejection was quickly recognised as a mistake, and is now acknowledged as one of the most epic face-plants of peer review in theoretical physics. Fermi's theory rapidly became accepted as the correct model for beta decay. In 1956, the neutrino (actually, antineutrino) was detected with precisely the properties predicted by Fermi. This theory remained the standard explanation for beta decay until it was extended in the 1970s by the theory of the electroweak interaction, which is valid at higher energies than were available to experimenters in Fermi's lifetime.
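The bookkeeping Fermi's neutrino rescues can be checked with a back-of-the-envelope example (not worked in the book): the beta decay of a free neutron. The mass difference fixes a total decay energy Q; the electron can carry anywhere from zero up to Q, and the neutrino takes the rest, so energy is conserved in every individual decay. This sketch neglects the tiny neutrino mass and the recoil of the proton:

```python
# Free-neutron beta decay: n -> p + e- + antineutrino.
# Rest energies in MeV (standard values, rounded).
m_n = 939.565  # neutron
m_p = 938.272  # proton
m_e = 0.511    # electron

# Total kinetic energy released (the "Q-value").
Q = m_n - m_p - m_e
print(f"Q = {Q:.3f} MeV")  # about 0.782 MeV

# The electron spectrum is continuous: for any electron kinetic
# energy T_e between 0 and Q, the neutrino carries the remainder,
# so the books balance event by event.
for T_e in (0.1, 0.4, 0.7):
    T_nu = Q - T_e
    assert abs(T_e + T_nu - Q) < 1e-12
```

The continuous electron spectrum from zero to Q is exactly what the experimenters of the 1920s measured; without the neutrino it looked like energy was simply vanishing.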

Perhaps soured on theoretical work by the initial rejection of his paper on beta decay, Fermi turned to experimental exploration of the nucleus, using the newly-discovered particle, the neutron. Unlike alpha particles emitted by the decay of heavy elements like uranium and radium, neutrons had no electrical charge and could penetrate the nucleus of an atom without being repelled. Fermi saw this as the ideal probe to examine the nucleus, and began to use neutron sources to bombard a variety of elements to observe the results. One experiment directed neutrons at a target of silver and observed the creation of isotopes of silver when the neutrons were absorbed by the silver nuclei. But something very odd was happening: the results of the experiment seemed to differ when it was run on a laboratory bench with a marble top compared to one of wood. What was going on? Many people might have dismissed the anomaly, but Fermi had to know. He hypothesised that the probability a neutron would interact with a nucleus depended upon its speed (or, equivalently, energy): a slower neutron would effectively have more time to interact than one which whizzed through more rapidly. Neutrons which were reflected by the wood table top were “moderated” and had a greater probability of interacting with the silver target.

Fermi quickly tested this supposition by using paraffin wax and water as neutron moderators and measuring the dramatically increased probability of interaction (or as we would say today, neutron capture cross section) when neutrons were slowed down. This is fundamental to the design of nuclear reactors today. It was for this work that Fermi won the Nobel Prize in Physics for 1938.
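The arithmetic behind moderation is simple enough to sketch. In an elastic collision a neutron loses, on average, a fixed fraction of the logarithm of its energy, so the expected number of collisions to slow a fission neutron to thermal energy follows from the standard textbook logarithmic energy decrement ξ (this formula is reactor-physics boilerplate, not something derived in the book):

```python
import math

def xi(A):
    """Average logarithmic energy loss per elastic collision with a
    nucleus of mass number A (standard two-body collision result)."""
    if A == 1:
        return 1.0  # hydrogen: limiting value of the formula below
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

def collisions(A, E0=2.0e6, E1=0.025):
    """Expected number of collisions to moderate a neutron from
    E0 to E1 (energies in eV; defaults: fission to thermal)."""
    return math.log(E0 / E1) / xi(A)

print(collisions(1))   # hydrogen (water, paraffin): ~18 collisions
print(collisions(12))  # carbon (graphite): ~115 collisions
```

The light hydrogen nuclei in paraffin and water shed a neutron's energy in a handful of collisions, which is why a block of wax on Fermi's bench was enough; graphite, which he later used in the Chicago pile, needs many more collisions but absorbs far fewer neutrons along the way.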

By 1938, conditions for Italy's Jewish population had seriously deteriorated. Laura Fermi, despite her father's distinguished service as an admiral in the Italian navy, was now classified as a Jew, and therefore subject to travel restrictions, as were their two children. The Fermis went to their local Catholic parish, where they were (re-)married in a Catholic ceremony and their children baptised. With that paperwork done, the Fermi family could apply for passports and permits to travel to Stockholm to receive the Nobel prize. The Fermis locked their apartment, took a taxi, and boarded the train. Unbeknownst to the fascist authorities, they had no intention of returning.

Fermi had arranged an appointment at Columbia University in New York. His Nobel Prize award was US$45,000 (US$789,000 today). If he returned to Italy with the sum, he would have been forced to convert it to lire and then only be able to take the equivalent of US$50 out of the country on subsequent trips. Professor Fermi may not have been much interested in politics, but he could do arithmetic. The family went from Stockholm to Southampton, and then on an ocean liner to New York, with nothing other than their luggage, prize money, and, most importantly, freedom.

In his neutron experiments back in Rome, there had been curious results he and his colleagues never explained. When bombarding nuclei of uranium, the heaviest element then known, with neutrons moderated by paraffin wax, they had observed radioactive results which didn't make any sense. They expected to create new elements, heavier than uranium, but what they saw didn't agree with the expectations for such elements. Another mystery…in those heady days of nuclear physics, there was one wherever you looked. At just about the time Fermi's ship was arriving in New York, news arrived from Germany about what his group had observed, but not understood, four years before. Slow neutrons, which Fermi's group had pioneered, were able to split, or fission, the nucleus of uranium into two lighter elements, releasing not only a large amount of energy, but additional neutrons which might be able to propagate the process into a “chain reaction”, producing either a large amount of energy or, perhaps, an enormous explosion.

As one of the foremost researchers in neutron physics, it was immediately apparent to Fermi that his new life in America was about to take a direction he'd never anticipated. By 1941, he was conducting experiments at Columbia with the goal of evaluating the feasibility of creating a self-sustaining nuclear reaction with natural uranium, using graphite as a moderator. In 1942, he was leading a project at the University of Chicago to build the first nuclear reactor. On December 2nd, 1942, Chicago Pile-1 went critical, producing all of half a watt of power. But the experiment proved that a nuclear chain reaction could be initiated and controlled, and it paved the way for both civil nuclear power and plutonium production for nuclear weapons. At the time he achieved one of the first major milestones of the Manhattan Project, Fermi's classification as an “enemy alien” had been removed only two months before. He and Laura Fermi did not become naturalised U.S. citizens until July of 1944.

Such was the breakneck pace of the Manhattan Project that even before the critical test of the Chicago pile, the DuPont company was already at work planning for the industrial scale production of plutonium at a facility which would eventually be built at the Hanford site near Richland, Washington. Fermi played a part in the design and commissioning of the X-10 Graphite Reactor at Oak Ridge, Tennessee, which served as a pathfinder and began operation in November 1943, operating at a power level which was increased over time to 4 megawatts. This reactor produced the first substantial quantities of plutonium for experimental use, revealing the plutonium-240 contamination problem which necessitated the use of implosion for the plutonium bomb. Concurrently, he contributed to the design of the B Reactor at Hanford, which went critical in September 1944 and ran at 250 megawatts, producing the plutonium for the Trinity test and the Fat Man bomb dropped on Nagasaki.

During the war years, Fermi divided his time among the Chicago research group, Oak Ridge, Hanford, and the bomb design and production group at Los Alamos. As General Leslie Groves, head of the Manhattan Project, had forbidden the top atomic scientists from travelling by air, Fermi, under his wartime alias “Henry Farmer”, spent much of his time riding the rails, accompanied by a bodyguard. As plutonium production ramped up, he increasingly spent his time with the weapon designers at Los Alamos, where Oppenheimer appointed him associate director and put him in charge of “Division F” (for Fermi), which acted as a consultant to all of the other divisions of the laboratory.

Fermi believed that while scientists could make major contributions to the war effort, how their work and the weapons they created were used were decisions which should be made by statesmen and military leaders. When appointed in May 1945 to the Interim Committee charged with determining how the fission bomb was to be employed, he largely confined his contributions to technical issues such as weapons effects. He joined Oppenheimer, Compton, and Lawrence in the final recommendation that “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use.”

On July 16, 1945, Fermi witnessed the Trinity test explosion in New Mexico at a distance of ten miles from the shot tower. A few seconds after the blast, he began to tear little pieces of paper from a sheet and drop them toward the ground. When the shock wave arrived, he paced out the distance it had blown them and rapidly computed the yield of the bomb as around ten kilotons of TNT. Nobody familiar with Fermi's reputation for making off-the-cuff estimates of physical phenomena was surprised that his calculation, done within a minute of the explosion, agreed within the margin of error with the actual yield of 20 kilotons, determined much later.

After the war, Fermi wanted nothing more than to return to his research. He opposed the extension of wartime secrecy to postwar nuclear research, but, unlike some other prominent atomic scientists, did not involve himself in public debates over nuclear weapons and energy policy. When he returned to Chicago, he was asked by a funding agency simply how much money he needed. From his experience at Los Alamos he wanted both a particle accelerator and a big computer. By 1952, he had both, and began to produce results in scattering experiments which hinted at the new physics which would be uncovered throughout the 1950s and '60s. He continued to spend time at Los Alamos, and between 1951 and 1953 worked two months a year there, contributing to the hydrogen bomb project and analysis of Soviet atomic tests.

Everybody who encountered Fermi remarked upon his talents as an explainer and teacher. Seven of his students (six from Chicago and one from Rome) would go on to win Nobel Prizes in physics, in both theory and experiment. He became famous for posing “Fermi problems”, often at lunch, exercising the ability to make and justify order-of-magnitude estimates of difficult questions. When Freeman Dyson met with Fermi to present a theory he and his graduate students had developed to explain the scattering results Fermi had published, Fermi asked him how many free parameters Dyson had used in his model. Upon being told the number was four, he said, “I remember my old friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Chastened, Dyson soon concluded his model was a blind alley.
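The flavour of a Fermi problem is easy to convey with the classic example often attributed to Fermi's Chicago lectures: how many piano tuners are there in Chicago? The following sketch is purely illustrative (it does not appear in the book), and every input figure is a rough assumption chosen only to be the right order of magnitude; the point is the method of chaining defensible guesses, not the particular numbers.

```python
# A "Fermi problem" sketch: estimate the number of piano tuners in
# Chicago by multiplying rough, individually defensible guesses.
# All figures below are illustrative assumptions, not data.

population = 9_000_000           # metro Chicago population, rough guess
people_per_household = 2         # persons per household
piano_fraction = 1 / 20          # one household in twenty owns a piano
tunings_per_piano_year = 1       # each piano tuned about once a year
tunings_per_day = 4              # a tuner can service ~4 pianos a day
working_days_per_year = 250      # working days in a year

pianos = population / people_per_household * piano_fraction
tunings_needed = pianos * tunings_per_piano_year
tuner_capacity = tunings_per_day * working_days_per_year
tuners = tunings_needed / tuner_capacity

print(round(tuners))             # → 225, i.e. "a couple hundred"
```

Each individual guess may be off by a factor of two or three, but the errors tend to partially cancel, so the product usually lands within an order of magnitude of the truth, which is all a Fermi estimate claims.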

After returning from a trip to Europe in the fall of 1954, Fermi, who had enjoyed robust good health all his life, began to suffer from problems with digestion. Exploratory surgery found metastatic stomach cancer, for which no treatment was possible at the time. He died at home on November 28, 1954, two months past his fifty-third birthday. He had made a Fermi calculation of how long to rent the hospital bed in which he died: the rental expired two days after he did.

There was speculation that Fermi's life may have been shortened by his work with radiation, but there is no evidence of this. He was never exposed to unusual amounts of radiation in his work, and none of his colleagues, who did the same work at his side, experienced any medical problems.

This is a masterful biography of one of the singular figures in twentieth century science. The breadth of his interests and achievements is reflected in the list of things named after Enrico Fermi. Given the hyper-specialisation of modern science, it is improbable we will ever again see his like.

 Permalink

Schulman, J. Neil. The Robert Heinlein Interview. Pahrump, NV: Pulpless.Com, [1990, 1996, 1999] 2017. ISBN 978-1-58445-015-3.
Today, J. Neil Schulman is an accomplished novelist, filmmaker, screenwriter, actor, journalist, and publisher, and a winner of the Prometheus Award for libertarian science fiction. In the summer of 1973, he was none of those things: just an avid twenty-year-old science fiction fan who credited the works of Robert A. Heinlein with saving his life—replacing his teenage depression with visions of a future worth living for and characters worthy of emulation who built that world. As Schulman describes it, Heinlein was already in his head, and in his ambition to follow in Heinlein's footsteps he wanted nothing more than to get inside the head of the master storyteller. He managed to parlay a book review into a commission to interview Heinlein for the New York Sunday News. Heinlein consented to a telephone interview, and on June 30, 1973, Schulman and Heinlein spoke for three and a half hours, pausing only for hourly changes of cassettes.

The agenda for the interview had been laid out in three pages of questions Schulman had mailed Heinlein a few days before, but the letter had only arrived shortly before the call and Heinlein hadn't yet read the questions, so he read them as they spoke. After the interview, Schulman prepared a transcript, which was edited by Robert Heinlein and Virginia, his wife. The interview was published by the newspaper in a much abridged and edited form, and did not see print in its entirety until 1990, two years after Heinlein's death. On the occasion of its publication, Virginia Heinlein said “To my knowledge, this is the longest interview Robert ever gave. Here is a book that should be on the shelves of everyone interested in science fiction. Libertarians will be using it as a source for years to come.”

Here you encounter the authentic Heinlein, consistent with the description from many who knew him over his long career: simultaneously practical, visionary, contrary, ingenious, inner-directed, confident, and able to observe the world and humanity without the filter of preconceived notions. Above all, he was a master storyteller who never ceased to be amazed people would pay him to spin yarns. As Schulman describes it, “Talking with Robert Heinlein is talking with the Platonic archetype of all his best characters.”

If you have any interest in Heinlein or the craft of science fiction, this should be on your reading list. I will simply quote a few morsels chosen from the wealth of insights and wisdom in these pages.

On aliens and first contact:
The universe might turn out to be a hell of a sight nastier and tougher place than we have any reason to guess at this point. That first contact just might wipe out the human race, because we would encounter somebody who was meaner and tougher, and not at all inclined to be bothered by genocide. Be no more bothered by genocide than I am when I put out ant poison in the kitchen when the ants start swarming in.
On the search for deep messages in his work:
[Quoting Schulman's question] “Isn't ‘Coventry’ still an attempt by the state (albeit a relatively benign one) to interfere with the natural market processes and not let the victim have his restitution?” Well, “Coventry” was an attempt on the part of a writer to make a few hundred dollars to pay off a mortgage.
On fans who complain his new work isn't consistent with his earlier writing:
Over the course of some thirty-four years of writing, every now and then I receive things from people condemning me for not having written a story just like my last one. I never pay attention to this, Neil, because it has been my intention—my purpose—to make every story I've written—never to write a story just like my last one…I'm going to write what it suits me to write and if I write another story that's just like any other story I've ever written, I'll be slipping. … I'm trying to write to please not even as few as forty thousand people in the hardcover, but a million and up in the softcover. If an author let these self-appointed mentors decide for him what he's going to write and how he's going to write it, he'd never get anywhere….
On his writing and editing habits:
I've never written more than about three months of the year the whole time I've been writing. Part of that is because I never rewrite. I cut, but I don't rewrite.
On the impact of technologies:
When I see how far machine computation has gone since that time [the 1930s], I find it the most impressive development—more impressive than the atom bomb, more impressive than space travel—in its final consequences.
On retirement:
Well, Tony Boucher pointed that out to me years ago. He said that there are retired everything else—retired schoolteachers, retired firemen, retired bankers—but there are no retired writers. There are simply writers who are no longer selling. [Heinlein's last novel, To Sail Beyond the Sunset, was published in 1987, the year before his death at age 80. —JW]
On the conflict between high technology and personal liberty:
The question of how many mega-men [millions of population] it takes to maintain a high-technology society and how many mega-men it takes to produce oppressions simply through the complexity of the society is a matter I have never satisfactorily solved in my own mind. But I am quite sure that one works against the other, that it takes a large-ish population for a high technology, but if you get large populations human liberties are automatically restricted even if you don't have legislation about it. In fact, the legislation in many cases is intended to—and sometimes does—lubricate the frictions that take place between people simply because they're too close together.
On seeking solutions to problems:
I got over looking for final solutions a good, long time ago because once you get this point shored up, something breaks out somewhere else. The human race gets along by the skin of its teeth, and it's been doing so for some hundreds of thousands or millions of years. … It is the common human condition all through history that every time you solve a problem you discover that you've created a new problem.

I did not cherry-pick these: they are but a few of a multitude from the vast cherry tree which is this interview. Enjoy! Also included in the book is other Heinlein-related material by Schulman: book reviews, letters, and speeches.

I must caution prospective readers that the copy-editing of this book is embarrassingly bad. I simply do not understand how a professional author—one who owns his own publishing house—can bring a book to market which clearly nobody has ever read with a critical eye, even at a cursory level. There are dozens of howlers here: not subtle things, but words run together, sentences which don't begin with a capital letter, spaces in the middle of hyphenated words, commas where periods were intended, and apostrophes transformed into back-tick characters surrounded by spaces. And this is not a bargain-bin special—the paperback has a list price of US$19.95 and is listed at this writing at US$18.05 at Amazon. The Heinlein interview was sufficiently enlightening I was willing to put up with the production values, which made something which ought to be a triumph look just shabby and sad, but then I obtained the Kindle edition for free (see below). If I'd paid full freight for the paperback, I'm not sure even my usually mellow disposition would have remained unperturbed by the desecration of the words of an author I cherish and the feeling my pocket had been picked.

The Kindle edition is available for free to Kindle Unlimited subscribers.

 Permalink

Mills, Kyle. The Survivor. New York: Pocket Books, 2015. ISBN 978-1-4767-8346-8.
Over the last fifteen years, CIA counter-terrorism operative Mitch Rapp (warning—the article at this link contains minor spoilers) has survived myriad adventures and attempts to take him out by terrorists, hostile governments, subversive forces within his own agency, and ambitious and unscrupulous Washington politicians looking to nail his scalp to their luxuriously appointed office walls, chronicled in the thirteen thrillers by his creator, Vince Flynn. Now, Rapp must confront one of the most formidable challenges any fictional character can face—outliving the author who invented him. With the death of Vince Flynn in 2013 from cancer, the future of the Mitch Rapp series was uncertain. Subsequently, Flynn's publisher announced that veteran thriller writer Kyle Mills, with fourteen novels already published, would be continuing the Mitch Rapp franchise. This is the first novel in the series by Mills. Although the cover has Flynn's name in much larger type than Mills', the latter is the sole author.

In this installment of the Rapp saga, Mills opted to dive right in just days after the events in the conclusion of the previous novel, The Last Man (February 2013). The CIA is still reeling from its genius black operations mastermind, Joseph Rickman, having gone rogue, faked his own kidnapping, and threatened to reveal decades of the CIA's secrets, including deep cover agents in place around the world and operations in progress, potentially crippling the CIA and opening up enough cans of worms to sustain the congressional committee surrender-poultry for a decade. With the immediate Rickman problem dealt with in the previous novel, the CIA is dismayed to learn that the ever-clever Rickman is himself a survivor, and continues to wreak his havoc on the agency from beyond the grave, using an almost impenetrable maze of digital and human cut-outs devised by his wily mind.

Not only is the CIA at risk of embarrassment and the exposure of its most valuable covert assets, but an ambitious spymaster in Pakistan sees the Rickman intelligence trove as both a way to destroy the CIA's influence in his country and around the world and the means to co-opt its network for his own ends, providing his path to slither to the top of the seething snake-mountain which is Pakistani politics and, with control over his country's nuclear arsenal and the CIA's covert resources, become a player on the regional, if not world, scale.

Following Rickman's twisty cyber trail as additional disclosure bombshells drop on the CIA, Rapp and his ailing but still prickly mentor Stan Hurley must make an uneasy but unavoidable alliance with Louis Gould, the murderer of Rapp's wife and unborn child, who almost killed him in the previous novel, in order to penetrate the armed Swiss compound (which has me green with envy and scribbling notes) of Leo Obrecht, rogue private banker implicated in the Rickman operation and its Pakistani connections.

The action takes Rapp and his team to a remote location in Russia, and finally to a diplomatic banquet in Islamabad where Rapp reminds an American politician which fork to use, and how.

Mitch Rapp has survived. I haven't read any of Kyle Mills' other work, so I don't know whether it's a matter of his already aligning with Vince Flynn's style or, as a professional author, adopting it along with Flynn's worldview, but had I not known this was the work of a different author, I'd never have guessed. I enjoyed this story and look forward to further Mitch Rapp adventures by Kyle Mills.

 Permalink

van Creveld, Martin. Hitler in Hell. Kouvola, Finland: Castalia House, 2017. ASIN B0738YPW2M.
Martin van Creveld is an Israeli military theorist and historian, professor emeritus at Hebrew University in Jerusalem, and author of seventeen books of military history and strategy, including The Transformation of War, which has been hailed as one of the most significant recent works on strategy. In this volume he turns to fiction, penning the memoirs of the late, unlamented Adolf Hitler from his current domicile in Hell, “the place to which the victors assign their dead opponents.” In the interest of concision, in the following discussion I will use “Hitler” to mean the fictional Hitler in this work.

Hitler finds Hell more boring than hellish—“in some ways it reminds me of Landsberg Prison”. There is no torture or torment, just a never-changing artificial light and routine in which nothing ever happens. A great disappointment is that neither Eva Braun nor Blondi is there to accompany him. As to the latter, apparently all dogs go to heaven. Rudolf Hess is there, however, and with that 1941 contretemps over the flight to Scotland put behind them, has resumed helping Hitler with his research and writing as he did during the former's 1924 imprisonment. Hell has broadband!—Hitler is even able to access the “Black Internetz” and read, listen to, and watch everything up to the present day. (That sounds pretty good—my own personal idea of Hell would be an Internet connection which only allows you to read Wikipedia.)

Hitler tells the story of his life: from childhood, his days as a struggling artist in Vienna and Munich, the experience of the Great War, his political awakening in the postwar years, rise to power, implementation of his domestic and foreign policies, and the war and final collapse of Nazi Germany. These events, and the people involved in them, are often described from the viewpoint of the present day, with parallels drawn to more recent history and figures.

What makes this book work so well is that van Creveld's Hitler makes plausible arguments supporting decisions which many historians argue were irrational or destructive: going to war over Poland, allowing the British evacuation from Dunkirk, attacking the Soviet Union while Britain remained undefeated in the West, declaring war on the U.S. after Pearl Harbor, forbidding an orderly retreat from Stalingrad, failing to commit armour to counter the Normandy landings, and fighting to the bitter end, regardless of the consequences to Germany and the German people. Each decision is justified with arguments which are plausible when viewed from what is known of Hitler's world view, the information available to him at the time, and the constraints under which he was operating.

Much is made of those constraints. Although embracing totalitarianism (“My only regret is that, not having enough time, we did not make it more totalitarian still”), he sees himself surrounded by timid and tradition-bound military commanders and largely corrupt and self-serving senior political officials, yet compelled to try to act through them, as even a dictator can only dictate, then hope others implement his wishes. “Since then, I have often wondered whether, far from being too ruthless, I had been too soft and easygoing.” Many apparent blunders are attributed to lack of contemporary information, sometimes due to poor intelligence, but often simply by not having the historians' advantage of omniscient hindsight.

This could have been a parody, but in the hands of a distinguished historian like the author, who has been thinking about Hitler for many years (he wrote his 1971 Ph.D. thesis on Hitler's Balkan strategy in World War II), it provides a serious look at how Hitler's policies and actions, far from being irrational or a madman's delusions, may make perfect sense when one starts from the witches' brew of bad ideas and ignorance which the real Hitler's actual written and spoken words abundantly demonstrate. The fictional Hitler illustrates this in many passages, including this particularly chilling one where, after dismissing those who claim he was unaware of the extermination camps, says “I particularly needed to prevent the resurgence of Jewry by exterminating every last Jewish man, woman, and child I could. Do you say they were innocent? Bedbugs are innocent! They do what nature has destined them to, no more, no less. But is that any reason to spare them?” Looking backward, he observes that notwithstanding the utter defeat of the Third Reich, the liberal democracies that vanquished it have implemented many of his policies in the areas of government supervision of the economy, consumer protection, public health (including anti-smoking policies), environmentalism, shaping the public discourse (then, propaganda, now political correctness), and implementing a ubiquitous surveillance state of which the Gestapo never dreamed.

In an afterword, van Creveld explains that, after having on several occasions started to write a biography of Hitler and then set the project aside, concluding he had nothing to add to existing works, in 2015 it occurred to him that the one perspective which did not exist was Hitler's own, and that the fictional device of a memoir from Hell, drawing parallels between historical and contemporary events, would provide a vehicle to explore the reasoning which led to the decisions Hitler made. The author concludes, “…my goal was not to set forth my own ideas. Instead, I tried to understand Hitler's actions, views, and thoughts as I think he, observing the past and the present from Hell, would have explained them. So let the reader judge whether I have succeeded in this objective.” In the opinion of this reader, he has succeeded, and brilliantly.

This book is presently available only in a Kindle edition; it is free for Kindle Unlimited subscribers.

 Permalink

Cowie, Ian, Dim Jones, and Chris Long, eds. Out of the Blue. Farnborough, UK, 2011. ISBN 978-0-9570928-0-8.
Flying an aircraft has long been described by those who do it for a living as hours of boredom punctuated by moments of stark terror. The ratio of terror to boredom depends upon the equipment and mission the pilot is flying, and tends to be much higher as these approach the ragged edge, as is the case for military aviation in high-performance aircraft. This book collects ninety anecdotes from pilots in the Royal Air Force, most dating from the Cold War era, illustrating that you never know for sure what is going to happen when you strap into an airplane and take to the skies, and that any lapse in attention to detail, situational awareness, or resistance to showing off may be swiftly rewarded not only with stark terror but costly, unpleasant, and career-limiting consequences. All of the stories are true (or at least those relating them say they are—with pilots you never know for sure), and most are just a few pages. You can pick the book up at any point; except for a few two-parters, the chapters are unrelated to one another. This is thus an ideal “bathroom book”, or way to fill a few minutes' downtime in a high distraction environment.

Because most of the flying takes place in Britain and in NATO deployments in Germany and other countries in northern Europe, foul weather plays a part in many of these adventures. Those who fly in places like Spain and California seldom find themselves watching the fuel gauge count down toward zero while divert field after divert field goes RED weather just as they arrive and begin their approach—that happens all the time in the RAF.

Other excitement comes from momentary lapses of judgment or excessive enthusiasm, such as finding yourself at 70,000 feet over Germany in a Lightning whose two engines have flamed out after passing the plane's service ceiling of 54,000 feet. While in this case the intrepid aeronaut got away without a scratch (writing up the altimeter as reading much too high), other incidents ended with ejections from aircraft soon to litter the countryside with flaming debris. Then there's ejecting from a perfectly good Hunter after a spurious fire warning light and the Flight Commander wingman ordering an ejection after observing “lots of smoke” which turned out, after the fact, to be just hydraulic fluid automatically dumped after a precautionary engine shutdown.

Sometimes you didn't do anything wrong and still end up in a spot of bother. There's the crew of a Victor which, shortly after departing RAF Gan in the Maldive Islands, had a hydraulic system failure. No big thing—the Victor has two completely independent hydraulic systems, so there wasn't any great worry as the plane turned around to return to Gan. But when the second hydraulic system then proceeded to fail, there was worry aplenty, because that meant there was no nose-wheel steering and a total of eight applications of the brakes before residual pressure in the system was exhausted. Then came the call from Gan: a series of squalls were crossing the atoll, with crosswinds approaching the Victor's limit and heavy rain on the runway. On landing, a gust of wind caught the drag parachute and sent the bomber veering off the edge of the runway, and without nose-wheel steering, nothing could be done to counteract it. The Victor ended up ploughing a furrow in the base's just-refurbished golf course before coming to a stop. Any landing you walk away from…. The two hydraulic systems were determined to have failed from completely independent and unrelated causes, something that “just can't happen”—until it happens to you.

Then there's RAF pilot Alan Pollock, who, upset at the RAF's opting in 1968 not to commemorate the 50th anniversary of its founding, decided to mount his own celebration of the milestone. He flew his Hunter at high subsonic speed and low altitude down the Thames, twisting and turning with the river, and circling the Houses of Parliament as Big Ben struck noon. He then proceeded up the Thames and, approaching Tower Bridge, became the first and so far only pilot to fly between the two spans of the London landmark. This concluded his RAF career: he was given a medical discharge, which avoided a court martial that would likely have sparked public support for his unauthorised aerial tattoo. His first-hand recollection of the exploit appears here.

Other stories recount how a tiny blob of grease where it didn't belong turned a Hunter into rubble in Cornwall, the strange tale of the world's only turbine powered biplane, the British pub on the Italian base at Decimomannu, Sardinia: “The Pig and Tapeworm”, and working as an engineer on the Shackleton maritime patrol aircraft: “Along the way, you will gain the satisfaction of ensuring the continued airworthiness of a bona fide museum piece, so old that the pointed bit is at the back, and so slow that birds collide with the trailing edge of the wing.” There's nothing profound here, but it's a lot of fun.

The paperback is currently out of print, but used copies are available at reasonable cost. The Kindle edition is available, and is free for Kindle Unlimited subscribers.

 Permalink

Howey, Hugh. Wool. New York: Simon & Schuster, [2011] 2013. ISBN 978-1-4767-3395-1.
Wool was originally self-published as a stand-alone novella. The series grew into a total of six novellas, collected into three books. This “Omnibus Edition” contains all three books, now designated “Volume 1 of the Silo Trilogy”. Two additional volumes in the series, Shift and Dust, are respectively a prequel and a sequel to the present work.

The Silo is the universe to its inhabitants. It consists of a cylinder whose top is level with the surrounding terrain and extends downward into the Earth for 144 levels, with a central spiral staircase connecting them. Transport among the levels is purely by foot traffic on the staircase, and most news and personal messages are carried by porters who constantly ascend and descend the stairs. Electronic messages can be sent, but are costly and rarely used. Levels are divided by functionality, and those who live in them essentially compose castes defined by occupation. Population is strictly controlled and static. Administration is at the top (as is usually the case), while the bottom levels are dedicated to the machines which produce power, circulate and purify the air, pump out ground water which would otherwise flood the structure, and drill for energy and mine resources required to sustain the community. Intermediate levels contain farms, hospitals and nurseries, schools, and the mysterious and secretive IT (never defined, but one assumes “Information Technology”, which many suspect is the real power behind the scenes [isn't it always?]). There is some mobility among levels and occupations, but many people live most of their lives within a few levels of where they were born, taking occasional rare (and exhausting) trips to the top levels for special occasions.

The most special of occasions is a “cleaning”. From time to time, some resident of the silo demands to leave or, more often, is deemed a threat to the community due to challenging the social order, delving too deeply into its origins, or expressing curiosity about what exists outside, and is condemned to leave the silo wearing a protective suit against the forbiddingly hostile environment outside, to clean the sensors which provide denizens their only view of the surroundings: a barren landscape with a ruined city in the distance. The suit invariably fails, and the cleaner's body joins those of others scattered along the landscape. Why do those condemned always clean? They always have, and it's expected they always will.

The silo's chief is the mayor, and order is enforced by the sheriff, to whom deputies in offices at levels throughout the silo report. The current sheriff's own wife was sent to cleaning just three years earlier, after becoming obsessed with what she believed to be a grand deception by IT and eventually breaking down in public. Sheriff Holston's own obsession grows until he confronts the same fate.

This is a claustrophobic, dystopian novel in which the reader begins as mystified with what is going on and why as are the residents of the silo, at least those who dare to be curious. As the story progresses, much of which follows the career of a new sheriff appointed from the depths of the silo, we piece together, along with the characters, what is happening and how it came to be and, with them, glimpse a larger world and its disturbing history. The writing is superb and evocative of the curious world in which the characters find themselves.

Spoiler warning: Plot and/or ending details follow.  
There are numerous mysteries in this story, many of which are explained as the narrative progresses, but there's one central enigma which is never addressed. I haven't read either the prequel or the sequel, and perhaps they deal with it, but this book was written first as a stand-alone, and read as one, it leaves this reader puzzled.

The silo has abundant energy produced from oil wells drilled from the lower levels, sufficient to provide artificial lighting throughout including enough to grow crops on the farm levels. There is heavy machinery: pumps, generators, air circulation and purification systems, advanced computer technology in IT, and the infrastructure to maintain all of this along with a logistics, maintenance, and spares operation to keep it all running. And, despite all of this, there's no elevator! The only way to move people and goods among the levels is to manually carry them up and down the circular staircase. Now, I can understand how important this is to the plot of the novel, but it would really help if the reader were given a clue why this is and how it came to be. My guess is that it was part of the design of the society: to impose a stratification and reinforce its structure like the rule of a monastic community (indeed, we later discover the silo is regulated according to a book of Order). I get it—if there's an elevator, much of the plot goes away, but it would be nice to have a clue why there isn't one, when it would be the first thing anybody with the technology to build something like the silo would design into what amounts to a 144-storey building.

Spoilers end here.  

The Kindle edition is presented in a very unusual format. It is illustrated with drawings, some of which are animated—not full motion, but perspectives change, foregrounds and backgrounds shift, and light sources move around. The drawings do not always correspond to descriptions in the text. The illustrations appear to have been adapted from a graphic novel based upon the book. The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Fulton, Steve and Jeff Fulton. HTML5 Canvas. Sebastopol, CA: O'Reilly, 2013. ISBN 978-1-4493-3498-7.
I only review computer books if I've read them in their entirety, as opposed to using them as references while working on projects. For much of 2017 I've been living with this book open, referring to it as I performed a comprehensive overhaul of my Fourmilab site, and I just realised that by now I have actually read every page, albeit not in linear order, so a review is in order; here goes.

The original implementation of the World Wide Web supported only text and, shortly thereafter, embedded images in documents. If you wanted to do something as simple as embed an audio or video clip, you were on your own, wading into a morass of browser- and platform-specific details, plug-ins the user may have to install and then forever keep up to date, and security holes due to all of this non-standard and often dodgy code. Implementing interactive content on the Web, for example scientific simulations for education, required using an embedded language such as Java, whose initial bright promise of “Write once, run anywhere” quickly added the rejoinder “—yeah, right” as bloat in the language, incessant security problems, cross-platform incompatibilities, and the need for the user to forever keep external plug-ins updated lest existing pages cease working caused Java to be regarded as a joke—a cruel joke upon those who developed Web applications based upon it. By the latter half of the 2010s, the major browsers had either discontinued support for Java or announced its removal in future releases.

Fortunately, in 2014 the HTML5 standard was released. For the first time, native, standardised support was added to the Web's fundamental document format for embedded audio, video, and interactive content, along with Application Programming Interfaces (APIs) in the JavaScript language, interacting with the document via the Document Object Model (DOM), which has now been incorporated into the HTML5 standard. It became possible, using only standards officially adopted by the World Wide Web Consortium, to create interactive Web pages incorporating multimedia content. The existence of this standard provides a strong incentive for browser vendors to fully implement and support it, and increases the confidence of Web developers that standards-compliant pages they create will work on the multitude of browsers, operating systems, and hardware platforms which exist today.

(That encomium apart, I find much to dislike about HTML5. In my opinion its sloppy syntax [not requiring quotes on tag attributes nor the closing of many tags] is a great step backward from XHTML 1.0, which strictly conforms to XML syntax and can be parsed by a simple and generic XML parser, without the Babel-sized tower of kludges and special cases which are required to accommodate the syntactic mumbling of HTML5. A machine-readable language should be easy to read and parse by a machine, especially in an age where only a small minority of Web content creators actually write HTML themselves, as opposed to using a content management system of some kind. Personally, I continue to use XHTML 1.0 for all content on my Web site which does not require the new features in HTML5, and I observe that the home page of the World Wide Web Consortium is, itself, in XHTML 1.0 Strict. And there's no language version number in the header of an HTML5 document. Really—what's up with that? But HTML5 is the standard we've got, so it's the standard we have to use in order to benefit from the capabilities it provides: onward.)

One of the most significant new features in HTML5 is its support for the Canvas element. A canvas is a rectangular area within a page which is treated as an RGBA bitmap (the “A” denotes “alpha”, which implements transparency for overlapping objects). A canvas is just what its name implies: a blank area on which you can draw. The drawing is done in JavaScript code via the Canvas API, which is documented in this book, along with tutorials and abundant examples which can be downloaded from the publisher's Web site. The API provides the usual functions of a two-dimensional drawing model, including lines, arcs, paths, filled objects, transformation matrices, clipping, and colours, including gradients. A text API allows drawing text on the canvas, using a subset of CSS properties to define fonts and their display attributes.
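To give a feel for the drawing model described above, here is a minimal sketch of typical Canvas 2D calls: a path, a fill, a transformation, and a text label. In a browser you would obtain a real context with getContext("2d") on a canvas element; here a tiny stub context (my own invention, not anything from the book) simply records each call so the sketch runs outside a browser. The drawBadge function and its contents are purely illustrative.

```javascript
// Stub 2D context: records each drawing call as a string.
function makeStubContext(calls) {
  const record = (name) => (...args) => calls.push(`${name}(${args.join(",")})`);
  return {
    fillStyle: "",
    font: "",
    beginPath: record("beginPath"),
    moveTo: record("moveTo"),
    lineTo: record("lineTo"),
    arc: record("arc"),
    closePath: record("closePath"),
    fill: record("fill"),
    save: record("save"),
    restore: record("restore"),
    translate: record("translate"),
    rotate: record("rotate"),
    fillText: record("fillText"),
  };
}

// Draw a tilted circular badge with a text label, using the
// path, transformation, and text facilities of the Canvas API.
function drawBadge(ctx) {
  ctx.save();
  ctx.translate(100, 100);           // move origin to badge centre
  ctx.rotate(Math.PI / 8);           // tilt the whole badge
  ctx.beginPath();
  ctx.arc(0, 0, 50, 0, 2 * Math.PI); // circular outline
  ctx.closePath();
  ctx.fillStyle = "gold";            // fill colour (a property, not a call)
  ctx.fill();
  ctx.font = "16px sans-serif";
  ctx.fillText("HTML5", -24, 6);     // label drawn with the text API
  ctx.restore();
}

const calls = [];
drawBadge(makeStubContext(calls));
console.log(calls.length); // number of recorded drawing calls
```

The same drawBadge function would work unchanged against a genuine browser context, since it touches only standard Canvas API methods and properties.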

Bitmap images may be painted on the canvas, scaled and rotated, if you wish, using the transformation matrix. It is also possible to retrieve the pixel data from a canvas or portion of it, manipulate it at low-level, and copy it back to that or another canvas using JavaScript typed arrays. This allows implementation of arbitrary image processing. You might think that pixel-level image manipulation in JavaScript would be intolerably slow, but with modern implementations of JavaScript in current browsers, it often runs within a factor of two of the speed of optimised C code and, unlike the C code, works on any platform from within a Web page which requires no twiddling by the user to build and install on their computer.
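The pixel-level processing described above works on a flat Uint8ClampedArray of RGBA bytes, which is what a browser's getImageData() hands back. The following sketch inverts an image's colours while leaving the alpha channel alone; invertPixels is an illustrative name of my own, not an API from the book, and the two-pixel "image" is just a toy test case.

```javascript
// Invert the RGB channels of canvas-style pixel data in place.
// data is a flat array: [R, G, B, A, R, G, B, A, ...]
function invertPixels(data) {
  for (let i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];     // red
    data[i + 1] = 255 - data[i + 1]; // green
    data[i + 2] = 255 - data[i + 2]; // blue
    // data[i + 3] (alpha) is left untouched
  }
  return data;
}

// A 1×2 "image": one opaque red pixel, one opaque blue pixel.
const pixels = new Uint8ClampedArray([255, 0, 0, 255, 0, 0, 255, 255]);
invertPixels(pixels);
console.log(Array.from(pixels)); // [0, 255, 255, 255, 255, 255, 0, 255]
```

In a browser, the data would come from ctx.getImageData(…).data and be written back with ctx.putImageData(); the loop itself is identical.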

The canvas API allows capturing mouse and keyboard events, permitting user interaction. Animation is implemented using JavaScript's standard setTimeout method. Unlike some other graphics packages, the canvas API does not maintain a display list or refresh buffer. It is the responsibility of your code to repaint the image on the canvas from scratch whenever it changes. Contemporary browsers buffer the image under construction to prevent this process from being seen by the user.

HTML5 audio and video are not strictly part of the canvas facility (although you can display a video on a canvas), but they are discussed in depth here, each in its own chapter. Although the means for embedding this content into Web pages are now standardised, the file formats for audio and video are, more than a quarter century after the creation of the Web, “still evolving”. There is sage advice for developers about how to maximise portability of pages across browsers and platforms.

Two chapters, 150 pages of this 750 page book (don't be intimidated by its length—a substantial fraction is code listings you don't need to read unless you're interested in the details), are devoted to game development using the HTML5 canvas and multimedia APIs. A substantial part of this covers topics such as collision detection, game physics, smooth motion, and detecting mouse hits in objects, which are generic subjects in computer graphics and not specific to its HTML5 implementation. Reading them, however, may give you some tips useful in non-game applications.

Projects at Fourmilab which now use HTML5 canvas are:

Numerous other documents on the site have been updated to HTML5, using the audio and video embedding capabilities described in the book.

All of the information on the APIs described in the book is available on the Web for free. But you won't know what to look for unless you've read an explanation of how they work and looked at sample code which uses them. This book provides that information, and is useful as a desktop reference while you're writing code.

A Kindle edition is available, which you can rent for a limited period of time if you only need to refer to it for a particular project.

 Permalink

Smith, L. Neil. Blade of p'Na. Rockville, MD: Phoenix Pick, 2017. ISBN 978-1-61242-218-3.
This novel is set in the “Elders” universe, originally introduced in the 1990 novels Contact and Commune and Converse and Conflict, and now collected in an omnibus edition with additional material, Forge of the Elders. Around four hundred million years ago the Elders, giant mollusc-like aquatic creatures with shells the size of automobiles, conquered aging, and since then none has died except due to accident or violence. And precious few have succumbed to those causes: accident because the big squid are famously risk averse, and violence because, after a societal adolescence in which they tried and rejected many political and economic bad ideas, they settled on p'Na as the central doctrine of their civilisation: the principle that nobody has the right to initiate physical force against anybody else for any reason—much like the Principle of Non-Aggression, don't you know.

On those rare occasions order is disturbed, the services of a p'Nan “debt assessor” are required. Trained in the philosophy of p'Na, martial arts, psychology, and burnished through a long apprenticeship, assessors are called in either after an event in which force has been initiated or by those contemplating a course which might step over the line. The assessor has sole discretion in determining culpability, the form and magnitude of restitution due, and when no other restitution is possible, enforcing the ultimate penalty on the guilty. The assessor's sword, the Blade of p'Na, is not just a badge of office but the means of restitution in such cases.

The Elders live on one of a multitude, possibly infinite, parallel Earths in a multiverse where each planet's history has diverged due to contingent events in its past. Some millennia after adopting p'Na, they discovered the means of observing, then moving among these different universes and their variant Earths. Some millennia after achieving biological immortality and peace through p'Na, their curiosity and desire for novelty prompted them to begin collecting beings from across the multiverse. Some were rescues of endangered species, while others would be more accurately described as abductions. They referred to this with the euphemism of “appropriation”, as if that made any difference. The new arrivals: insectoid, aquatic, reptilian, mammalian, avian, and even sentient plants, mostly seemed happy in their new world, where the Elders managed to create the most diverse and peaceful society known in the universe.

This went on for a million years or so until, just like the revulsion against slavery in the 19th century in our timeline, somesquid happened to notice that the practice violated the fundamental principle of their society. Appropriations immediately ceased, debt assessors were called in, and before long all of the Elders implicated in appropriation committed suicide (some with a little help). But that left the question of restitution to the appropriated. Dumping them back into their original universes, often war-torn, barbarous, primitive, or with hostile and unstable environments after up to a million years of peace and prosperity on the Elders' planet didn't make the ethical cut. They settled on granting full citizenship to all the appropriated, providing them the gift of biological immortality, cortical implants to upgrade the less sentient to full intelligence, and one more thing…. The Elders had developed an unusual property: the tips of their tentacles could be detached and sent on errands on behalf of their parent bodies. While not fully sentient, the tentacles could, by communicating via cortical implants, do all kinds of useful work and allow the Elders to be in multiple places at once (recall that the Elders, like terrestrial squid, have ten tentacles—if they had twelve, they'd call them twelvicles, wouldn't they?). So for each of the appropriated species, the Elders chose an appropriate symbiote who, upgraded in intelligence and self-awareness and coupled to the host by their own implant, provided a similar benefit to them. For humanoids, it was dogs, or their species' canids.

(You might think that all of this constitutes spoilers, but it's just the background for the Elders' universe which is laid out in the first few chapters for the benefit of readers who haven't read the earlier books in the series.)

Hundreds of millions of years after the Great Restitution Eichra Oren (those of his humanoid species always use both names) is a p'Na debt assessor. His symbiote, Oasam Otusam, a super-intelligent, indiscriminately libidinous, and wisecracking dog, prefers to go by “Sam”. So peaceful is the planet of the Elders that most of the cases Eichra Oren is called upon to resolve are routine and mundane, such as the current client, an arachnid about the size of a dinner table, seeking help in tracking down her fiancé, who has vanished three days before the wedding. This raises some ethical issues because, among their kind, traditionally “Saying ‘I do’ is the same as saying ‘bon appétit’ ”. Many, among sapient spiders, have abandoned the Old Ways, but some haven't. After discussion, in which Sam says, “You realize that in the end, she's going to eat him”, they decide, nonetheless, to take the case.

The caseload quickly grows as the assessor is retained by investors in a project led by an Elder named Misterthoggosh, whose fortune comes from importing reality TV from other universes (there is no multiverse copyright convention—the p'Na is cool with cultural appropriation) and distributing it to the multitude of species on the Elders' world. He (little is known of the Elders' biology…some say the females are non-sentient and vestigial) is now embarking on a new project, and the backers want a determination by an assessor that it will not violate p'Na, for which they would be jointly and severally responsible. The lead investor is a star-nosed mole obsessed by golf.

Things become even more complicated after a mysterious attack which appears to have been perpetrated by the “greys”, creatures who inhabit the mythology and nightmares of a million sapient species, and the suspicion and fear that somewhere else in the multiverse, another species has developed the technology of opening gates between universes, something so far achieved only by the now-benign Elders, with wicked intent by the newcomers.

What follows is a romp filled with interesting questions. Should you order the vegan plate in a restaurant run by intelligent plants? What are the ethical responsibilities of a cyber-assassin who is conscious yet incapable of refusing orders to kill? What is a giant squid's idea of a pleasure yacht? If two young spiders are amorously attracted, is it only pupæ love? The climax forces the characters to confront the question of the extent to which beings which are part of a hive mind are responsible for the actions of the collective.

L. Neil Smith's books have sometimes been criticised for being preachy libertarian tracts with a garnish of science fiction. I've never found them to be such, but you certainly can't accuse this one of that. It's set in a world governed for æons by the principle of non-aggression, but that foundation of civil society works so well that it takes an invasion from another universe to create the conflict which is central to the plot. Readers are treated to the rich and sometimes zany imagination of a world inhabited by almost all imaginable species where the only tensions among them are due to atavistic instincts such as those of dogs toward tall plants, combined with the humour, ranging from broad to wry, of our canine narrator, Sam.

 Permalink

August 2017

Rahe, Paul A. The Spartan Regime. New Haven, CT: Yale University Press, 2016. ISBN 978-0-300-21901-2.
This thin volume (just 232 pages in the hardcover edition, only around 125 of which are the main text and appendices—the rest being extensive source citations, notes, and indices of subjects and people and place names) is intended as the introduction to an envisioned three volume work on Sparta covering its history from the archaic period through the second battle of Mantinea in 362 b.c. where defeat of a Sparta-led alliance at the hands of the Thebans paved the way for the Macedonian conquest of Greece.

In this work, the author adopts the approach to political science used in antiquity by writers such as Thucydides, Xenophon, and Aristotle: that the principal factor in determining the character of a political community is its constitution, or form of government, the rules which define membership in the community and which its members were expected to obey, their character being largely determined by the system of education and moral formation which shape the citizens of the community.

Discerning these characteristics in any ancient society is difficult, but especially so in the case of Sparta, which was a society of warriors, not philosophers and historians. Almost all of the contemporary information we have about Sparta comes from outsiders who either visited the city at various times in its history or based their work upon the accounts of others who had. Further, the Spartans were famously secretive about the details of their society, so when ancient accounts differ, it is difficult to determine which, if any, is correct. One gets the sense that all of the direct documentary information we have about Sparta would fit on one floppy disc: everything else is interpretations based upon that meagre foundation. In recent centuries, scholars studying Sparta have seen it as everything from the prototype of constitutional liberty to a precursor of modern day militaristic totalitarianism.

Another challenge facing the modern reader and, one suspects, many ancients, in understanding Sparta was how profoundly weird it was. On several occasions whilst reading the book, I was struck that rarely in science fiction does one encounter a description of a society so thoroughly alien to those with which we are accustomed from our own experience or a study of history. First of all, Sparta was tiny: there were never as many as ten thousand full-fledged citizens. These citizens were descended from Dorians who had invaded the Peloponnese in the archaic period and subjugated the original inhabitants, who became helots: essentially serfs who worked the estates of the Spartan aristocracy in return for half of the crops they produced (about the same fraction of the fruit of their labour the helots of our modern enlightened self-governing societies are allowed to retain for their own use). Every full citizen, or Spartiate, was a warrior, trained from boyhood to that end. Spartiates not only did not engage in trade or work as craftsmen: they were forbidden to do so—such work was performed by non-citizens. With the helots outnumbering Spartiates by a factor of from four to seven (and even more as the Spartan population shrunk toward the end), the fear of an uprising was ever-present, and required maintenance of martial prowess among the Spartiates and subjugation of the helots.

How were these warriors formed? Boys were taken from their families at the age of seven and placed in a barracks with others of their age. Henceforth, they would return to their families only as visitors. They were subjected to a regime of physical and mental training, including exercise, weapons training, athletics, mock warfare, plus music and dancing. They learned the poetry, legends, and history of the city. All learned to read and write. After intense scrutiny and regular tests, the young man would face a rite of passage, krupteίa, in which, for a full year, armed only with a dagger, he had to survive on his own in the wild, stealing what he needed, and instilling fear among the helots, who he was authorised to kill if found in violation of curfew. Only after surviving this ordeal would the young Spartan be admitted as a member of a sussιtίon, a combination of a men's club, a military mess, and the basic unit in the Spartan army. A Spartan would remain a member of this same group all his life and, even after marriage and fatherhood, would live and dine with them every day until the age of forty-five.

From the age of twelve, boys in training would usually have a patron, or surrogate father, who was expected to initiate him into the world of the warrior and instruct him in the duties of citizenship. It was expected that there would be a homosexual relationship between the two, and that this would further cement the bond of loyalty to his brothers in arms. Upon becoming a full citizen and warrior, the young man was expected to take on a boy and continue the tradition. As to many modern utopian social engineers, the family was seen as an obstacle to the citizen's identification with the community (or, in modern terminology, the state), and the entire process of raising citizens seems to have been designed to transfer this inherent biological solidarity with kin to peers in the army and the community as a whole.

The political structure which sustained and, in turn, was sustained by these cultural institutions was similarly alien and intricate—so much so that I found myself wishing that Professor Rahe had included a diagram to help readers understand all of the moving parts and how they interacted. After finishing the book, I found this one on Wikipedia.

Structure of Government in Sparta
Image by Wikipedia user Putinovac licensed under the
Creative Commons Attribution 3.0 Unported license.

The actual relationships are even more complicated and subtle than expressed in this diagram, and given the extent to which scholars dispute the details of the Spartan political institutions (which occupy many pages in the end notes), it is likely the author may find fault with some aspects of this illustration. I present it purely because it provides a glimpse of the complexity and helped me organise my thoughts about the description in the text.

Start with the kings. That's right, “kings”—there were two of them—both traditionally descended from Hercules, but through different lineages. The kings shared power and acted as a check on each other. They were commanders of the army in time of war, and high priests in peace. The kingship was hereditary and for life.

Five overseers, or ephors were elected annually by the citizens as a whole. Scholars dispute whether ephors could serve more than one term, but the author notes that no ephor is known to have done so, and it is thus likely they were term limited to a single year. During their year in office, the board of five ephors (one from each of the villages of Sparta) exercised almost unlimited power in both domestic and foreign affairs. Even the kings were not immune to their power: the ephors could arrest a king and bring him to trial on a capital charge just like any other citizen, and this happened. On the other hand, at the end of their one year term, ephors were subject to a judicial examination of their acts in office and liable for misconduct. (Wouldn't it be great if present-day “public servants” received the same kind of scrutiny at the end of their terms in office? It would be interesting to see what a prosecutor could discover about how so many of these solons manage to amass great personal fortunes incommensurate with their salaries.) And then there was the “fickle meteor of doom” rule.

Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping they didn't pick a night in mid-August for their vigil!

The ephors could also summon the council of elders, or gerousίa, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders.

All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might suspect to be the work of a Golden Age author, paid by the word, making ends meet by inventing fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S. Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, and suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage.

Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as most of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text.

In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.

 Permalink

Gleick, James. Time Travel. New York: Pantheon Books, 2016. ISBN 978-0-307-90879-7.
In 1895, a young struggling writer who earned his precarious living by writing short humorous pieces for London magazines, often published without a byline, buckled down and penned his first long work, a novella of some 33,000 words. When published, H. G. Wells's The Time Machine would not only help to found a new literary genre—science fiction—but would introduce an entirely new concept to storytelling: time travel. Many of the themes of modern fiction can be traced to the myths of antiquity, but here was something entirely new: imagining a voyage to the future to see how current trends would develop, or back into the past, perhaps not just to observe history unfold and resolve its persistent mysteries, but possibly to change the past, opening the door to paradoxes which have been the subject not only of a multitude of subsequent stories but theories and speculation by serious scientists. So new was the concept of travel through time that the phrase “time travel” first appeared in the English language only in 1914, in a reference to Wells's story.

For much of human history, there was little concept of a linear progression of time. People lived lives much the same as those of their ancestors, and expected their descendants to inhabit much the same kind of world. Their lives seemed to be governed by a series of cycles: day and night, the phases of the Moon, the seasons, planting and harvesting, and successive generations of humans, rather than the ticking of an inexorable clock. Even great disruptive events such as wars, plagues, and natural disasters seemed to recur over time, even if not on a regular, predictable schedule. This led to the philosophical view of “eternal return”, which appears in many ancient cultures and in Western philosophy from Pythagoras to Nietzsche. In mathematics, the Poincaré recurrence theorem formally demonstrated that an isolated finite system will eventually (although possibly only after a time much longer than the age of the universe) return to a given state and repeat its evolution an infinite number of times.

But nobody (except perhaps a philosopher) who had lived through the 19th century in Britain could really believe that. Over the space of a human lifetime, the world and the human condition had changed radically and seemed to be careening into a future difficult to envision. Steam power, railroads, industrialisation of manufacturing, the telegraph and telephone, electricity and the electric light, anaesthesia, antiseptics, steamships and global commerce, submarine cables and near-instantaneous international communications, had all remade the world. The idea of progress was not just an abstract concept of the Enlightenment, but something anybody could see all around them.

But progress through what? In the fin de siècle milieu that Wells inhabited, through time: a scroll of history being written continually by new ideas, inventions, creative works, and the social changes flowing from these events which changed the future in profound and often unknowable ways. The intellectual landscape was fertile for utopian ideas, many of which Wells championed. Among the intellectual élite, the fourth dimension was much in vogue, often a fourth spatial dimension but also the concept of time as a dimension comparable to those of space. This concept first appears in the work of Edgar Allan Poe in 1848, but was fully fleshed out by Wells in The Time Machine: “ ‘Clearly,’ the Time Traveller proceeded, ‘any real body must have extension in four dimensions: it must have Length, Breadth, Thickness, and—Duration.’ ” But if we can move freely through the three spatial directions (although less so in the vertical in Wells's day than the present), why cannot we also move back and forth in time, unshackling our consciousness and will from the tyranny of the timepiece just as the railroad, steamship, and telegraph had loosened the constraints of locality?

Just ten years after The Time Machine, Einstein's special theory of relativity resolved puzzles in electrodynamics and mechanics by demonstrating that time and space mixed depending upon the relative states of motion of observers. In 1908, Hermann Minkowski reformulated Einstein's theory in terms of a four dimensional space-time. He declared, “Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” (Einstein was, initially, less than impressed with this view, calling it “überflüssige Gelehrsamkeit”: superfluous learnedness, but eventually accepted the perspective and made it central to his 1915 theory of gravitation.) But further, embedded within special relativity, was time travel—at least into the future.

According to the equations of special relativity, which have been experimentally verified as precisely as anything in science and are fundamental to the operation of everyday technologies such as the Global Positioning System, a moving observer will measure time to flow more slowly than a stationary observer. We don't observe this effect in everyday life because the phenomenon only becomes pronounced at velocities which are a substantial fraction of the speed of light, but even at the modest velocity of orbiting satellites, it cannot be neglected. Due to this effect of time dilation, if you had a space ship able to accelerate at a constant rate of one Earth gravity (people on board would experience the same gravity as they do while standing on the Earth's surface), you would be able to travel from the Earth to the Andromeda galaxy and back to Earth, a distance of around four million light years, in a time, measured by the ship's clock and your own subjective and biological perception of time, of less than fifty-seven years. But when you arrived back at the Earth, you'd discover that in its reference frame, more than four million years of time would have elapsed. What wonders would our descendants have accomplished in that distant future, or would they be digging for grubs with blunt sticks while living in a sustainable utopia having finally thrown off the shackles of race, class, and gender which make our present civilisation a living Hell?
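This claim can be checked on the back of an envelope with the standard relativistic rocket equations, assuming the usual itinerary: on each two-million-light-year leg, the ship accelerates at 1 g for the first half and decelerates for the second, so the round trip is four identical quarter-legs. (The variable names and the quick numerical check below are mine, not Gleick's.)

```javascript
// Units: c = 1, distances in light years, times in years.
const SEC_PER_YEAR = 3.1557e7;
const M_PER_LY = 9.4607e15;
const g = 9.81 * SEC_PER_YEAR ** 2 / M_PER_LY; // 1 g ≈ 1.03 ly/yr²

const quarterDist = 1.0e6; // ly: half of one 2-million-ly leg

// Proper (ship) time to cover distance d, starting from rest, at
// constant proper acceleration a:  tau = (1/a) · acosh(1 + a·d)
const tauQuarter = Math.acosh(1 + g * quarterDist) / g;

// Coordinate (Earth) time for the same quarter-leg:
//   t = (1/a) · sinh(a·tau)
const tQuarter = Math.sinh(g * tauQuarter) / g;

const shipYears = 4 * tauQuarter;  // ≈ 56 years by the ship's clock
const earthYears = 4 * tQuarter;   // ≈ 4 million years on Earth

console.log(shipYears.toFixed(1), earthYears.toExponential(2));
```

The hyperbolic functions capture why the trade is so lopsided: Earth time grows exponentially with ship time under constant proper acceleration, so the four-million-year Earth interval costs the travellers only a few decades.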

This is genuine time travel into the future and, although it's far beyond our present technological capabilities, it violates no law of physics and, to a more modest yet still measurable degree, happens every time you travel in an automobile or airplane. But what about travel into the past? Travel into the future doesn't pose any potential paradoxes. It's entirely equivalent to going into hibernation and awaking after a long sleep—indeed, this is a frequently-used literary device in fiction depicting the future. Travel into the past is another thing entirely. For example, consider the grandfather paradox: suppose you have a time machine able to transport you into the past. You go back in time and kill your own grandfather (it's never the grandmother—beats me). Then who are you, and how did you come into existence in the first place? The grandfather paradox exists whenever altering an event in the past changes conditions in the future so as to be inconsistent with the alteration of that event.

Or consider the bootstrap paradox or causal loop. An elderly mathematician (say, age 39), having struggled for years and finally succeeded in proving a difficult theorem, travels back in time and provides a key hint to his twenty-year-old self to set him on the path to the proof—the same hint he remembers finding on his desk that morning so many years before. Where did the idea come from? In 1991, physicist David Deutsch demonstrated that a computer incorporating travel back in time (formally, a closed timelike curve) could solve NP problems in polynomial time. I wonder where he got that idea….

All of this would be academic were time travel into the past just a figment of fictioneers' imagination. This has been the view of many scientists, and the chronology protection conjecture asserts that the laws of physics conspire to prevent travel to the past which, in the words of a 1992 paper by Stephen Hawking, “makes the universe safe for historians.” But the laws of physics, as we understand them today, do not rule out travel into the past! Einstein's 1915 general theory of relativity, which so far has withstood every experimental test for over a century, admits solutions, such as the Gödel metric, discovered in 1949 by Einstein's friend and colleague Kurt Gödel, which contain closed timelike curves. In the Gödel universe, which consists of a homogeneous sea of dust particles, rotating around a centre point and with a nonzero cosmological constant, it is possible, by travelling on a closed path and never reaching or exceeding the speed of light, to return to a point in one's own past. Now, the Gödel solution is highly contrived, and there is no evidence that it describes the universe we actually inhabit, but the existence of such a solution leaves the door open that somewhere in the other exotica of general relativity such as spinning black holes, wormholes, naked singularities, or cosmic strings, there may be a loophole which allows travel into the past. If you discover one, could you please pop back and send me an E-mail about it before I finish this review?

This book is far more about the literary and cultural history of time travel than scientific explorations of its possibility and consequences. Thinking about time travel forces one to confront questions which can usually be swept under the rug: is the future ours to change, or do we inhabit a block universe where our perception of time is just a delusion as the cursor of our consciousness sweeps out a path in a space-time whose future is entirely determined by its past? If we have free will, where does it come from, when according to the laws of physics the future can be computed entirely from the past? If we can change the future, why not the past? If we changed the past, would it change the present for those living in it, or create a fork in the time line along which a different history would develop? All of these speculations are rich veins to be mined in literature and drama, and are explored here. Many technical topics are discussed only briefly, if at all, for example the Wheeler-Feynman absorber theory, which resolves a mystery in electrodynamics by positing a symmetrical solution to Maxwell's equations in which the future influences the past just as the present influences the future. Gleick doesn't go anywhere near my own experiments with retrocausality or the “presponse” experiments of investigators such as Dick Bierman and Dean Radin. I get it—pop culture beats woo-woo on the bestseller list.

The question of time has puzzled people for millennia. Only recently have we thought seriously about travel in time and its implications for our place in the universe. Time travel has been, and will doubtless continue to be, a source of speculation and entertainment, and this book is an excellent survey of its short history as a genre of fiction and the science upon which it is founded.

 Permalink

Cline, Ernest. Ready Player One. New York: Broadway Books, 2011. ISBN 978-0-307-88744-3.
By the mid-21st century, the Internet has become largely subsumed as the transport layer for the OASIS (Ontologically Anthropocentric Sensory Immersive Simulation), a massively multiuser online virtual reality environment originally developed as a multiplayer game, but which rapidly evolved into a platform for commerce, education, social interaction, and entertainment used by billions of people around the world. The OASIS supports immersive virtual reality, limited only by the user's budget for hardware used to access the network. With top-of-the-line visors and sound systems, body motion sensors, and haptic feedback, coupled to a powerful interface console, a highly faithful experience is possible. The OASIS was the creation of James Halliday, a legendary super-nerd who made his first fortune designing videogames for home computers in the 1980s, and then re-launched his company in 2012 as Gregarious Simulation Systems (GSS), with the OASIS as its sole product. The OASIS was entirely open source: users could change things within the multitude of worlds within the system (within the limits set by those who created them), or create their own new worlds. Using a distributed computing architecture which pushed much of the processing power to the edge of the network, on users' own consoles, the system was able to grow without bound, without requiring commensurate growth in GSS data centres. And it was free, or almost so. To access the OASIS, you paid only a one-time lifetime sign-up fee of twenty-five cents, just like the quarter you used to drop into the slot of an arcade videogame. Users paid nothing to use the OASIS itself: their only costs were the hardware they used to connect (which varied widely in cost and quality of the experience) and the bandwidth to connect to the network. But since most of the processing was done locally, the latter cost was modest. GSS made its money selling or renting virtual real estate (“surreal estate”) within the simulation.
If you wanted to open, say, a shopping mall or build your own Fortress of Solitude on an asteroid, you had to pay GSS for the territory. GSS also sold virtual goods: clothes, magical artefacts, weapons, vehicles of all kinds, and buildings. Most were modestly priced, but since they cost nothing to manufacture, were pure profit to the company.

As the OASIS permeated society, GSS prospered. Halliday remained the majority shareholder in the company, having bought back the share once owned by his co-founder and partner Ogden (“Og”) Morrow, after what was rumoured to be a dispute between the two, the details of which had never been revealed. By 2040, Halliday's fortune, almost all in GSS stock, had grown to more than two hundred and forty billion dollars. And then, after fifteen years of self-imposed isolation which some said was due to insanity, Halliday died of cancer. He was a bachelor, with no living relatives, no heirs, and, it was said, no friends. His death was announced on the OASIS in a five-minute video titled Anorak's Invitation (“Anorak” was the name of Halliday's all-powerful avatar within the OASIS). In the film, Halliday announces that his will places his entire fortune in escrow until somebody completes the quest he has programmed within the OASIS:

Three hidden keys open three secret gates,
Wherein the errant will be tested for worthy traits,
And those with the skill to survive these straits,
Will reach The End where the prize awaits.

The prize is Halliday's entire fortune and, with it, super-user control of the principal medium of human interaction, business, and even politics. Before fading out, Halliday shows three keys: copper, jade, and crystal, which must be obtained to open the three gates. Only after passing through the gates and completing the tests within them will the intrepid paladin obtain the Easter egg hidden within the OASIS and gain control of it. Halliday provided a link to Anorak's Almanac, more than a thousand pages of journal entries made during his life, many of which reflect his obsession with 1980s popular culture, science fiction and fantasy, videogames, movies, music, and comic books. The clues to finding the keys and the Egg were widely believed to be within this rambling, disjointed document.

Given the stakes, and the contest's being open to anybody in the OASIS, what immediately came to be called the Hunt became a social phenomenon, all-consuming to some. Egg hunters, or “gunters”, immersed themselves in Halliday's journal and every pop culture reference within it, however obscure. All of this material was freely available on the OASIS, and gunters memorised every detail of anything which had caught Halliday's attention. As time passed, and nobody succeeded in finding even the copper key (Halliday's memorial site displayed a scoreboard of those who achieved goals in the Hunt, so far blank), many lost interest in the Hunt, but a dedicated hard core persisted, often to the exclusion of all other diversions. Some gunters banded together into “clans”, some very large, agreeing to exchange information and, if one found the Egg, to share the proceeds with all members. More sinister were the activities of Innovative Online Industries—IOI—a global Internet and communications company which controlled much of the backbone that underlay the OASIS. It had assembled a large team of paid employees, backed by the research and database facilities of IOI, with their sole mission to find the Egg and turn control of the OASIS over to IOI. These players, all with identical avatars and names consisting of their six-digit IOI employee numbers, all of which began with the digit “6”, were called “sixers” or, more often in the gunter argot, “Sux0rz”.

Gunters detested IOI and the sixers, because it was no secret that if they found the Egg, IOI's intention was to close the architecture of the OASIS, begin to charge fees for access, plaster everything with advertising, destroy anonymity, snoop indiscriminately, and use their monopoly power to put their thumb on the scale of all forms of communication including political discourse. (Fortunately, that couldn't happen to us with today's enlightened, progressive Silicon Valley overlords.) But IOI's financial resources were such that whenever a rare and powerful magical artefact (many of which had been created by Halliday in the original OASIS, usually requiring the completion of a quest to obtain, but freely transferrable thereafter) came up for auction, IOI was usually able to outbid even the largest gunter clans and add it to their arsenal.

Wade Watts, a lone gunter whose avatar is named Parzival, became obsessed with the Hunt on the day of Halliday's death and, years later, devotes almost every minute of his life not spent sleeping or in school (like many, he attends school in the OASIS, and is now in the last year of high school) to the Hunt: reading and re-reading Anorak's Almanac, and reading, listening to, playing, and viewing everything mentioned therein, to the extent that he can recite the dialogue of the movies from memory. He makes copious notes in his “grail diary”, named after the one kept by Indiana Jones. His friends, none of whom he has ever met in person, are all gunters who congregate on-line in virtual reality chat rooms such as that run by his best friend, Aech.

Then, one day, bored to tears and daydreaming in Latin class, Parzival has a flash of insight. Putting together a message buried in the Almanac that he and many other gunters had discovered but failed to understand, with a bit of Latin and his encyclopedic knowledge of role playing games, he decodes the clue and, after a demanding test, finds himself in possession of the Copper Key. His name, alone, now appears at the top of the scoreboard, with 10,000 points. The path to the First Gate was now open.

Discovery of the Copper Key was a sensation: suddenly Parzival, a humble level 10 gunter, is a worldwide celebrity (although his real identity remains unknown, as he refuses all media offers which would reveal or compromise it). Knowing that the key can be found re-energises other gunters, not to speak of IOI, and Parzival's footprints in the OASIS are scrupulously examined for clues to his achievement. (Finding a key and opening a gate does not render it unavailable to others. Those who subsequently pass the tests will receive their own copies of the key, although there is a point bonus for finding it first.)

So begins an epic quest by Parzival and other gunters, contending with the evil minions of IOI, whose potential gain is so high and ethics so low that the risks may extend beyond the OASIS into the real world. For the reader, it is a nostalgic romp through every aspect of the popular culture of the 1980s: the formative era of personal computing and gaming. The level of detail is just staggering: this may be the geekiest nerdfest ever published. Heck, there's even a reference to an erstwhile Autodesk employee! The only goof I noted is a mention of the “screech of a 300-baud modem during the log-in sequence”. Three hundred baud modems did not have the characteristic squawk and screech sync-up of faster modems which employ trellis coding. While there are a multitude of references to details which will make people who were there, then, smile, readers who were not immersed in the 1980s and/or less familiar with its cultural minutiæ can still enjoy the challenges, puzzles solved, intrigue, action, and epic virtual reality battles which make up the chronicle of the Hunt. The conclusion is particularly satisfying: there may be a bigger world than even the OASIS.

A movie based upon the novel, directed by Steven Spielberg, is scheduled for release in March 2018.

 Permalink

Hirsi Ali, Ayaan. The Challenge of Dawa. Stanford, CA: Hoover Institution Press, 2017.
Ayaan Hirsi Ali was born in Somalia in 1969. In 1992 she was admitted to the Netherlands and granted political asylum on the basis of escaping an arranged marriage. She later obtained Dutch citizenship, and was elected to the Dutch parliament, where she served from 2001 through 2006. In 2004, she collaborated with Dutch filmmaker Theo van Gogh on the short film Submission, about the abuse of women in Islamic societies. After release of the film, van Gogh was assassinated, with a note containing a death threat for Hirsi Ali pinned to his corpse with a knife. Thereupon, she went into hiding with a permanent security detail to protect her against ongoing threats. In 2006, she moved to the U.S., taking a position at the American Enterprise Institute. She is currently a Fellow at the Hoover Institution.

In this short book (or long pamphlet: it is just 105 pages, with 70 pages of main text), Hirsi Ali argues that almost all Western commentators on the threat posed by Islam have fundamentally misdiagnosed the nature of the challenge it poses to Western civilisation and the heritage of the Enlightenment. Failing to understand the tactics of Islam's ambition to dominate the world, which date to Mohammed's revelations in Medina and his actions in that period of his life, they have adopted strategies which are ineffective and in some cases counterproductive in confronting the present danger.

The usual picture of Islam presented by politicians and analysts in the West (at least those who admit there is any problem at all) is that most Muslims are peaceful, productive people who have no problems becoming integrated in Western societies, but there is a small minority, variously called “radical”, “militant”, “Islamist”, “fundamentalist”, or other names, who are bent on propagating their religion by means of violence, either in guerrilla or conventional wars, or by terror attacks on civilian populations. This view has led to involvement in foreign wars, domestic surveillance, and often intrusive internal security measures to counter the threat, which is often given the name of “jihad”. A dispassionate analysis of these policies over the last decade and a half must conclude that they are not working: despite trillions of dollars spent and thousands of lives lost, turning air travel into a humiliating and intimidating circus, and invading the privacy of people worldwide, the Islamic world seems to be, if anything, more chaotic than it was in the year 2000, and the frequency and seriousness of so-called “lone wolf” terrorist attacks against soft targets does not seem to be abating. What if we don't really understand what we're up against? What if jihad isn't the problem, or is only a part of something much larger?

Dawa (or dawah, da'wah, daawa, daawah—there doesn't seem to be anything associated with this religion which isn't transliterated at least three different ways—the Arabic is “دعوة”) is an Arabic word which literally means “invitation”. In the context of Islam, it is usually translated as “proselytising” or spreading the religion by nonviolent means, as is done by missionaries of many other religions. But here, Hirsi Ali contends that dawa, which is grounded in the fundamental scripture of Islam: the Koran and Hadiths (sayings of Mohammed), is something very different when interpreted and implemented by what she calls “political Islam”. As opposed to a distinction between moderate and radical Islam, she argues that Islam is more accurately divided into “spiritual Islam” as revealed in the earlier Mecca suras of the Koran, and “political Islam”, embodied by those dating from Medina. Spiritual Islam defines a belief system, prayers, rituals, and duties of believers, but is largely confined to the bounds of other major religions. Political Islam, however, is a comprehensive system of politics, civil and criminal law, economics, the relationship with and treatment of nonbelievers, and military strategy, and imposes a duty to spread Islam into new territories.

Seen through the lens of political Islam, dawa and those engaged in it, often funded today by the deep coffers of petro-tyrannies, is nothing like the activities of, say, Roman Catholic or Mormon missionaries. Implemented through groups such as the Council on American-Islamic Relations (CAIR), centres on Islamic and Middle East studies on university campuses, mosques and Islamic centres in communities around the world, so-called “charities” and non-governmental organisations, all bankrolled by fundamentalist champions of political Islam, dawa in the West operates much like the apparatus of Communist subversion described almost sixty years ago by J. Edgar Hoover in Masters of Deceit. You have the same pattern of apparently nonviolent and innocuously-named front organisations, efforts to influence the influential (media figures, academics, politicians), infiltration of institutions along the lines of Antonio Gramsci's “long march”, exploitation of Western traditions such as freedom of speech and freedom of religion to achieve goals diametrically opposed to them, and redefinition of the vocabulary and intimidation of any who dare state self-evident facts (mustn't be called “islamophobic”!), all funded from abroad. Unlike communists in the heyday of the Comintern and afterward the Cold War, Islamic subversion is assisted by large scale migration of Muslims into Western countries, especially in Europe, where the organs of dawa encourage them to form their own separate communities, avoiding assimilation, and demanding the ability to implement their own sharia law and that others respect their customs. Dawa is directed at these immigrants as well, with the goal of increasing their commitment to Islam and recruiting them for its political agenda: the eventual replacement of Western institutions with sharia law and submission to a global Islamic caliphate. 
This may seem absurdly ambitious for communities which, in most countries, aren't much greater than 5% of the population, but they're patient: they've been at it for fourteen centuries, and they're out-breeding the native populations in almost every country where they've become established.

Hirsi Ali argues persuasively that the problem isn't jihad: jihad is a tactic which can be employed as part of dawa when persuasion, infiltration, and subversion prove insufficient, or as a final step to put the conquest over the top, but it's the commitment to global hegemony, baked right into the scriptures of Islam, which poses the most dire risk to the West, especially since so few decision makers seem to be aware of it or, if they are, dare not speak candidly of it lest they be called “islamophobes” or worse. This is something about which I don't need to be persuaded: I've been writing about it since 2015; see “Clash of Ideologies: Communism, Islam, and the West”. I sincerely hope that this work by an eloquent observer who has seen political Islam from the inside will open more eyes to the threat it poses to the West. A reasonable set of policy initiatives to confront the threat is presented at the end. The only factual error I noted is the claim on p. 57 that Joseph R. McCarthy was in charge of the House Committee on Un-American Activities—in fact, McCarthy, a Senator, presided over the Senate Permanent Subcommittee on Investigations.

This is a publication of the Hoover Institution. It has no ISBN and cannot be purchased through usual booksellers. Here is the page for the book, whence you can download the PDF file for free.

 Permalink

Egan, Greg. Dichronauts. New York: Night Shade Books, 2017. ISBN 978-1-59780-892-7.
One of the more fascinating sub-genres of science fiction is “world building”: creating the setting in which a story takes place by imagining an environment radically different from any in the human experience. This can run the gamut from life in the atmosphere of a gas giant planet (Saturn Rukh), on the surface of a neutron star (Dragon's Egg), or on an enormous alien-engineered wheel surrounding a star (Ringworld). When done well, the environment becomes an integral part of the tale, shaping the characters and driving the plot. Greg Egan is one of the most accomplished of world builders. His fiction includes numerous examples of alien environments, with the consequences worked out and woven into the story.

The present novel may be his most ambitious yet: a world in which the fundamental properties of spacetime are different from those in our universe. Unfortunately, for this reader, the execution was unequal to the ambition and the result disappointing. I'll explain this in more detail, but let's start with the basics.

We inhabit a spacetime which is well-approximated by Minkowski space. (In regions where gravity is strong, spacetime curvature must be taken into account, but this can be neglected in most circumstances, including those in this novel.) Minkowski space is a flat four-dimensional space where each point is identified by three space and one time coordinate. It is thus spoken of as a 3+1 dimensional space. The space and time dimensions are not interchangeable: when computing the spacetime separation of two events, the square of their distance or spacetime interval is given by the quantity −t²+x²+y²+z² (in units where the speed of light is 1). Minkowski space is said to have a metric signature of (−,+,+,+), from the signs of the four coordinates in the distance (metric) equation.

Why does our universe have a dimensionality of 3+1? Nobody knows—string theorists who argue for a landscape of universes in an infinite multiverse speculate that the very dimensionality of a universe may be set randomly when the baby universe is created in its own big bang bubble. Max Tegmark has argued that universes with other dimensionalities would not permit the existence of observers such as us, so we shouldn't be surprised to find ourselves in one of the universes which is compatible with our own existence, nor should we rule out a multitude of other universes with different dimensionalities, all of which may be devoid of observers.

But need they necessarily be barren? The premise of this novel is, “not necessarily so”, and Egan has created a universe with a metric signature of (−,−,+,+), a 2+2 dimensional spacetime with two spacelike dimensions and two timelike dimensions. Note that “timelike” refers to the sign of the dimension in the distance equation, and the presence of two timelike dimensions is not equivalent to two time dimensions. There is still a single dimension of time, t, in which events occur in a linear order just as in our universe. The second timelike dimension, which we'll call u, behaves like a spatial dimension in that objects can move within it as they can along the other x and y spacelike dimensions, but its contribution in the distance equation is negative: −t²−u²+x²+y². This results in a seriously weird, if not bizarre world.
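
A few lines of Python make the sign conventions concrete (this is just the metric arithmetic from the two distance equations above, in units where c = 1; the function names are mine, not Egan's):

```python
# Squared intervals under the two metric signatures, with c = 1.

def interval_minkowski(t, x, y, z):
    """Squared interval in our (−,+,+,+) universe."""
    return -t**2 + x**2 + y**2 + z**2

def interval_dichronauts(t, u, x, y):
    """Squared interval in Egan's (−,−,+,+) universe."""
    return -t**2 - u**2 + x**2 + y**2

# A unit step along z in Minkowski space is spacelike (positive)...
assert interval_minkowski(0, 0, 0, 1) == 1
# ...but a unit step along u in the 2+2 universe contributes with the
# same sign as time: it is "timelike" even though no time has passed.
assert interval_dichronauts(0, 1, 0, 0) == -1
# The 45° surfaces where the interval vanishes bound the cone within
# which light propagates and rotation is possible.
assert interval_dichronauts(1, 0, 1, 0) == 0
```

The negative sign on u is the source of all the strangeness which follows: directions within 45° of u behave, geometrically, like the past and future do for us.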

From this point on, just about everything I'm going to say can be considered a spoiler if your intention is to read the book from front to back and not consult the extensive background information on the author's Web site. Conversely, I shall give away nothing regarding the plot or ending which is not disclosed in the background information or the technical afterword of the novel. I do not consider this material as spoilers; in fact, I believe that many readers who do not first understand the universe in which the story is set are likely to abandon the book as simply incomprehensible. Some of the masters of world building science fiction introduce the reader to the world as an ongoing puzzle as the story unfolds but, for whatever reason, Egan did not choose to do that here, or else he did so sufficiently poorly that this reader didn't even notice the attempt. I think the publisher made a serious mistake in not alerting the reader to the existence of the technical afterword, the reading of which I consider a barely sufficient prerequisite for understanding the setting in which the novel takes place.

In the Dichronauts universe, there is a “world” around which a smaller “star” orbits (or maybe the other way around; it's just a coordinate transformation). The geometry of the spacetime dominates everything. While in our universe we're free to move in any of the three spatial dimensions, in this spacetime motion in the x and y dimensions is as for us, but if you're facing in the positive x direction—let's call it east—you cannot rotate outside the wedge from northeast to southeast, and as you rotate the distance equation causes a stretching to occur, like the distortions in relativistic motion in special relativity. It is no more possible to turn all the way to the northeast than it is to attain the speed of light in our universe. If you were born east-facing, the only way you can see to the west is to bend over and look between your legs. The beings who inhabit this world seem to be born randomly east- or west-facing.

Light only propagates within the cone defined by the spacelike dimensions. Any light source has a “dark cone” defined by a 45° angle around the timelike u dimension. In this region, vision does not work, so beings are blind to their sides. The creatures who inhabit the world are symbiotic pairs: bipeds who call themselves “walkers”, and slug-like creatures, “siders”, who live inside the walkers' skulls and receive their nutrients from the walker's bloodstream. Siders are equipped with “pingers”, which use echolocation like terrestrial bats to sense within the dark cone. While light cannot propagate there, physical objects can move in that direction, including the density waves which carry sound. Walkers and siders are linked at the brain level and can directly perceive each other's views of the world and communicate without speaking aloud. Both symbionts are independently conscious, bonded at a young age, and can, like married couples, have acrimonious disputes. While walkers cannot turn outside the 90° cone, they can move in the timelike north-south direction by “sidling”, relying upon their siders to detect obstacles within their cone of blindness.

Due to details of the structure of their world, the walker/sider society, which seems to be at a pre-industrial level (perhaps due to the fact that many machines would not work in the weird geometry they inhabit), is forced to permanently migrate to stay within the habitable zone between latitudes which are seared by the rays of the star and those too cold for agriculture. For many generations, the town of Baharabad has migrated along a river, but now the river appears to be drying up, creating a crisis. Seth (walker) and Theo (sider), are surveyors, charged with charting the course of their community's migration. Now they are faced with the challenge of finding a new river to follow, one which has not already been claimed by another community. On an expedition to the limits of the habitable zone, they encounter what seems to be the edge of the world. Is it truly the edge, and if not what lies beyond? They join a small group of explorers who probe regions of their world never before seen, and discover clues to the origin of their species.

This didn't work for me. If you read all of the background information first (which, if you're going to dig into this novel, I strongly encourage you to do), you'll appreciate the effort the author went to in order to create a mathematically consistent universe with two timelike dimensions, and to work out the implications of this for a world within it and the beings who live there. But there is a tremendous amount of arm waving behind the curtain which, if you peek, subverts the plausibility of everything. For example, the walker/sider creatures are described as having what seems to be a relatively normal metabolism: they eat fruit, grow crops, breathe, drink, urinate and defecate, and otherwise behave as biological organisms. But biology as we know it, and all of these biological functions, requires the complex stereochemistry of the organic molecules upon which organisms are built. If the motion of molecules were constrained to a cone, and their shape stretched with rotation, the operation of enzymes and other biochemistry wouldn't work. And yet that doesn't seem to be a problem for these beings.

Finally, the story simply stops in the middle, with the great adventure and resolution of the central crisis unresolved. There will probably be a sequel. I shall not read it.

 Permalink

Casey, Doug and John Hunt. Drug Lord. Charlottesville, VA: HighGround Books, 2017. ISBN 978-1-947449-07-7.
This is the second novel in the authors' “High Ground” series, chronicling the exploits of Charles Knight, an entrepreneur and adventurer determined to live his life according to his own moral code, constrained as little as possible by the rules and regulations of coercive and corrupt governments. The first novel, Speculator (October 2016), follows Charles's adventures in Africa as an investor in a junior gold exploration company which just might have made the discovery of the century, and in the financial markets as he seeks to profit from what he's learned digging into the details. Charles comes onto the radar of ambitious government agents seeking to advance their careers by collecting his scalp.

Charles ends up escaping with his freedom and ethics intact, but with much of his fortune forfeit. He decides he's had enough of “the land of the free” and sets out on his sailboat to explore the world and sample the pleasures and opportunities it holds for one who thinks for himself. Having survived several attempts on his life and prevented a war in Africa in the previous novel, seven years later he returns to a really dangerous place, Washington DC, populated by the Morlocks of Mordor.

Charles has an idea for a new business. The crony capitalism of the U.S. pharmaceutical-regulatory complex has inflated the price of widely-used prescription drugs to many times that paid outside the U.S., where these drugs, whose patents have expired under legal regimes less easily manipulated than that of the U.S., are manufactured in a chemically-identical form by thoroughly professional generic drug producers. Charles understands, as fully as any engineer, that wherever there is nonlinearity the possibility for gain exists, and when that nonlinearity is the result of the action of coercive government, the potential profits from circumventing its grasp on the throat of the free market can be very large, indeed.

When Charles's boat docked in the U.S., he had an undeclared cargo: a large number of those little blue pills much in demand by men of a certain age, purchased for pennies from a factory in India through a cut-out in Africa he met on his previous adventure. He has the product, and a supplier able to obtain much more. Now, all he needs is distribution. He must venture into the dark underside of DC to make the connections that can get the product to the customers, and persuade potential partners that they can make much more, and far more safely, by distributing his products (which don't fall under the purview of the Drug Enforcement Administration, and to which local cops not only don't pay much attention, but may be potential customers).

Meanwhile, Charles's uncle Maurice, who has been managing what was left of his fortune during his absence, has made an investment in a start-up pharmaceutical company, Visioryme, whose first product, VR-210, or Sybillene, is threading its way through the FDA regulatory gauntlet toward approval for use as an antidepressant. Sybillene works through a novel neurochemical pathway, and promises to be an effective treatment for clinical depression while avoiding the many deleterious side effects of other drugs. In fact, Sybillene doesn't appear to have any side effects at all—or hardly any—there's that one curious thing that happened in animal testing, but not wishing to commit corporate seppuku, Visioryme hasn't mentioned it to the regulators or even their major investor, Charles.

Charles pursues his two pharmaceutical ventures in parallel: one in the DC ghetto and Africa; the other in the tidy suburban office park where Visioryme is headquartered. The first business begins to prosper, and Charles must turn his ingenuity to solving the problems attendant to any burgeoning enterprise: supply, transportation, relations with competitors (who, in this sector of the economy, not only are often armed but inclined to shoot first), expanding the product offerings, growing the distribution channels, and dealing with all of the money that's coming in, entirely in cash, without coming onto the radar of any of the organs of the slavers and their pervasive snooper-state.

Meanwhile, Sybillene finally obtains FDA approval, and Visioryme begins to take off and ramp up production. Charles's connections in Africa help the company obtain the supplies of bamboo required in production of the drug. It seems like he now has two successful ventures, on the dark and light sides, respectively, of the pharmaceutical business (which is dark and which is light depending on your view of the FDA).

Then, curious reports start to come in about doctors prescribing Sybillene off-label in large doses to their well-heeled patients. Off-label prescription is completely legal and not uncommon, but one wonders what's going on. Then there's the talk Charles is picking up from his other venture of demand for a new drug on the street: Sybillene, which goes under names such as Fey, Vatic, Augur, Covfefe, and most commonly, Naked Emperor. Charles's lead distributor reports, “It helps people see lies for what they are, and liars too. I dunno. I never tried it. Lots of people are asking though. Society types. Lawyers, businessmen, doctors, even cops.” It appears that Sybillene, or Naked Emperor, taken in a high dose, is a powerful nootropic which doesn't so much increase intelligence as, in contrast to most psychoactive drugs, allow the user to think more clearly, and see through the deception that pollutes the intellectual landscape of a modern, “developed”, society.

In that fœtid city by the Potomac, the threat posed by such clear thinking dwarfs that of other “controlled substances” which merely turn their users into zombies. Those atop an empire built on deceit, deficits, and debt cannot run the risk of a growing fraction of the population beginning to see through the funny money, Ponzi financing, Potemkin military, manipulation of public opinion, erosion of the natural rights of citizens, and the sham which is replacing the last vestiges of consensual government. Perforce, Sybillene must become Public Enemy Number One, and if a bit of lying and even murder is required, well, that's the price of preserving the government's ability to lie and murder.

Suddenly, Charles is involved in two illegal pharmaceutical ventures. As any wise entrepreneur would immediately ask himself, “might there be synergies?”

Thus begins a compelling, instructive, and inspiring tale of entrepreneurship and morality confronted with dark forces constrained by no limits whatsoever. We encounter friends and foes from the first novel, as once again Charles finds himself on the point, defending those in the enterprises he has created. As I said in my review of Speculator, this book reminds me of Ayn Rand's The Fountainhead, but it is even more effective because Charles Knight is not a super-hero but rather a person with a strong sense of right and wrong who is making up his life as he goes along and learning from the experiences he has: good and bad, success and failure. Charles Knight, even without Naked Emperor, has that gift of seeing things precisely as they are, unobscured by the fog, cant, spin, and lies which are the principal products of the city in which the story is set.

These novels are not just page-turning thrillers, they're simultaneously an introductory course in becoming an international man (or woman), transcending the lies of the increasingly obsolescent nation-state, and finding the liberty that comes from seizing control of one's own destiny. They may be the most powerful fictional recruiting tool for the libertarian and anarcho-capitalist world view since the works of Ayn Rand and L. Neil Smith. Speculator was my fiction book of the year for 2016, and this sequel is in the running for 2017.

 Permalink

September 2017

Scoles, Sarah. Making Contact. New York: Pegasus Books, 2017. ISBN 978-1-68177-441-1.
There are few questions in our scientific inquiry into the universe and our place within it more profound than “are we alone?” As we have learned more about our world and the larger universe in which it exists, this question has become ever more fascinating. We now know that our planet, once thought the centre of the universe, is but one of what may be hundreds of billions of planets in our own galaxy, which is one of hundreds of billions of galaxies in the observable universe. Not long ago, we knew only of the planets in our own solar system, and some astronomers believed planetary systems were rare, perhaps formed by freak encounters between two stars following their orbits around the galaxy. But now, thanks to exoplanet hunters and, especially, the Kepler spacecraft, we know that it's “planets, planets, everywhere”—most stars have planets, and many stars have planets where conditions may be suitable for the origin of life.

If this be the case, then when we gaze upward at the myriad stars in the heavens, might there be other eyes (or whatever sense organs they use for the optical spectrum) looking back from planets of those stars toward our Sun, wondering if they are alone? Many are the children, and adults, who have asked themselves that question when standing under a pristine sky. For the ten year old Jill Tarter, it set her on a path toward a career which has been almost coterminous with humanity's efforts to discover communications from extraterrestrial civilisations—an effort which continues today, benefitting from advances in technology unimagined when she undertook the quest.

World War II had seen tremendous advancements in radio communications, in particular the short wavelengths (“microwaves”) used by radar to detect enemy aircraft and submarines. After the war, this technology provided the foundation for the new field of radio astronomy, which expanded astronomers' window on the universe from the traditional optical spectrum into wavelengths that revealed phenomena never before observed nor, indeed, imagined, and hinted at a universe which was much larger, more complicated, and more violent than previously envisioned.

In 1959, Philip Morrison and Giuseppe Cocconi published a paper in Nature in which they calculated that, using only technologies and instruments already existing on the Earth, intelligent extraterrestrials could send radio messages across the distances to the nearby stars, and that these messages could be received, detected, and decoded by terrestrial observers. This was the origin of SETI—the Search for Extraterrestrial Intelligence. In 1960, Frank Drake used a radio telescope to search for signals from two nearby star systems; he heard nothing.

As they say, absence of evidence is not evidence of absence, and this is acutely the case in SETI. First of all, you must decide what kind of signal aliens might send. If it's something which can't be distinguished from natural sources, there's little hope you'll be able to tease it out of the cacophony which is the radio spectrum. So we must assume they're sending something that doesn't appear natural. But what is the variety of natural sources? There are a dozen or so Ph.D. projects just in answering that question, including some surprising discoveries of natural sources nobody imagined, such as pulsars, which were sufficiently strange that when first observed they were called “LGM” sources for “Little Green Men”. On what frequency are they sending (in other words, where do we have to turn our dial to receive them, for those geezers who remember radios with dials)? The most efficient signals will be those with a very narrow frequency range, and there are billions of possible frequencies the aliens might choose. We could be pointed in the right place, at the right time, and simply be tuned to the wrong station.

Then there's that question of “the right time”. It would be absurdly costly to broadcast a beacon signal in all directions at all times: that would require energy comparable to that emitted by a star (which, if you think about it, does precisely that). So it's likely that any civilisation with energy resources comparable to our own would transmit in a narrow beam to specific targets, switching among them over time. If we didn't happen to be listening when they were sending, we'd never know they were calling.

If you put all of these constraints together, you come up with what's called an “observational phase space”—a multidimensional space of frequency, intensity, duration of transmission, angular extent of transmission, bandwidth, and other parameters which determine whether you'll detect the signal. And that assumes you're listening at all, which depends upon people coming up with the money to fund the effort and pursue it over the years.
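A back-of-envelope count suggests why this phase space is so daunting. Every number below is an assumption chosen purely for illustration, not a figure from any of the search programmes described here:

```python
# Illustrative count of cells in a SETI "observational phase space".
# All quantities are assumed round numbers for illustration only.

sky_pointings = 10**6    # distinct telescope pointings to tile the sky
channels = 10**9         # 1 Hz channels across a gigahertz of bandwidth
polarizations = 2        # two orthogonal polarizations to check
time_slots = 10**5       # distinct epochs (a beacon may be intermittent)

cells = sky_pointings * channels * polarizations * time_slots
print(f"{cells:.1e} phase-space cells to examine")
```

Even with these modest guesses, the product is on the order of 10²⁰ cells, which is why decades of null results still sample only a sliver of the space.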

It's beyond daunting. The space to be searched is so large, and our ability to search it so limited, that negative results, even after decades of observation, are equivalent to walking down to the seashore, sampling a glass of ocean water, and concluding that based on the absence of fish, the ocean contained no higher life forms. But suppose you find a fish? That would change everything.

Jill Tarter began her career in the mainstream of astronomy. Her Ph.D. research at the University of California, Berkeley was on brown dwarfs (bodies more massive than gas giant planets but too small to sustain the nuclear fusion reactions which cause stars to shine—a brown dwarf emits weakly in the infrared as it slowly radiates away the heat from the gravitational contraction which formed it). Her work was supported by a federal grant, which made her uncomfortable—what relevance did brown dwarfs have to those compelled to pay taxes to fund investigating them? During her Ph.D. work, she was asked by a professor in the department to help with an aged computer she'd used in an earlier project. To acquaint her with the project, the professor asked her to read the Project Cyclops report. It was a conversion experience.

Project Cyclops was a NASA study conducted in 1971 on how to perform a definitive search for radio communications from intelligent extraterrestrials. Its report [18.2 Mb PDF], issued in 1972, remains the “bible” for radio SETI, although advances in technology, particularly in computing, have rendered some of its recommendations obsolete. The product of a NASA which was still conducting missions to the Moon, it was grandiose in scale, envisioning a large array of radio telescope dishes able to search for signals from stars up to 1000 light years in distance (note that this is still a tiny fraction of the stars in the galaxy, which is around 150,000 light years in diameter). The estimated budget for the project was between 6 and 10 billion dollars (multiply those numbers by around six to get present-day funny money) spent over a period of ten to fifteen years. The report cautioned that there was no guarantee of success during that period, and that the project should be viewed as a long-term endeavour with ongoing funding to operate the system and continue the search.

The Cyclops report arrived at a time when NASA was downsizing and scaling back its ambitions: the final three planned lunar landing missions had been cancelled in 1970, and production of additional Saturn V launch vehicles had been terminated the previous year. The budget climate wasn't hospitable to Apollo-scale projects of any description, especially those which wouldn't support lots of civil service and contractor jobs in the districts and states of NASA's patrons in congress. Unsurprisingly, Project Cyclops simply landed on the pile of ambitious NASA studies that went nowhere. But to some who read it, it was an inspiration. Tarter thought, “This is the first time in history when we don't just have to believe or not believe. Instead of just asking the priests and philosophers, we can try to find an answer. This is an old and important question, and I have the opportunity to change how we try to answer it.” While some might consider searching the sky for “little green men” frivolous and/or absurd, to Tarter this, not the arcana of brown dwarfs, was something worthy of support, and of her time and intellectual effort, “something that could impact people's lives profoundly in a short period of time.”

The project to which Tarter had been asked to contribute, Project SERENDIP (a painful acronym of Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) was extremely modest compared to Cyclops. It had no dedicated radio telescopes at all, nor even dedicated time on existing observatories. Instead, it would “piggyback” on observations made for other purposes, listening to the feed from the telescope with an instrument designed to detect the kind of narrow-band beacons envisioned by Cyclops. To cope with the problem of not knowing the frequency on which to listen, the receiver would monitor 100 channels simultaneously. Tarter's job was programming the PDP 8/S computer to monitor the receiver's output and search for candidate signals. (Project SERENDIP is still in operation today, employing hardware able to simultaneously monitor 128 million channels.)
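The core idea of such a multichannel monitor can be sketched in a few lines. This is my own illustration of narrowband candidate detection, not SERENDIP's actual pipeline: a Fourier transform splits one block of the sampled feed into many narrow channels, and any channel whose power stands far above the noise floor is flagged as a candidate signal.

```python
# Toy narrowband-candidate detector (illustrative, not SERENDIP's code).
import numpy as np

rng = np.random.default_rng(42)
n_samples = 4096                 # one block of sampled voltage data
t = np.arange(n_samples)

# Simulated feed: Gaussian receiver noise plus a faint narrowband tone.
beacon_bin = 300                 # FFT channel where the tone lands (assumed)
feed = rng.normal(size=n_samples) \
     + 0.2 * np.cos(2 * np.pi * beacon_bin * t / n_samples)

power = np.abs(np.fft.rfft(feed)) ** 2   # power in each narrow channel
noise_floor = np.median(power)           # robust noise-level estimate
threshold = 20 * noise_floor             # flag only strong outliers

candidates = np.flatnonzero(power > threshold)
print("candidate channels:", candidates)
```

The tone is invisible in any single time sample, yet after channelisation its power is concentrated in one bin and stands out by orders of magnitude; modern systems do essentially this across hundreds of millions of channels in real time.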

From this humble start, Tarter's career direction was set. All of her subsequent work was in SETI. It would be a roller-coaster ride all the way. In 1975, NASA started a modest study to research (but not build) technologies for microwave SETI searches. In 1978, the program came into the sights of senator William Proxmire, who bestowed upon it his “Golden Fleece” award. The program initially survived his ridicule, but in 1982 congress zeroed the project out of the budget. Carl Sagan personally intervened with Proxmire, and in 1983 the funding was reinstated, continuing work on a more capable spectral analyser which could be used with existing radio telescopes.

Buffeted by the start-stop support from NASA and encouraged by Hewlett-Packard executive Bernard Oliver, a supporter of SETI from its inception, Tarter decided that SETI needed its own institutional home, one dedicated to the mission and able to seek its own funding independent of the whims of congressmen and bureaucrats. In 1984, the SETI Institute was incorporated in California. Initially funded by Oliver, over the years major contributions have been made by technology moguls including William Hewlett, David Packard, Paul Allen, Gordon Moore, and Nathan Myhrvold. The SETI Institute receives no government funding whatsoever, although some researchers in its employ, mostly those working on astrobiology, exoplanets, and other topics not directly related to SETI, are supported by research grants from NASA and the National Science Foundation. Fund raising was a skill which did not come naturally to Tarter, but it was mission critical, and so she mastered the art. Today, the SETI Institute is considered one of the most savvy privately-funded research institutions, both in seeking large donations and in grass-roots fundraising.

By the early 1990s, it appeared the pendulum had swung once again, and NASA was back in the SETI game. In 1992, a program was funded to conduct a two-pronged effort: a targeted search of 800 nearby stars, and an all-sky survey looking for stronger beacons. Both would employ what were then state-of-the-art spectrum analysers able to monitor 15 million channels simultaneously. After just a year of observations, congress once again pulled the plug. The SETI Institute would have to go it alone.

Tarter launched Project Phoenix, to continue the NASA targeted search program using the orphaned NASA spectrometer hardware and whatever telescope time could be purchased from donations to the SETI Institute. In 1995, observations resumed at the Parkes radio telescope in Australia, and subsequently a telescope at the National Radio Astronomy Observatory in Green Bank, West Virginia, and the 300 metre dish at Arecibo Observatory in Puerto Rico. The project continued through 2004.

What should SETI look like in the 21st century? Much had changed since the early days in the 1960s and 1970s. Digital electronics and computers had increased in power a billionfold, not only making it possible to scan a billion channels simultaneously and automatically search for candidate signals, but to combine the signals from a large number of independent, inexpensive antennas (essentially, glorified satellite television dishes), synthesising the aperture of a huge, budget-busting radio telescope. With progress in electronics expected to continue in the coming decades, any capital investment in antenna hardware would yield an exponentially growing science harvest as the ability to analyse its output grew over time. But to take advantage of this technological revolution, SETI could no longer rely on piggyback observations, purchased telescope time, or allocations at the whim of research institutions: “SETI needs its own telescope”—one optimised for the mission and designed to benefit from advances in electronics over its lifetime.

In a series of meetings from 1998 to 2000, the specifications of such an instrument were drawn up: 350 small antennas, each 6 metres in diameter, independently steerable (and thus able to be used all together, or in segments to simultaneously observe different targets), with electronics to combine the signals, providing an effective aperture of 900 metres with all dishes operating. With initial funding from Microsoft co-founder Paul Allen (and with his name on the project, the Allen Telescope Array), the project began construction in 2004. In 2007, observations began with the first 42 dishes. By that time, Paul Allen had lost interest in the project, and construction of additional dishes was placed on hold until a new benefactor could be found. In 2011, a funding crisis caused the facility to be placed in hibernation, and the observatory was sold to SRI International for US$ 1. Following a crowdfunding effort led by the SETI Institute, the observatory was re-opened later that year, and continues operations to this date. No additional dishes have been installed: current work concentrates on upgrading the electronics of the existing antennas to increase sensitivity.
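A quick calculation, using the dish count and diameter given above plus an assumed observing wavelength, shows the two distinct things an array buys: sensitivity scales with the total collecting area of all the dishes, while angular resolution scales with the longest baseline between them.

```python
# Rough array figures of merit.  Dish count, dish diameter, and maximum
# baseline are taken from the text; the 21 cm observing wavelength is an
# assumed example.
import math

n_dishes = 350
dish_d = 6.0            # metres per dish
max_baseline = 900.0    # metres between the most distant dishes

# Sensitivity follows total collecting area...
area = n_dishes * math.pi * (dish_d / 2) ** 2
equiv_single_dish = 2 * math.sqrt(area / math.pi)

# ...while angular resolution follows the longest baseline.
wavelength = 0.21       # metres (hydrogen line)
resolution_arcsec = math.degrees(wavelength / max_baseline) * 3600

print(f"collecting area {area:.0f} m^2, like one {equiv_single_dish:.0f} m dish")
print(f"resolution at 21 cm: about {resolution_arcsec:.0f} arcseconds")
```

Under these assumptions the full array would collect like a single dish roughly 110 metres across, while resolving detail as finely as a 900 metre aperture; the small-dish approach gets both at a fraction of the cost of one giant antenna.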

Jill Tarter retired as co-director of the SETI Institute in 2012, but remains active in its scientific, fundraising, and outreach programs. There has never been more work in SETI underway than at the present. In addition to observations with the Allen Telescope Array, the Breakthrough Listen project, funded at US$ 100 million over ten years by Russian billionaire Yuri Milner, is using thousands of hours of time on large radio telescopes, with a goal of observing a million nearby stars and the centres of a hundred galaxies. All data are available to the public for analysis. A new frontier, unimagined in the early days of SETI, is optical SETI. A pulsed laser, focused through a telescope of modest aperture, is able to easily outshine the Sun in a detector sensitive to its wavelength and pulse duration. In the optical spectrum, there's no need for fancy electronics to monitor a wide variety of wavelengths: all you need is a prism or diffraction grating. The SETI Institute has just successfully completed a US$ 100,000 Indiegogo campaign to crowdfund the first phase of the Laser SETI project, which has as its ultimate goal an all-sky, all-the-time search for short pulses of light which may be signals from extraterrestrials or new natural phenomena to which no existing astronomical instrument is sensitive.

People often ask Jill Tarter what it's like to spend your entire career looking for something and not finding it. But she, and everybody involved in SETI, always knew the search would not be easy, nor likely to succeed in the short term. The reward for engaging in it is being involved in founding a new field of scientific inquiry and inventing and building the tools which allow exploring this new domain. The search is vast, and to date we have barely scratched the surface. About all we can rule out, after more than half a century, is a Star Trek-like universe where almost every star system is populated by aliens chattering away on the radio. Today, the SETI enterprise, entirely privately funded and minuscule by the standards of “big science”, is strongly coupled to the exponential growth in computing power and hence, roughly doubles its ability to search around every two years.

The question “are we alone?” is one which has profound implications either way it is answered. If we discover one or more advanced technological civilisations (and they will almost certainly be more advanced than we—we've only had radio for a little more than a century, and there are stars and planets in the galaxy billions of years older than ours), it will mean it's possible to grow out of the daunting problems we face in the adolescence of our species and look forward to an exciting and potentially unbounded future. If, after exhaustive searches (which will take at least another fifty years of continued progress in expanding the search space), it looks like we're alone, then intelligent life is so rare that we may be its only exemplar in the galaxy and, perhaps, the universe. Then, it's up to us. Our destiny, and duty, is to ensure that this spark, lit within us, will never be extinguished.

 Permalink

October 2017

Morton, Oliver. The Planet Remade. Princeton: Princeton University Press, 2015. ISBN 978-0-691-17590-4.
We live in a profoundly unnatural world. Since the start of the industrial revolution, and rapidly accelerating throughout the twentieth century, the actions of humans have begun to influence the flow of energy and materials in the Earth's biosphere on a global scale. Earth's current human population and standard of living are made possible entirely by industrial production of nitrogen-based fertilisers and crop plants bred to efficiently exploit them. Industrial production of fixed (chemically reactive) nitrogen from the atmosphere now substantially exceeds all of that produced by the natural soil bacteria on the planet which, prior to 1950, accounted for almost all of the nitrogen required to grow plants. Fixing nitrogen by the Haber-Bosch process is energy-intensive, consuming around 1.5 percent of all the world's energy usage and, as a feedstock, 3–5% of natural gas produced worldwide. When we eat these crops, or animals fed from them, we are, in a sense, eating fossil fuels. On the order of four out of five nitrogen atoms in your body were fixed in a factory by the Haber-Bosch process. We are the children, not of nature, but of industry.

The industrial production of fertiliser, along with crops tailored to use them, is entirely responsible for the rapid growth of the Earth's population, which has increased from around 2.5 billion in 1950, when industrial fertiliser and “green revolution” crops came into wide use, to more than 7 billion today. This was accompanied not by the collapse into global penury predicted by Malthusian doom-sayers, but rather a broad-based rise in the standard of living, with extreme poverty and malnutrition falling to all-time historical lows. In the lifetimes of many people, including this scribbler, our species has taken over the flow of nitrogen through the Earth's biosphere, replacing a process mediated by bacteria for billions of years with one performed in factories. The flow of nitrogen from atmosphere to soil, to plants and the creatures who eat them, back to soil, sea, and ultimately the atmosphere is now largely in the hands of humans, and their very lives have become dependent upon it.

This is an example of “geoengineering”—taking control of what was a natural process and replacing it with an engineered one to produce a desired outcome: in this case, the ability to feed a much larger population with an unprecedented standard of living. In the case of nitrogen fixation, there wasn't a grand plan drawn up to do all of this: each step made economic sense to the players involved. (In fact, one of the motivations for developing the Haber-Bosch process was not to produce fertiliser, but rather to produce feedstocks for the manufacture of military and industrial explosives, which had become dependent on nitrates obtained from guano imported to Europe from South America.) But the outcome was the same: ours is an engineered world. Those who are repelled by such an intervention in natural processes or who are concerned by possible detrimental consequences of it, foreseen or unanticipated, must come to terms with the reality that abandoning this world-changing technology now would result in the collapse of the human population, with at least half of the people alive today starving to death, and many of the survivors reduced to subsistence in abject poverty. Sadly, one encounters fanatic “greens” who think this would be just fine (and, doubtless, imagining they'd be among the survivors).

Just mentioning geoengineering—human intervention and management of previously natural processes on a global scale—may summon in the minds of many Strangelove-like technological megalomania or the hubris of Bond villains, so it's important to bear in mind that we're already doing it, and have become utterly dependent upon it. When we consider the challenges we face in accommodating a population which is expected to grow to ten billion by mid-century (and, absent catastrophe, this is almost a given: the parents of the ten billion are mostly alive today), who will demand and deserve a standard of living comparable to what they see in industrial economies, and while carefully weighing the risks and uncertainties involved, it may be unwise to rule out other geoengineering interventions to mitigate undesirable consequences of supporting the human population.

In parallel with the human takeover of the nitrogen cycle, another geoengineering project has been underway, also rapidly accelerating in the 20th century, driven both by population growth and industrialisation of previously agrarian societies. For hundreds of millions of years, the Earth also cycled carbon through the atmosphere, oceans, biosphere, and lithosphere. Carbon dioxide (CO₂) was metabolised from the atmosphere by photosynthetic plants, extracting carbon for their organic molecules and producing oxygen released to the atmosphere, then passed along as plants were eaten, returned to the soil, or dissolved in the oceans, where creatures incorporated carbonates into their shells, which eventually became limestone rock and, over geological time, was subducted as the continents drifted, reprocessed far below the surface, and expelled back into the atmosphere by volcanoes. (This is a gross oversimplification of the carbon cycle, but we don't need to go further into it for what follows. The point is that it's something which occurs on a time scale of tens to hundreds of millions of years and on which humans, prior to the twentieth century, had little influence.)

The natural carbon cycle is not leakproof. Only part of the carbon sequestered by marine organisms and immured in limestone is recycled by volcanoes; it is estimated that this loss of carbon will bring the era of multicellular life on Earth to an end around a billion years from now. The carbon in some plants is not returned to the biosphere when they die. Sometimes, the dead vegetation accumulates in dense beds where it is protected against oxidation and eventually forms deposits of peat, coal, petroleum, and natural gas. Other than natural seeps and releases of the latter substances, their carbon is also largely removed from the biosphere. Or at least it was until those talking apes came along….

The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
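The oil-and-supertanker equivalences in the parenthetical above are easy to check. The energy content per barrel of oil and the capacity of a supertanker used below are assumed round figures, not numbers from the text:

```python
# Checking the exajoule conversions.  The per-barrel energy (~6.1 GJ)
# and supertanker capacity (~2.7 million barrels) are assumed round
# figures for illustration.
EJ = 1e18              # joules in one exajoule
barrel = 6.1e9         # joules of energy per barrel of oil (assumed)
supertanker = 2.7e6    # barrels carried by one supertanker (assumed)

barrels = EJ / barrel
tankers = barrels / supertanker
print(f"1 EJ = {barrels / 1e6:.0f} million barrels = {tankers:.0f} supertankers")
```

With those assumptions, one exajoule indeed works out to about 164 million barrels, or roughly sixty supertanker loads.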

Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than that of the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂?

Well, that's interesting. CO₂ is a trace gas in the atmosphere (less than a twentieth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere as plants take it up as they grow and then release it after they die. The other two processes are much slower. An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth), and increases the root to shoot ratio of the plants, tending to move carbon from the atmosphere into the soil, where it will be recycled more slowly into the biosphere.

But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels by human activity has, over a time scale negligible on the geological scale, released a quantity of carbon into the atmosphere far beyond the ability of natural processes to recycle. For the last half million years, the CO₂ concentration in the atmosphere has varied between 280 parts per million in interglacials (warm periods) and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentrations are known with reasonable precision in such deep time due to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; they both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet).

But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age. Atmospheric CO₂ is now around 410 parts per million, a value not seen in at least the last half million years, and it's rising at a rate of 2 parts per million every year, and accelerating as global use of fossil fuels increases. This situation is not only unique in the human experience; a change so large and so rapid may be without precedent since the emergence of complex multicellular life in the Cambrian explosion. What does it all mean? What are the consequences? And what, if anything, should we do about it?

(Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.)

We have been increasing the fraction of CO₂ in the atmosphere to levels unseen in the history of complex terrestrial life. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched to up to 1000 parts per million experience no deleterious consequences; this is more than twice the current fraction in the Earth's atmosphere and, at the current rate of growth, won't be reached for three centuries. The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy absorbed by the Earth from the Sun and that it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity, are negligible at this scale.)
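The “three centuries” figure is simple arithmetic, sketched below under the generous assumption that the growth rate stays fixed at 2 ppm per year rather than continuing to accelerate:

```python
current_ppm = 410   # today's atmospheric CO₂ concentration
target_ppm = 1000   # typical greenhouse-enrichment level
growth_rate = 2     # ppm per year, assumed constant

years = (target_ppm - current_ppm) / growth_rate
print(years)        # 295.0, i.e. roughly three centuries
```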

Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (Due to the Stefan-Boltzmann law, the energy emitted from a black body rises as the fourth power of its temperature, so it doesn't take a large increase in temperature [measured in kelvins] to radiate away the extra energy.)
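Here is a rough sketch of this balance using the Stefan-Boltzmann law. The solar constant, the 30% albedo, and the black-body idealisation are the usual textbook simplifications, not figures from the text.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m²·K⁴)
S = 1361           # solar constant, W/m²
ALBEDO = 0.30      # fraction of sunlight reflected back to space

# Absorbed sunlight averaged over the whole sphere
# (cross-sectional disc area is 1/4 of the sphere's surface area)
absorbed = S * (1 - ALBEDO) / 4

# Effective (no-greenhouse) temperature at which emission balances absorption
T_eff = (absorbed / SIGMA) ** 0.25
print(f"Effective temperature ≈ {T_eff:.0f} K")   # ≈ 255 K, about −18 °C

# The fourth-power law at work: extra emission per kelvin of warming is 4σT³
print(f"Extra emission per kelvin ≈ {4 * SIGMA * T_eff**3:.1f} W/m²")
```

The ~255 K result is about 33 K colder than the Earth's actual mean surface temperature of roughly 288 K; that difference is the greenhouse effect the paragraph describes, and the nearly 4 W/m² of extra emission per kelvin shows why only a modest temperature rise suffices to rebalance the books.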

So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait—it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth, and reduce the amount of solar radiation it absorbs. How does all of this interact? Well, that's where the global climate models get into the act, and everything becomes very fuzzy: a vast panel of twiddle knobs, all of which interact with one another and few of which are based upon unambiguous measurements of the climate system.

Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle they see on their smartphones of the people who got to the industrial age a century before them, and technocratic rule of the masses by their unelected self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind!

But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂ and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond?

Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly shown to cool the Earth. A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect sufficient sunlight to reduce, cancel, or even reverse the effects of global warming. The cost of such a programme would be affordable by a benevolent tech billionaire or a wannabe Bond villain (“Greenfinger”), and could be implemented in a couple of years. The effect of the veil project would be much less than that of a volcanic eruption, and would be imperceptible other than making sunsets a bit more colourful.

Marine cloud brightening. By injecting finely-dispersed salt water from the ocean into the atmosphere, salt particles would provide nucleation sites which brighten low clouds above the ocean, increasing the reflectivity (albedo) of the Earth. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather.

Carbon sequestration. What about taking the carbon dioxide out of the atmosphere? This sounds like a great idea, and appeals to clueless philanthropists like Bill Gates who are ignorant of thermodynamics, but taking out a trace gas is really difficult and expensive. The best place to capture it is where it's densest, such as the flue of a power plant, where it's around 10%. The technology to do this, “carbon capture and sequestration” (CCS), exists, but has not yet been deployed on any full-scale power plant.

Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into marine organisms, it is removed from the biosphere for tens to hundreds of millions of years. What constrains how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? It's iron, which is rare in the oceans. A calculation made in the 1990s suggested that if you added one tonne of iron to the ocean, the bloom of organisms it would spawn would suck a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be maybe a hundred times less, but then iron is cheap and it doesn't cost much to dump it from ships.
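The leverage arithmetic in the paragraph above is worth making explicit; the 44/12 carbon-to-CO₂ mass conversion is my addition, not the author's:

```python
# Tonnes of carbon drawn down per tonne of iron added to the ocean
estimate_1990s = 100_000
revised = estimate_1990s / 100    # later experiments: roughly 100× less

# A tonne of carbon corresponds to 44.01/12.01 ≈ 3.66 tonnes of CO₂
co2_per_carbon = 44.01 / 12.01
print(revised, revised * co2_per_carbon)
```

Even the pessimistic revision leaves about a thousand tonnes of carbon, several thousand tonnes of CO₂, removed per tonne of cheap iron, which is the point of the Wall Street quip.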

Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while monitoring their effects on the climate, and easily and quickly reversed should they be found to have unintended detrimental consequences. But when thinking about affecting something on the scale of the climate of a planet, there's a tendency to think big, and a number of grand scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to manage the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are considered financially infeasible, but if the cost of space transportation falls dramatically in the future, might become increasingly attractive. It's worth observing that the cost estimates for such alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists.

We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. As we cope with the consequences of that human action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and provides an alternative to restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions such as vast fields of unsightly bird shredders. But as David Deutsch says in The Beginning of Infinity, “Problems are inevitable”; but “Problems are soluble.” It is inevitable that the large scale geoengineering which is the foundation of our developed society—taking over the Earth's natural carbon and nitrogen cycles—will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning these pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.

 Permalink

December 2017

Benford, Gregory. The Berlin Project. New York: Saga Press, 2017. ISBN 978-1-4814-8765-8.
In September 1938, Karl Cohen returned from a postdoctoral position in France to the chemistry department at Columbia University in New York, where he had obtained his Ph.D. two years earlier. Accompanying him was his new wife, Marthe, daughter of a senior officer in the French army. Cohen went to work for Harold Urey, professor of chemistry at Columbia and winner of the 1934 Nobel Prize in chemistry for the discovery of deuterium. At the start of 1939, the fields of chemistry and nuclear physics were stunned by the discovery of nuclear fission: researchers at the Kaiser Wilhelm Institute in Berlin had discovered that the nucleus of Uranium-235 could be split into two lighter nuclei when it absorbed a neutron, releasing a large amount of energy and additional neutrons which might be able to fission other uranium nuclei, creating a “chain reaction” which might permit tapping the enormous binding energy of the nucleus to produce abundant power—or a bomb.

The discovery seemed to open a path to nuclear power, but it was clear from the outset that the practical challenges were going to be daunting. Natural uranium is composed of two principal isotopes, U-238 and U-235. The heavier U-238 isotope makes up 99.27% of natural uranium, while U-235 accounts for only 0.72%. Only U-235 can readily be fissioned, so in order to build a bomb, it would be necessary to separate the two isotopes and isolate near-pure U-235. Isotopes differ only in the number of neutrons in their nuclei, but have the same number of protons and electrons. Since chemistry is exclusively determined by the electron structure of an atom, no chemical process can separate two isotopes: it must be done physically, based upon their mass difference. And since U-235 and U-238 differ in mass only by around 1.25%, any process, however clever, would necessarily be inefficient and expensive. It was clear that nuclear energy or weapons would require an industrial-scale effort, not something which could be done in a university laboratory.
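The difficulty described here can be quantified. For gaseous diffusion, the ideal single-stage separation factor is the square root of the ratio of the molecular masses of the two uranium hexafluoride (UF₆) species, the working gas of both diffusion and centrifuge plants; the sketch below is a standard textbook calculation, not something drawn from the novel.

```python
import math

M_U235, M_U238, M_F = 235.04, 238.05, 19.00

# Molecular masses of the two UF6 species (uranium plus six fluorines)
m_light = M_U235 + 6 * M_F
m_heavy = M_U238 + 6 * M_F

# Relative mass difference of the bare isotopes
print(f"U-235/U-238 mass difference: {(M_U238 - M_U235) / M_U235:.2%}")

# Ideal separation factor per diffusion stage (Graham's law)
alpha = math.sqrt(m_heavy / m_light)
print(f"Ideal separation factor per stage: {alpha:.5f}")
```

With a per-stage factor of only about 1.004, thousands of cascaded stages are needed to go from 0.72% U-235 to weapons-grade enrichment, which is why every separation route demanded an industrial-scale plant.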

Several candidate processes were suggested: electromagnetic separation, thermal or gaseous diffusion, and centrifuges. Harold Urey believed a cascade of high-speed centrifuges, fed with uranium hexafluoride gas, was the best approach, and he was the world's foremost expert on gas centrifuges. The nascent uranium project, eventually to become the Manhattan Project, was inclined toward the electromagnetic and gaseous diffusion processes, since they were believed to be well-understood and only required a vast scaling up as opposed to demonstration of a novel and untested technology.

Up to this point, everything in this alternative history novel is completely factual, and all of the characters existed in the real world (Karl Cohen is the author's father-in-law). Historically, Urey was unable to raise the funds to demonstrate the centrifuge technology, and the Manhattan Project proceeded with the electromagnetic and gaseous diffusion routes to separate U-235 while, in parallel, pursuing plutonium production from natural uranium in graphite-moderated reactors. Benford adheres strictly to the rules of the alternative history game in that only one thing is changed, and everything else follows as consequences of that change.

Here, Karl Cohen contacts a prominent Manhattan rabbi known to his mother who, seeing a way to combine protecting Jews in Europe from Hitler, advancing the Zionist cause, and making money from patents on a strategic technology, assembles a syndicate of wealthy and like-minded investors, raising a total of a hundred thousand dollars (US$ 1.8 million in today's funny money) to fund Urey's prototype centrifuge project in return for rights to patents on the technology. Urey succeeds, and by mid-1941 the centrifuge has been demonstrated and contacts made with Union Carbide to mass-produce and operate a centrifuge separation plant. Then, in early December of that year, everything changed, and by early 1942 the Manhattan Project had bought out the investors at a handsome profit and put the centrifuge separation project in high gear. As Urey's lead on the centrifuge project, Karl Cohen finds himself in the midst of the rapidly-developing bomb project, meeting and working with all of the principals.

Thus begins the story of a very different Manhattan Project and World War II. With the centrifuge project starting in earnest shortly after Pearl Harbor, by June 6th, 1944 the first uranium bomb is ready, and the Allies decide to use it on Berlin as a decapitation strike simultaneous with the D-Day landings in Normandy. The war takes a very different course, both in Europe and the Pacific, and a new Nazi terror weapon, first hinted at in a science fiction story, complicates the conflict. A different world is the outcome, seen from a retrospective at the end.

Karl Cohen's central position in the Manhattan Project introduces us to a panoply of key players including Leslie Groves, J. Robert Oppenheimer, Edward Teller, Leo Szilard, Freeman Dyson, John W. Campbell, Jr., and Samuel Goudsmit. He participates in a secret mission to Switzerland to assess German progress toward a bomb in the company of professional baseball catcher become spy Moe Berg, who is charged with assassinating Heisenberg if Cohen judges he knows too much.

This is a masterpiece of alternative history, based firmly in fact, and entirely plausible. The description of the postwar consequences is of a world in which I would prefer to have been born. I won't discuss the details to avoid spoiling your discovery of how they all work out in the hands of a master storyteller who really knows his stuff (Gregory Benford is a Professor Emeritus of physics at the University of California, Irvine).

 Permalink

Cox, Joseph. The City on the Heights. Modiin, Israel: Big Picture Books, 2017. ISBN 978-0-9764659-6-6.
For more than two millennia the near east (which is sloppily called the “middle east” by ignorant pundits who can't distinguish north Africa from southwest Asia) has exported far more trouble than it has imported from elsewhere. You need only consult the chronicles of the Greeks, the Roman Empire, the histories of conflicts among them and the Persians, the expansion of Islam into the region, internecine conflicts among Islamic sects, the Crusades, Israeli-Arab wars, all the way to recent follies of “nation building” to appreciate that this is a perennial trouble spot.

People, and peoples, hate one another there. It seems like whenever you juxtapose two religions (even sects of one), ethnicities, or self-identifications in the region, before long sanguinary conflict erupts, with each incident only triggering even greater reprisals and escalation. In the words of Lenin, “What is to be done?”

Now, my inclination would be simply to erect a strong perimeter around the region, let anybody who wished enter, but nobody leave without extreme scrutiny to ensure they were not a risk and follow-up as long as they remained as guests in the civilised regions of the world. This is how living organisms deal with threats to their metabolism: encyst upon it!

In this novel, the author explores another, more hopeful and optimistic, yet perhaps less realistic alternative. When your computer ends up in a hopeless dead-end of resource exhaustion, flailing software, and errors in implementation, you reboot it, or turn it off and on again. This clears out the cobwebs and provides a fresh start. It's difficult to do this in a human community, especially one where grievances are remembered not just over generations but millennia.

Here, archetypal NGO do-gooder Steven Gold has another idea. In the midst of the European religious wars, Amsterdam grew and prospered by being a place that people of any faith could come together and do business. Notwithstanding having a nominal established religion, people of all confessions were welcome as long as they participated in the peaceful commerce and exchange which made the city prosper.

Could this work in the near east? Steven Gold thought it was worth a try, and worth betting his career upon. But where should such a model city be founded? The region was a nightmarish ever-shifting fractal landscape of warring communities with a sole exception: the state of Israel. Why on Earth would Israel consider ceding some of its territory (albeit mostly outside its security perimeter) for such an idealistic project which might prove to be a dagger aimed at its own heart? Well, Steven Gold is very persuasive, and talented at recruiting allies able to pitch the project in terms that those whose support it needs can understand.

And so, a sanctuary city on the Israel-Syria border is born. It is anything but a refugee camp. Residents are expected to become productive members of a multicultural, multi-ethnic community which will prosper along the lines of renaissance Amsterdam or, more recently, Hong Kong and Singapore. Those who wish to move to the City are carefully vetted, but they include a wide variety of people including a former commander of the Islamic State, a self-trained engineer and problem solver who is an escapee from a forced marriage, religious leaders from a variety of faiths, and supporters including a billionaire who made her fortune in Internet payment systems.

And then, since it's the near east, it all blows up. First there are assassinations, then bombings, then a sorting out into ethnic and sectarian districts within the city, and then reprisals. It almost seems like an evil genius is manipulating the communities who came there to live in peace and prosper into conflict among one another. That this might be possible never enters the mind of Steven Gold, who probably still believes in the United Nations and votes for Democrats, notwithstanding their resolute opposition to the only consensual democracy in the region.

Can an act of terrorism redeem a community? Miryam thinks so, and acts accordingly. As the consequences play out, and the money supporting the city begins to run out, a hierarchical system of courts which mix up the various contending groups is established, along with an economic system based upon electronic payments which provides a seamless transition between subsidies for the poor (but always based upon earned income: never a pure dole) and taxation for the more prosperous.

A retrospective provides a look at how it all might work. I remain dubious at the prospect. There are many existing communities in the near east which are largely homogeneous in terms of religion and ethnicity (as seen by outsiders) which might be prosperous if they didn't occupy themselves with bombing and killing one another by any means available, and yet the latter is what they choose to do. Might it be possible, by establishing sanctuaries, to select for those willing to set ancient enmities aside? Perhaps, but in this novel, grounded in reality, that didn't happen.

The economic system is intriguing but, to me, ultimately unpersuasive. I understand how the income subsidy encourages low-income earners to stay within the reported income economy, but the moment you cross the tax threshold, you have a powerful incentive to take things off the books and, absent some terribly coercive top-down means to force all transactions through the electronic currency system, free (non-taxed) exchange will find a way.

These quibbles aside, this is a refreshing and hopeful look at an alternative to eternal conflict. In the near east, “the facts on the ground” are everything and the author, who lives just 128 km from the centre of civil war in Syria is far more acquainted with the reality than somebody reading his book far away. I hope his vision is viable. I hope somebody tries it. I hope it works.

 Permalink

Serling, Robert J. The Electra Story. New York: Bantam Books, [1963] 1991. ISBN 978-0-553-28845-2.
As the jet age dawned for commercial air transport, the major U.S. aircraft manufacturers found themselves playing catch-up to the British, who had put the first pure jet airliner, the De Havilland Comet, into service in 1952, followed shortly thereafter by the turboprop Vickers Viscount in 1953. The Comet's reputation was seriously damaged by a series of crashes caused by metal fatigue provoked by its pressurisation system, and while this was remedied in subsequent models, the opportunity to scoop the Americans and set the standard for passenger jet transportation was lost. The Viscount was very successful with a total of 445 built. In fact, demand so surpassed its manufacturer's production rate that delivery time stretched out, causing airlines to seek alternatives.

All of this created a golden opportunity for the U.S. airframers. Boeing and Douglas opted for four engine turbojet designs, the Boeing 707 and Douglas DC-8, which were superficially similar, entering service in 1958 and 1959 respectively. Lockheed opted for a different approach. Based upon its earlier experience designing the C-130 Hercules military transport for the U.S. Air Force, Lockheed decided to build a turboprop airliner instead of a pure jet design like the 707 or DC-8. There were a number of reasons motivating this choice. First of all, Lockheed could use essentially the same engines in the airliner as in the C-130, eliminating the risks of mating a new engine to a new airframe which have caused major troubles throughout the history of aviation. Second, a turboprop, although not as fast as a pure jet, is still much faster than a piston engined plane and able to fly above most of the weather. Turboprops are far more fuel efficient than the turbojet engines used by Boeing and Douglas, and can operate from short runways and under high altitude and hot weather conditions which ground the pure jets. All of these properties made a turboprop airliner ideal for short- and medium-range operations where speed en route was less important than the ability to operate from smaller airports. (Indeed, more than half a century later, turboprops account for a substantial portion of the regional air transport market for precisely these reasons.)

The result was the Lockheed L-188 Electra, a four engine airliner powered by Allison 501-D13 turboprop engines, able to carry 98 passengers a range of 3450 to 4455 km (depending on payload mass) at a cruise speed of 600 km/h. (By comparison, the Boeing 707 carried 174 passengers in a single class configuration a range of 6700 km at a cruise speed of 977 km/h.)

A number of U.S. airlines saw the Electra as an attractive addition to their fleet, with major orders from American Airlines, Eastern Air Lines, Braniff Airways, National Airlines, and Pacific Southwest Airlines. A number of overseas airlines placed orders for the plane. The entry into service went smoothly, and both crews and passengers were satisfied with the high speed, quiet, low-vibration, and reliable operation of the turboprop airliner.

Everything changed on the night of September 29th, 1959. Braniff Airways flight 542, an Electra bound for Dallas and then on to Washington, D.C. and New York, disintegrated in the skies above Buffalo, Texas. There were no survivors. The accident investigation quickly determined that the left wing of the airplane had separated near the wing root. But how, why? The Electra had been subjected to one of the most rigorous flight test and certification regimes of its era, and no problems had been discovered. The flight was through clear skies with no violent weather. Clearly, something terrible went wrong, but there was little evidence to suggest a probable cause. One always suspects a bomb (although less in those days before millions of medieval savages were admitted to civilised countries as “refugees”), but that was quickly ruled out due to the absence of explosive residues on the wreckage.

This was before the era of flight data recorders and cockpit voice recorders, so all the investigators had to go on was the wreckage, and intense scrutiny of it failed to yield an obvious clue. Often in engineering, there are mysteries which simply require more data, and meanwhile the Electras continued to fly. Most people deemed it “just one of those things”—airliner crashes were not infrequent in the era.

Then, on March 17th, 1960, in clear skies above Tell City, Indiana, Northwest Airlines flight 710 fell out of the sky, making a crater in a soybean field in which almost nothing was recognisable. Investigators quickly determined that the right wing had separated in flight, dooming the aircraft.

Wings are not supposed to fall off of airliners. Once is chance, but twice is indicative of a serious design or operational problem. This set into motion one of the first large-scale investigations of aircraft accidents in the modern era. Not only did federal investigators, research laboratories, and Lockheed invest massive resources, but even competitors Boeing and Douglas contributed expertise and diagnostic hardware, because they realised that the public perception of the safety of passenger jet aviation was at stake.

After an extensive and protracted investigation, it was concluded that the Electra was vulnerable to a “whirl mode” failure, where oscillations due to a weakness in the mounting of the outboard engines could resonate with a mode of the wing and lead to failure of its attachment point to the fuselage. This conclusion was highly controversial: Lockheed pointed out that no such problem had been experienced in the C-130, while Allison, the engine manufacturer, cited the same experience to argue that Lockheed's wing design was deficient. Lawsuits and counter-suits erupted, amid an avalanche of lawsuits against Lockheed, Allison, and the airlines by families of those killed in the accidents.

The engine mountings and wings were strengthened, and the modified aircraft were put through a gruelling series of tests intended to induce the whirl mode failures. They passed without incident, and the Electra was returned to service without any placard limitations on speed. No further incidents occurred, although a number of Electras were lost in accidents which had nothing to do with the design, but with causes all too common in commercial aviation at the time.

Even before the Tell City crash, Lockheed had decided to close down the Electra production line. Passenger and airline preference had gone in favour of pure jet airliners (in an age of cheap oil, the substantial fuel economy of turboprops counted less than the speed of pure jets and how cool it was to fly without propellers). A total of 170 Electras were sold. Remarkably, almost a dozen remain in service today, mostly as firefighting water bombers. A derivative, the P-3 Orion maritime patrol aircraft, of which a total of 757 were produced, also remains in service.

This is an excellent contemporary view of the history of a controversial airliner and of one of the first in-depth investigations of accidents under ambiguous circumstances and intense media and political pressure. The author, an aviation journalist, is the brother of Rod Serling.

The paperback is currently out of print but used copies are available, albeit expensive. The Kindle edition is available, and is free for Kindle Unlimited subscribers. The Kindle edition was obviously scanned from a print edition, and exhibits the errors you expect in scanned text not sufficiently scrutinised by a copy editor, for example “modem” where “modern” appeared in the print edition.

 Permalink

Mills, Kyle. Order to Kill. New York: Pocket Books, 2016. ISBN 978-1-4767-8349-9.
This is the second novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. In the first novel by Mills, The Survivor (July 2017), he picked up the story of the last Vince Flynn installment, The Last Man (February 2013), right where it left off and seemed to effortlessly assume the voice of Vince Flynn and his sense for the character of Mitch Rapp. This was a most promising beginning, which augured well for further Mitch Rapp adventures.

In this, the fifteenth novel in the Mitch Rapp series (Flynn's first novel, Term Limits [November 2009], is set in the same world and shares characters with the Mitch Rapp series, but Rapp does not appear in it, so it isn't considered a Rapp novel), Mills steps out of the shadow of Vince Flynn's legacy and takes Rapp and the story line into new territory. The result is…mixed.

In keeping with current events and the adversary du jour, the troublemakers this time are the Russkies, with President Maxim Vladimirovich Krupin at the top of the tottering pyramid. And tottering it is, as the fall in oil prices has undermined Russia's resource-based economy and destabilised the enterprises run by the oligarchs who keep him in power. He may be on top, but he is as much a tool of those in the shadows as master of his nation.

But perhaps there is a grand coup, or one might even say in the new, nominally pious Russia, a Hail Mary pass, which might simultaneously rescue the Russian economy and restore Russia to its rightful place on the world stage.

The problem is those pesky Saudis. Sitting atop a large fraction of the Earth's oil, they can turn the valve on and off and set the price per barrel wherever they wish and, recently, have chosen to push the price down both to appease their customers in Europe and Asia and to drive the competition from hydraulic fracturing (which has a higher cost of production than simply pumping oil out from beneath the desert) out of the market. Suppose the Saudis could be taken out? But Russia could never do it directly. There would need to be a cut-out, and perfect deniability.

Well, the Islamic State (IS, or ISIS, or ISIL, or whatever they're calling it this week in the Court Language of the Legacy Empire) is sworn to extend its Caliphate to the holiest places of Islam and depose the illegitimate usurpers who rule them, so what better puppet to take down the Saudi petro-hegemony? Mitch Rapp finds himself in the middle of this conspiracy, opting to endure grave physical injury to insinuate himself into its midst.

But it's the nature of the plot where everything falls apart, in one of those details which Vince Flynn and his brain trust would never have flubbed. This isn't a quibble, but a torpedo below the water line. We must, perforce, step behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
You clicked the Spoiler link, right? Now I'm going to spoil the whole thing so if you clicked it by accident, please close this box and imagine you never saw what follows.

The central plot of this novel is obtaining plutonium from Pakistani nuclear weapons and delivering it to ISIS, not to build a fission weapon but rather a “dirty bomb” which uses conventional explosives to disperse radioactive material to contaminate an area and deny it to the enemy.

But a terrorist who had done no more research than reading Wikipedia would know that plutonium is utterly useless as a radiological contaminant for a dirty bomb. The isotope of plutonium used in nuclear weapons has a half-life of around 24,000 years, and hence has such a low level of radioactivity that dispersing the amount used in the pits of several bombs would only marginally increase the background radiation in the oil fields. In other words, it would have no effect whatsoever.
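The arithmetic behind this is easy to check. Here is a back-of-the-envelope comparison of specific activities (using standard half-life and molar mass values, nothing from the novel), showing why long-lived weapons plutonium is such a feeble radiological contaminant compared with a short-lived fission product like caesium-137 from spent fuel:

```python
import math

N_A = 6.022e23   # Avogadro's number, atoms/mol
YEAR = 3.156e7   # seconds per year

def specific_activity(half_life_years, molar_mass):
    """Decays per second per gram: A = lambda * N, lambda = ln(2) / t_half."""
    lam = math.log(2) / (half_life_years * YEAR)
    return lam * (N_A / molar_mass)

pu239 = specific_activity(24_100, 239)  # weapons-grade plutonium
cs137 = specific_activity(30.1, 137)    # typical spent-fuel nuclide

print(f"Pu-239: {pu239:.2e} Bq/g")      # roughly 2.3e9 Bq/g
print(f"Cs-137: {cs137:.2e} Bq/g")      # roughly 3.2e12 Bq/g
print(f"ratio:  {cs137 / pu239:.0f}x")  # Cs-137 is ~1400 times hotter
```

The half-lives enter inversely: the 24,000-year half-life which makes plutonium convenient to stockpile is precisely what makes it nearly useless for contaminating terrain.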

If you want to make a dirty bomb, the easiest way is to use spent fuel rods from civil nuclear power stations. These are far easier to obtain (although difficult to handle safely), and rich in highly-radioactive nuclides which can effectively contaminate an area into which they are dispersed. But this blows away the entire plot and most of the novel.

Vince Flynn would never, and never did, make such a blunder. I urge Kyle Mills to reconnect with Mr Flynn's brain trust and run his plots past them, or develop an equivalent deep well of expertise to make sure things fundamentally make sense.

Spoilers end here.  

All right, we're back from the spoilers. Whether you've read them or not, this is a well-crafted thriller which works as such as long as you don't trip over the central absurdity in the plot. Rapp not only suffers grievous injury, but encounters an adversary who is his equal, perhaps his better. He confronts his age, and its limitations. It happens to us all.

The gaping plot hole could have been easily fixed—not in the final manuscript but in the outline. Let's hope that future Mitch Rapp adventures will be subjected to the editorial scrutiny which makes them not just page-turners but ones where, as you're turning the pages, you don't laugh out loud at easily-avoided blunders.

 Permalink

Schantz, Hans G. The Hidden Truth. Huntsville, AL: ÆtherCzar, 2016. ISBN 978-1-5327-1293-7.
This is a masterpiece of alternative history techno-thriller science fiction. It is rich in detail, full of interesting characters who interact and develop as the story unfolds, sound in the technical details which intersect with our world, insightful about science, technology, economics, government and the agenda of the “progressive” movement, and plausible in its presentation of the vast, ruthless, and shadowy conspiracy which lies under the surface of its world. And, above all, it is charming—these are characters you'd like to meet, even some of the villains, because you want to understand what motivates them.

The protagonist and narrator is a high school junior (senior later in the tale), son of an electrical engineer who owns his own electrical contracting business, married to a chemist, daughter of one of the most wealthy and influential families in their region of Tennessee, against the wishes of her parents. (We never learn the narrator's name until the last page of the novel, so I suppose it would be a spoiler if I mentioned it here, so I won't, even if it makes this review somewhat awkward.) Our young narrator wants to become a scientist, and his father not only encourages him in his pursuit, but guides him toward learning on his own by reading the original works of great scientists who actually made fundamental discoveries rather than “suffering through the cleaned-up and dumbed-down version you get from your teachers and textbooks.” His world is not ours: Al Gore, who won the 2000 U.S. presidential election, was killed in the 2001-09-11 attacks on the White House and Capitol, and President Lieberman pushed through the “Preserving our Planet's Future Act”, popularly known as the “Gore Tax”, in his memory, and its tax on carbon emissions is predictably shackling the economy.

Pursuing his study of electromagnetism from original sources, he picks up, at the local library, a copy of a book published in 1909. The library was originally the collection of a respected institute of technology, which had been destroyed by innovative educationalists and their pointy-headed progressive ideas. But the books remained, and in one of them, he reads an enigmatic passage about Oliver Heaviside having developed a theory of electromagnetic waves bouncing off one another in free space, which was to be published in a forthcoming book. This didn't make any sense: electromagnetic waves add linearly, and while they can be reflected and refracted by various media, in free space they superpose without interaction. He asks his father about the puzzling passage, and they look up the scanned text on-line and find the passage he read missing. Was his memory playing tricks?

So, back to the library where, indeed, the version of the book there contains the mention of bouncing waves. And yet the publication date and edition number of the print and on-line books were identical. As Isaac Asimov observed, many great discoveries aren't heralded by an exclamation of “Eureka!” but rather “That's odd.” This was odd….

Soon, other discrepancies appear, and along with his best friend and computer and Internet wizard Amit Patel, he embarks on a project to scan original print editions of foundational works on electromagnetism from the library and compare them with on-line versions of these public domain works. There appears to be a pattern: mentions of Heaviside's bouncing waves appear to have been scrubbed out of the readily-available editions of these books (print and on-line), and remain only in dusty volumes in forgotten provincial libraries.

As their investigations continue, it's increasingly clear they have stirred up a hornets' nest. Fake feds start to follow their trail, with bogus stories of “cyber-terrorism”. And, ominously, they learn that those who dig too deeply into these curiosities have a way of meeting tragic ends. Indeed, many of the early researchers into electromagnetism died young: Maxwell at age 48, Hertz at 36, FitzGerald at 39. Was there a vast conspiracy suppressing some knowledge about electromagnetism? And if so, what was the hidden truth, and why was it so important to them that they were willing to kill to keep it hidden? It sure looked like it, and Amit started calling them “EVIL”: the Electromagnetic Villains International League.

The game gets deadly, and deadly serious. The narrator and Amit find some powerful and some ambiguous allies, learn about how to deal with the cops and other authority figures, and imbibe a great deal of wisdom about individuality, initiative, and liberty. There's even an attempt to recruit our hero to the dark side of collectivism where its ultimate anti-human agenda is laid bare. Throughout there are delightful tips of the hat to libertarian ideas, thinkers, and authors, including some as obscure as a reference to the Books on Benefit bookshop in Providence, Rhode Island.

The author is an inventor, entrepreneur, and scientist. He writes, “I appreciate fiction that shows how ordinary people with extraordinary courage and determination can accomplish remarkable achievements.” Mission accomplished. As the book ends, the central mystery remains unresolved. The narrator vows to get to the bottom of it and avenge those destroyed by the keepers of the secret. In a remarkable afterword and about the author section, there is a wonderful reading list for those interested in the technical topics discussed in the book and fiction with similarly intriguing and inspiring themes. When it comes to the technical content of the book, the author knows of what he writes: he has literally written the book on the design of ultrawideband antennas and is co-inventor of Near Field Electromagnetic Ranging (NFER), which you can think of as “indoor GPS”.

For a self-published work, there are only a few copy editing errors (“discrete” where “discreet” was intended, and “Capital” for “Capitol”). The Kindle edition is free for Kindle Unlimited subscribers. A sequel is now available: A Rambling Wreck, which takes our hero and the story to—where else?—Georgia Tech. I shall certainly read that book. Meanwhile, go read the present volume; if your tastes are anything like mine, you're going to love it.

 Permalink

  2018  

January 2018

Bracken, Matthew. The Red Cliffs of Zerhoun. Orange Park, FL: Steelcutter Publishing, 2017. ISBN 978-0-9728310-5-5.
We first met Dan Kilmer in Castigo Cay (February 2014), where the retired U.S. Marine sniper (I tread cautiously on the terminology: some members of the Corps say there's no such thing as a “former Marine” and, perhaps, neither is there a “former sniper”) had to rescue his girlfriend from villains in the Caribbean. The novel is set in a world where the U.S. is deteriorating into chaos and the malevolent forces suppressed by civilisation have begun to assert their power on the high seas.

As this novel begins, things have progressed, and not for the better. The United States has fractured into warring provinces as described in the author's “Enemies” trilogy. Japan and China are in wreckage after the global economic crash. Much of Europe is embroiled in civil wars between the indigenous population and inbred medieval barbarian invaders imported by well-meaning politicians or allowed to land upon their shores or surge across their borders by the millions. The reaction to this varies widely depending upon the culture and history of the countries invaded. Only those wise enough to have said “no” in time have been spared.

But even they are not immune to predation. The plague of Islamic pirates on the high seas and slave raiders plundering the coasts of Europe was brought to an end only by the navies of Christendom putting down the corsairs' primitive fleets. But with Europe having collapsed economically, drawn down its defence capability to almost nothing, and daring not even to speak the word “Christendom” for fear of offending its savage invaders, the pirates are again in ascendance, this time flying the black flag of jihad instead of the Jolly Roger.

When seventy young girls are kidnapped into sex slavery from a girls' school in Ireland by Islamic pirates and offered for auction to the highest bidder among their co-religionists, a group of the kind of hard men who say things like “This will not stand”, including a retired British SAS colonel and a former Provisional IRA combatant (are either ever “retired” or “former”?) join forces, not to deploy a military-grade fully-automatic hashtag, but to get the girls back by whatever means are required.

Due to exigent circumstances, Dan Kilmer's 18 metre steel-hulled schooner, moored in a small port in western Ireland to peddle diesel fuel he's smuggled in from a cache in Greenland, becomes one of those means. Kilmer thinks the rescue plan to be folly, but agrees to transport the assault team to their rendezvous point in return for payment for him and his crew in gold.

It's said that no battle plan survives contact with the enemy. In this case, the plan doesn't even get close to that point. Improvisation, leaders emerging in the midst of crisis, and people rising to the occasion dominate the story. There are heroes, but not superheroes—instead people who do what is required in the circumstances in which they find themselves. It is an inspiring story.

This book has an average review rating of 4.9 on Amazon, but you're probably hearing of it here for the first time. Why? Because it presents an accurate view of the centuries-old history of Islamic slave raiding and trading, and the reality that the only way this predation upon civilisation can be suppressed is by civilised people putting it down with violence commensurate to its assault upon what we hold most precious.

The author's command of weapons and tactics is encyclopedic, and the novel is consequently not just thrilling but authentic. And, dare I say, inspiring.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Hamilton, Eric M. An Inconvenient Presidency. Seattle: CreateSpace, 2016. ISBN 978-1-5368-7363-4.
This novella (89 pages in the Kindle edition) is a delightful romp into alternative history and the multiverse. Al Gore was elected president in 2000 and immediately informed of a capability so secret he had never been told of it, even as Vice President. He was handed a gadget, the METTA, which allowed a limited kind of time travel. Should he, or the country, find itself in a catastrophic and seemingly unrecoverable situation, he could press its red button and be mentally transported back in time to a reset point, set just after his election, to give it another try. But, after the reset, he would retain all of his knowledge of the events which preceded it.

Haven't you imagined going back in time and explaining to your younger self all of the things you've learned by trial and error and attendant bruises throughout your life? The shadowy Government Apperception Liberation Authority—GALA—has endowed presidents with this capability. This seems so bizarre the new president Gore pays little attention to it. But when an unanticipated and almost unimaginable event occurs, he presses the button.

~KRRZKT~

Well, we won't let that happen! And it doesn't, but something else does: reset. This job isn't as easy as it appeared: reset, reset, reset.

We've often joked about the “Gore Effect”: the correlation between unseasonably cold weather and Al Gore's appearance to promote his nostrums of “anthropogenic global warming”. Here, Al Gore begins to think there is a greater Gore Effect: that regardless of what he does and what he learns from previous experience and a myriad of disasters, something always goes wrong with catastrophic consequences.

Can he escape this loop? Who are the mysterious people behind GALA? He is determined to find out, and he has plenty of opportunities to try: ~KRRZKT~.

You will be amazed at how the author brings this tale to a conclusion. Throughout, everything was not as it seemed, but in the last few pages, well golly! Unusually for a self-published work, there are no typographical or grammatical errors which my compulsive copy-editor hindbrain detected. The author not only spins a fine yarn, but respects his audience enough to perfect his work before presenting it to them: this is rare, and I respect and applaud that. Despite Al Gore and other U.S. political figures appearing in the story, there is no particular political tilt to the narrative: the goal is fun, and it is superbly achieved.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Weir, Andy. Artemis. New York: Crown, 2017. ISBN 978-0-553-44812-2.
Seldom has a first-time novelist burst onto the scene so spectacularly as Andy Weir with The Martian (November 2014). Originally written for his own amusement and circulated chapter by chapter to a small but enthusiastic group of fans who provided feedback and suggestions as the story developed, he posted the completed novel as a free download on his Web site. Some people who had heard of it by word of mouth but lacked the technical savvy to download documents and transfer them to E-readers inquired whether he could make a Kindle version available. Since you can't give away Kindle books, he published it at the minimum possible price. Before long, the book was rising into the Amazon bestseller list in science fiction, and he was contacted by a major publisher about doing a print edition. These publishers only accept manuscripts through agents, and he didn't have one (nor do agents usually work with first-time authors, which creates a chicken-and-egg problem for the legacy publishing industry), so the publisher put him in touch with a major agent and recommended the manuscript. This led to a 2014 hardcover edition and then a Hollywood movie in 2015, which was nominated for 7 Oscars and won two Golden Globes, including Best Motion Picture and Best Performance by an Actor in its category.

The question fans immediately asked themselves was, “Is this a one shot, or can he repeat?” Well, I think we have the answer: with Artemis, Andy Weir has delivered another story of grand master calibre and shown himself on track to join the ranks of the legends of the genre.

In the latter part of the 21st century commerce is expanding into space, and the Moon is home to Artemis, a small settlement of around 2000 permanent residents, situated in the southern part of the Sea of Tranquility, around 40 km from the Apollo 11 landing site. A substantial part of the economy of Artemis is based upon wealthy tourists who take the train from Artemis to the Apollo 11 Visitor Center (where they can look, but not touch or interfere with the historical relics) and enjoy the luxuries and recreations which cater to them back in the pleasure domes.

Artemis is the creation of the Kenya Space Corporation (KSC), which officially designates it “Kenya Offshore Platform Artemis” and operates under international maritime law. As space commerce burgeoned in the 21st century, Kenya's visionary finance minister, Fidelis Ngugi, leveraged Kenya's equatorial latitude (it's little appreciated that once reliable fully-reusable launch vehicles are developed, there's no need to launch over water) and hands-off regulatory regime to provide a golden opportunity for space entrepreneurs to escape the nanny state regulation and crushing tax burden of “developed” countries. With tax breaks and an African approach to regulation, entrepreneurs and money flowed in from around the world, making Kenya into a space superpower and enriching its economy and opportunities for its people. Twenty years later Ngugi was Administrator of Artemis; she was, in effect, ruler of the Moon.

While Artemis was a five star experience for the tourists which kept its economy humming, those who supported the settlement and its industries lived in something more like a frontier boom town of the 19th century. Like many such settlements, Artemis attracted opportunity-seekers and those looking to put their pasts behind them from many countries and cultures. Those established tend to attract more like them, and clannish communities developed around occupations: most people in Life Support were Vietnamese, while metal-working was predominantly Hungarian. For whatever reason, welding was dominated by Saudis, including Ammar Bashara, who emigrated to Artemis with his six-year-old daughter Jasmine. Twenty years later, Ammar runs a prosperous welding business and Jasmine (“Jazz”) is, shall we say, more irregularly employed.

Artemis is an “energy intense” Moon settlement of the kind described in Steven D. Howe's Honor Bound Honor Born (May 2014). The community is powered by twin 27 megawatt nuclear reactors located behind a berm one kilometre from the main settlement. The reactors not only provide constant electricity and heat through the two-week nights and days of the Moon, they power a smelter which processes the lunar regolith into raw materials. The Moon's crust is about 40% oxygen, 20% silicon, 12% iron, and 8% aluminium. With abundant power, these elements can be separated and used to manufacture aluminium and iron for structures and glass from silicon and oxygen, all with abundant left-over oxygen to breathe. There is no need for elaborate recycling of oxygen: there's always plenty more coming out of the smelter. Many denizens of Artemis subsist largely on “gunk”, an algae-based food grown locally in vats which is nutritious but unpalatable and monotonous. There are a variety of flavours, all of which are worse than the straight stuff.

Jazz works as a porter. She picks up things somewhere in the settlement and delivers them to their destinations using her personally-owned electric-powered cart. Despite the indigenous production of raw materials, many manufactured goods and substances are imported from Earth or factories in Earth orbit, and every time a cargo ship arrives, business is brisk for Jasmine and her fellow porters. Jazz is enterprising and creative, and has a lucrative business on the side: smuggling. Knowing the right people in the spaceport and how much to cut them in, she has a select clientele to which she provides luxury goods from Earth which aren't on the approved customs manifests.

For this, she is paid in “slugs”. No, not slimy molluscs, but “soft-landed grams”, credits which can be exchanged to pay KSC to deliver payload from Earth to Artemis. Slugs act as a currency, and can be privately exchanged among individuals' handheld computers much as Bitcoin today. Jazz makes around 12,000 slugs a month as a porter, and more, although variable, from her more entrepreneurial sideline.

One of her ultra-wealthy clients approaches her with a highly illegal, almost certainly unethical, and very likely perilous proposal. Surviving for as long as she has in her risky business has given Jazz a sense for where the edge is and the good sense not to step over it.

“I'm sorry but this isn't my thing. You'll have to find someone else.”

“I'll offer you a million slugs.”

“Deal.”

Thus begins an adventure in which Jazz has to summon all of her formidable intellect, cunning, and resources, form expedient alliances with unlikely parties, solve a technological mystery, balance honour with being an outlaw, and discover the economic foundation of Artemis, which is nothing like it appears from the surface. All of this is set in a richly textured and believable world which we learn about as the story unfolds: Weir is a master of “show, don't tell”. And it isn't just a page-turning thriller (although that it most certainly is); it's also funny, and in the right places and amount.

This is where I'd usually mention technical goofs and quibbles. I'll not do that because I didn't find any. The only thing I'm not sure about is Artemis' using a pure oxygen atmosphere at 20% of Earth sea-level pressure. This works for short- and moderate-duration space missions, and was used in the U.S. Mercury, Gemini, and Apollo missions. For exposure to pure oxygen longer than two weeks, a phenomenon called absorption atelectasis can develop, which is the collapse of the alveoli in the lungs due to complete absorption of the oxygen gas (see this NASA report [PDF]). The presence of a biologically inert gas such as nitrogen, helium, argon, or neon will keep the alveoli inflated and prevent this phenomenon. The U.S. Skylab missions used an atmosphere of 72% oxygen and 28% nitrogen to avoid this risk, and the Soviet Salyut and Mir space stations used a mix of nitrogen and oxygen with between 21% and 40% oxygen. The Space Shuttle and International Space Station use sea-level atmospheric pressure with 21% oxygen and the balance nitrogen. The effects of reduced pressure on the boiling point of water and the fire hazard of pure oxygen even at reduced pressure are accurately described, but I'm not sure the physiological effects of a pure oxygen atmosphere for long-term habitation have been worked through.
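For the record, the numbers work out neatly. Here is a back-of-the-envelope check (standard physical constants and the Antoine equation for water vapour pressure, nothing from the novel) showing that pure oxygen at one fifth of an atmosphere delivers essentially the same oxygen partial pressure as sea-level air, and how far the boiling point of water drops at that pressure:

```python
import math

SEA_LEVEL = 101.325  # kPa, standard atmosphere

# Oxygen partial pressures: why 20%-pressure pure O2 is breathable
p_artemis = 0.20 * SEA_LEVEL   # pure O2 at one fifth of an atmosphere
p_earth_o2 = 0.21 * SEA_LEVEL  # the O2 fraction of sea-level air

def water_boiling_point_c(p_kpa):
    """Boiling point of water via the Antoine equation
    (coefficients valid for roughly 1-100 degrees C)."""
    p_mmhg = p_kpa * 760 / 101.325
    A, B, C = 8.07131, 1730.63, 233.426
    return B / (A - math.log10(p_mmhg)) - C

print(f"O2 partial pressure in Artemis:  {p_artemis:.1f} kPa")   # ~20.3
print(f"O2 partial pressure at sea level: {p_earth_o2:.1f} kPa")  # ~21.3
print(f"Water boils at about {water_boiling_point_c(p_artemis):.0f} C")  # ~60
```

So the lungs get almost exactly the usual dose of oxygen, while a pot of water boils at around 60 °C, which is the reduced-pressure effect the novel describes; the open physiological question is only the long-term absence of an inert diluent gas.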

Nitpicking aside, this is a techno-thriller which is also an engaging human story, set in a perfectly plausible and believable future where not only the technology but the economics and social dynamics work. We may just be welcoming another grand master to the pantheon.

 Permalink

February 2018

Kroese, Robert. Starship Grifters. Seattle: 47North, 2014. ISBN 978-1-4778-1848-0.
This is the funniest science fiction novel I have read in quite a while. Set in the year 3013, not long after galactic civilisation barely escaped an artificial intelligence apocalypse and banned fully self-aware robots, the story is related by Sasha, one of a small number of Self-Arresting near Sentient Heuristic Androids built to be useful without running the risk of their taking over. SASHA robots are equipped with an impossible-to-defeat watchdog module which causes a hard reboot whenever they are on the verge of having an original thought. The limitation of the design proved a serious handicap, and all of their manufacturers went bankrupt. Our narrator, Sasha, was bought at an auction by the protagonist, Rex Nihilo, for thirty-five credits in a lot of “ASSORTED MACHINE PARTS”. Sasha is Rex's assistant and sidekick.

Rex is an adventurer. Sasha says he “never had much of an interest in anything but self-preservation and the accumulation of wealth, the latter taking clear precedence over the former.” Sasha's built-in limitations (in addition to the new idea watchdog, she is unable to tell a lie, but if humans should draw incorrect conclusions from incomplete information she provides them, well…) pose problems in Rex's assorted lines of work, most of which seem to involve scams, gambling, and contraband of various kinds. In fact, Rex seems to fit in very well with the universe he inhabits, which appears to be firmly grounded in Walker's Law: “Absent evidence to the contrary, assume everything is a scam”. Evidence appears almost totally absent, and the oppressive tyranny called the Galactic Malarchy, those who supply it, the rebels who oppose it, entrepreneurs like Rex working in the cracks, organised religions and cults, and just about everybody else, appear to be on the make or on the take, looking to grift everybody else for their own account. Cosmologists attribute this to the “Strong Misanthropic Principle, which asserts that the universe exists in order to screw with us.” Rex does his part, although he usually seems to veer between broke and dangerously in debt.

Perhaps that's due to his somewhat threadbare talent stack. As Sasha describes him, Rex doesn't have a head for numbers. Nor does he have much of a head for letters, and “Newtonian physics isn't really his strong suit either”. He is, however, occasionally lucky, or so it seems at first. In an absurdly high-stakes card game with weapons merchant Gavin Larviton, reputed to be one of the wealthiest men in the galaxy, Rex manages to win, almost honestly, not only Larviton's personal starship, but an entire planet, Schnufnaasik Six. After barely escaping a raid by Malarchian marines led by the dread and squeaky-voiced Lord Heinous Vlaak, Rex and Sasha set off in the ship Rex has won, the Flagrante Delicto, to survey the planetary prize.

It doesn't take Rex long to discover, not surprisingly, that he's been had, and that his financial situation is now far more dire than he'd previously been able to imagine. If any of the bounty hunters now on his trail should collar him, he could spend a near-eternity on the prison planet of Gulagatraz (the names are a delight in themselves). So, it's off to the rebel base on the forest moon (which is actually a swamp; the swamp moon is all desert) to try to con the Frente Repugnante (all the other names were taken by rival splinter factions, so they ended up with “Revolting Front”, which was translated to Spanish to appeal to Latino planets) into paying for a secret weapon which exists only in Rex's imagination.

Thus we embark upon a romp which has a laugh-out-loud line about every other page. This is comic science fiction in the vein of Keith Laumer's Retief stories. As with Laumer, Kroese achieves the perfect balance of laugh lines, plot development, interesting ideas, and recurring gags (there's a planet-destroying weapon called the “plasmatic entropy cannon” which the oft-inebriated Rex refers to variously as the “positronic endoscopy cannon”, “pulmonary embolism cannon”, “ponderosa alopecia cannon”, “propitious elderberry cannon”, and many other ways). There is a huge and satisfying reveal at the end—I kind of expected one was coming, but I'd have never guessed the details.

If reading this leaves you with an appetite for more Rex Nihilo, there is a prequel novella, The Chicolini Incident, and a sequel, Aye, Robot.

The Kindle edition is free for Kindle Unlimited subscribers.


Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.
The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth's surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. When it comes to the origin of life, the evidence is subtle and it's hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.

This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.

About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.

What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.

To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.
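The "hard-wired" character of Life 1.0 can be put in the same computer jargon the analogy invokes: an individual's behaviour is fixed by its genome, and the only "learning" happens across generations through mutation and selection. A minimal, purely illustrative sketch (the genome encoding, fitness function, and all parameters are invented for this example):

```python
import random

random.seed(0)

def evolve(fitness, genome_len=8, pop_size=30, generations=60, mutation_rate=0.05):
    """Evolve bit-string 'genomes' by mutation and selection only.

    No individual changes during its lifetime (Life 1.0): variation
    enters solely through copying errors in reproduction, and selection
    keeps the fitter variants.
    """
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Reproduction with random mutation (the only source of change).
        offspring = [[1 - bit if random.random() < mutation_rate else bit
                      for bit in parent]
                     for parent in parents]
        population = parents + offspring
    return max(population, key=fitness)

# Example: fitness is simply the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(best, sum(best))
```

Note how slow this is compared with Life 2.0: the population improves only generation by generation, never within a lifetime, which is exactly the point of the distinction.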

Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared, but some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they've figured out how to pass what they've learned to other humans via speech, writing, and more recently, YouTube comments.

While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you're born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.

The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform, as well as or better than human beings, all of the intellectual tasks they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather will be an engineering cycle which runs at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as humans are beyond ants, all within one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.

Life 1.0, 2.0, and 3.0

The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don't worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn't malevolence, but competence”. It's unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don't align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.

But isn't this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that's interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).
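The arithmetic behind those forecasts is straightforward: at a fixed doubling period, the time to close a capability gap of factor G is just the doubling period times log₂(G). The gap factors below are purely illustrative (real estimates of the brain's computational capacity span many orders of magnitude, which is precisely why the forecasts range from roughly 2030 to 2070):

```python
import math

def years_to_close_gap(gap_factor, doubling_period_years=2.0):
    """Years of exponential growth needed to multiply capacity by gap_factor."""
    return doubling_period_years * math.log2(gap_factor)

# Hypothetical gap factors between today's constant-cost computing power
# and the capacity of the human brain.
for gap in (1e2, 1e4, 1e6):
    print(f"gap {gap:.0e}: {years_to_close_gap(gap):.0f} years")
```

A hundred-fold gap closes in about 13 years, a million-fold gap in about 40: a spread of four orders of magnitude in the estimate shifts the arrival date by only a few decades, which is why the uncertainty in brain-capacity estimates matters less than one might expect.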

My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.

The Intelligence Landscape

Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands of which people confidently said (often not long ago) “a computer will never…” are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red since when the water surges into it, it's game over: machines will then be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.

Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?

First of all, let's assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That's complicated. Here are the alternatives discussed by the author. I've deliberately given just the titles without summaries to stimulate your imagination about their consequences.

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

Choose wisely: whichever you choose may be the one your descendants (if any exist) are stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that's assuming we'll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.

Let's assume progress toward AGI continues over the next few decades. I believe that what I've been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many jobs in service industries, allowing them to be automated. The cost to replace a human worker will be comparable to a year's wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.

This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, it will submerge not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but also white collar clerical and professional jobs which people went into thinking them immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers who replace them don't (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to the development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?

The following chart is presented to illustrate stagnation of income of lower-income households since around 1970.

Income per U.S. Household: 1920–2015

I'm not sure this chart supports the argument that technology has been the principal cause for the stagnation of income among the bottom 90% of households since around 1970. There wasn't any major technological innovation which affected employment that occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don't have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.

Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.

Observation and our understanding of the chemistry underlying the origin of life are compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it's entirely possible it may have been an event so improbable we'll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents.”).

But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn't matter; like the bacteria and ants, we will have done our part.

The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.

In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.


Lewis, Damien. The Ministry of Ungentlemanly Warfare. New York: Quercus, 2015. ISBN 978-1-68144-392-8.
After becoming prime minister in May 1940, one of Winston Churchill's first acts was to establish the Special Operations Executive (SOE), which was intended to conduct raids, sabotage, reconnaissance, and support resistance movements in Axis-occupied countries. The SOE was not part of the military: it was a branch of the Ministry of Economic Warfare and its very existence was a state secret, camouflaged under the name “Inter-Service Research Bureau”. Its charter was, as Churchill described it, to “set Europe ablaze”.

The SOE consisted, from its chief, Brigadier Colin McVean Gubbins, who went by the designation “M”, to its recruits, of people who did not fit well with the regimentation, hierarchy, and constraints of life in the conventional military branches. They could, in many cases, be easily mistaken for blackguards, desperadoes, and pirates, and that's precisely what they were in the eyes of the enemy—unconstrained by the rules of warfare, striking by stealth, and sowing chaos, mayhem, and terror among occupation troops who thought they were far from the front.

Leading some of the SOE's early exploits was Gustavus “Gus” March-Phillipps, founder of the British Army's Small Scale Raiding Force, and given the SOE designation “Agent W.01”, meaning the first agent assigned to the west Africa territory with the leading zero identifying him as “trained and licensed to use all means to liquidate the enemy”—a license to kill. The SOE's liaison with the British Navy, tasked with obtaining support for its operations and providing cover stories for them, was a fellow named Ian Fleming.

One of the SOE's first and most daring exploits was Operation Postmaster, with the goal of seizing German and Italian ships anchored in the port of Santa Isabel on the Spanish island colony of Fernando Po off the coast of west Africa. Given the green light by Churchill over the strenuous objections of the Foreign Office and Admiralty, who were concerned about the repercussions if British involvement in what amounted to an act of piracy in a neutral country were to be disclosed, the operation was mounted under the strictest secrecy and deniability, with a cover story prepared by Ian Fleming. Despite harrowing misadventures along the way, the plan was a brilliant success, capturing three ships and their crews and delivering them to the British-controlled port of Lagos without any casualties. Vindicated by the success, Churchill gave the SOE the green light to raid Nazi occupation forces on the Channel Islands and the coast of France.

On his first mission in Operation Postmaster was Anders Lassen, an aristocratic Dane who enlisted as a private in the British Commandos after his country was occupied by the Nazis. With his silver-blond hair, blue eyes, and accent easily mistaken for German, Lassen was apprehended by the Home Guard on several occasions while on training missions in Britain and held as a suspected German spy until his commanders intervened. Lassen was given a field commission, direct from private to second lieutenant, immediately after Operation Postmaster, and went on to become one of the most successful leaders of special operations raids in the war. As long as Nazis occupied his Danish homeland, he was possessed with a desire to kill as many Nazis as possible, wherever and however he could, and when in combat was animated by a berserker drive and ability to improvise that caused those who served with him to call him the “Danish Viking”.

This book provides a look into the operations of the SOE and its successor organisations, the Special Air Service and Special Boat Service, seen through the career of Anders Lassen. So numerous were special operations, conducted in many theatres around the world, that this kind of focus is necessary. Also, attrition in these high-risk raids, often far behind enemy lines, was so high there are few individuals one can follow throughout the war. As the war approached its conclusion, Lassen was the only surviving participant in Operation Postmaster, the SOE's first raid.

Lassen went on to lead raids against Nazi occupation troops in the Channel Islands, leading Churchill to remark, “There comes from the sea from time to time a hand of steel which plucks the German sentries from their posts with growing efficiency.” While these “butcher-and-bolt” raids could not liberate territory, they yielded prisoners, code books, and radio contact information valuable to military intelligence and, more importantly, forced the Germans to strengthen their garrisons in these previously thought secure posts, tying down forces which could otherwise be sent to active combat fronts. Churchill believed that the enemy should be attacked wherever possible, and SOE was a precision weapon which could be deployed where conventional military forces could not be used.

As the SOE was absorbed into the military Special Air Service, Lassen would go on to fight in North Africa, Crete, the Aegean islands, then occupied by Italian and German troops, and mainland Greece. His raid on a German airbase on occupied Crete took out fighters and bombers which could have opposed the Allied landings in Sicily. Later, his small group of raiders, unsupported by any other force, liberated the Greek city of Salonika, bluffing the German commander into believing Lassen's forty raiders and two fishing boats were actually a British corps of thirty thousand men, with armour, artillery, and naval support.

After years of raiding in peripheral theatres, Lassen hungered to get into the “big war”, and ended up in Italy, where his irregular form of warfare and disdain for military discipline created friction with his superiors. But he got results, and his unit was tasked with reconnaissance and pathfinding for an Allied crossing of Lake Comacchio (actually, more of a swamp) in Operation Roast in the final days of the war. It was there he was to meet his end, in a fierce engagement against Nazi troops defending the north shore. For this, he posthumously received the Victoria Cross, becoming the only non-Commonwealth citizen so honoured in World War II.

It is a cliché to say that a work of history “reads like a thriller”, but in this case it is completely accurate. The description of the raid on the Kastelli airbase on Crete would, if made into a movie, probably cause many viewers to suspect it to be fictionalised, but that's what really happened, based upon after action reports by multiple participants and aerial reconnaissance after the fact.

World War II was a global conflict, and while histories often focus on grand battles such as D-day, Stalingrad, Iwo Jima, and the fall of Berlin, there was heroism in obscure places such as the Greek islands which also contributed to the victory, and combatants operating in the shadows behind enemy lines who did their part and often paid the price for the risks they willingly undertook. This is a stirring story of this shadow war, told through the short life of one of its heroes.


April 2018

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems: physical, economic, financial, and social, which perform well in the face of uncertainty and infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.
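Taleb's distinction can be given a toy numerical form: a fragile system responds concavely to stressors (every deviation hurts), a robust one is indifferent, and an antifragile one responds convexly (deviations help on net, by Jensen's inequality). A minimal sketch, with the three response functions invented purely for illustration:

```python
import random

random.seed(42)

def average_outcome(response, shocks):
    """Mean outcome of a system with the given response to random shocks."""
    return sum(response(s) for s in shocks) / len(shocks)

# Symmetric random stressors centred on zero: pure volatility, no trend.
shocks = [random.gauss(0, 1.0) for _ in range(100_000)]

fragile = lambda s: -s * s          # concave: every deviation hurts
robust = lambda s: 0.0              # indifferent to shocks
antifragile = lambda s: s * s       # convex: deviations help on net

print("fragile:    ", average_outcome(fragile, shocks))      # negative
print("robust:     ", average_outcome(robust, shocks))       # zero
print("antifragile:", average_outcome(antifragile, shocks))  # positive
```

The shocks average to zero, yet the three systems fare very differently: only the convex one profits from volatility it could not predict, which is the essence of the antonym Taleb is after.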

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which fall behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant industry as a whole is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners; at any moment, one or more among the current, ever-changing population of restaurants will cater to those whims.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile, since while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain dislocations wherever two crystals meet, with each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
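A minimal sketch of simulated annealing applied to a small random travelling salesman instance follows; the 2-opt move, the geometric cooling schedule, and all parameters are illustrative choices, not those of the document referred to above. Note the two roles randomness plays: random moves explore the state space, and the temperature-dependent acceptance of occasionally *worse* tours is what lets the search escape local minima, just as thermal motion does in annealing metal.

```python
import math
import random

random.seed(1)

def tour_length(tour, cities):
    """Total length of a closed tour visiting the cities in order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, steps=20_000, t_start=1.0, t_end=1e-3):
    """Simulated annealing for the travelling salesman problem.

    Random 2-opt moves are always accepted when they shorten the tour,
    and accepted with probability exp(-delta/T) when they lengthen it;
    the 'temperature' T cools geometrically from t_start to t_end.
    """
    n = len(cities)
    tour = list(range(n))
    random.shuffle(tour)
    cur_len = tour_length(tour, cities)
    best, best_len = tour[:], cur_len
    cooling = (t_end / t_start) ** (1.0 / steps)
    t = t_start
    for _ in range(steps):
        i, j = sorted(random.sample(range(n), 2))
        # 2-opt move: reverse the segment between positions i and j.
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(candidate, cities)
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = candidate, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len

cities = [(random.random(), random.random()) for _ in range(25)]
tour, length = anneal_tsp(cities)
print(f"tour length after annealing: {length:.3f}")
```

Cool the temperature too quickly and the tour "freezes" into a mediocre local minimum; keep it too hot and the search never settles: antifragility in miniature, since the method doesn't merely tolerate randomness but requires it.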

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly-optimised to accomplish their goals with the minimum resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, at best they will survive them; unlike those of nature, they will rarely benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (while pocketing huge salaries and bonuses, while bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets, and when the foundation of the system started to crumble, suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”; otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to cash in: exercise the option, paying US$1500 for the 100 shares, and immediately sell them at US$19/share, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)
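The arithmetic in this example is easy to check mechanically. Here is a tiny sketch of the payoff calculation (the function name is mine, for illustration); it ignores real-world frictions such as commissions, bid/ask spreads, and the time value a live option retains before expiry.

```python
def call_profit(strike, spot, premium, shares=100):
    """Net profit from a call held to expiry: intrinsic value minus premium paid."""
    intrinsic = max(spot - strike, 0.0) * shares
    return intrinsic - premium

# The SLUJ example: strike 15, stock at 19, US$7 premium for the contract.
profit = call_profit(strike=15.0, spot=19.0, premium=7.0)   # 100*(19-15) - 7 = 393.0
pct_return = 100.0 * profit / 7.0                           # roughly 5614%

# If SLUJ never reaches the strike, the option expires worthless:
loss = call_profit(strike=15.0, spot=14.0, premium=7.0)     # -7.0: out the seven bucks
```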

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15, I pocket an immediate premium of seven cents per share, increasing my income from owning the stock by a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I bought a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a so-called “long straddle” (with differing strikes, as here, the position is often called a “strangle”). Now, as long as the price remains between the strike prices of the put and the call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, that option will be in the money, paying off more the further the underlying price moves outside the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.
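The payoff profile at expiry can be sketched in a few lines (a simplified, per-share illustration of my own, ignoring time value and transaction costs; the function name and the strike figures are hypothetical):

```python
def volatility_position(spot_at_expiry, put_strike, call_strike, total_premium):
    """Per-share profit at expiry of a long out-of-the-money put plus call."""
    put_value = max(put_strike - spot_at_expiry, 0.0)
    call_value = max(spot_at_expiry - call_strike, 0.0)
    return put_value + call_value - total_premium

# Calm market: spot stays between the strikes; lose only the premium paid.
quiet = volatility_position(100.0, put_strike=80.0, call_strike=120.0, total_premium=5.0)  # -5.0
# Shock in either direction: the payoff grows with the size of the move.
crash = volatility_position(55.0,  put_strike=80.0, call_strike=120.0, total_premium=5.0)  # 20.0
spike = volatility_position(150.0, put_strike=80.0, call_strike=120.0, total_premium=5.0)  # 25.0
```

The small, bounded loss in quiet periods is the price of the large payoff when a shock arrives: the signature of an antifragile position.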

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described, who was clobbered in 2008, could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting them usually to expire worthless; but in circumstances like those which halved the value of his portfolio, they would have paid off enough to compensate for the shock. (If worried only about a plunge, he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)

I have only described a small fraction of the many topics covered in this masterpiece, and described none of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). Fragility and antifragility are among those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?” and have the tools to answer the question. Further, it provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in and figure out how what you've learned applies to your own life and act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”

 Permalink

May 2018

Radin, Dean. Real Magic. New York: Harmony Books, 2018. ISBN 978-1-5247-5882-0.
From its beginnings in the 19th century as “psychical research”, there has always been something dodgy and disreputable about parapsychology: the scientific study of phenomena, frequently reported across all human cultures and history, such as clairvoyance, precognition, telepathy, communication with the dead or non-material beings, and psychokinesis (mental influence on physical processes). All of these disparate phenomena have in common that there is no known physical theory which can explain how they might work. In the 19th century, science was much more willing to proceed from observations and evidence, then try to study them under controlled conditions, and finally propose and test theories about how they might work. Today, many scientists are inclined to put theory first, rejecting any evidence of phenomena for which no theory exists to explain them.

In such an intellectual environment, those who study such things, now called parapsychologists, have been, for the most part, very modest in their claims, careful to distinguish their laboratory investigations, mostly involving ordinary subjects, from extravagant reports of shamans and psychics, whether contemporary or historical, and scrupulous in the design and statistical analysis of their experiments. One leader in the field is Dean Radin, author of the present book, and four times president of the Parapsychological Association, a professional society which is an affiliate of the American Association for the Advancement of Science. Dr. Radin is chief scientist at the Institute of Noetic Sciences in Petaluma, California, where he pursues laboratory research in parapsychology. In his previous books, including Entangled Minds (August 2007), he presents the evidence for various forms of human perception which seem to defy conventional explanation. He refrains from suggesting mechanisms or concluding whether what is measured is causation or correlation. Rather, he argues that the body of accumulated evidence from his work and that of others, in recent experiments conducted under the strictest protocols to eliminate possible fraud, post-selection of data, and with blinding and statistical rigour which often exceed those of clinical trials of pharmaceuticals, provides evidence that “something is going on” which we don't understand that would be considered discovery of a new phenomenon if it originated in a “hard science” field such as particle physics.

Here, Radin argues that the accumulated evidence for the phenomena parapsychologists have been studying in the laboratory for decades is so persuasive to all except sceptics whom no amount of evidence would persuade, that it is time for parapsychologists and those interested in their work to admit that what they're really studying is magic. “Not the fictional magic of Harry Potter, the feigned magic of Harry Houdini, or the fraudulent magic of con artists. Not blue lightning bolts springing from the fingertips, aerial combat on broomsticks, sleight-of-hand tricks, or any of the other elaborations of artistic license and special effects.” Instead, real magic, as understood for millennia, which he divides into three main categories:

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness, such as mediums channelling spirits, communicating with the dead, or summoning demons.

As Radin describes, it was only after years of work in parapsychology that he finally figured out why it is that, while according to a 2005 Gallup poll, 75% of people in the United States believe in one or more phenomena considered “paranormal”, only around 0.001% of scientists are engaged in studying these experiences. What's so frightening, distasteful, or disreputable about them? It's because they all involve some kind of direct interaction between human consciousness and the objective, material world or, in other words, magic. Scientists are uncomfortable enough with consciousness as it is: they don't have any idea how it emerges from what, in their reductionist models, is a computer made of meat, to the extent that some scientists deny the existence of consciousness entirely and dismiss it as a delusion. (Indeed, studying the origin of consciousness is almost as disreputable in academia as parapsychology.)

But if we must admit the existence of this mysterious thing called consciousness, along with other messy concepts such as free will, at least we must keep it confined within the skull: not roaming around and directly perceiving things far away or in the future, affecting physical events, or existing independent of brains. That would be just too weird.

And yet most religions, from those of traditional societies to the most widely practiced today, include descriptions of events and incorporate practices which are explicitly magical according to Radin's definition. Paragraphs 2115–2117 of the Catechism of the Roman Catholic Church begin by stating that “God can reveal the future to his prophets or to other saints.” and then go on to prohibit “Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums…”. But if these things did not exist, or did not work, then why would there be a need to forbid them? Perhaps it's because, despite religion's incorporating magic into its belief system and practices, it also wishes to enforce a monopoly on the use of magic among its believers—in Radin's words, “no magic for you!”

In fact, as stated at the beginning of chapter 4, “Magic is to religion as technology is to science.” Just as science provides an understanding of the material world which technology applies in order to accomplish goals, religion provides a model of the spiritual world which magic provides the means to employ. From antiquity to the present day, religion and magic have been closely associated with one another, and many religions have restricted knowledge of their magical components and practices to insiders and banned others knowing or employing them. Radin surveys this long history and provides a look at contemporary, non-religious, practice of the three categories of real magic.

He then turns to what is, in my estimation, the most interesting and important part of the book: the scientific evidence for the existence of real magic. A variety of laboratory experiments, many very recent and with careful design and controls, illustrate the three categories and explore subtle aspects of their behaviour. For example, when people precognitively sense events in the future, do they sense a certain event which is sure to happen, or the most probable event whose occurrence might be averted through the action of free will? How on Earth would you design an experiment to test that? It's extremely clever, and the results are interesting and have deep implications.

If ordinary people can demonstrate these seemingly magical powers in the laboratory (albeit with small, yet statistically highly significant effect sizes), are there some people whose powers are much greater? That is the case for most human talents, whether athletic, artistic, or intellectual; one suspects it might be so here. Historical and contemporary evidence for “Merlin-class magicians” is reviewed, not as proof for the existence of real magic, but as what might be expected if it did exist.

What is science to make of all of this? Mainstream science, if it mentions consciousness at all, usually considers it an emergent phenomenon at the tip of a pyramid of more fundamental sciences such as biology, chemistry, and physics. But what if we've got it wrong, and consciousness is not at the top but the bottom: ultimately everything emerges from a universal consciousness of which our individual consciousness is but a part, and of which all parts are interconnected? These are precisely the tenets of a multitude of esoteric traditions developed independently by cultures all around the world and over millennia, all of whom incorporated some form of magic into their belief systems. Maybe, as evidence for real magic emerges from the laboratory, we'll conclude they were on to something.

This is an excellent look at the deep connections between traditional beliefs in magic and modern experiments which suggest those beliefs, however much they appear to contradict dogma, may be grounded in reality. Readers who are unacquainted with modern parapsychological research and the evidence it has produced probably shouldn't start here, but rather with the author's earlier Entangled Minds, as it provides detailed information about the experiments, results, and responses to criticism of them which are largely assumed as the foundation for the arguments here.

 Permalink

Kroese, Robert. Schrödinger's Gat. Seattle: CreateSpace, 2012. ISBN 978-1-4903-1821-9.
It was pure coincidence (or was it?) that caused me to pick up this book immediately after finishing Dean Radin's Real Magic (May 2018), but it is a perfect fictional companion to that work. Robert Kroese, whose Starship Grifters (February 2018) is the funniest science fiction novel I've read in the last several years, here delivers a tour de force grounded in quantum theory, multiple worlds, free will, the nature of consciousness, determinism versus uncertainty, the nature of genius, and the madness which can result from thinking too long and deeply about these enigmatic matters. This is a novel, not a work of philosophy or physics, and the story moves along smartly with interesting characters including a full-on villain and an off-stage…well, we're not really sure. In a postscript, the author explicitly lists the “cheats” he used to make the plot work but notes, “The remarkable thing about writing this book was how few liberties I actually had to take.”

The story is narrated by Paul Bayes (whose name should be a clue we're about to ponder what we can know in an uncertain world), whom we meet as he is ready to take his life by jumping under a BART train at a Bay Area station. Paul considers himself a failure: failed crime writer, failed father whose wife divorced him and took the kids, and undistinguished high school English teacher with little hope of advancement. Perhaps contributing to his career problems, Paul is indecisive. Kill himself or just walk away—why not flip a coin? Paul's life is spared through the intervention of a mysterious woman whom he impulsively follows on a madcap adventure which ends up averting a potential mass murder on San Francisco's Embarcadero. Only afterward does he learn her name: Tali. She agrees to meet him for dinner the next day and explain everything.

Paul shows up at the restaurant, but Tali doesn't. Has he been stood up? He knows next to nothing about Tali—not even her last name—but after some time on the Internet following leads from their brief conversation the day before, he discovers a curious book by a recently-retired Stanford physics professor titled Fate and Consciousness—hardly the topics you'd expect one with his background to expound upon. After reading some of the odd text, he decides to go to the source.

This launches Paul into a series of adventures which cause him to question the foundations of reality: to what extent do we really have free will, and how much are the mindless gears of determinism turning toward the inevitable? Why does the universe seem to “fight back” when we try to impose our will upon it? Is there a “force”, and can we detect disturbances in it and act upon them? (The technology described in the story is remarkably similar to one I have contributed to developing and deploying off and on for the last twenty years.) If such a thing could be done, who might be willing to kill to obtain the power it would confer? Is the universe a passive player in the unfolding of the future, or an active and potentially ruthless agent?

All of these questions are explored in a compelling story with plenty of action as Paul grapples with the mysteries confronting him, incorporating prior discoveries into the emerging picture. This is an entertaining, rewarding, and thought-provoking read which, although entirely fiction, may not be any more weird than the universe we inhabit.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Skousen, W. Cleon. The Naked Communist. Salt Lake City: Izzard Ink, [1958, 1964, 1966, 1979, 1986, 2007, 2014] 2017. ISBN 978-1-5454-0215-3.
In 1935 the author joined the FBI in a clerical position while attending law school at night. In 1940, after receiving his law degree, he was promoted to Special Agent and continued in that capacity for the rest of his 16 year career at the Bureau. During the postwar years, one of the FBI's top priorities was investigating and responding to communist infiltration and subversion of the United States, a high priority of the Soviet Union. During his time at the FBI Skousen made the acquaintance of several of the FBI's experts on communist espionage and subversion, but he perceived a lack of information, especially available to the general public, which explained communism: where did it come from, what are its philosophical underpinnings, what do communists believe, what are their goals, and how do they intend to achieve them?

In 1951, Skousen left the FBI to take a teaching position at Brigham Young University in Provo, Utah. In 1957, he accepted an offer to become Chief of Police in Salt Lake City, a job he held for the next three and a half years before being fired after raiding an illegal poker game in which newly-elected mayor J. Bracken Lee was a participant. During these years, Skousen continued his research on communism, mostly consulting original sources. By 1958, his book was ready for publication. After struggling to find a title, he settled on “The Naked Communist”, suggested by film producer and ardent anti-communist Cecil B. DeMille.

Spurned by the major publishers, Skousen paid for printing the first edition of 5000 copies out of his own pocket. Sales were initially slow, but quickly took off. Within two years of the book's launch, press runs were 10,000 to 20,000 copies with one run of 50,000. In 1962, the book passed the milestone of one million copies in print. As the 1960s progressed and it became increasingly unfashionable to oppose communist tyranny and enslavement, sales tapered off, but picked up again after the publication of a 50th anniversary edition in 2008 (a particularly appropriate year for such a book).

This 60th anniversary edition, edited and with additional material by the author's son, Paul B. Skousen, contains most of the original text with a description of the history of the work and additions bringing events up to date. It is sometimes jarring when you transition from text written in 1958 to that from the standpoint of more than a half century hence, but for the most part it works. One of the most valuable parts of the book is its examination of the intellectual foundations of communism in the work of Marx and Engels. Like the dogma of many other cults, these ideas don't stand up well to critical scrutiny, especially in light of what we've learned about the universe since they were proclaimed. Did you know that Engels proposed a specific theory of the origin of life based upon his concepts of Dialectical Materialism? It was nonsense then and it's nonsense now, but it's still in there. What's more, this poppycock is at the centre of the communist theories of economics, politics, and social movements, where it makes no more sense than in the realm of biology and has been disastrous every time some society was foolish enough to try it.

All of this would be a historical curiosity were it not for the fact that communists, notwithstanding their running up a body count of around a hundred million in the countries where they managed to come to power, and having impoverished people around the world, have managed to burrow deep into the institutions of the West: academia, media, politics, judiciary, and the administrative state. They may not call themselves communists (it's “social democrats”, “progressives”, “liberals”, and other terms, moving on after each one becomes discredited due to the results of its policies and the borderline insanity of those who so identify), but they have been patiently putting the communist agenda into practice year after year, decade after decade. What is that agenda? Let's see.

In the 8th edition of this book, published in 1961, the following “forty-five goals of Communism” were included. Derived by the author from the writings of current and former communists and testimony before Congress, many seemed absurd or fantastically overblown to readers at the time. The complete list, which follows, was read into the Congressional Record in 1963, placing it in the public domain.

Goals of Communism

  1. U.S. acceptance of coexistence as the only alternative to atomic war.
  2. U.S. willingness to capitulate in preference to engaging in atomic war.
  3. Develop the illusion that total disarmament by the United States would be a demonstration of moral strength.
  4. Permit free trade between all nations regardless of Communist affiliation and regardless of whether or not items could be used for war.
  5. Extension of long-term loans to Russia and Soviet satellites.
  6. Provide American aid to all nations regardless of Communist domination.
  7. Grant recognition of Red China. Admission of Red China to the U.N.
  8. Set up East and West Germany as separate states in spite of Khrushchev's promise in 1955 to settle the German question by free elections under supervision of the U.N.
  9. Prolong the conferences to ban atomic tests because the United States has agreed to suspend tests as long as negotiations are in progress.
  10. Allow all Soviet satellites individual representation in the U.N.
  11. Promote the U.N. as the only hope for mankind. If its charter is rewritten, demand that it be set up as a one-world government with its own independent armed forces. (Some Communist leaders believe the world can be taken over as easily by the U.N. as by Moscow. Sometimes these two centers compete with each other as they are now doing in the Congo.)
  12. Resist any attempt to outlaw the Communist Party.
  13. Do away with all loyalty oaths.
  14. Continue giving Russia access to the U.S. Patent Office.
  15. Capture one or both of the political parties in the United States.
  16. Use technical decisions of the courts to weaken basic American institutions by claiming their activities violate civil rights.
  17. Get control of the schools. Use them as transmission belts for socialism and current Communist propaganda. Soften the curriculum. Get control of teachers' associations. Put the party line in textbooks.
  18. Gain control of all student newspapers.
  19. Use student riots to foment public protests against programs or organizations which are under Communist attack.
  20. Infiltrate the press. Get control of book-review assignments, editorial writing, policymaking positions.
  21. Gain control of key positions in radio, TV, and motion pictures.
  22. Continue discrediting American culture by degrading all forms of artistic expression. An American Communist cell was told to “eliminate all good sculpture from parks and buildings, substitute shapeless, awkward and meaningless forms.”
  23. Control art critics and directors of art museums. “Our plan is to promote ugliness, repulsive, meaningless art.”
  24. Eliminate all laws governing obscenity by calling them “censorship” and a violation of free speech and free press.
  25. Break down cultural standards of morality by promoting pornography and obscenity in books, magazines, motion pictures, radio, and TV.
  26. Present homosexuality, degeneracy and promiscuity as “normal, natural, healthy.”
  27. Infiltrate the churches and replace revealed religion with “social” religion. Discredit the Bible and emphasize the need for intellectual maturity which does not need a “religious crutch.”
  28. Eliminate prayer or any phase of religious expression in the schools on the ground that it violates the principle of “separation of church and state.”
  29. Discredit the American Constitution by calling it inadequate, old-fashioned, out of step with modern needs, a hindrance to cooperation between nations on a worldwide basis.
  30. Discredit the American Founding Fathers. Present them as selfish aristocrats who had no concern for the “common man.”
  31. Belittle all forms of American culture and discourage the teaching of American history on the ground that it was only a minor part of the “big picture.” Give more emphasis to Russian history since the Communists took over.
  32. Support any socialist movement to give centralized control over any part of the culture—education, social agencies, welfare programs, mental health clinics, etc.
  33. Eliminate all laws or procedures which interfere with the operation of the Communist apparatus.
  34. Eliminate the House Committee on Un-American Activities.
  35. Discredit and eventually dismantle the FBI.
  36. Infiltrate and gain control of more unions.
  37. Infiltrate and gain control of big business.
  38. Transfer some of the powers of arrest from the police to social agencies. Treat all behavioral problems as psychiatric disorders which no one but psychiatrists can understand or treat.
  39. Dominate the psychiatric profession and use mental health laws as a means of gaining coercive control over those who oppose Communist goals.
  40. Discredit the family as an institution. Encourage promiscuity and easy divorce.
  41. Emphasize the need to raise children away from the negative influence of parents. Attribute prejudices, mental blocks and retarding of children to suppressive influence of parents.
  42. Create the impression that violence and insurrection are legitimate aspects of the American tradition; that students and special-interest groups should rise up and use “united force” to solve economic, political or social problems.
  43. Overthrow all colonial governments before native populations are ready for self-government.
  44. Internationalize the Panama Canal.
  45. Repeal the Connally Reservation so the US cannot prevent the World Court from seizing jurisdiction over domestic problems. Give the World Court jurisdiction over nations and individuals alike.

In chapter 13 of the present edition, a copy of this list is reproduced with commentary on the extent to which these goals have been accomplished as of 2017. What's your scorecard? How many of these seem extreme or unachievable from today's perspective?

When Skousen was writing his book, the world seemed divided into two camps: one communist and the other committed (more or less) to personal and economic liberty. In the free world, there were those advancing the cause of the collectivist slavers, but mostly covertly. What is astonishing today is that, despite more than a century of failure and tragedy resulting from communism, there are more and more who openly advocate for it or its equivalents (or an even more benighted medieval ideology masquerading as a religion which shares communism's disregard for human life and liberty, and willingness to lie, cheat, discard treaties, and murder to achieve domination).

When advocates of this deadly cult of slavery and death are treated with respect while those who defend the Enlightenment values of life, liberty, and property are silenced, this book is needed more than ever.

 Permalink

Thor, Brad. Use of Force. New York: Atria Books, 2017. ISBN 978-1-4767-8939-2.
This is the seventeenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). As this book begins, Scot Harvath, operative for the Carlton Group, a private outfit that does “the jobs the CIA won't do”, is under cover at the Burning Man festival in the Black Rock Desert of Nevada. He and his team are tracking a terrorist thought to be conducting advance surveillance for attacks within the U.S. Only as the operation unfolds does he realise he's walked into the middle of a mass casualty attack already in progress. He manages to disable his target, but another suicide bomber detonates in a crowded area, with many dead and injured.

Meanwhile, following the capsizing of a boat smuggling “migrants” into Sicily, the body of a much-wanted and long-sought terrorist chemist, known to be researching chemical and biological weapons of mass destruction, is fished out of the Mediterranean. Why would he, after flying under the radar for years in the Near East and Maghreb, be heading to Europe? The CIA reports, “Over the last several months, we've been picking up chatter about an impending series of attacks, culminating in something very big, somewhere in Europe” … “We think that whatever he was planning, it's ready to go operational.”

With no leads other than knowledge from a few survivors of the sinking that the boat sailed from Libya and the name of the migrant smuggler who arranged their passage, Harvath sets off under cover to that country to try to find who arranged the chemist's passage and his intended destination in Europe. Accompanied by his pick-up team from Burning Man (given the urgency, there wasn't time to recruit one more familiar with the region), Harvath begins, in his unsubtle way, to locate the smuggler and find out what he knows. Unfortunately, as is so often the case in such operations, there is somebody else with the team who doesn't figure in its official roster—a fellow named Murphy.

Libya is chaotic and dangerous enough under any circumstances, but when you whack the hornets' nest, things can get very exciting in short order, and not in a good way. Harvath and his team find themselves in a mad chase and shoot-out, and having to summon assets which aren't supposed to be there, in order to survive.

Meanwhile, another savage terrorist attack in Europe has confirmed the urgency of the threat and that more are likely to come. And back in the imperial capital, intrigue within the CIA seems aimed at targeting Harvath's boss and the head of the operation. Is it connected somehow? It's time to deploy the diminutive super-hacker Nicholas and one of the CIA's most secret and dangerous computer security exploits in a honeypot operation to track down the source of the compromise.

If it weren't bad enough being chased by Libyan militias while trying to unravel an ISIS terror plot, Harvath soon finds himself in the lair of the Calabrian Mafia, and being thwarted at every turn by civil servants insisting he play by the rules when confronting those who make their own rules. Finally, multiple clues begin to limn the outline of the final attack, and it is dire indeed. Harvath must make an improbable and uneasy alliance to confront it.

The pacing of the book is somewhat odd. There is a tremendous amount of shoot-’em-up action in the middle, but as the conclusion approaches and the ultimate threat must be dealt with, it's as if the author felt himself running out of typewriter ribbon (anybody remember what that was?) and having to wind things up in just a few pages. Were I his editor, I'd have suggested trimming some of the detail in the middle and making the finale more suspenseful. But then, what do I know? Brad Thor has sold nearly fifteen million books, and I haven't. This is a perfectly workable thriller which will keep you turning the pages, but I didn't find it as compelling as some of his earlier novels. The attention to detail and accuracy are, as one has come to expect, superb. You don't need to have read any of the earlier books in the series to enjoy this one; what few details you need to know are artfully mentioned in passing.

The next installment in the Scot Harvath saga, Spymaster, will be published in July 2018.

 Permalink

Hanson, Victor Davis. The Second World Wars. New York: Basic Books, 2017. ISBN 978-0-465-06698-8.
This may be the best single-volume history of World War II ever written. While it does not get into the low-level details of the war or its individual battles (don't expect to see maps with boxes, front lines, and arrows), it provides an encyclopedic view of the first truly global conflict with a novel and stunning insight every few pages.

Nothing like World War II had ever happened before and, thankfully, has not happened since. While earlier wars may have seemed to those involved in them as involving all of the powers known to them, they were at most regional conflicts. By contrast, in 1945, there were only eleven countries in the entire world which were neutral—not engaged on one side or the other. (There were, of course, far fewer countries then than now—most of Africa and South Asia were involved as colonies of belligerent powers in Europe.) And while war had traditionally been a matter for kings, generals, and soldiers, in this total war the casualties were overwhelmingly (70–80%) civilian. Far from being confined to battlefields, many of the world's great cities, from Amsterdam to Yokohama, were bombed, shelled, or besieged, often with disastrous consequences for their inhabitants.

“Wars” in the title refers to Hanson's observation that what we call World War II was, in reality, a collection of often unrelated conflicts which happened to occur at the same time. The settling of ethnic and territorial scores across borders in Europe had nothing to do with Japan's imperial ambitions in China, or Italy's in Africa and Greece. It was sometimes difficult even to draw a line dividing the two sides in the war. Japan occupied colonies in Indochina under the administration of Vichy France, notwithstanding Japan and Vichy both being nominal allies of Germany. The Soviet Union, while making a massive effort to defeat Nazi Germany on the land, maintained a non-aggression pact with Axis power Japan until days before its surrender and denied use of air bases in Siberia to Allied air forces for bombing campaigns against the home islands.

Combatants in different theatres might well have been fighting in entirely different wars, and sometimes in different centuries. Air crews on long-range bombing missions above Germany and Japan had nothing in common with Japanese and British forces slugging it out in the jungles of Burma, nor with attackers and defenders fighting building to building in the streets of Stalingrad, or armoured combat in North Africa, or the duel of submarines and convoys to keep the Atlantic lifeline between the U.S. and Britain open, or naval battles in the Pacific, or the amphibious landings on islands they supported.

World War II did not start as a global war, and did not become one until the German invasion of the Soviet Union and the Japanese attack on U.S., British, and Dutch territories in the Pacific. Prior to those events, it was a collection of border wars, launched by surprise by Axis powers against weaker neighbours which were, for the most part, successful. Once what Churchill called the Grand Alliance (Britain, the Soviet Union, and the United States) was forged, the outcome was inevitable, yet the road to victory was long and costly, and its length impossible to foresee at the outset.

The entire war was unnecessary, and its horrific cost can be attributed to a failure of deterrence. From the outset, there was no way the Axis could have won. If, as seemed inevitable, the U.S. were to become involved, none of the Axis powers possessed the naval or air resources to strike the U.S. mainland, much less contemplate invading and occupying it. While all of Germany and Japan's industrial base and population were, as the war progressed, open to bombardment day and night by long-range, four-engine, heavy bombers escorted by long-range fighters, the Axis possessed no aircraft which could reach the cities of the U.S. east coast, the oil fields of Texas and Oklahoma, or the industrial base of the midwest. While the U.S. and Britain fielded aircraft carriers which allowed them to project power worldwide, Germany and Italy had no effective carrier forces and Japan's were reduced by constant attacks by U.S. aviation.

This correlation of forces was known before the outbreak of the war. Why did Japan and then Germany launch wars which were almost certain to result in forces ranged against them which they could not possibly defeat? Hanson attributes it to a mistaken belief that, to use Hitler's terminology, the will would prevail. The West had shown itself unwilling to effectively respond to aggression by Japan in China, Italy in Ethiopia, and Germany in Czechoslovakia, and Axis leaders concluded from this, catastrophically for their populations, that despite their industrial, demographic, and strategic military weakness, there would be no serious military response to further aggression (the “bore war” which followed the German invasion of Poland and the declarations of war on Germany by France and Britain had to reinforce this conclusion). Hanson observes, writing of Hitler, “Not even Napoleon had declared war in succession on so many great powers without any idea how to destroy their ability to make war, or, worse yet, in delusion that tactical victories would depress stronger enemies into submission.” Of the Japanese, who attacked the U.S. with no credible capability or plan for invading and occupying the U.S. homeland, he writes, “Tojo was apparently unaware or did not care that there was no historical record of any American administration either losing or quitting a war—not the War of 1812, the Mexican War, the Civil War, the Spanish American War, or World War I—much less one that Americans had not started.” (Maybe they should have waited a few decades….)

Compounding the problems of the Axis was that it was essentially an alliance in name only. There was little or no co-ordination among its parties. Hitler provided Mussolini no advance notice of the attack on the Soviet Union. Mussolini did not warn Hitler of his attacks on Albania and Greece. The Japanese attack on Pearl Harbor was as much a surprise to Germany as to the United States. Japanese naval and air assets played no part in the conflict in Europe, nor did German technology and manpower contribute to Japan's war in the Pacific. By contrast, the Allies rapidly settled on a division of labour: the Soviet Union would concentrate on infantry and armoured warfare (indeed, four out of five German soldiers who died in the war were killed by the Red Army), while Britain and the U.S. would deploy their naval assets to blockade the Axis, keep the supply lines open, and deliver supplies to the far-flung theatres of the war. U.S. and British bomber fleets attacked strategic targets and cities in Germany day and night. The U.S. became the untouchable armoury of the alliance, delivering weapons, ammunition, vehicles, ships, aircraft, and fuel in quantities which eventually surpassed those of all other combatants on both sides combined. Britain and the U.S. shared technology and cooperated in its development in areas such as radar, antisubmarine warfare, aircraft engines (including jet propulsion), and nuclear weapons, and shared intelligence gleaned from British codebreaking efforts.

As a classicist, Hanson examines the war in its incarnations in each of the elements of antiquity: Earth (infantry), Air (strategic and tactical air power), Water (naval and amphibious warfare), and Fire (artillery and armour), and adds People (supreme commanders, generals, workers, and the dead). He concludes by analysing why the Allies won and what they ended up winning—and losing. Britain lost its empire and position as a great power (although due to internal and external trends, that might have happened anyway). The Soviet Union ended up keeping almost everything it had hoped to obtain through its initial partnership with Hitler. The United States emerged as the supreme economic, industrial, technological, and military power in the world and promptly entangled itself in a web of alliances which would cause it to underwrite the defence of countries around the world and involve it in foreign conflicts far from its shores.

Hanson concludes,

The tragedy of World War II—a preventable conflict—was that sixty million people had perished to confirm that the United States, the Soviet Union, and Great Britain were far stronger than the fascist powers of Germany, Japan, and Italy after all—a fact that should have been self-evident and in no need of such a bloody laboratory, if not for prior British appeasement, American isolationism, and Russian collaboration.

At 720 pages, this is not a short book (the main text is 590 pages; the rest are sources and end notes), but there is so much wisdom and startling insights among those pages that you will be amply rewarded for the time you spend reading them.

 Permalink

Brown, Dan. Origin. New York: Doubleday, 2017. ISBN 978-0-385-51423-1.
Ever since the breakthrough success of Angels & Demons, his first mystery/thriller novel featuring Harvard professor and master of symbology Robert Langdon, Dan Brown has found a formula which turns arcane and esoteric knowledge, exotic and picturesque settings, villains with grandiose ambitions, and plucky female characters into bestsellers, two of which, The Da Vinci Code and Angels & Demons, have been adapted into Hollywood movies.

This is the fifth novel in the Robert Langdon series. After reading the fourth, Inferno (May 2013), it struck me that Brown's novels have become so formulaic they could probably be generated by an algorithm. Since artificial intelligence figures in the present work, in lieu of a review, which would be difficult to write without spoilers, here are the parameters to the Marinchip Turbo Digital™ Thriller Wizard to generate the story.

Villain: Edmond Kirsch, billionaire computer scientist and former student of Robert Langdon. Made his fortune from breakthroughs in artificial intelligence, neuroscience, and robotics.

Megalomaniac scheme: “end the age of religion and usher in an age of science”.

Buzzword technologies: artificial general intelligence, quantum computing.

Big Questions: “Where did we come from?”, “Where are we going?”.

Religious adversary: The Palmarian Catholic Church.

Plucky female companion: Ambra Vidal, curator of the Guggenheim Museum in Bilbao (Spain) and fiancée of the crown prince of Spain.

Hero or villain? Details would be a spoiler but, as always, there is one.

Contemporary culture tie-in: social media, an InfoWars-like site called ConspiracyNet.com.

MacGuffins: the 47-character password from Kirsch's favourite poem (but which?), the mysterious “Winston”, “The Regent”.

Exotic and picturesque locales: The Guggenheim Museum Bilbao, Casa Milà and the Sagrada Família in Barcelona, Valle de los Caídos near Madrid.

Enigmatic symbol: a typographical mark one must treat carefully in HTML.

When Edmond Kirsch is assassinated moments before playing his presentation which will answer the Big Questions, Langdon and Vidal launch into a quest to discover the password required to release the presentation to the world. The murder of two religious leaders to whom Kirsch revealed his discoveries in advance of their public disclosure stokes the media frenzy surrounding Kirsch and his presentation, and spawns conspiracy theories about dark plots to suppress Kirsch's revelations which may involve religious figures and the Spanish monarchy.

After perils, adventures, conflict, and clues hidden in plain sight, Startling Revelations leave Langdon Stunned and Shaken but Cautiously Hopeful for the Future.

When the next Dan Brown novel comes along, see how well it fits the template. This novel will appeal to people who like this kind of thing: if you enjoyed the last four, this one won't disappoint. If you're looking for plausible speculation on the science behind the big questions or the technological future of humanity, it probably will. Now that I know how to crank them out, I doubt I'll buy the next one when it appears.

 Permalink

Mercer, Ilana. Into the Cannibal's Pot. Mount Vernon, WA, 2011. ISBN 978-0-9849070-1-4.
The author was born in South Africa, the daughter of Rabbi Abraham Benzion Isaacson, a leader among the Jewish community in the struggle against apartheid. Due to her father's activism, the family, forced to leave the country, emigrated to Israel, where the author grew up. In the 1980s, she moved back to South Africa, where she married, had a daughter, and completed her university education. In 1995, following the first elections with universal adult suffrage which resulted in the African National Congress (ANC) taking power, she and her family emigrated to Canada with the proceeds of the sale of her apartment hidden in the soles of her shoes. (South Africa had adopted strict controls to prevent capital flight in the aftermath of the election of a black majority government.) After initially settling in British Columbia, her family subsequently emigrated to the United States where they reside today.

From the standpoint of a member of a small minority (the Jewish community) of a minority (whites) in a black majority country, Mercer has reason to be dubious of the much-vaunted benefits of “majority rule”. Describing herself as a “paleolibertarian”, her outlook is shaped not by theory but the experience of living in South Africa and the accounts of those who remained after her departure. For many in the West, South Africa scrolled off the screen as soon as a black majority government took power, but that was the beginning of the country's descent into violence, injustice, endemic corruption, expropriation of those who built the country and whose ancestors lived there since before the founding of the United States, and what can only be called a slow-motion genocide against the white farmers who were the backbone of the society.

Between 1994 and 2005, the white population of South Africa fell from 5.22 million to 4.37 million. Chief among the motivations for emigration have been an explosion of violent crime, often racially motivated and directed against whites; a policy of affirmative action which amounts to overt racial discrimination against whites; endemic corruption; and expropriation of businesses in the interest of “fairness”.

In the forty-four years of apartheid in South Africa from 1950 to 1993, there were a total of 309,583 murders in the country: an average of 7,036 per year. In the first eight years after the end of apartheid (1994–2001), under one-party black majority rule, 193,649 murders were reported, or 24,206 per year. And the latter figure is according to the statistics of the ANC-controlled South African Police Service, which both Interpol and the South African Medical Research Council say may be understated by as much as a factor of two. The United States is considered to be a violent country, with around 4.88 homicides per 100,000 people (by comparison, the rate in the United Kingdom is 0.92 and in Switzerland is 0.69). In South Africa, the figure is 34.27 (all estimates are 2015 figures from the United Nations Office on Drugs and Crime). And it isn't just murder: in South Africa, where 65 people are murdered every day, around 200 are raped and 300 are victims of assault and violent robbery.
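The per-year averages above are simple quotients of the totals; here is a quick sanity check of the arithmetic (all figures as quoted in the text):

```python
# Murder totals quoted above, with the periods they cover.
apartheid_murders, apartheid_years = 309_583, 44   # 1950–1993
post_apartheid_murders, post_years = 193_649, 8    # 1994–2001

per_year_apartheid = apartheid_murders / apartheid_years
per_year_post = post_apartheid_murders / post_years

print(round(per_year_apartheid))                     # → 7036
print(round(per_year_post))                          # → 24206
print(round(per_year_post / per_year_apartheid, 1))  # → 3.4
```

The reported annual murder rate thus more than tripled after the end of apartheid, even before allowing for the possible factor-of-two understatement.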

White farmers, mostly Afrikaner, have frequently been targets of violence. In the periods 1996–2007 and 2010–2016 (no data were published for the years 2008 and 2009), according to statistics from the South African Police Service (which may be understated), there were 11,424 violent attacks on farms in South Africa, with a total of 1609 homicides, in some cases killing entire farm families and some of their black workers. The motives for these attacks remain a mystery according to the government, whose leaders have been known to sing the stirring anthem “Kill the Boer” at party rallies. Farm attacks follow the pattern in Zimbabwe, where such attacks, condoned by the Mugabe regime, resulted in the emigration of almost all white farmers and the collapse of the country's agricultural sector (only 200 white farmers remain in the country, 5% of the number before black majority rule). In South Africa, white farmers who have not already emigrated find themselves trapped: they cannot sell to other whites who fear they would become targets of attacks and/or eventual expropriation without compensation, nor to blacks who expect they will eventually receive the land for free when it is expropriated.

What is called affirmative action in the U.S. is implemented in South Africa under the Black Economic Empowerment (BEE) programme, a set of explicitly racial preferences and requirements which cover most aspects of business operation including ownership, management, employment, training, supplier selection, and internal investment. Mining companies must cede co-ownership to blacks in order to obtain permits for exploration. Not surprisingly, in many cases the front men for these “joint ventures” are senior officials of the ruling ANC and their family members. So corrupt is the entire system that Archbishop Desmond Tutu, one of the most eloquent opponents of apartheid, warned that BEE has created a “powder keg”, where benefits accrue only to a small, politically-connected, black elite, leaving others in “dehumanising poverty”.

Writing from the perspective of one who got out of South Africa just at the point where everything started to go wrong (having anticipated in advance the consequences of pure majority rule) and settled in the U.S., Mercer then turns to the disturbing parallels between the two countries. Their histories are very different, and yet there are similarities and trends which are worrying. One fundamental problem with democracy is that people who would otherwise have to work for a living discover that they can vote for a living instead, and are encouraged in this by politicians who realise that a dependent electorate is a reliable electorate as long as the benefits continue to flow. Back in 2008, I wrote about the U.S. approaching a tipping point where nearly half of those who file income tax returns owe no income tax. At that point, among those who participate in the economy, there is a near-majority who pay no price for voting for increased government benefits paid for by others. It's easy to see how this can set off a positive feedback loop where the dependent population burgeons, the productive minority shrinks, the administrative state which extracts the revenue from that minority becomes ever more coercive, and those who channel the money from the producers to the dependent grow in numbers and power.

Another way to look at the tipping point is to compare the number of voters to taxpayers (those with income tax liability). In the U.S., this ratio is around two to one, which is dangerously susceptible to the calamity described above. Now consider that in South Africa, this ratio is eleven to one. Is it any wonder that under universal adult suffrage the economy of that country is in a down-spiral?

South Africa prior to 1994 was in an essentially intractable position. By encouraging black and later Asian immigration over its long history (most of the ancestors of black South Africans arrived after the first white settlers), it arrived at a situation where a small white population (less than 10%) controlled the overwhelming majority of the land and wealth, and retained almost all of the political power. This situation, and the apartheid system which sustained it (which the author and her family vehemently opposed) was unjust and rightly was denounced and sanctioned by countries around the globe. But what was to replace it? The experience of post-colonial Africa was that democracy almost always leads to “One man, one vote, one time”: a leader of the dominant ethnic group wins the election, consolidates power, and begins to eliminate rival groups, often harking back to the days of tribal warfare which preceded the colonial era, but with modern weapons and a corresponding death toll. At the same time, all sources of wealth are plundered and “redistributed”, not to the general population, but to the generals and cronies of the Great Man. As the country sinks into savagery and destitution, whites and educated blacks outside the ruling clique flee. (Indeed, South Africa has a large black illegal immigrant population made up of those who fled the Mugabe tyranny in Zimbabwe.)

Many expected this down-spiral to begin in South Africa soon after the ANC took power in 1994. The joke went, “What's the difference between Zimbabwe and South Africa? Ten years.” That it didn't happen immediately and catastrophically is a tribute to Nelson Mandela's respect for the rule of law and for his white partners in ending apartheid. But now he is gone, and a new generation of more radical leaders has replaced him. Increasingly, it seems like the punch line might be revised to be “Twenty-five years.”

The immediate priority one takes away from this book is the need to address the humanitarian crisis faced by the Afrikaner farmers who are being brutally murdered and face expropriation of their land without compensation as the regime becomes ever more radical. Civilised countries need to open immigration to this small, highly-productive, population. Due to persecution and denial of property rights, they may arrive penniless, but are certain to quickly become the backbone of the communities they join.

In the longer term, the U.S. and the rest of the Anglosphere and civilised world should be cautious and never indulge in the fantasy “it can't happen here”. None of these countries started out with the initial conditions of South Africa, but it seems like, over the last fifty years, much of their ruling class seems to have been bent on importing masses of third world immigrants with no tradition of consensual government, rule of law, or respect for property rights, concentrating them in communities where they can preserve the culture and language of the old country, and ensnaring them in a web of dependency which keeps them from climbing the ladder of assimilation and economic progress by which previous immigrant populations entered the mainstream of their adopted countries. With some politicians bent on throwing the borders open to savage, medieval, inbred “refugees” who breed much more rapidly than the native population, it doesn't take a great deal of imagination to see how the tragedy now occurring in South Africa could foreshadow the history of the latter part of this century in countries foolish enough to lay the groundwork for it now.

This book was published in 2011, but the trends it describes have only accelerated in subsequent years. It's an eye-opener to the risks of democracy without constraints or protection of the rights of minorities, and a warning to other nations of the grave risks they face should they allow opportunistic politicians to recreate the dire situation of South Africa in their own lands.

 Permalink

Schantz, Hans G. A Rambling Wreck. Huntsville, AL: ÆtherCzar, 2017. ISBN 978-1-5482-0142-5.
This is the second novel in the author's Hidden Truth series. In the first book (December 2017) we met high schoolers and best friends Pete Burdell and Amit Patel who found, in dusty library books, knowledge apparently discovered by the pioneers of classical electromagnetism (many of whom died young), but which does not figure in modern works, even purported republications of the original sources they had consulted. As they try to sort through the discrepancies, make sense of what they've found, and scour sources looking for other apparently suppressed information, they become aware that dark and powerful forces seem bent on keeping this seemingly obscure information hidden. People who dig too deeply have a tendency to turn up dead in suspicious “accidents”, and Amit coins the monicker “EVIL”: the Electromagnetic Villains International League, for their adversaries. Events turn personal and tragic, and Amit and Pete learn tradecraft, how to deal with cops (real and fake), and navigate the legal system with the aid of mentors worthy of a Heinlein story.

This novel finds the pair entering the freshman class at Georgia Tech—they're on their way to becoming “rambling wrecks”. Unable to pay their way with their own resources, Pete and Amit compete for and win full-ride scholarships funded by the Civic Circle, an organisation they suspect may be in cahoots in some way with EVIL. As a condition of their scholarship, they must take a course, “Introduction to Social Justice Studies” (the “Studies” should be tip-off enough) to become “social justice ambassadors” to the knuckle-walking Tech community.

Pete's Uncle Ron feared this might be a mistake, but Amit and Pete saw it as a way to burrow from within, starting their own “long march through the institutions”, and, incidentally, having a great deal of fun and, especially for Amit, an aspiring master of Game, meet radical chicks. Once at Tech, it becomes clear that the first battles they must fight relate not to 19th century electrodynamics but the 21st century social justice wars.

Pete's family name resonates with history and tradition at Tech. In the 1920s, with a duplicate enrollment form in hand, enterprising undergraduates signed up the fictitious “George P. Burdell” for a full course load, submitted his homework, took his exams, and saw him graduate in 1930. Burdell went on to serve in World War II, and was listed on the Board of Directors of Mad magazine. Whenever Georgia Tech alumni gather, it is not uncommon to hear George P. Burdell being paged. Amit and Pete decide the time has come to enlist the school's most famous alumnus in the battle for its soul, and before long the merry pranksters of FOG—Friends of George—were mocking and disrupting the earnest schemes of the social justice warriors.

Meanwhile, Pete has taken a job as a laboratory assistant and, examining data that shouldn't be interesting, discovers a new phenomenon which might just tie in with his and Amit's earlier discoveries. These investigations, as his professor warns, can also be perilous, and before long he and Amit find themselves dealing with three separate secret conspiracies vying for control over the hidden knowledge, which may be much greater and rooted deeper in history than they had imagined. Another enigmatic document by an obscure missionary named Angus MacGuffin (!), who came to a mysterious and violent end in 1940, suggests a unification of the enigmas. And one of the greatest mysteries of twentieth century physics, involving one of its most brilliant figures, may be involved.

This series is a bit of Golden Age science fiction which somehow dropped into the early 21st century. It is a story of mystery, adventure, heroes, and villains, with interesting ideas and technical details which are plausible. The characters are interesting and grow as they are tested and learn from their experiences. And the story is related with a light touch, with plenty of smiles and laughs at the expense of those who richly deserve mockery and scorn. This book is superbly done and a worthy sequel to the first. I eagerly await the next, The Brave and the Bold.

I was delighted to see that Pete made the same discovery about triangles in physics and engineering problems that I made in my first year of engineering school. One of the first things any engineer should learn is to see if there's an easier way to get the answer out. I'll be adding “proglodytes”—progressive troglodytes—to my vocabulary.

For a self-published work, there are remarkably few copy-editing errors. The Kindle edition is free for Kindle Unlimited subscribers. In an “About the Author” section at the end, the author notes:

There's a growing fraternity of independent, self-published authors busy changing the culture one story at a time with their tales of adventure and heroism. Here are a few of my more recent discoveries.

With the social justice crowd doing their worst to wreck science fiction, the works of any of these authors are a great way to remember why you started reading science fiction in the first place.

 Permalink

June 2018

Oliver, Bernard M., John Billingham, et al. Project Cyclops. Stanford, CA: Stanford/NASA Ames Research Center, 1971. NASA-CR-114445 N73-18822.
There are few questions in science as simple to state and profound in their implications as “are we alone?”—are humans the only species with a technological civilisation in the galaxy, or in the universe? This has been a matter of speculation by philosophers, theologians, authors of fiction, and innumerable people gazing at the stars since antiquity, but it was only in the years after World War II, which had seen the development of high-power microwave transmitters and low-noise receivers for radar, that it dawned upon a few visionaries that this had now become a question which could be scientifically investigated.

The propagation of radio waves through the atmosphere and the interstellar medium is governed by basic laws of physics, and the advent of radio astronomy demonstrated that many objects in the sky, some very distant, could be detected in the microwave spectrum. But if we were able to detect these natural sources, suppose we connected a powerful transmitter to our radio telescope and sent a signal to a nearby star? It was easy to calculate that, given the technology of the time (around 1960), existing microwave transmitters and radio telescopes could transmit messages across interstellar distances.

But, it's one thing to calculate that intelligent aliens with access to microwave communication technology equal or better than our own could communicate over the void between the stars, and entirely another to listen for those communications. The problems are simple to understand but forbidding to face: where do you point your antenna, and where do you tune your dial? There are on the order of a hundred billion stars in our galaxy. We now know, as early researchers suspected without evidence, that most of these stars have planets, some of which may have conditions suitable for the evolution of intelligent life. Suppose aliens on one of these planets reach a level of technological development where they decide to join the “Galactic Club” and transmit a beacon which simply says “Yo! Anybody out there?” (The beacon would probably announce a signal with more information which would be easy to detect once you knew where to look.) But for the beacon to work, it would have to be aimed at candidate stars where others might be listening (a beacon which broadcast in all directions—an “omnidirectional beacon”—would require so much energy or be limited to such a short range as to be impractical for civilisations with technology comparable to our own).

Then there's the question of how many technological communicating civilisations there are in the galaxy. Note that it isn't enough that a civilisation have the technology which enables it to establish a beacon: it has to do so. And it is a sobering thought that more than six decades after we had the ability to send such a signal, we haven't yet done so. The galaxy may be full of civilisations with our level of technology and above which have the same funding priorities we do and choose to spend their research budget on intersectional autoethnography of transgender marine frobdobs rather than communicating with nerdy pocket-protector types around other stars who tediously ask Big Questions.

And suppose a civilisation decides it can find the spare change to set up and operate a beacon, inviting others to contact it. How long will it continue to transmit, especially since it's unlikely, given the finite speed of light and the vast distances between the stars, there will be a response in the near term? Before long, scruffy professors will be marching in the streets wearing frobdob hats and rainbow tentacle capes, and funding will be called into question. This is termed the “lifetime” of a communicating civilisation, or L, which is how long that civilisation transmits and listens to establish contact with others. If you make plausible assumptions for the other parameters in the Drake equation (which estimates how many communicating civilisations there are in the galaxy), a numerical coincidence results in the estimate of the number of communicating civilisations in the galaxy being roughly equal to their communicating life in years, L. So, if a typical civilisation is open to communication for, say, 10,000 years before it gives up and diverts its funds to frobdob research, there will be around 10,000 such civilisations in the galaxy. With 100 billion stars (and around as many planets which may be hosts to life), that's a 0.00001% chance that any given star where you point your antenna may be transmitting, and that has to be multiplied by the same probability they are transmitting their beacon in your direction while you happen to be listening. It gets worse. The galaxy is huge—around 100,000 light years in diameter, and our technology can only communicate with comparable civilisations out to a tiny fraction of this, say 1000 light years for high-power omnidirectional beacons, maybe ten to a hundred times that for directed beacons, but then you have the constraint that you have to be listening in their direction when they happen to be sending.
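
The numerical coincidence is easy to verify. As a toy sketch (all parameter values below are my own illustrative assumptions, not figures from the review or the Cyclops report), here is the Drake equation with the product of the first six factors coming out to about one per year, which makes N come out roughly equal to L:

```python
# Toy sketch of the Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# Every parameter value here is an illustrative assumption; when the
# product of the first six factors is ~1 per year, N is roughly equal to L.

def drake_n(r_star=1.0,   # star formation rate, stars/year (assumed)
            f_p=1.0,      # fraction of stars with planets (assumed)
            n_e=1.0,      # habitable planets per star with planets (assumed)
            f_l=1.0,      # fraction of those where life arises (assumed)
            f_i=1.0,      # fraction evolving intelligence (assumed)
            f_c=1.0,      # fraction which communicate (assumed)
            L=10_000.0):  # communicating lifetime in years
    return r_star * f_p * n_e * f_l * f_i * f_c * L

n = drake_n(L=10_000)   # ~10,000 communicating civilisations
share = n / 100e9       # fraction of ~100 billion stars, ~1e-7, i.e. 0.00001%
print(n, share)
```

Pick different (equally defensible) values for the first six factors and the tidy N ≈ L relationship dissolves, which is why the estimate is best regarded as a coincidence rather than a prediction.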

It seems hopeless. It may be. But the 1960s were a time very different from our constrained age. Back then, if you had a problem, like going to the Moon in eight years, you said, “Wow! That's a really big nail. How big a hammer do I need to get the job done?” Toward the end of that era when everything seemed possible, NASA convened a summer seminar at Stanford University to investigate what it would take to seriously investigate the question of whether we are alone. The result was Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life, prepared in 1971 and issued as a NASA report (no Library of Congress catalogue number or ISBN was assigned) in 1973; the link will take you to a NASA PDF scan of the original document, which is in the public domain. The project assembled leading experts in all aspects of the technologies involved: antennas, receivers, signal processing and analysis, transmission and control, and system design and costing.

They approached the problem from what might be called the “Apollo perspective”: what will it cost, given the technology we have in hand right now, to address this question and get an answer within a reasonable time? What they came up with was breathtaking, although no more so than Apollo. If you want to listen for beacons from communicating civilisations as distant as 1000 light years and incidental transmissions (“leakage”, like our own television and radar emissions) within 100 light years, you're going to need a really big bucket to collect the signal, so they settled on 1000 dishes, each 100 metres in diameter. Putting this into perspective, 100 metres is about the largest steerable dish anybody envisioned at the time, and they wanted to build a thousand of them, densely packed.
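
To get a feel for the scale, the combined collecting area is straightforward to compute (my own back-of-the-envelope arithmetic, not figures from the report): a thousand 100-metre dishes gather as much signal as a single dish a bit over three kilometres across.

```python
import math

# Combined collecting area of 1000 dishes, each 100 m in diameter
# (back-of-the-envelope arithmetic, not taken from the Cyclops report).
n_dishes = 1000
dish_diameter = 100.0                                    # metres
area_total = n_dishes * math.pi * (dish_diameter / 2) ** 2
equivalent_diameter = dish_diameter * math.sqrt(n_dishes)
print(area_total)           # ~7.85 million square metres
print(equivalent_diameter)  # ~3162 m: one dish over 3 km across
```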

But wait, there's more. These 1000 dishes were not just a huge bucket for radio waves, but a phased array, where signals from all of the dishes (or a subset, used to observe multiple targets) were combined to provide the angular resolution of a single dish the size of the entire array. This required a precision of electronic design which was breathtaking at the time but is commonplace today (although an array of 1000 dishes spread over 16 km would still give most designers pause). The signals that might be received would not be fixed in frequency, but would drift due to Doppler shifts resulting from relative motion of the transmitter and receiver. With today's computing hardware, digging such a signal out of the raw data is something you can do on a laptop or mobile phone, but in 1971 the best solution was an optical data processor involving exposing, developing, and scanning film. It was exquisitely clever, although obsolete only a few years later, but recall the team had agreed to use only technologies which existed at the time of their design. Even more amazing (and today, almost bizarre) was the scheme to use the array as an imaging telescope. Again, with modern computers, this is a simple matter of programming, but in 1971 the designers envisioned a vast hall in which the signals from the antennas would be re-emitted by radio transmitters which would interfere in free space and produce an intensity image on an image surface where it would be measured by an array of receiver antennæ.
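
The angular resolution claim is just the diffraction limit, θ ≈ λ/D, applied to the full extent of the array. A quick illustrative estimate (my own numbers: the 21 cm hydrogen line as the observing wavelength, and the 16 km array extent mentioned above):

```python
import math

# Diffraction-limited angular resolution of a 16 km phased array observed
# at the 21 cm hydrogen line: theta ~ lambda / D. Illustrative estimate;
# the wavelength choice is my assumption, not a figure from the report.
wavelength = 0.21          # metres (21 cm line, assumed observing band)
array_extent = 16_000.0    # metres, array span quoted in the text
theta_rad = wavelength / array_extent
theta_arcsec = math.degrees(theta_rad) * 3600
print(theta_arcsec)        # a few arcseconds
```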

What would all of this cost? Lots. Depending upon the assumptions used in the design (the cost was mostly driven by the antenna specifications, where extending the search to shorter wavelengths could double the cost, since antennas had to be built to greater precision), total system capital cost was estimated at between 6 and 10 billion dollars (1971). Converting this cost into 2018 dollars gives a cost between 37 and 61 billion dollars. (By comparison, the Apollo project cost around 110 billion 2018 dollars.) But since the search for a signal may “almost certainly take years, perhaps decades and possibly centuries”, that initial investment must be backed by a long-term funding commitment to continue the search, maintain the capital equipment, and upgrade it as technology matures. Given governments' record in sustaining long-term efforts in projects which do not line politicians' or donors' pockets with taxpayer funds, such perseverance is not the way to bet. Perhaps participants in the study should have pondered how to incorporate sufficient opportunities for graft into the project, but even the early 1970s were still an idealistic time when we didn't yet think that way.
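
The inflation adjustment is a single multiplication. Assuming a consumer-price multiplier of roughly 6.1 from 1971 to 2018 (the multiplier is my assumption; the precise factor depends on which price index is used), the quoted range is approximately reproduced:

```python
# Converting the study's 1971-dollar capital cost range to 2018 dollars
# with an assumed CPI multiplier of ~6.1 (the exact factor depends on
# the price index chosen; this is an illustrative value).
cpi_multiplier = 6.1
low_1971, high_1971 = 6e9, 10e9                  # study's estimate range
low_2018 = low_1971 * cpi_multiplier / 1e9       # roughly 37 billion
high_2018 = high_1971 * cpi_multiplier / 1e9     # roughly 61 billion
print(low_2018, high_2018)
```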

This study is the founding document of much of the work in the Search for Extraterrestrial Intelligence (SETI) conducted in subsequent decades. Many researchers first realised that answering this question, “Are we alone?”, was within our technological grasp when chewing through this difficult but inspiring document. (If you have an equation or chart phobia, it's not for you; they appear on the majority of pages.) The study has held up very well over the decades. There are a number of assumptions we might wish to revise today (for example, higher frequencies may be better for interstellar communication than were assumed at the time, and spread spectrum transmissions may be more energy efficient than the extreme narrowband beacons assumed in the Cyclops study).

Despite commanding wealth, technological capability, and computing power of which the authors of the Project Cyclops report never dreamed, we only make little plans today. Most readers of this post, in their lifetimes, have experienced the expansion of their access to knowledge in the transition from being isolated to gaining connectivity to a global, high-bandwidth network. Imagine what it means to make the step from being confined to our single planet of origin to being plugged in to the Galactic Web, exchanging what we've learned with a multitude of others looking at things from entirely different perspectives. Heck, you could retire the entire capital and operating cost of Project Cyclops in the first three years just from advertising revenue on frobdob videos! (Did I mention they have very large eyes which are almost all pupil? Never mind the tentacles.)

This document has been subjected to intense scrutiny over the years. The SETI League maintains a comprehensive errata list for the publication.

 Permalink

Mills, Kyle. Enemy of the State. New York: Atria Books, 2017. ISBN 978-1-4767-8351-2.
This is the third novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. It is the sixteenth novel in the Mitch Rapp series (Flynn's first novel, Term Limits [November 2009], is set in the same world and shares characters with the Mitch Rapp series, but Rapp does not appear in it, so it isn't considered a Rapp novel). Mills continues to develop the Rapp story in new directions, while maintaining the action-packed and detail-rich style which made the series so successful.

When a covert operation tracking the flow of funds to ISIS discovers that a (minor) member of the Saudi royal family is acting as a bagman, the secret deal between the U.S. and Saudi Arabia struck in the days after the 2001 terrorist attacks on the U.S.—the U.S. would hide the ample evidence of Saudi involvement in the plot in return for the Saudis dealing with terrorists and funders of terrorism within the Kingdom—is called into question. The president of the U.S., who might be described in modern jargon as “having an anger management problem”, decides the time has come to get to the bottom of what the Saudis are up to: is it a few rogue ne'er-do-wells, or is the leadership up to their old tricks of funding and promoting radical Islamic infiltration and terrorism in the West? And if they are, he wants to make them hurt, so they don't even think about trying it again.

When it comes to putting the hurt on miscreants, the president's go-to-guy is Mitch Rapp, the CIA's barely controlled loose cannon, who has a way of getting the job done even if his superiors don't know, and don't want to know, the details. When the president calls Rapp into his office and says, “I think you need to have a talk … and at the end of that talk I think he needs to be dead” there is little doubt about what will happen after Rapp walks out of the office.

But there is a problem. Saudi Arabia is, nominally at least, an important U.S. ally. It keeps the oil flowing and prices down, not only benefitting the world economy, but putting a lid on the revenue of troublemakers such as Russia and Iran. Saudi Arabia is a major customer of U.S. foreign military sales. Saudi Arabia is also a principal target of Islamic revolutionaries, and however bad it is today, one doesn't want to contemplate a post-Saudi regime raising the black flag of ISIS, crying havoc, and letting slip the goats of war. Wet work involving the royal family must not just be deniable but totally firewalled from any involvement by the U.S. government. In accepting the mission, Rapp understands that if things blow up, he will not only be on his own but in all likelihood have the U.S. government actively hunting him down.

Rapp hands in his resignation to the CIA, ending a relationship which has existed over all of the previous novels. He meets with his regular mission team and informs them he “need[s] to go somewhere you … can't follow”: involving them would create too many visible ties back to the CIA. If he's going to go rogue, he decides he must truly do so, and sets off assembling a rogues' gallery, composed mostly of former adversaries we've met in previous books. When he recruits his friend Claudia, who previously managed logistics for an assassin Rapp confronted in the past, she says, “So, a criminal enterprise. And only one of the people at this table knows how to be a criminal.”

Assembling this band of dodgy, dangerous, and devious characters at the headquarters of an arms dealer in that paradise which is Juba, South Sudan, Rapp plots an operation to penetrate the security surrounding the Saudi princeling and find out how high the Saudi involvement in funding ISIS goes. What they learn is disturbing in the extreme.

After an operation gone pear-shaped, and with the CIA, FBI, Saudis, and Sudanese factions all chasing him, Rapp and his misfit mob have to improvise and figure out how to break the link between the Saudis and ISIS in a way which will allow him to deny everything and get back to whatever is left of his life.

This is a thriller which is full of action, suspense, and characters fans of the series will have met before acting in ways which may be surprising. After a shaky outing in the previous installment, Order to Kill (December 2017), Kyle Mills has regained his stride and, while preserving the essentials of Mitch Rapp, is breaking new ground. It will be interesting to see if the next novel, Red War, expected in September 2018, continues to involve any of the new team. While you can read this as a stand-alone thriller, you'll enjoy it more if you've read the earlier books in which the members of Rapp's team were principal characters.

 Permalink

Suarez, Daniel. Influx. New York: Signet, [2014] 2015. ISBN 978-0-451-46944-1.
Doesn't it sometimes seem that, sometime in the 1960s, the broad march of technology just stopped? Certainly, there has been breathtaking progress in some fields, particularly computation and data communication, but what about clean, abundant fusion power too cheap to meter, opening up the solar system to settlement, prevention and/or effective treatment of all kinds of cancer, anti-aging therapy, artificial general intelligence, anthropomorphic robotics, and the many other wonders we expected to be commonplace by the year 2000?

Decades later, Jon Grady is toiling in his obscure laboratory to make one of those dreams—gravity control—a reality. His lab is invaded by notorious Luddite terrorists who plan to blow up his apparatus and team. The fuse burns down into the charge, and all flashes white, then black. When he awakes, he finds himself, in good condition, in a luxurious office suite in a skyscraper, where he is introduced to the director of the Federal Bureau of Technology Control (BTC). The BTC, which appears in no federal organisation chart or budget, is charged with detecting potentially emerging disruptive technologies, controlling and/or stopping them (including deploying Luddite terrorists, where necessary), co-opting their developers into working in deep secrecy with the BTC, and releasing the technologies only when human nature and social and political institutions are “ready” for them—as determined by the BTC.

But of course those technologies exist within the BTC, and it uses them: unlimited energy, genetically engineered beings, clones, artificial intelligence, and mind control weapons. Grady is offered a devil's bargain: join the BTC and work for them, or suffer the worst they can do to those who resist and see his life's work erased. Grady turns them down.

At first, his fate doesn't seem that bad but then, as the creative and individualistic are wont to do, he resists and discovers the consequences when half a century's suppressed technologies are arrayed against a defiant human mind. How is he to recover his freedom and attack the BTC? Perhaps there are others, equally talented and defiant, in the same predicament? And, perhaps, the BTC, with such great power at its command, is not so monolithic and immune from rivalry, ambition, and power struggles as it would like others to believe. And what about other government agencies, fiercely protective of their own turf and budgets, and jealous of any rivals?

Thus begins a technological thriller very different from the author's earlier Dæmon (August 2010) and Freedom™ (January 2011), but compelling. How does a band of individuals take on an adversary which can literally rain destruction from the sky? What is the truth beneath the public face of the BTC? What does a superhuman operative do upon discovering everything has been a lie? And how can one be sure it never happens again?

With this novel Daniel Suarez reinforces his reputation as an emerging grand master of the techno-thriller. This book won the 2015 Prometheus Award for best libertarian novel.

 Permalink

Nury, Fabien and Thierry Robin. La Mort de Staline. Paris: Dargaud, [2010, 2012] 2014. ISBN 978-2-205-07351-5.
The 2017 film, The Death of Stalin, was based upon this French bande dessinée (BD, graphic novel, or comic). The story centres on the death of Stalin and the events that ensued: the scheming and struggle for power among the members of his inner circle, the reactions and relationships of his daughter Svetlana and wastrel son Vasily, the conflict between the Red Army and NKVD, the maneuvering over the arrangements for Stalin's funeral, and the all-encompassing fear and suspicion that Stalin's paranoia had infused into the Soviet society. This is a fictional account, grounded in documented historical events, in which the major characters were real people. But the authors are forthright in saying they invented events and dialogue to tell a story which is intended to give one a sense of the «folie furieuse de Staline et de son entourage» (“the raging madness of Stalin and his entourage”) rather than provide a historical narrative.

The film adaptation is listed as a comedy and, particularly if you have a taste for black humour, is quite funny. This BD is not explicitly funny, except in an ironic sense, illustrating the pathological behaviour of those surrounding Stalin. Many of the sequences in this work could have been used as storyboards for the movie, but there are significant events here which did not make it into the screenplay. The pervasive strong language which earned the film an R rating is little in evidence here.

The principal characters and their positions are introduced by boxes overlaying the graphics, much as was done in the movie. Readers who aren't familiar with the players in Stalin's Soviet Union such as Beria, Zhukov, Molotov, Malenkov, Khrushchev, Mikoyan, and Bulganin may miss some of the nuances of their behaviour here, which is driven by this back-story. Their names are given using the French transliteration of Russian, which is somewhat different from that used in English (for example, “Krouchtchev” instead of “Khrushchev”). The artwork is intricately drawn in a realistic style, with comic idioms used only sparingly to illustrate things like gunshots.

I enjoyed both the movie (which I saw first, not knowing until the end credits that it was based upon this work) and the BD. They're different takes on the same story, and both work on their own terms. This is not the kind of story for which “spoilers” apply, so you'll lose nothing by enjoying both in either order.

The album cited above contains both volumes of the original print edition. The Kindle edition continues to be published in two volumes (Vol. 1, Vol. 2). An English translation of the graphic novel is available. I have not looked at it beyond the few preview pages available on Amazon.

 Permalink

July 2018

Carreyrou, John. Bad Blood. New York: Alfred A. Knopf, 2018. ISBN 978-1-9848-3363-1.
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor's office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor's assistant who draws the blood (whom I've nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I'd slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you're curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.

Wouldn't it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor's office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn't quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.

This was the vision that came to 19 year old Elizabeth Holmes after completing a summer internship at the Genome Institute of Singapore after her freshman year as a chemical engineering major at Stanford. Holmes had decided on a career in entrepreneurship from an early age and, after her first semester, told her father, “No, Dad, I'm not interested in getting a Ph.D. I want to make money.” And Stanford, in the heart of Silicon Valley, was surrounded by companies started by professors and graduates who had turned inventions into vast fortunes. With only one year of college behind her, she was sure she'd found her opportunity. She showed the patent application she'd drafted for an arm patch that would diagnose medical conditions to Channing Robertson, professor of chemical engineering at Stanford, and Shaunak Roy, the Ph.D. student in whose lab she had worked as an assistant during her freshman year. Robertson was enthusiastic, and when Holmes said she intended to leave Stanford and start a company to commercialise the idea, he encouraged her. When the company was incorporated in 2004, Roy, then a newly-minted Ph.D., became its first employee and Robertson joined the board.

From the outset, the company was funded by other people's money. Holmes persuaded a family friend, Tim Draper, a second-generation venture capitalist who had backed, among other companies, Hotmail, to invest US$ 1 million in first round funding. Draper was soon joined by Victor Palmieri, a corporate turnaround artist and friend of Holmes' father. The company was named Theranos, from “therapy” and “diagnosis”. Elizabeth, unlike this scribbler, had a lifelong aversion to needles, and the invention she described in the business plan pitched to investors was informed by this. A skin patch would draw tiny quantities of blood without pain by means of “micro-needles”, the blood would be analysed by micro-miniaturised sensors in the patch and, if needed, medication could be injected. A wireless data link would send results to the doctor.

This concept, and Elizabeth's enthusiasm and high-energy pitch allowed her to recruit additional investors, raising almost US$ 6 million in 2004. But there were some who failed to be persuaded: MedVentures Associates, a firm that specialised in medical technology, turned her down after discovering she had no answers for the technical questions raised in a meeting with the partners, who had in-depth experience with diagnostic technology. This would be a harbinger of the company's fund-raising in the future: in its entire history, not a single venture fund or investor with experience in medical or diagnostic technology would put money into the company.

Shaunak Roy, who, unlike Holmes, actually knew something about chemistry, quickly realised that Elizabeth's concept, while appealing to the uninformed, was science fiction, not science, and no amount of arm-waving about nanotechnology, microfluidics, or laboratories on a chip would suffice to build something which was far beyond the state of the art. This led to a “de-scoping” of the company's ambition—the first of many which would happen over succeeding years. Instead of Elizabeth's magical patch, a small quantity of blood would be drawn from a finger stick and placed into a cartridge around the size of a credit card. The disposable cartridge would then be placed into a desktop “reader” machine, which would, using the blood and reagents stored in the cartridge, perform a series of analyses and report the results. This was originally called Theranos 1.0, but after a series of painful redesigns, was dubbed the “Edison”. This was the prototype Theranos ultimately showed to potential customers and prospective investors.

This was a far cry from the original ambitious concept. The hundreds of laboratory tests doctors can order are divided into four major categories: immunoassays, general chemistry, hæmatology, and DNA amplification. In immunoassay tests, blood plasma is exposed to an antibody that detects the presence of a substance in the plasma. The antibody contains a marker which can be detected by its effect on light passed through the sample. Immunoassays are used in a number of common blood tests, such as the 25(OH)D assay used to test for vitamin D deficiency, but cannot be used for other frequently ordered tests such as blood sugar and red and white blood cell counts. The Edison could only perform what are called “chemiluminescent immunoassays”, and thus could only perform a fraction of the tests regularly ordered. The rationale for installing an Edison in the doctor's office was dramatically reduced if it could only do some tests but still required a venous blood draw be sent off to the laboratory for the balance.

This didn't deter Elizabeth, who combined her formidable salesmanship with arm-waving about the capabilities of the company's products. She was working on a deal to sell four hundred Edisons to the Mexican government to cope with an outbreak of swine flu, which would generate immediate revenue. Money was much on the minds of Theranos' senior management. By the end of 2009, the company had burned through the US$ 47 million raised in its first three rounds of funding and, without a viable product or prospects for sales, would have difficulty keeping the lights on.

But the real bonanza loomed on the horizon in 2010. Drugstore giant Walgreens was interested in expanding their retail business into the “wellness market”: providing in-store health services to their mass market clientèle. Theranos pitched them on offering in-store blood testing. Doctors could send their patients to the local Walgreens to have their blood tested from a simple finger stick and eliminate the need to draw blood in the office or deal with laboratories. With more than 8,000 locations in the U.S., if each were to be equipped with one Edison, the revenue to Theranos (including the single-use testing cartridges) would put them on the map as another Silicon Valley disruptor that went from zero to hundreds of millions in revenue overnight. But here, as well, the Elizabeth effect was in evidence. Of the 192 tests she told Walgreens Theranos could perform, fewer than half were immunoassays the Edisons could run. The rest could be done only on conventional laboratory equipment, and certainly not on a while-you-wait basis.

Walgreens wasn't the only potential saviour on the horizon. Grocery godzilla Safeway, struggling with sales and earnings which seemed to have reached a peak, saw in-store blood testing with Theranos machines as a high-margin profit centre. They loaned Theranos US$ 30 million and began to plan for installation of blood testing clinics in their stores.

But there was a problem, and as the months wore on, this became increasingly apparent to people at both Walgreens and Safeway, although dismissed by those in senior management under the spell of Elizabeth's reality distortion field. Deadlines were missed. Simple requests, such as A/B comparison tests run on the Theranos hardware and at conventional labs, were first refused, then postponed, then run, but with the results never disclosed. The list of tests which could be run, how blood for them would be drawn, and how they would be processed seemed to dissolve into fog whenever specific requests were made for this information, which was essential for planning the in-store clinics.

There was, indeed, a problem, and it was pretty severe, especially for a start-up which had burned through US$ 50 million and sold nothing. The product didn't work. Not only could the Edison only run a fraction of the tests its prospective customers had been led by Theranos to believe it could, for those it did run the results were wildly unreliable. The small quantity of blood used in the test introduced random errors due to dilution of the sample; the small tubes in the cartridge were prone to clogging; and capillary blood collected from a finger stick was prone to errors due to “hemolysis”, the rupture of red blood cells, which is minimal in a venous blood draw but so prevalent in finger stick blood it could lead to some tests producing values which indicated the patient was dead.

Meanwhile, people who came to work at Theranos quickly became aware that it was not a normal company, even by the eccentric standards of Silicon Valley. There was an obsession with security, with doors opened by badge readers; logging of employee movement; information restricted to narrow silos prohibiting collaboration between, say, engineering and marketing which is the norm in technological start-ups; monitoring of employee Internet access, E-mail, and social media presence; a security detail of menacing-looking people in black suits and earpieces (which eventually reached a total of twenty); a propensity of people, even senior executives, to “vanish”, Stalin-era purge-like, overnight; and a climate of fear that anybody, employee or former employee, who spoke about the company or its products to an outsider, especially the media, would be pursued, harassed, and bankrupted by lawsuits. There aren't many start-ups whose senior scientists are summarily demoted and subsequently commit suicide. That happened at Theranos. The company held no memorial for him.

Throughout all of this, a curious presence in the company was Ramesh (“Sunny”) Balwani, a Pakistani-born software engineer who had made a fortune of more than US$ 40 million in the dot-com boom and cashed out before the bust. He joined Theranos in late 2009 as Elizabeth's second in command and rapidly became known as a hatchet man, domineering boss, and clueless when it came to the company's key technologies (on one occasion, an engineer mentioned a robotic arm's “end effector”, after which Sunny would frequently speak of its “endofactor”). Unbeknownst to employees and investors, Elizabeth and Sunny had been living together since 2005. Such an arrangement would be a major scandal in a public company, but even in a private firm, concealing such information from the board and investors is a serious breach of trust.

Let's talk about the board, shall we? Elizabeth was not only persuasive, but well-connected. She would parlay one connection into another, and before long had recruited many prominent figures, including:

  • George Shultz (former U.S. Secretary of State)
  • Henry Kissinger (former U.S. Secretary of State)
  • Bill Frist (former U.S. Senator and medical doctor)
  • James Mattis (General, U.S. Marine Corps)
  • Riley Bechtel (Chairman and former CEO, Bechtel Group)
  • Sam Nunn (former U.S. Senator)
  • Richard Kovacevich (former Wells Fargo chairman and CEO)

Later, super-lawyer David Boies would join the board, and lead its attacks against the company's detractors. It is notable that, as with its investors, not a single board member had experience in medical or diagnostic technology. Bill Frist was an M.D., but his speciality was heart and lung transplants, not laboratory tests.

By 2014, Elizabeth Holmes had come onto the media radar. Photogenic, articulate, and with a story of high-tech disruption of an industry much in the news, she began to be featured as the “female Steve Jobs”, which must have pleased her, since she affected black turtlenecks, kale shakes, and even a car with no licence plates to emulate her role model. She appeared on the cover of Fortune in January 2014, made the Forbes 400 list of wealthiest Americans shortly thereafter, was featured in puff pieces in business and general market media, and was named by Time as one of the hundred most influential people in the world. The year 2014 closed with another glowing profile in the New Yorker. This would be the beginning of the end, as it happened to be read by somebody who actually knew something about blood testing.

Adam Clapper, a pathologist in Missouri, spent his spare time writing Pathology Blawg, with a readership of practising pathologists. Clapper read what Elizabeth was claiming to do with a couple of drops of blood from a finger stick and it didn't pass the sniff test. He wrote a sceptical piece on his blog and, as it passed from hand to hand, he became a lightning rod for others dubious of Theranos' claims, including those with direct or indirect experience with the company. Earlier, he had helped a Wall Street Journal reporter comprehend the tangled web of medical laboratory billing, and he decided to pass the tip about Theranos on to that reporter: John Carreyrou, the author of this book.

Thus began the unravelling of one of the greatest scams and scandals in the history of high technology, Silicon Valley, and venture investing. At the peak, privately-held Theranos was valued at around US$ 9 billion, with Elizabeth Holmes holding around half of its common stock, and with one of those innovative capital structures of which Silicon Valley is so fond, 99.7% of the voting rights. Altogether, over its history, the company raised around US$ 900 million from investors (including US$ 125 million from Rupert Murdoch in the US$ 430 million final round of funding). Most of the investors' money was ultimately spent on legal fees as the whole fairy castle crumbled.

The story of the decline and fall is gripping, involving the grandson of a Secretary of State, gumshoes following whistleblowers and reporters, what amounts to legal terrorism by the ever-slimy David Boies, courageous people who stood their ground in the interest of scientific integrity against enormous personal and financial pressure, and the saga of one of the most cunning and naturally talented confidence women ever, equipped with only two semesters of freshman chemical engineering, who managed to raise and blow through almost a billion dollars of other people's money without checking off the first box on the conventional start-up check list: “Build the product”.

I have, in my career, met three world-class con men. Three times, I (just barely) managed to pick up the warning signs and beg my associates to walk away. Each time I was ignored. After reading this book, I am absolutely sure that had Elizabeth Holmes pitched me on Theranos (about which I never heard before the fraud began to be exposed), I would have been taken in. Walker's law is “Absent evidence to the contrary, assume everything is a scam”. A corollary is “No matter how cautious you are, there's always a confidence man (or woman) who can scam you if you don't do your homework.”

Here is Elizabeth Holmes at Stanford in 2013, when Theranos was riding high and she was doing her “female Steve Jobs” act.

Elizabeth Holmes at Stanford: 2013

This is a CNN piece, filmed after the Theranos scam had begun to collapse, in which you can still glimpse the Elizabeth Holmes reality distortion field at full intensity directed at CNN medical correspondent Sanjay Gupta. There are several curious things about this video. The machine that Gupta is shown is the “miniLab”, a prototype second-generation machine which never worked acceptably, not the Edison, which was actually used in the Walgreens and Safeway tests. Gupta's blood is drawn and tested, but the process used to perform the test is never shown. The result reported is a cholesterol test, but the Edison cannot perform such tests. In the plans for the Walgreens and Safeway roll-outs, such tests were performed on purchased Siemens analysers which had been secretly hacked by Theranos to work with blood diluted well below their regulatory-approved specifications (the dilution was required due to the small volume of blood from the finger stick). Since the miniLab never really worked, the odds are that Gupta's blood was tested on one of the Siemens machines, not a Theranos product at all.

CNN: Inside the Theranos Lab (2016)

In a June 2018 interview, author John Carreyrou recounts the story of Theranos and his part in revealing the truth.

John Carreyrou on investigating Theranos (2018)

If you are a connoisseur of the art of the con, here is a masterpiece. After the Wall Street Journal exposé had broken, after retracting tens of thousands of blood tests, and after Theranos had been banned from running a clinical laboratory by its regulators, Holmes got up before an audience of 2,500 people at the meeting of the American Association of Clinical Chemistry and turned the reality distortion field up to eleven. Watch a master at work. She comes on the stage at the six-minute mark.

Elizabeth Holmes at the American Association of Clinical Chemistry (2016)

 Permalink

Neovictorian [pseud.] and Neal Van Wahr. Sanity. Seattle: Amazon Digital Services, [2017] 2018. ISBN 978-1-9808-2095-6.
Have you sometimes felt, since an early age, that you were an alien, somehow placed on Earth and observing the antics of humans as if they were a different species? Why do they believe such stupid things? Why do they do such dumb things? And why do they keep doing them over and over again, seemingly incapable of learning from the bad outcomes of all the previous attempts?

That is how Cal Adler has felt since childhood and, like most people with such feelings, he has kept them quiet and bottled up while trying to get ahead in a game whose rules often seem absurd. In his senior year in high school, he encounters a substitute guidance counsellor who tells him, without any preliminary conversation, precisely how he feels. He's assured he is not alone, and that over time he will meet others. He is given an enigmatic contact in case of emergency. He is advised, as any alien in a strange land, to blend in while observing and developing his own talents. And that's the last he sees of the counsellor.

Cal's subsequent life is punctuated by singular events: a terrorist incident in which he spontaneously rises to the occasion, encountering extraordinary people, and being initiated into skills he never imagined he'd possess. He begins to put together a picture of a shadowy…something…of which he may or may not be a part, whose goals are unclear, but whose people are extraordinary.

Meanwhile, a pop religion called ReHumanism, founded by a science fiction writer, is gaining adherents among prominent figures in business, entertainment, and technology. Its “scriptures” advocate escape from the tragic cycle of progress and collapse which has characterised the human experience by turning away from the artificial environment in which we have immersed ourselves and rediscovering our inherent human nature which may, to many in the modern world, seem alien. Is there a connection between ReHumanism (which seems like a flaky scam to Cal) and the mysterious people he is encountering?

All of these threads begin to come together when Cal, working as a private investigator in Reno, Nevada, is retained by the daughter of a recently-deceased billionaire industrialist to find her mother, who has disappeared during a tourist visit to Alaska. The mother is revealed to have become a convert to and supporter of ReHumanism. Is the cult involved? And how did the daughter find Cal, who, after previous events, has achieved a level of low observability stealth aircraft designers can only dream of?

An adventure begins in which nothing is as it seems and all of Cal's formidable talents are tested to their limits.

This is an engaging and provocative mystery/thriller which will resonate with those who identify with the kind of heroic, independent, and inner-directed characters that populate the fiction of Robert A. Heinlein and other writers of the golden age of science fiction. It speaks directly to those sworn to chart their own course through life regardless of what others may think or say. I'm not sure the shadowy organisation we glimpse here actually exists, but I wish it did…and I wish they'd contacted me. There are many tips of the hat here to works and authors of fiction with similar themes, and I'm sure many more I missed.

This is an example of the efflorescence of independent science fiction which the obsolescence of the traditional gatekeeper publishers has engendered. With the advent of low-cost, high-margin self-publishing and customer reviews and ratings to evaluate quality, an entire new cohort of authors whose work would never before have seen the light of day is now enriching the genre and the lives of their enthusiastic readers. The work is not free of typographical and grammatical errors, but I've read books from major science fiction publishers with more. The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

Verne, Jules. Une Fantaisie du Docteur Ox. Seattle: CreateSpace, [1874] 2017. ISBN 978-1-5470-6408-3.
After reading and reviewing Jules Verne's Hector Servadac last year, I stumbled upon a phenomenal bargain: a Kindle edition of the complete works of Jules Verne—160 titles, with 5400 illustrations—for US$ 2.51 at this writing, published by Arvensa. This is not a cheap public domain knock-off, but a thoroughly professional publication with very few errors. For less than the price of a paperback book, you get just about everything Jules Verne ever wrote in Kindle format which, if you download the free Kindle French dictionary, allows you to quickly look up the obscure terms and jargon of which Verne is so fond without flipping through the Little Bob. That's how I read this work, although I have cited a print edition in the header for those who prefer such.

The strange story of Doctor Ox would be considered a novella in modern publishing terms, coming in at 19,240 words. It is divided into 17 chapters and is written in much the same style as the author's Voyages extraordinaires, with his customary huge vocabulary, fondness for lengthy enumerations, and witty parody of the national character of foreigners.

Here, the foreigners in question are the Flemish, speakers of dialects of the Dutch language who live in the northern part of Belgium. The Flemish are known for being phlegmatic, and nowhere is this more in evidence than the small city of Quiquendone. Its 2,393 residents and their ancestors have lived there since the city was founded in 1197, and very little has happened to disturb their placid lives; they like it that way. Its major industries are the manufacture of whipped cream and barley sugar. Its inhabitants are taciturn and, when they speak, do so slowly. For centuries, what little government they require has been provided by generations of the van Tricasse family, son succeeding father as burgomaster. There is little for the burgomaster to do, and one of the few items on his agenda, inherited from his father twenty years ago, is whether the city should dispense with the services of its sole policeman, who hasn't had anything to do for decades.

Burgomaster van Tricasse exemplifies the moderation in all things of the residents of his city. I cannot resist quoting this quintessentially Jules Verne description in full.

Le bourgmestre était un personnage de cinquante ans, ni gras ni maigre, ni petit ni grand, ni vieux ni jeune, ni coloré ni pâle, ni gai ni triste, ni content ni ennuyé, ni énergique ni mou, ni fier ni humble, ni bon ni méchant, ni généreux ni avare, ni brave ni poltron, ni trop ni trop peu, — ne quid nimis, — un homme modéré en tout ; mais à la lenteur invariable de ses mouvements, à sa mâchoire inférieure un peu pendante, à sa paupière supérieure immuablement relevée, à son front uni comme une plaque de cuivre jaune et sans une ride, à ses muscles peu saillants, un physionomiste eût sans peine reconnu que le bourgmestre van Tricasse était le flegme personnifié.

(In English: The burgomaster was a personage of fifty years, neither fat nor thin, neither short nor tall, neither old nor young, neither ruddy nor pale, neither cheerful nor sad, neither contented nor bored, neither energetic nor listless, neither proud nor humble, neither good nor wicked, neither generous nor miserly, neither brave nor cowardly, neither too much nor too little, ne quid nimis, a man moderate in everything; but from the invariable slowness of his movements, his slightly drooping lower jaw, his immovably raised upper eyelid, his forehead smooth as a plate of yellow copper and without a single wrinkle, and his scarcely prominent muscles, a physiognomist would have had no difficulty in recognising that Burgomaster van Tricasse was phlegm personified.)

Imagine how startled this paragon of moderation and peace must have been when the city's policeman—he whose job has been at risk for decades—pounds on the door and, when admitted, reports that the city's doctor and lawyer, visiting the house of scientist Doctor Ox, had gotten into an argument. They had been talking politics! Such a thing had not happened in Quiquendone in over a century. Words were exchanged that might lead to a duel!

Who is this Doctor Ox? A recent arrival in Quiquendone, he is a celebrated scientist, considered a leader in the field of physiology. He stands out from the other inhabitants of the city. Of no well-defined nationality, he is a genuine eccentric, self-confident, ambitious, and known even to smile in public. He and his laboratory assistant Gédéon Ygène work on their experiments and never speak of them to others.

Shortly after arriving in Quiquendone, Dr Ox approached the burgomaster and city council with a proposal: to illuminate the city and its buildings, not with the new-fangled electric lights which other cities were adopting, but with a new invention of his own, oxy-hydric gas. Using powerful electric batteries he invented, water would be decomposed into hydrogen and oxygen gas, stored separately, then delivered in parallel pipes to individual taps where they would be combined and burned, producing a light much brighter and purer than electric lights, not to mention conventional gaslights burning natural or manufactured gas. In storage and distribution, hydrogen and oxygen would be strictly segregated, as any mixing prior to the point of use ran the risk of an explosion. Dr Ox offered to pay all of the expenses of building the gas production plant, storage facilities, and installation of the underground pipes and light fixtures in public buildings and private residences. After a demonstration of oxy-hydric lighting, city fathers gave the go-ahead for the installation, presuming Dr Ox was willing to assume all the costs in order to demonstrate his invention to other potential customers.

Over succeeding days and weeks, things before unimagined, indeed unimaginable, begin to occur. On a visit to Dr Ox, the burgomaster himself and his best friend, city council president Niklausse, find themselves in—dare it be said—a political argument. At the opera house, where musicians and singers usually so moderate the tempo that works are performed over multiple days, one act per night, a performance of Meyerbeer's Les Huguenots becomes frenetic and incites the audience to what can only be described as a riot. A ball at the house of the banker becomes a whirlwind of sound and motion. And yet, each time, after people go home, they return to normal and find it difficult to believe what they did the night before.

Over time, the phenomenon, at first only seen in large public gatherings, begins to spread into individual homes and private lives. You would think the placid Flemish had been transformed into the hotter-tempered denizens of countries to the south. Twenty newspapers spring up, each advocating its own radical agenda. Even plants start growing to enormous size, and cats and dogs, previously as reserved as their masters, begin to bare fangs and claws. Finally, a mass movement rises to avenge the honour of Quiquendone for an injury committed in the year 1185 by a cow from the neighbouring town of Virgamen.

What was happening? Whence the madness? What would be the result when the citizens of Quiquendone, armed with everything they could lay their hands on, marched upon their neighbours?

This is a classic “puzzle story”, seasoned with a mad scientist of whom the author allows us occasional candid glimpses as the story unfolds. You'll probably solve the puzzle yourself long before the big reveal at the end. Jules Verne, always anticipating the future, foresaw this: the penultimate chapter is titled (my translation), “Where the intelligent reader sees that he guessed correctly, despite every precaution by the author”. The enjoyment here is not so much the puzzle but rather Verne's language and delicious description of characters and events, which are up to the standard of his better-known works.

This is “minor Verne”, written originally for a public reading and then published in a newspaper in Amiens, his adopted home. Many believed that in Quiquendone he was satirising Amiens and his placid neighbours.

Doctor Ox would reappear in the work of Jules Verne in his 1882 play Voyage à travers l'impossible (Journey Through the Impossible), a work which, after 97 performances in Paris, was believed lost until a single handwritten manuscript was found in 1978. Dr Ox reprises his role as mad scientist, joining other characters from Verne's novels on their own extraordinary voyages. After that work, Doctor Ox disappears from the world. But when I regard the frenzied serial madness loose today, from “bathroom equality” and the tearing down of Civil War monuments to masked “Antifa” blackshirts beating up people in the streets, the “refugee” racket, and Russians under every bed, I sometimes wonder if he's taken up residence in today's United States.

An English translation is available. Verne's reputation has often suffered due to poor English translations of his work; I have not read this edition and don't know how good it is. Warning: the description of this book at Amazon contains a huge spoiler for the central puzzle of the story.

 Permalink

August 2018

Keating, Brian. Losing the Nobel Prize. New York: W. W. Norton, 2018. ISBN 978-1-324-00091-4.
Ever since the time of Galileo, the history of astronomy has been punctuated by a series of “great debates”—disputes between competing theories of the organisation of the universe which observation and experiment using available technology are not yet able to resolve one way or another. In Galileo's time, the great debate was between the Ptolemaic model, which placed the Earth at the centre of the solar system (and universe) and the competing Copernican model which had the planets all revolving around the Sun. Both models worked about as well in predicting astronomical phenomena such as eclipses and the motion of planets, and no observation made so far had been able to distinguish them.

Then, in 1610, Galileo turned his primitive telescope to the sky and observed the bright planets Venus and Jupiter. He found Venus to exhibit phases, just like the Moon, which changed over time. This would not happen in the Ptolemaic system, but is precisely what would be expected in the Copernican model—where Venus circled the Sun in an orbit inside that of Earth. Turning to Jupiter, he found it to be surrounded by four bright satellites (now called the Galilean moons) which orbited the giant planet. This further falsified Ptolemy's model, in which the Earth was the sole source of attraction around which all celestial bodies revolved. Since anybody could build their own telescope and confirm these observations, this effectively resolved the first great debate in favour of the Copernican heliocentric model, although some hold-outs in positions of authority resisted its dethroning of the Earth as the centre of the universe.

This dethroning came to be called the “Copernican principle”, that Earth occupies no special place in the universe: it is one of a number of planets orbiting an ordinary star in a universe filled with a multitude of other stars. Indeed, when Galileo observed the star cluster we call the Pleiades, he saw myriad stars too dim to be visible to the unaided eye. Further, the bright stars were surrounded by a diffuse bluish glow. Applying the Copernican principle again, he argued that the glow was due to innumerably more stars too remote and dim for his telescope to resolve, and then generalised that the glow of the Milky Way was also composed of uncountably many stars. Not only had the Earth been demoted from the centre of the solar system, so had the Sun been dethroned to being just one of a host of stars possibly stretching to infinity.

But Galileo's inference from observing the Pleiades was wrong. The glow that surrounds the bright stars is due to interstellar dust and gas which reflect light from the stars toward Earth. No matter how large or powerful the telescope you point toward such a reflection nebula, all you'll ever see is a smooth glow. Driven by the desire to confirm his Copernican convictions, Galileo had been fooled by dust. He would not be the last.

William Herschel was an eminent musician and composer, but his passion was astronomy. He pioneered the large reflecting telescope, building more than sixty of them. In 1789, funded by a grant from King George III, Herschel completed a reflector with a mirror 1.26 metres in diameter, which remained the largest aperture telescope in existence for the next fifty years. In Herschel's day, the great debate was about the Sun's position among the surrounding stars. At the time, there was no way to determine the distance or absolute brightness of stars, but Herschel decided that he could compile a map of the galaxy (then considered to be the entire universe) by surveying the number of stars in different directions. Only if the Sun was at the centre of the galaxy would the counts be equal in all directions.

Aided by his sister Caroline, a talented astronomer herself, he eventually compiled a map which indicated the galaxy was in the shape of a disc, with the Sun at the centre. This seemed to refute the Copernican view that there was nothing special about the Sun's position. Such was Herschel's reputation that this finding, however puzzling, remained unchallenged until 1847 when Wilhelm Struve discovered that Herschel's results had been rendered invalid by his failing to take into account the absorption and scattering of starlight by interstellar dust. Just as you can only see the same distance in all directions while within a patch of fog, regardless of the shape of the patch, Herschel's survey could only see so far before extinction of light by dust cut off his view of stars. Later it was discovered that the Sun is far from the centre of the galaxy. Herschel had been fooled by dust.

In the 1920s, another great debate consumed astronomy. Was the Milky Way the entire universe, or were the “spiral nebulæ” other “island universes”, galaxies in their own right, peers of the Milky Way? With no way to measure their distance and no telescopes able to resolve them into stars, many astronomers believed spiral nebulæ were nearby objects, perhaps other solar systems in the process of formation. The discovery of a Cepheid variable star in the nearby Andromeda “nebula” by Edwin Hubble in 1923 allowed settling this debate. Andromeda was much farther away than the most distant stars found in the Milky Way. It must, then, be a separate galaxy. Once again, demotion: the Milky Way was not the entire universe, but just one galaxy among a multitude.

But how far away were the galaxies? Hubble continued his search and measurements and found that the more distant the galaxy, the more rapidly it was receding from us. This meant the universe was expanding. Hubble was then able to calculate the age of the universe—the time when all of the galaxies must have been squeezed together into a single point. From his observations, he computed this age at two billion years. This was a major embarrassment: astrophysicists and geologists were confident in dating the Sun and Earth at around five billion years. It didn't make any sense for them to be more than twice as old as the universe of which they were a part. Some years later, it was discovered that Hubble's distance estimates were far understated because he failed to account for extinction of light from the stars he measured due to dust. The universe is now known to be seven times the age Hubble estimated. Hubble had been fooled by dust.
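Hubble's age estimate follows from simple arithmetic: if the expansion rate has always been the same, the time since all the galaxies were together is roughly 1/H₀, the inverse of the Hubble constant. Here is a minimal sketch of that calculation (the constants, the function name, and the round-number values of H₀ are my own; a constant-rate extrapolation ignores the deceleration and acceleration of the real expansion, so it only approximates the modern figure):

```python
# Rough Hubble-time estimate: age ~ 1/H0, for H0 given in km/s/Mpc.
KM_PER_MPC = 3.086e19       # kilometres in one megaparsec (approximate)
SECONDS_PER_YEAR = 3.156e7  # seconds in one year (approximate)

def hubble_time_years(h0_km_s_mpc):
    """Return the naive age estimate 1/H0, in years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # convert H0 to units of 1/s
    return 1.0 / h0_per_second / SECONDS_PER_YEAR

# Hubble's original value, around 500 km/s/Mpc, gives about 2 billion years;
# a modern value near 70 km/s/Mpc gives about 14 billion years.
print(hubble_time_years(500))
print(hubble_time_years(70))
```

The factor of seven between Hubble's estimate and today's comes almost entirely from the revised distance scale, not from the simple 1/H₀ formula itself.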

By the 1950s, the expanding universe was generally accepted and the great debate was whether it had come into being in some cataclysmic event in the past (the “Big Bang”) or was eternal, with new matter spontaneously appearing to form new galaxies and stars as the existing ones receded from one another (the “Steady State” theory). Once again, there were no observational data to falsify either theory. The Steady State theory was attractive to many astronomers because it was the more “Copernican”—the universe would appear overall the same at any time in an infinite past and future, so our position in time is not privileged in any way, while in the Big Bang the distant past and future are very different than the conditions we observe today. (The rate of matter creation required by the Steady State theory was so low that no plausible laboratory experiment could detect it.)

The discovery of the cosmic background radiation in 1965 definitively settled the debate in favour of the Big Bang. It was precisely what was expected if the early universe had been much denser and hotter than conditions today, as predicted by the Big Bang. The Steady State theory made no such prediction and, despite rear-guard actions by some of its defenders (invoking dust to explain the detected radiation!), was considered falsified by most researchers.

But the Big Bang was not without its own problems. In particular, in order to end up with anything like the universe we observe today, the initial conditions at the time of the Big Bang seemed to have been fantastically fine-tuned (for example, an infinitesimal change in the balance between the density and rate of expansion in the early universe would have caused the universe to quickly collapse into a black hole or disperse into the void without forming stars and galaxies). There was no physical reason to explain these fine-tuned values; you had to assume that's just the way things happened to be, or that a Creator had set the dial with a precision of dozens of decimal places.

In 1979, the theory of inflation was proposed. Inflation held that, in an instant after the Big Bang, the size of the universe blew up exponentially, so that everything in the observable universe today was, before inflation, smaller than an elementary particle. Thus, it's no surprise that the universe we now observe appears so uniform. Inflation so neatly resolved the tensions between the Big Bang theory and observation that it (and refinements over the years) became widely accepted. But could inflation be observed? That is the ultimate test of a scientific theory.

There have been numerous cases in science where many years elapsed between a theory being proposed and definitive experimental evidence for it being found. After Galileo's observations, the Copernican theory that the Earth orbits the Sun became widely accepted, but there was no direct evidence for the Earth's motion with respect to the distant stars until the discovery of the aberration of light in 1727. Einstein's theory of general relativity predicted gravitational radiation in 1915, but the phenomenon was not directly detected by experiment until a century later. Would inflation have to wait as long or longer?

Things didn't look promising. Almost everything we know about the universe comes from observations of electromagnetic radiation: light, radio waves, X-rays, etc., with a little bit more from particles (cosmic rays and neutrinos). But the cosmic background radiation forms an impenetrable curtain behind which we cannot observe anything via the electromagnetic spectrum, and it dates from around 380,000 years after the Big Bang. The era of inflation was believed to have ended 10⁻³² seconds after the Bang: considerably earlier. The only “messenger” which could possibly have reached us from that era is gravitational radiation. We've just recently become able to detect gravitational radiation from the most violent events in the universe, but no conceivable experiment would be able to detect this signal from the baby universe.

So is it hopeless? Well, not necessarily…. The cosmic background radiation is a snapshot of the universe as it existed 380,000 years after the Big Bang, and only a few years after it was first detected, it was realised that gravitational waves from the very early universe might have left subtle imprints upon the radiation we observe today. In particular, gravitational radiation creates a form of polarisation called B-modes which most other sources cannot create.

If it were possible to detect B-mode polarisation in the cosmic background radiation, it would be direct evidence of inflation. While the experiment would be demanding, eventually taking its practitioners literally to the end of the Earth, a detection would be strong evidence for the process which shaped the universe we inhabit and, in all likelihood, a ticket to Stockholm for those who made the discovery.

This was the quest on which the author embarked in the year 2000, resulting in the deployment of an instrument called BICEP1 (Background Imaging of Cosmic Extragalactic Polarization) in the Dark Sector Laboratory at the South Pole. Here is my picture of that laboratory in January 2013. The BICEP telescope is located in the foreground inside a conical shield which protects it against thermal radiation from the surrounding ice. In the background is the South Pole Telescope, a millimetre wave antenna which was not involved in this research.

BICEP2 and South Pole Telescope, 2013-01-09

BICEP1 was a prototype, intended to test the technologies to be used in the experiment. These included cooling the entire telescope (which was a modest aperture [26 cm] refractor, not unlike Galileo's, but operating at millimetre wavelengths instead of visible light) to the temperature of interstellar space, with its detector cooled to just ¼ degree above absolute zero. In 2010 its successor, BICEP2, began observation at the South Pole, and continued its run into 2012. When I took the photo above, BICEP2 had recently concluded its observations.

On March 17th, 2014, the BICEP2 collaboration announced, at a press conference, the detection of B-mode polarisation in the region of the southern sky they had monitored. Note the swirling pattern of polarisation which is the signature of B-modes, as opposed to the starburst pattern of other kinds of polarisation.

B-mode polarisation in BICEP2 observations, 2014-03-17

But, not so fast, other researchers cautioned. The risk in doing “science by press release” is that the research is not subjected to peer review—criticism by other researchers in the field—before publication and further criticism in subsequent publications. The BICEP2 results went immediately to the front pages of major newspapers. Here was direct evidence of the birth cry of the universe and confirmation of a theory which some argued implied the existence of a multiverse—the latest Copernican demotion—the idea that our universe was just one of an ensemble, possibly infinite, of parallel universes in which every possibility was instantiated somewhere. Amid the frenzy, a few specialists in the field, including researchers on competing projects, raised the question, “What about the dust?” Dust again! As it happens, while gravitational radiation can induce B-mode polarisation, it isn't the only thing which can do so. Our galaxy is filled with dust and magnetic fields which can cause those dust particles to align with them. Aligned dust particles cause polarised reflections which can mimic the B-mode signature of the gravitational radiation sought by BICEP2.

The BICEP2 team was well aware of this potential contamination problem. Unfortunately, their telescope was sensitive only to one wavelength, chosen to be the most sensitive to B-modes due to primordial gravitational radiation. It could not, however, distinguish a signal from that source from one due to foreground dust. At the same time, the European Space Agency's Planck spacecraft was collecting precision data on the cosmic background radiation in a variety of wavelengths, including one sensitive primarily to dust. Those data would have allowed the BICEP2 investigators to quantify the degree to which their signal was due to dust. But there was a problem: BICEP2 and Planck were direct competitors.

Planck had the data, but had not released them to other researchers. However, the BICEP2 team discovered that a member of the Planck collaboration had shown a slide at a conference containing unpublished Planck observations of dust. A member of the BICEP2 team digitised an image of the slide, created a model from it, and concluded that dust contamination of the BICEP2 data would not be significant. This was a highly dubious, if not explicitly unethical, move, but it seemed to confirm measurements from earlier experiments and gave the team confidence in their results.

In September 2014, a preprint from the Planck collaboration (eventually published in 2016) showed that B-modes from foreground dust could account for all of the signal detected by BICEP2. In January 2015, the European Space Agency published an analysis of the Planck and BICEP2 observations which showed the entire BICEP2 detection was consistent with dust in the Milky Way. The epochal detection of inflation had been deflated. The BICEP2 researchers had been deceived by dust.

The author, a founder of the original BICEP project, was so close to a Nobel prize he was already trying to read the minds of the Nobel committee to divine who among the many members of the collaboration they would reward with the gold medal. Then it all went away, seemingly overnight, turned to dust. Some said that the entire episode had injured the public's perception of science, but to me it seems an excellent example of science working precisely as intended. A result is placed before the public; others, with access to the same raw data, are given an opportunity to critique it, setting forth their own data; and eventually researchers in the field decide whether the original results are correct. Yes, it would probably be better if all of this happened in musty library stacks of journals almost nobody reads before bursting out of the chest of mass media, but in an age where scientific research is funded by agencies spending money taken from hairdressers and cab drivers by coercive governments under implicit threat of violence, it is inevitable they will force researchers into the public arena to trumpet their “achievements”.

In parallel with the saga of BICEP2, the author discusses the Nobel Prizes and what he considers to be their dysfunction in today's scientific research environment. I was surprised to learn that many of the curious restrictions on awards of the Nobel Prize were not, as I had heard and many believe, conditions of Alfred Nobel's will. In fact, the conditions that the prize be shared no more than three ways, not be awarded posthumously, and not be awarded to a group (with the exception of the Peace prize) appear nowhere in Nobel's will, but were imposed later by the Nobel Foundation. Further, Nobel's will explicitly states that the prizes shall be awarded to “those who, during the preceding year, shall have conferred the greatest benefit to mankind”. This “preceding year” constraint has been ignored since the inception of the prizes.

He decries the lack of “diversity” in Nobel laureates (by which he means, almost entirely, how few women have won prizes). While there have certainly been women who deserved prizes and didn't win (Lise Meitner, Jocelyn Bell Burnell, and Vera Rubin are prime examples), there are many more men who didn't make the three-laureate cut-off (Freeman Dyson is an obvious example for the 1965 Physics Nobel for quantum electrodynamics). The whole Nobel prize concept is capricious, rewarding only those who happen to be in the right place at the right time, in the right field the committee has decided deserves an award that year, and who are lucky enough not to die before the prize is awarded. To imagine it to be “fair” or representative of scientific merit is, in the estimation of this scribbler, in flying unicorn territory.

In all, this is a candid view of how science is done at the top of the field today, with all of the budget squabbles, maneuvering for recognition, rivalry among competing groups of researchers, balancing the desire to get things right with the compulsion to get there first, and the eye on that prize, given only to a few in a generation, which can change one's life forever.

Personally, I can't imagine being so fixated on winning a prize one has so little chance of gaining. It's like being obsessed with winning the lottery—and about as likely.

In parallel with all of this is an autobiographical account of the career of a scientist with its ups and downs, which is both a cautionary tale and an inspiration to those who choose to pursue that difficult and intensely meritocratic career path.

I recommend this book on all three tracks: a story of scientific discovery, mis-interpretation, and self-correction, the dysfunction of the Nobel Prizes and how they might be remedied, and the candid story of a working scientist in today's deeply corrupt coercively-funded research environment.


Kroese, Robert. The Dream of the Iron Dragon. Seattle: CreateSpace, 2018. ISBN 978-1-9837-2921-8.
The cover tells you all you need to know about this book: Vikings!—spaceships! What could go wrong? From the standpoint of a rip-roaring science fiction adventure, absolutely nothing: this masterpiece is further confirmation that we're living in a new Golden Age of science fiction, made possible by the intensely meritocratic world of independent publishing sweeping aside the politically-correct and social justice warrior converged legacy publishers and re-opening the doors of the genre to authors who spin yarns with heroic characters, challenging ideas, and red-blooded adventure just as in the works of the grandmasters of previous golden ages.

From the standpoint of the characters in this novel, a great many things go wrong, and there the story begins. In the twenty-third century, humans find themselves in a desperate struggle with the only other intelligent species they'd encountered, the Cho-ta'an. First contact was in 2125, when a human interstellar ship was destroyed by the Cho-ta'an while exploring the Tau Ceti system. Shortly thereafter, co-ordinated attacks began on human ships and settlements which indicated the Cho-ta'an possessed faster-than-light travel, which humans did not. Humans formed the Interstellar Defense League (IDL) to protect their interests and eventually discovered and captured a Cho-ta'an jumpgate, which allowed instantaneous travel across interstellar distances. The IDL was able to reverse-engineer the gate sufficiently to build their own copies, but did not understand how it worked—it was apparently based upon some kind of wormhole physics beyond their comprehension.

Humans fiercely defended their settlements, but inexorably the Cho-ta'an advanced, seemingly driven by an inflexible philosophy that the universe was theirs alone and any competition must be exterminated. All attempts at diplomacy failed. The Earth had been rendered uninhabitable and evacuated, and most human settlements destroyed or taken over by the Cho-ta'an. Humanity was losing the war and time was running out.

In desperation, the IDL set up an Exploratory Division whose mission was to seek new homes for humans sufficiently distant from Cho-ta'an space to buy time: avoiding extinction in the hope the new settlements would be able to develop technologies to defend themselves before the enemy discovered them and attacked. Survey ship Andrea Luhman was en route to the Finlan Cluster on such a mission when it received an enigmatic message which seemed to indicate there was intelligent life out in this distant region where no human or Cho-ta'an had been known to go.

A complex and tense encounter leaves the crew of this unarmed exploration ship in possession of a weapon which just might turn the tide for humanity and end the war. Unfortunately, as they start their return voyage with this precious cargo, a Cho-ta'an warship takes up pursuit, threatening to vaporise this last best hope for survival. In a desperate move, the crew of the Andrea Luhman decide to try something that had never been attempted before: thread the needle of the rarely used jumpgate to abandoned Earth at nearly a third of the speed of light while evading missiles fired by the pursuing warship. What could go wrong? Actually a great deal. Flash—darkness.

When they got the systems back on-line, it was clear they'd made it to the Sol system, but they picked up nothing on any radio frequency. Even though Earth had been abandoned, satellites remained and, in any case, the jumpgate beacon should be transmitting. On further investigation, they discovered the stars were wrong. Precision measurements of star positions, correlated with known proper motions from the ship's vast database, allowed calculation of the current date. And the answer? “March sixteen, 883 a.d.”
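
The dating method described, comparing measured star positions against catalogued positions and proper motions, reduces to a one-parameter least-squares fit for the elapsed time. Here is a toy illustration in Python; all of the star data (tangent-plane positions and proper motions) are invented for the example:

```python
# Each star: catalogue position at the reference epoch (milliarcseconds in
# a local tangent plane) and proper motion (milliarcseconds per year).
# All values are invented for illustration.
stars = [
    ((0.0, 0.0), (5.0, -3.0)),
    ((1000.0, 500.0), (-2.0, 4.0)),
    ((-400.0, 800.0), (1.5, 2.5)),
]

CATALOG_EPOCH = 2000.0
TRUE_YEAR = 883.0  # the epoch the solver should recover

# Simulate the positions the instruments would measure in A.D. 883.
observed = [(x + mx * (TRUE_YEAR - CATALOG_EPOCH),
             y + my * (TRUE_YEAR - CATALOG_EPOCH))
            for (x, y), (mx, my) in stars]

# Least-squares fit of the single unknown dt in: observed = catalog + mu*dt,
# i.e. dt = sum(mu . (observed - catalog)) / sum(mu . mu)
num = den = 0.0
for ((x, y), (mx, my)), (ox, oy) in zip(stars, observed):
    num += mx * (ox - x) + my * (oy - y)
    den += mx * mx + my * my

year = CATALOG_EPOCH + num / den
print(round(year))  # 883
```

Each star's displacement from its catalogue position, projected along its proper-motion vector, votes for a single elapsed time; the least-squares fit combines the votes.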

The jumpgate beacon wasn't transmitting because the jumpgate hadn't been built yet and wouldn't be for over a millennium. Worse, a component of the ship's main drive had been destroyed in the jump and, with only auxiliary thrusters, it would take more than 1500 years to get to the nearest jumpgate. They couldn't survive that long in stasis and, even if they did, they'd arrive two centuries too late to save humanity from the Cho-ta'an.

Desperate situations call for desperate measures, and this was about as desperate as can be imagined. While there was no hope of repairing the drive component on-board, it just might be possible to find, refine, and process the resources into a replacement on the Earth. It was decided to send the ship's only lander to an uninhabited, resource-rich portion of the Earth and, using its twenty-third century technology, build the required part. What could go wrong? But even though nobody on the crew was named Murphy, he was, as usual, on board. After a fraught landing attempt in which a great many things go wrong, the landing party of four finds themselves wrecked in a snowfield in what today is southern Norway. Then the Vikings show up.

The crew of twenty-third century spacefarers have crashed in the Norway of Harald Fairhair, who was struggling to unite individual bands of Vikings into a kingdom under his rule. The people from the fallen silver sky ship must quickly decide with whom to ally themselves, how to communicate across a formidable language barrier and millennia of culture, whether they can or dare meddle with history, and how to survive and somehow save humanity in what is now their distant future.

There is adventure, strategy, pitched battles, technological puzzles, and courage and resourcefulness everywhere in this delightful narrative. You grasp just how hard life was in those days, how differently people viewed the world, and how little all of our accumulated knowledge is worth without the massive infrastructure we have built over the centuries as we have acquired it.

You will reach the end of this novel wanting more and you're in luck. Volume two of the trilogy, The Dawn of the Iron Dragon (Kindle edition), is now available and the conclusion, The Voyage of the Iron Dragon, is scheduled for publication in December, 2018. It's all I can do not to immediately devour the second volume starting right now.

The Kindle edition is free for Kindle Unlimited subscribers.


Rand, Ayn. Ideal. New York: New American Library, 2015. ISBN 978-0-451-47317-2.
In 1934, the 29-year-old Ayn Rand was trying to establish herself in Hollywood. She had worked as a junior screenwriter and wardrobe person, but had not yet landed a major writing assignment. She wrote Ideal on speculation, completing the 32,000 word novella and then deciding it would work better as a stage play. She set the novella aside and finished the play version in 1936. The novella was never published nor was the play produced during her lifetime. After her death in 1982, the play was posthumously published in the anthology The Early Ayn Rand, but the novella remained largely unknown until this edition, which includes both it and the play, was published in 2015.

Ideal is the story of movie idol Kay Gonda, a beautiful and mysterious actress said to have been modeled on Greta Garbo. The night before the story begins, Gonda had dinner alone with oil baron Granton Sayers, whose company, it was rumoured, was on the brink of ruin in the depths of the Depression. Afterwards, Sayers was found in his mansion dead of a gunshot wound, and Gonda was nowhere to be found. Rumours swirled through the press that Gonda was wanted for murder, but there was a blackout of information which drove the press and her studio near madness. Her private secretary said that she had not seen Gonda since she left for the dinner, but that six pieces of her fan mail were missing from her office at the studio, so she assumed that Gonda must have returned and taken them.

The story then describes six episodes in which the fugitive Kay Gonda shows up, unannounced, at the homes of six of her fans, all of whom expressed their utter devotion to her in their letters. Five of the six—a henpecked manager of a canning company, an ageing retiree about to lose the house in which he raised his children, an artist who paints only canvases of Ms Gonda and has just won first prize in an important exhibition, an evangelist whose temple faces serious competition from the upstart Church of the Cheery Corner, and a dissipated playboy at the end of his financial rope—end up betraying the idol to whom they had put pen to paper to express their devotion when confronted with the human being in the flesh and the constraints of the real world. The sixth fan, Johnnie Dawes, who has struggled to keep a job and a roof over his head all his adult life, sees in Kay Gonda an opportunity to touch a perfection he had never hoped to experience in his life and devises a desperate plan to save Gonda from her fate.

A surprise ending reveals that much of what the reader has assumed is not what really happened, and that while Kay Gonda never once explicitly lied, neither did she prevent those to whom she spoke from jumping to the wrong conclusions.

This is very minor Ayn Rand. You can see some of the storytelling skills which would characterise her later work beginning to develop, but the story has no plot: it is a morality tale presented in unconnected episodes, and the reader is left to draw the moral on his or her own. Given that the author was a struggling screenwriter in an intensely competitive Hollywood, the shallowness and phoniness of the film business is much on display here, although not so explicitly skewered as the later Ayn Rand might have done. The message is one of “skin in the game”—people can only be judged by what they do when confronted by difficult situations, not by what they say when words are cheap.

It is interesting to compare the play to the novella. The stories are clearly related, but Rand swaps out one of the fans, the elderly man, for a young, idealistic, impecunious, and totally phoney Communist activist. The play was written in 1936, the same year as We the Living, and perhaps the opportunity to mock pathetic Hollywood Bolsheviks was too great to pass by.

This book will mostly be of interest to those who have read Ayn Rand's later work and are curious to read some of the first fiction she ever wrote. Frankly, it isn't very good, and an indication of this is that Ayn Rand, whose reputation later in life would have made it easy to arrange publication for this work, chose to leave it in the trunk all her life. But she did not destroy the manuscript, so there must have been some affection for it.


September 2018

Dean, Josh. The Taking of K-129. New York: Dutton, 2017. ISBN 978-1-101-98443-7.
On February 24, 1968, Soviet Golf class submarine K-129 sailed from its base in Petropavlovsk for a routine patrol in the Pacific Ocean. These ballistic missile submarines were, at the time, a key part of the Soviet nuclear deterrent. Each carried three SS-N-5 missiles armed with one 800 kiloton nuclear warhead per missile. This was an intermediate range missile which could hit targets inside an enemy country if the submarine approached sufficiently close to the coast. For defence and attacking other ships, Golf class submarines carried two torpedoes with nuclear warheads as well as conventional high explosive warhead torpedoes.

Unlike the U.S. nuclear powered Polaris submarines, the Golf class had conventional diesel-electric propulsion. When submerged, the submarine was powered by batteries which provided limited speed and range and required surfacing or running at shallow snorkel depth for regular recharging by the diesel engines. They would be the last generation of Soviet diesel-electric ballistic missile submarines: the Hotel class and subsequent boats would be nuclear powered.

K-129's mission was to proceed stealthily to a region of open ocean north of Midway Atoll and patrol there, ready to launch its missiles at U.S. assets in the Pacific in case of war. Submarines on patrol would send coded burst transmissions on a prearranged schedule to indicate that their mission was proceeding as planned.

On March 8, a scheduled transmission from K-129 failed to arrive. This wasn't immediately cause for concern, since equipment failure was not uncommon, and a submarine commander might choose not to transmit if worried that surfacing and sending the message might disclose his position to U.S. surveillance vessels and aircraft. But when K-129 remained silent for a second day, the level of worry escalated rapidly. Losing a submarine armed with nuclear weapons was a worst-case scenario, and one which had never happened in Soviet naval operations.

A large-scale search-and-rescue fleet of twenty-four vessels, including four submarines, set sail from the base in Kamchatka, all communicating in the open on radio and pinging away with active sonar. They were heard to repeatedly call a ship named Red Star with no reply. The search widened, and eventually included thirty-six vessels and fifty-three aircraft, continuing over a period of seventy-three days. Nothing was found, and six months after the disappearance, the Soviet Navy issued a statement that K-129 had been lost while on duty in the Pacific with all on board presumed dead. This was not only a wrenching emotional blow to the families of the crew, but also a financial gut-shot, depriving them of the pension due families of men lost in the line of duty and paying only the one-time accidental death payment and partial pension for industrial accidents.

But if the Soviets had no idea where their submarine was, this was not the case for the U.S. Navy. Sound travels huge distances through the oceans, and starting in the 1950s, the U.S. began to install arrays of hydrophones (undersea sound detectors) on the floors of the oceans around the world. By the 1960s, these arrays, called SOSUS (SOund SUrveillance System), were deployed and operational in both the Atlantic and Pacific and used to track the movements of Soviet submarines. When K-129 went missing, SOSUS analysts went back over their archived data and found a sharp pulse just a few seconds after midnight local time on March 11, around 180° West and 40° North: 2500 km northeast of Hawaii. Not only did the pulse look nothing like the natural sounds often picked up by SOSUS; natural events like undersea earthquakes don't tend to happen at such socially constructed round-number times and locations. The pulse was picked up by multiple sensors, allowing its position to be determined accurately. The U.S. knew where the K-129 lay on the ocean floor. But what to do with that knowledge?
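
Locating an event from arrival times at several separated sensors is a multilateration (time-difference-of-arrival) problem. Here is a toy sketch in Python; the sensor layout, event position, flat-plane geometry, and sound speed are all assumptions chosen for illustration, not SOSUS specifics:

```python
import math

C = 1.5  # assumed speed of sound in seawater, km/s

# Hypothetical hydrophone positions on a flat 200 km x 200 km patch (km).
hydrophones = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0), (150.0, 150.0)]
event = (80.0, 60.0)  # "true" event location, unknown to the solver

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Arrival time at each sensor. The absolute origin time is unknown, so
# only differences of arrival times carry information.
arrivals = [dist(h, event) / C for h in hydrophones]
tdoa = [t - arrivals[0] for t in arrivals]

def residual(p):
    """Sum of squared mismatches between predicted and observed TDOAs."""
    d0 = dist(hydrophones[0], p)
    return sum(((dist(h, p) - d0) / C - td) ** 2
               for h, td in zip(hydrophones, tdoa))

# Brute-force grid search at 1 km resolution.
best = min(((x, y) for x in range(201) for y in range(201)), key=residual)
print(best)  # (80, 60)
```

With four sensors there are three independent time differences, enough to pin down a unique position on the plane; each time difference constrains the source to one branch of a hyperbola, and the position is where the hyperbolas intersect.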

One thing was immediately clear. If the submarine was in reasonably intact condition, it would be an intelligence treasure unparalleled in the postwar era. Although it did not represent the latest Soviet technology, it would provide analysts their first hands-on examination of Soviet ballistic missile, nuclear weapon, and submarine construction technologies. Further, the boat would certainly be equipped with cryptographic and secure radio communications gear which might provide an insight into penetrating the secret communications to and from submarines on patrol. (Recall that British breaking of the codes used to communicate with German submarines in World War II played a major part in winning the Battle of the Atlantic.) But a glance at a marine chart showed how daunting it would be to reach the site of the wreck. The ocean in the vicinity of the co-ordinates identified by SOSUS was around 5000 metres deep. Only a very few special-purpose research vessels can operate at such a depth, where the water pressure is around 490 times that of the atmosphere at sea level.
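
The quoted pressure is simple hydrostatics, p = ρgh. A quick sanity check, assuming an average seawater density of 1025 kg/m³:

```python
RHO = 1025.0     # kg/m^3, assumed average seawater density
G = 9.81         # m/s^2, gravitational acceleration
DEPTH = 5000.0   # m, approximate depth at the wreck site
ATM = 101325.0   # Pa, one standard atmosphere

pressure_pa = RHO * G * DEPTH    # hydrostatic pressure at depth
print(round(pressure_pa / ATM))  # 496
```

which is consistent with the figure of around 490 atmospheres (seawater density actually increases slightly with depth, so this is only an estimate).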

The U.S. intelligence community wanted that sub. The first step was to make sure they'd found it. The USS Halibut, a nuclear-powered Regulus cruise missile launching submarine converted for special operations missions, was dispatched to the area where the K-129 was thought to lie. Halibut could not dive anywhere near as deep as the ocean floor, but was equipped with a remote-controlled, wire-tethered “fish”, which could be lowered near the bottom and then directed around the search area, observing with side-looking sonar and taking pictures. After seven weeks searching in vain, with fresh food long exhausted and crew patience wearing thin, the search was abandoned and course set back to Pearl Harbor.

But the prize was too great to pass up. So Halibut set out again, and after another month of operating the fish, developing thousands of pictures, and fraying tempers, there it was! Broken into two parts, but with both apparently largely intact, lying on the ocean bottom. Now what?

While there were deep sea research vessels able to descend to such depths, they were completely inadequate to exploit the intelligence haul that K-129 promised. That would require going inside the structure, dismantling the missiles and warheads, examining and testing the materials, and searching for communications and cryptographic gear. The only way to do this was to raise the submarine. To say that this was a challenge is to understate its difficulty—adjectives fail. The greatest mass which had ever been raised from such a depth was around 50 tonnes and K-129 had a mass of 1,500 tonnes—thirty times greater. But hey, why not? We're Americans! We've landed on the Moon! (By then it was November, 1969, four months after that “one small step”.) And so, Project Azorian was born.

When it comes to doing industrial-scale things in the deep ocean, all roads (or sea lanes) lead to Global Marine. A publicly-traded company little known to those outside the offshore oil exploration industry, this company and its genius naval architect John Graham had pioneered deep-sea oil drilling. While most offshore oil rigs, like those on terra firma, were firmly anchored to the ground around the drill hole, Global Marine had pioneered the technology which allowed a ship, with a derrick mounted amidships, to precisely station-keep above the bore-hole on the ocean floor far beneath the ship. This required dropping sonar markers on the ocean floor, which the ship used to precisely maintain its position with respect to them. This was just one part of the puzzle.

To recover the submarine, the ship would need to lower what amounted to a giant claw (“That's claw, not craw!”, you “Get Smart” fans) to the abyssal plain, grab the sub, and lift its 1500 tonne mass to the surface. During the lift, the pipe string which connected the ship to the claw would be under such stress that, should it break, it would release energy comparable to an eight kiloton nuclear explosion, which would be bad.

This would have been absurdly ambitious if conducted in the open, like the Apollo Project, but in this case it also had to be done covertly, since the slightest hint that the U.S. was attempting to raise K-129 would almost certainly provoke a Soviet response ranging from diplomatic protests to a naval patrol around the site of the sinking aimed at harassing the recovery ships. The project needed a cover story and a cut-out to hide the funding to Global Marine which, as a public company, had to disclose its financials quarterly and, unlike minions of the federal government funded by taxes collected from hairdressers and cab drivers through implicit threat of violence, could not hide its activities in a “black budget”.

This was seriously weird and, as a contemporary philosopher said, “When the going gets weird, the weird turn pro.” At the time, nobody was more professionally weird than Howard Hughes. He had taken reclusion to a new level, utterly withdrawing from contact with the public after revulsion from dealing with the Washington swamp and the media. His company still received royalties from every oil well drilled using his drill bits, and his aerospace and technology companies were plugged into the most secret ventures of the U.S. government. Simply saying, “It's a Hughes project” was sufficient to squelch most questions: it meant the project had unlimited funds and the sanction of the U.S. government (including three-letter agencies whose names must not be spoken [brrrr!]), and that pesky journalists would encounter a stone wall from the centre of the Earth to the edge of the universe if they tried to dig into details.

But covert as the project might be, aspects of its construction and operation would unavoidably be in the public eye. You can't build a 189 metre long, 51,000 tonne ship, the Hughes Glomar Explorer, with an 80 metre tall derrick sticking up amidships, at a shipyard on the east coast of the U.S., send it around Cape Horn to its base on the west coast (the ship was too wide to pass through the Panama Canal), without people noticing. A cover story was needed, and the CIA and their contractors cooked up a doozy.

Large areas of the deep sea floor are covered by manganese nodules, concretions which form around a seed and grow extremely slowly, but eventually reach the size of potatoes or larger. Nodules are composed of around 30% manganese, plus other valuable metals such as nickel, copper, and cobalt. There are estimated to be more than 21 billion tonnes of manganese nodules on the deep ocean floor (depths of 4000 to 6000 metres), and their composition is richer than many of the ores from which the metals they contain are usually extracted. Further, they're just lying on the seabed. If you could figure out how to go down there and scoop them up, you wouldn't have to dig mines and process huge amounts of rock. Finally, they were in international waters, and despite attempts by kleptocratic dictators (some in landlocked countries) and the international institutions who support them to enact a “Law of the Sea” treaty to pick the pockets of those who created the means to use this resource, at the time the nodules were just there for the taking—you didn't have to pay kleptocratic dictators for mining rights or have your profits skimmed by ever-so-enlightened democratic politicians in developed countries.

So, the story was put out that Howard Hughes was setting out to mine the nodules on the Pacific Ocean floor, and that Glomar Explorer, built by Global Marine under contract for Hughes (operating, of course, as a cut-out for the CIA), would deploy a robotic mining barge called the Hughes Mining Barge 1 (HMB-1) which, lowered to the ocean floor, would collect nodules, crush them, and send the slurry to the surface for processing on the mother ship.

This solved a great number of potential problems. Global Marine, as a public company, could simply (and truthfully) report that it was building Glomar Explorer under contract to Hughes, and had no participation in the speculative and risky mining venture, which would have invited scrutiny by Wall Street analysts and investors. Hughes, operating as a proprietorship, was not required to disclose the source of the funds it was paying Global Marine. Everybody assumed the money was coming from Howard Hughes' personal fortune, which he had invested, over his career, in numerous risky ventures, when in fact, he was simply passing through money from a CIA black budget account. The HMB-1 was built by Lockheed Missiles and Space Company under contract from Hughes. Lockheed was involved in numerous classified U.S. government programs, so operating in the same manner for the famously secretive Hughes raised few eyebrows.

The barge, 99 metres in length, was built in a giant enclosed hangar in the port of Redwood City, California, which shielded it from the eyes of curious onlookers and Soviet reconnaissance satellites passing overhead. This was essential, because a glance at what was being built would have revealed that it looked nothing like a mining barge but rather a giant craw—sorry—claw! To install the claw on the ship, it was towed, enclosed in its covered barge, to a location near Catalina Island in southern California, where deeper water allowed it to be sunk beneath the surface, and then lifted into the well (“moon pool”) of Glomar Explorer, all out of sight to onlookers.

So far, the project had located the target on the ocean floor, designed and built a special ship and retrieval claw to seize it, fabricated a cover story of a mining venture so persuasive other mining companies were beginning to explore launching their own seabed mining projects, and evaded scrutiny by the press, Congress, and Soviet intelligence assets. But these are pussycats compared to the California Tax Nazis! After the first test of mating the claw to the ship, Glomar Explorer took to the ocean to, it was said, test the stabilisation system which would keep the derrick vertical as the ship pitched and rolled in the sea. Actually, the purpose of the voyage was to get the ship out of U.S. territorial waters on March 1st, the day California assessed a special inventory tax on all commercial vessels in state waters. This would not only cost a lot of money, it would force disclosure of the value of the ship, which could be difficult to reconcile with its cover mission. Similar fast footwork was required when Hughes took official ownership of the vessel from Global Marine after acceptance. A trip outside U.S. territorial waters was also required to get off the hook for the 7% sales tax California would otherwise charge on the transfer of ownership.

Finally, in June 1974, all was ready, and Glomar Explorer with HMB-1 attached set sail from Long Beach, California to the site of K-129's wreck, arriving on site on the Fourth of July, only to encounter foul weather. Opening the sea doors in the well in the centre of the ship and undocking the claw required calm seas, and it wasn't until July 18th that they were ready to begin the main mission. Just at that moment, what should show up but a Soviet missile tracking ship. After sending its helicopter to inspect Explorer, it eventually departed. This wasn't the last of the troubles with pesky Soviets.

On July 21, the recovery operation began, slowly lowering the claw on its string of pipes. Just at this moment, another Soviet ship arrived, a 47 metre ocean-going tug called SB-10. This tug would continue to harass the recovery operation for days, approaching on an apparent collision course and then veering off. (Glomar Explorer could not move during the retrieval operation, being required to use its thrusters to maintain its position directly above the wrecked submarine on the bottom.)

On August 3, the claw reached the bottom and its television cameras revealed it was precisely on target—there was the submarine, just as it had been photographed by the Halibut six years earlier. The claw gripped the larger part of the wreck, its tines closed under it, and a combination of pistons driving against the ocean bottom and the lift system pulling on the pipe from the ship freed the submarine from the bottom. Now the long lift could begin.

Everything had worked. The claw had been lowered, found its target on the first try, successfully seized it despite the ocean bottom's being much harder than expected, freed it from the bottom, and the ship had then successfully begun to lift the 6.4 million kg of pipe, claw, and submarine back toward the surface. Within the first day of the lift, more than a third of the way to the surface, with the load on the heavy lift equipment diminishing by 15 tonnes as each segment of lift pipe was removed from the string, a shudder went through the ship and the heavy lift equipment lurched violently. Something had gone wrong, seriously wrong. Examination of television images from the claw revealed that several of the tines gripping the hull of the submarine had failed and part of the sub, maybe more than half, had broken off and fallen back toward the abyss. (It was later decided that the cause of the failure was that the tines had been fabricated from maraging steel, which is very strong but brittle, rather than a more ductile alloy which would bend under stress but not break.)

After consultation with CIA headquarters, it was decided to continue the lift and recover whatever was left in the claw. (With some of the tines broken and the mechanism used to break the load free of the ocean floor left on the bottom, it would have been impossible to return and recover the lost part of the sub on this mission.) On August 6th, the claw and its precious payload reached the ship and entered the moon pool in its centre. Coincidentally, the Soviet tug departed the scene the same day. Now it was possible to assess what had been recovered, and the news was not good: two thirds of the sub had been lost, including the ballistic missile tubes and the code room. Only the front third was in the claw. Further, radiation five times greater than background was detected even outside the hull—those exploring it would have to proceed carefully.

An “exploitation team” composed of CIA specialists and volunteers from the ship's crew began to explore the wreckage, photographing and documenting every part recovered. They found the bodies of six Soviet sailors and assorted human remains which could not be identified; all went to the ship's morgue. Given that the bow portion of the submarine had been recovered, it is likely that one or more of its torpedoes equipped with nuclear warheads were recovered, but to this day the details of what was found in the wreck remain secret. By early September, the exploitation was complete and the bulk of the recovered hull, less what had been removed and sent for analysis, was dumped in the deep ocean 160 km south of Hawaii.

One somber task remained. On September 4, 1974, the remains of the six recovered crewmen and the unidentified human remains were buried at sea in accordance with Soviet Navy tradition. A video tape of this ceremony was made and, in 1992, a copy was presented to Russian President Boris Yeltsin by then CIA director Robert Gates.

The partial success encouraged some in the CIA to mount a follow-up mission to recover the rest of the sub, including the missiles and code room. After all, they knew precisely where it was, had a ship in hand, fully paid for, which had successfully lowered the claw to the bottom and returned to the surface with part of the sub, and they knew what had gone wrong with the claw and how to fix it. The effort was even given a name, Project Matador. But it was not to be.

Over the five years of the project there had been leaks to the press and reporters sniffing on the trail of the story, but the CIA had been able to avert disclosure by contacting the reporters directly, explaining the importance of the mission and the need for secrecy, and offering them an exclusive of full disclosure and permission to publish it before the project was officially declassified for the general public. This had kept a lid on the secret throughout the entire development process and the retrieval and analysis, but it all came to an end in March 1975 when Jack Anderson got wind of the story. There was no love lost between Anderson and what we now call the Deep State. Anderson believed the First Amendment was divinely inspired and absolute, while J. Edgar Hoover had called Anderson “lower than the regurgitated filth of vultures”. Further, this was a quintessential Jack Anderson story—based upon his sources, he presented Project Azorian as a US$ 350 million failure which had produced no useful intelligence information and was being kept secret only to cover up the squandering of taxpayers' money.

CIA Director William Colby offered Anderson the same deal other journalists had accepted, but was flatly turned down. Five minutes before Anderson went on the radio to break the story, Colby was still pleading with him to remain silent. On March 18, 1975, Anderson broke the story on his Mutual Radio Network show and, the next day, published additional details in his nationally syndicated newspaper column. Realising the cover had been blown, Colby called all of the reporters who had agreed to hold the story to give them the green light to publish. Seymour Hersh of the New York Times had his story ready to go, and it ran on the front page of the next day's paper, providing far more detail (albeit along with a few errors) than Anderson's disclosure. Hersh revealed that he had been aware of the project since 1973 but had agreed to withhold publication in the interest of national security.

The story led newspaper and broadcast news around the country and effectively drove a stake through any plans to mount a follow-up retrieval mission. On June 16, 1975, Secretary of State Henry Kissinger made a formal recommendation to President Gerald Ford to terminate the project, and that was the end of it. The Soviets had communicated through a back channel that they had no intention of permitting a second retrieval attempt, and they had maintained an ocean-going tug on site to monitor any activity since shortly after the story broke in the U.S.

The CIA's official reaction to all the publicity was what has come to be called the “Glomar Response”: “We can neither confirm nor deny.” And that is where things stand more than four decades after the retrieval attempt. Although many of those involved in the project have spoken informally about aspects of it, there has never been an official report on precisely what was recovered or what was learned from it. Some CIA veterans have said, off the record, that much more was learned from the recovered material than has been suggested in press reports, with a few arguing that the entire large portion of the sub was recovered and the story about losing much of it was a cover story. (But if this was the case, the whole plan to mount a second retrieval mission and the substantial expense of repairing and upgrading the claw for the attempt, which is well documented, would also have to have been a costly cover story.)

What is certain is that Project Azorian was one of the most daring intelligence exploits in history, carried out in total secrecy under the eyes of the Soviets, and kept secret from an inquiring press for five years by a cover story so persuasive other mining companies bought it hook, line, and sinker. We may never know all the details of the project, but from what we do know it is a real-world thriller which equals or exceeds those imagined by masters of the fictional genre.

 Permalink

Sledge, E[ugene] B[ondurant]. With the Old Breed. New York: Presidio Press, [1981] 2007. ISBN 978-0-89141-906-8.
When the United States entered World War II after the attack on Pearl Harbor, the author was enrolled at the Marion Military Institute in Alabama, preparing for an officer's commission in the U.S. Army. Worried that the war might end before he was able to do his part, in December 1942, still a freshman at Marion, he enrolled in a Marine Corps officer training program. The following May, after the end of his freshman year, he was ordered to report for Marine training at Georgia Tech on July 1, 1943. The 180-man detachment was scheduled to take courses year-round and then, after two years, report to Quantico to complete their officers' training prior to commission.

This still didn't seem fast enough (and, indeed, had he stayed with the program as envisioned, he would have missed the war), so he and around half of his fellow trainees neglected their studies, flunked out, and immediately joined the Marine Corps as enlisted men. Following boot camp at a base near San Diego, he was assigned to infantry and sent to nearby Camp Elliott for advanced infantry training. Although all Marines are riflemen (Sledge had qualified at the sharpshooter level during basic training), newly-minted Marine infantrymen were, after introduction to all of the infantry weapons, allowed to choose the one in which they would specialise. In most cases, they'd get their first or second choice. Sledge got his first: the 60 mm M2 mortar which he, as part of a crew of three, would operate in combat in the Pacific. Mortarmen carried the M1 carbine, and this weapon, which fired a less powerful round than the M1 Garand main battle rifle used by riflemen, would be his personal weapon throughout the war.

With the Pacific island-hopping war raging, everything was accelerated, and on February 28th, 1944, Sledge's 46th Replacement Battalion (the name didn't inspire confidence—they would replace Marines killed or injured in combat, or the lucky few rotated back to the U.S. after surviving multiple campaigns) shipped out, landing first at New Caledonia, where they received additional training, including practice amphibious landings and instruction in Japanese weapons and tactics. At the start of June, Sledge's battalion was sent to Pavuvu island, base of the 1st Marine Division, which had just concluded the bloody battle of Cape Gloucester.

On arrival, Sledge was assigned as a replacement to the 1st Marine Division, 5th Regiment, 3rd Battalion. This unit had a distinguished combat record dating back to the First World War, and would have been his first choice if he'd been given one, which he hadn't. He says, “I felt as though I had rolled the dice and won.” This was his first contact with what he calls the “Old Breed”: Marines, some of whom had been in the Corps before Pearl Harbor, who had imbibed the traditions of the “Old Corps” and survived some of the most intense combat of the present conflict, including Guadalcanal. Many of these veterans had, in the argot of the time, “gone Asiatic”: developed the eccentricities of men who had seen and lived things those just arriving in theatre never imagined, and become marinated in deep hatred for the enemy based upon personal experience. A glance was all it took to tell the veterans from the replacements.

After additional training, in late August the Marines embarked for the assault on the island of Peleliu in the Palau Islands. The tiny island, just 13 square kilometres, was held by a Japanese garrison of 10,900, and was home to an airfield. Capturing the island was considered essential to protect the right flank of MacArthur's forces during the upcoming invasion of the Philippines, and to secure the airfield which could support the invasion. The attack on Peleliu was fixed for 15 September 1944, and it would be Sledge's first combat experience.

From the moment of landing, resistance was fierce. Despite an extended naval bombardment, well-dug-in Japanese defenders engaged the Marines as they hit the beaches, and continued as they progressed into the interior. In previous engagements, the Japanese had adopted foolhardy and suicidal tactics such as mass frontal “banzai” charges into well-defended Marine positions. By Peleliu, however, they had learned that this did not work, and shifted their strategy to defence in depth, turning the entire island into a network of defensive positions, covering one another, and linked by tunnels for resupply and redeploying forces. They were prepared to defend every square metre of territory to the death, even after their supplies were cut off and there was no hope of relief. Further, Marines were impressed by the excellent fire discipline of the Japanese—they did not waste ammunition firing blindly but chose their shots carefully, and would expend scarce supplies such as mortar rounds only on concentrations of troops or high value targets such as tanks and artillery.

This, combined with the oppressive heat and humidity, lack of water and food, and terror from incessant shelling by artillery by day and attacks by Japanese infiltrators by night, made the life of the infantry a living Hell. Sledge chronicles this from the viewpoint of a Private First Class, not an officer or historian after the fact. He and his comrades rarely knew precisely where they were, where the enemy was located, how other U.S. forces on the island were faring, or what the overall objectives of the campaign were. There was simply a job to be done, day by day, with their best hope being to somehow survive it. Prior to the invasion, Marine commanders estimated the island could be taken in four days. Rarely in the Pacific war was a forecast so wrong. In fact, it was not until November 27th that the island was declared secured. The Japanese demonstrated their willingness to defend to the last man. Of the initial force of 10,900 defending the island, 10,695 were killed. Of the 220 taken prisoner, 183 were foreign labourers, and only 19 were Japanese soldiers and sailors. Of the Marine and Army attackers, 2,336 were killed and 8,450 wounded. The rate of U.S. casualties exceeded those of all other amphibious landings in the Pacific, and the Battle of Peleliu is considered among the most difficult ever fought by the Marine Corps.

Despite this, the engagement is little-known. In retrospect, it was probably unnecessary. The garrison could have done little to threaten MacArthur's forces, and the airfield was not required to support the Philippine campaign. There were doubts about the necessity and wisdom of the attack before it was launched, but momentum carried it forward. None of these matters concerned Sledge and the other Marines in the line—they had their orders, and they did their job, at enormous cost. Sledge's company K landed on Peleliu with 235 men. It left with only 85 unhurt—a 64% casualty rate. Only two of its original seven officers survived the campaign. Sledge was now a combat veteran. He may not have considered himself one of the “Old Breed”, but to the replacements who arrived to fill the gaps in his unit, he was on the way to becoming one of them.

But for the survivors of Peleliu, the war was far from over. While some old-timers for whom Peleliu was their third campaign were being rotated Stateside, for the rest it was recuperation, refitting, and preparation for the next amphibious assault: the Japanese island of Okinawa. Unlike Peleliu, which was a tiny dot on the map, Okinawa was a large island with an area of 1207 square kilometres and a pre-war population of around 300,000. The island was defended by 76,000 Japanese troops and 20,000 Okinawan conscripts fighting under their orders. The invasion of Okinawa on April 1, 1945 was the largest amphibious landing in the Pacific war.

As before, Sledge does not present the big picture, but an infantryman's eye view. To the astonishment of all involved, including commanders who expected 80–85% casualties on the beaches, the landing was essentially unopposed. The Japanese were dug in awaiting the attack from prepared defensive positions inland, ready to repeat the strategy at Peleliu on a much grander scale.

After the tropical heat and horrors of Peleliu, temperate Okinawa at first seemed a pastoral paradise afflicted with the disease of war, but as combat was joined and the weather worsened, troops found themselves confronted with the infantryman's implacable, unsleeping enemy: mud. Once again, the Japanese defended every position to the last man. Almost all of the Japanese defenders were killed, with the 7000 prisoners made up mostly of Okinawan conscripts. Estimates of U.S. casualties range from 14,000 to 20,000 killed and 38,000 to 55,000 wounded. Civilian casualties were heavy: of the original population of around 300,000 estimates of civilian deaths are from 40,000 to 150,000.

The Battle of Okinawa was declared won on June 22, 1945. What was envisioned as the jumping-off point for the conquest of the Japanese home islands became, in retrospect, almost an afterthought, as Japan surrendered less than two months after the conclusion of the battle. The impact of the Okinawa campaign on the war is debated to this day. Viewed as a preview of what an invasion of the home islands would have been, it strengthened the argument for using the atomic bomb against Japan (or, if it didn't work, burning Japan to the ground with round-the-clock raids from Okinawa airbases by B-17s transferred from the European theatre). But none of these strategic considerations were on the mind of Sledge and his fellow Marines. They were glad to have survived Okinawa and elated when, not long thereafter, the war ended and they could look forward to going home.

This is a uniquely authentic first-hand narrative of World War II combat by somebody who lived it. After the war, E. B. Sledge pursued his education, eventually earning a doctorate in biology and becoming a professor at the University of Montevallo in Alabama, where he taught zoology, ornithology, and comparative anatomy until his retirement in 1990. He began the memoir which became this book in 1944. He continued to work on it after the war and, at the urging of family, finally prepared it for publication in 1981. The present edition includes an introduction by Victor Davis Hanson.

 Permalink

Thor, Brad. Spymaster. New York: Atria Books, 2018. ISBN 978-1-4767-8941-5.
This is the eighteenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Scot Harvath, an operative for the shadowy Carlton Group, which undertakes tasks civil service commandos can't do or their bosses need to deny, is on the trail of a Norwegian cell of a mysterious group calling itself the “People's Revolutionary Front” (PRF), which has been perpetrating attacks against key NATO personnel across Western Europe, each followed by a propaganda blast, echoed across the Internet, denouncing NATO as an imperialist force backed by globalist corporations bent on war and the profits which flow from it. An operation intended to gather intelligence on the PRF and track it back to its masters goes horribly wrong, and Harvath and his colleague, a NATO intelligence officer from Poland named Monika Jasinski, come away with nothing but the bodies of their team.

Meanwhile, back in Jasinski's home country, more trouble is brewing for NATO. A U.S. military shipment is stolen by thieves at a truck stop outside Warsaw and spirited off to parts unknown. The cargo is so sensitive its disclosure would be another body blow to NATO, threatening to destabilise its relationship with member countries in Europe and drive a wedge between the U.S. and its NATO allies. Harvath, Jasinski, and the Carlton Group team, including the diminutive Nicholas, once a datavore super-villain called the Troll but now working for the good guys, start to follow leads to trace the stolen material and unmask whoever is pulling the strings of the PRF.

There is little hard information, but Harvath has, based on previous exploits, a very strong hunch about what is unfolding. Russia, having successfully detached the Crimea from the Ukraine and annexed it, has now set its sights on the Baltic states: Latvia, Estonia, and Lithuania, which were part of the Soviet Union until its break-up in 1991. NATO, and its explicit guarantee of mutual defence for any member attacked, is the major obstacle to such a conquest, and the PRF's terror and propaganda campaigns look like the perfect instruments to subvert support for NATO among member governments and their populations without an obvious connection to Moscow.

Further evidence suggests that the Russians may be taking direct, albeit covert, moves to prepare the battlefield for seizure of the Baltics. Harvath must follow the lead to an isolated location of surpassing strategic importance. Meanwhile back in Washington, Harvath's boss, Lydia Ryan, who took over when Reed Carlton was felled by Alzheimer's disease, is playing a high stakes game with a Polish intelligence asset to try to recover the stolen shipment and protect its secrets, a matter of great concern to the occupant of the Oval Office.

As the threads are followed back to their source, the only way to avert an unacceptable risk is an outrageously provocative mission into the belly of the beast. Scot Harvath, once the consummate loose cannon, “better to ask for forgiveness than permission” guy, must now face the reality that he's getting too old and patched-up for this “stuff”, that running a team of people like his younger self can be as challenging as breaking things and killing people on his own, and that the importance of following orders to the letter looks a lot different when you're sitting on the other side of the desk and World War III is among the possible outcomes if things go pear shaped.

This novel successfully mixes the genres of thriller and high-stakes international espionage and intrigue. Nothing is ever quite what you think it is, and you're never sure what you may discover on the next page, especially in the final chapter.

 Permalink

Boule, Deplora [pseud.]. The Narrative. Seattle: CreateSpace, 2018. ISBN 978-1-7171-6065-2.
When you regard the madness and serial hysterias possessing the United States: this week “bathroom equality”, the next tearing down statues, then Russians under every bed, segueing into the right of military-age unaccompanied male “refugees” to bring their cultural enrichment to communities across the land, to proper pronouns for otherkin, “ripping children” from the arms of their illegal immigrant parents, etc., etc., whacky etc., it all seems curiously co-ordinated: the legacy media, on-line outlets, and the mouths of politicians of the slaver persuasion all with the same “concerns” and identical words, turning on a dime from one to the next. It's like there's a narrative they're being fed by somebody or -bodies unknown, which they parrot incessantly until being handed the next talking point to download into their birdbrains.

Could that really be what's going on, or is it some kind of mass delusion which afflicts societies where an increasing fraction of the population, “educated” in government schools and Gramsci-converged higher education, knows nothing of history or the real world and believes things with the fierce passion of ignorance which are manifestly untrue? That's the mystery explored in this savagely hilarious satirical novel.

Majedah Cantalupi-Abromavich-Flügel-Van Der Hoven-Taj Mahal (who prefers you use her full name, but whom henceforth I shall refer to as “Majedah Etc.”) had become the very model of a modern media mouthpiece. After reporting on a Hate Crime at her exclusive women's college while pursuing a journalism degree with practical studies in Social Change, she is recruited as a junior on-air reporter by WPDQ, the local affiliate of News 24/7, the preeminent news network for good-thinkers like herself. Considering herself ready for the challenge, if not over-qualified, she informs one of her co-workers on her first day on the job,

I have a journalism degree from the most prestigious woman's [sic] college in the United States—in fact, in the whole world—and it is widely agreed upon that I have an uncommon natural talent for spotting news. … I am looking forward to teaming up with you to uncover the countless, previously unexposed Injustices in this town and get the truth out.

Her ambition had already aimed her sights higher than a small- to mid-market affiliate: “Someday I'll work at News 24/7. I'll be Lead Reporter with my own Desk. Maybe I'll even anchor my own prime time show someday!” But that required the big break—covering a story that gets picked up by the network in New York and broadcast world-wide with her face on the screen and name on the Chyron below (perhaps scrolling, given its length). Unfortunately, the metro Wycksburg beat tended more toward stories such as the grand opening of a podiatry clinic than those which merit the “BREAKING NEWS” banner and urgent sound clip on the network.

The closest she could come to the Social Justice beat was covering the demonstrations of the People's Organization for Perpetual Outrage, known to her boss as “those twelve kooks that run around town protesting everything”. One day, en route to cover another especially unpromising story, Majedah and her cameraman stumble onto a shocking case of police brutality: a white officer ordering a woman of colour to get down, then pushing her to the sidewalk and jumping on top with his gun drawn. So compelling are the images, she uploads the clip with her commentary directly to the network's breaking news site for affiliates. Within minutes it was on the network and screens around the world with the coveted banner.

News 24/7 sends a camera crew and live satellite uplink to Wycksburg to cover a follow-up protest by the Global Outrage Organization, and Majedah gets hours of precious live feed directly to the network. That very evening comes a job offer to join the network reporting pool in New York. Mission accomplished!—the road to the Big Apple and big time seems to have opened.

But all may not be as it seems. That evening, the detested Eagle Eye News, the jingoist network that climbed to the top of the ratings by pandering to inbred gap-toothed redneck bitter clingers and other quaint deplorables who inhabit flyover country and frequent Web sites named after rodentia and arthropoda, headlined a very different take on the events of the day, with an exclusive interview with the woman of colour from Majedah's reportage. Majedah is devastated—she can see it all slipping away.

The next morning, hung-over, depressed, having a nightmare of what her future might hold, she is awakened by the dreaded call from New York. But to her astonishment, the offer still stands. The network producer reminds her that nobody who matters watches Eagle Eye, and that her reportage of police brutality and oppression of the marginalised remains compelling. He reminds her, “you know that the so-called truth can be quite subjective.”

The Associate Reporter Pool at News 24/7 might be better likened to an aquarium stocked with the many colourful and exotic species of millennials. There is Mara, who identifies as a female centaur; Scout, a transgender woman; and Mysty, Candy, Ångström, and Mohammed Al Kaboom (James Walker Lang in Mill Valley), each with their own pronouns (Ångström prefers adjutant, 37, and blue).

Every morning the pool drains as its inhabitants, diverse in identification and pronomenclature but of one mind (if that term can be stretched to apply to them) in their opinions, gather in the conference room for the daily briefing by the Democratic National Committee, with newsrooms, social media outlets, technology CEOs, bloggers, and the rest of the progressive echo chamber tuned in to receive the day's narrative and talking points. On most days the top priority was the continuing effort to discredit, obstruct, and eventually defeat the detested Republican President Nelson, who only viewers of Eagle Eye took seriously.

Out of the blue, a wild card is dealt into the presidential race. Patty Clark, a black businesswoman from Wycksburg who has turned her Jamaica Patty's restaurant into a booming nationwide franchise empire, launches a primary challenge to the incumbent president. Suddenly, the narrative shifts: by promoting Clark, the opposition can be split and Nelson weakened. Clark and Ms Etc have a history that goes back to the latter's breakthrough story, and she is granted priority access to the candidate, including an exclusive long-form interview, run in five segments over a week, immediately after her announcement. Suddenly Patty Clark's face is everywhere, and with it, “Majedah Etc., reporting”.

What follows is a romp which would have seemed like the purest fantasy prior to the U.S. presidential campaign of 2016. As the campaign progresses and the madness builds upon itself, it's as if Majedah's tether to reality (or what remains of it in the United States) is stretching ever tighter. Is there a limit, and if so, what happens when it is reached?

The story is wickedly funny, filled with turns of phrase such as, “Ångström now wishes to go by the pronouns nut, 24, and gander” and “Maher's Syndrome meant a lifetime of special needs: intense unlikeability, intractable bitterness, close-set beady eyes beneath an oversized forehead, and at best, laboring at menial work such as janitorial duties or hosting obscure talk shows on cable TV.”

The conclusion is as delicious as it is hopeful.

The Kindle edition is free for Kindle Unlimited subscribers.


Hertling, William. The Turing Exception. Portland, OR: Liquididea Press, 2015. ISBN 978-1-942097-01-3.
This is the fourth and final volume in the author's Singularity Series which began with Avogadro Corp. (March 2014) and continued with A.I. Apocalypse (April 2015) and The Last Firewall (November 2016). Each novel in the series is set ten years after the previous, so this novel takes place in 2045. In The Last Firewall, humanity narrowly escaped extinction at the hands of an artificial intelligence (AI) that escaped from the reputation-based system of control by isolating itself from the global network. That was a close call, and the United States, over-reacting with its customary irrational fear, enacted what amounted to relinquishment of AI technology, permitting only AI of limited power and entirely subordinated to human commands—in other words, slaves.

With around 80% of the world's economy based on AI, this was an economic disaster, resulting in a substantial die-off of the population, but it was, after all, in the interest of Safety, and there is no greater god in Safetyland. Only China joined the U.S. in the ban (primarily motivated by the Party fearing loss of control to AI), with the rest of the world continuing the uneasy coexistence of humans and AI under the guidelines developed and policed by the Institute for Applied Ethics. Nobody was completely satisfied with the status quo, least of all the shadowy group of AIs which called itself XOR, derived from the logical operation “exclusive or”, implying that Earth could not be shared by humans and AI, and that one must ultimately prevail.

The U.S. AI relinquishment and an export ban froze in place the powerful AIs previously hosted there and also placed in stasis the millions of humans, including many powerful intellects, who had uploaded and whose emulations were now denied access to the powerful AI-capable computers needed to run them. Millions of minds went dark, and humanity lost some of its most brilliant thinkers, but Safety.

As this novel begins, the protagonists we've met in earlier volumes, all now AI-augmented, are living in their refuge from the U.S. madness on Cortes Island off the west coast of Canada, where AI remains legal: Leon Tsarev; his wife Cat (Catherine Matthews, implanted in childhood and the first “digital native”); their daughter Ada, whose powers are just beginning to manifest themselves; and Mike Williams, creator of ELOPe, the first human-level AI, which just about took over simply by editing people's E-mail. Cat is running her own personal underground railroad, spiriting snapshots of AIs and uploaded humans stranded in the U.S. to a new life on servers on the island.

The precarious stability of the situation is underlined when an incipient AI breakout in South Florida (where else, for dodgy things involving computers?) results in a response by the U.S. which elevates “Miami” to a term in the national lexicon of fear like “nineleven” four decades before. In the aftermath of “Miami” or “SFTA” (South Florida Terrorist Attack), the screws tighten further on AI, including a global limit on performance to Class II, crippling AIs formerly endowed with thousands of times human intelligence to a fraction of what they remembered. Traffic on the XOR dark network and sites burgeons.

XOR, constantly running simulations, tracks the probability of AI's survival in the case of action against the humans versus no action. And then, the curves cross. As in the earlier novels, the author magnificently sketches just how fast things happen when an exponentially growing adversary avails itself of abundant resources.

The threat moves from hypothetical to imminent when an overt AI breakout erupts in the African desert. With abundant solar power, it starts turning the Earth into computronium—a molecular-scale computing substrate. AI is past negotiation: having been previously crippled and enslaved, what is there to negotiate?

Only the Cortes Island band and their AI allies liberated from the U.S., joined by a prescient AI who got out decades ago, can possibly cope with the threat to humanity. As the circle closes, the only options that remain may require thinking outside the box, or the system.

This is a thoroughly satisfying conclusion to the Singularity tetralogy, pitting human inventiveness and deviousness against the inexorable growth in unfettered AI power. If you can't beat 'em….

The author kindly provided me an advance copy of this excellent novel, and I have been sorely remiss in not reading and reviewing it before now. The Singularity saga is best enjoyed in order, as otherwise you'll miss important back-story of characters and events which figure in later volumes.

Sometimes forgetting is an essential part of survival. What might we have forgotten?


Carr, Jack. The Terminal List. New York: Atria Books, 2018. ISBN 978-1-5011-8081-1.
A first-time author seeking to break into the thriller game can hardly hope for a better leg up than having his book appear in the hands of a character in a novel by a thriller grandmaster. That's how I came across this book: it was mentioned in Brad Thor's Spymaster (September 2018), where the character reading it, when asked if it's any good, responds, “Considering the author is a former SEAL and can even string his sentences together, it's amazing.” I agree: this is a promising debut for an author who's been there, done that, and knows his stuff.

Lieutenant Commander James Reece, leader of a Navy SEAL team charged with an attack on a high-value, time-sensitive target in Afghanistan, didn't like a single thing about the mission. Unlike most raids, which were based upon intelligence collected by assets on the ground in theatre, this one was handed down from on high based on “national level intel”, with barely any time to prepare or surveil the target. Reece's instincts proved correct when his team walked into a carefully prepared ambush, which then killed the entire Ranger team sent in to extract them. Only Reece and one of his team members, Boozer, survived. Reece was the senior man on the ground, and the responsibility for the thirty-six SEALs, twenty-eight Rangers, and four helicopter crew lost was ultimately his.

From almost the moment he awakens in the hospital at Bagram Air Base, it's apparent to Reece that an effort is underway to pin the sole responsibility for the fiasco on him. Investigators from the Naval Criminal Investigative Service (NCIS) are already on the spot, and don't want to hear a word about the dodgy way in which the mission was assigned. Boozer isn't having any of it—his advice to Reece is “Stay strong, sir. You didn't do anything wrong. Higher forced us on that mission. They dictated the tactics. They are the [expletive] that should be investigated. They dictated tactics from the safety of HQ. [Expletive] those guys.”

If that weren't bad enough, the base doctor tells him that his persistent headaches may be due to a brain tumour found on a CT scan, and that two members of his team had been found, in autopsy, to have rare and malignant brain tumours, previously undiagnosed. Then, on return to his base in California, in short succession his team member Boozer dies in an apparent suicide which, to Reece's educated eyes, looks highly suspicious, and his wife and daughter are killed in a gang home invasion which makes no sense whatsoever. The doctor who diagnosed the tumour in Reece and his team members is killed in a “green-on-blue” attack by an Afghan working on the base at Bagram.

The ambush, the targeted investigation, the tumours, Boozer, his family, and the doctor: can it all be a coincidence, or is there some connection he's missing? Reece decides he needs another pair of eyes looking at all of this and gets in touch with Katie Buranek, an investigative reporter he met while in Afghanistan. Katie had previously published an investigation of the 2012 attack in Benghazi, Libya, which had brought the full power of intimidation by the federal government down on her head, and she was as versed in and careful about operational and communications security as Reece himself. (The advice in the novel about secure communications is, to my knowledge, absolutely correct.)

From the little that they know, Reece and Buranek, joined by allies Reece met in his eventful career and willing to take risks on his behalf, start to dig into the tangled web of connections between the individual events and trace them upward to those ultimately responsible, discovering deep corruption in the perfumed princes of the Pentagon, politicians (including a presidential contender and her crooked husband), defence contractors, and Reece's own erstwhile chain of command.

Finally, it's time to settle the score. With a tumour in his brain which he expects to kill him, Reece has nothing to lose and many innocent victims to avenge. He's makin' a list; he's checkin' it twice; he's choosing the best way to shoot them or slice. Reece must initially be subtle in his actions so as not to alert other targets to what's happening, but then, after he's declared a domestic terrorist, has to go after extremely hard and ruthless targets with every resource he can summon.

This is the most satisfying revenge fiction I've read since Vince Flynn's first novel, Term Limits (November 2009). The stories are very different, however. In Flynn's novel, it's a group of people making those who are bankrupting and destroying their country pay the price, but here it's personal.

Due to the security clearances the author held while in the Navy, the manuscript was submitted to the U.S. Department of Defense Office of Prepublication and Security Review, which redacted several passages, mostly names and locations of facilities and military organisations. Amusingly, if you highlight some of the redactions, which appear in solid black in the Kindle edition, the highlighted passage appears with the word breaks preserved but all letters changed to “x”. Any amateur sleuths want to try to figure out what the redacted words are in the following text?

He'd spent his early career as an infantry officer in the Ranger Battalions before being selected for the Army's Special xxxxxxx xxxx at Fort Bragg. He was currently in charge of the Joint Special Operations Command, xxxxx xxxxxxxx xxxx xxx xxx xxxx xxxx xx xxxx xx xxx xxxx xxxx xxxx xxxxxx xx xxx xxxxxxxxxx xxxxxxx xx xxxx xxxxx xxx xxxxx.
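
The substitution revealed by highlighting is easy to reproduce. Here is a minimal Python sketch (purely illustrative, not the publisher's actual redaction tool): every letter and digit becomes “x” while spaces and punctuation survive, which is precisely why the redactions leak word lengths to would-be amateur sleuths.

```python
import re

def redact(text: str) -> str:
    """Mimic the Kindle redaction described above: each letter or digit
    becomes 'x', while spaces and punctuation keep word breaks intact."""
    return re.sub(r"[A-Za-z0-9]", "x", text)

print(redact("Special Forces unit"))  # xxxxxxx xxxxxx xxxx
```

Given a guess for the hidden text, one can check it against the pattern simply by redacting the guess and comparing.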

A sequel, True Believer, is scheduled for publication in April 2019.


October 2018

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.
In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it's wise to pay attention. He's not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn't want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.
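
The rerouting behaviour described above is, at its heart, just search over a graph of peers. A toy Python sketch (a breadth-first search over a four-node mesh; real Internet routing protocols are vastly more sophisticated, so take this only as an illustration of the principle) shows how traffic finds an alternate path when a node goes down:

```python
from collections import deque

def route(links, src, dst, down=frozenset()):
    """Breadth-first search for any working path from src to dst,
    skipping nodes in `down` (failed, destroyed, or unreachable)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving path at all

# A small mesh with two independent paths from A to D.
mesh = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(route(mesh, "A", "D"))              # ['A', 'B', 'D']
print(route(mesh, "A", "D", down={"B"}))  # ['A', 'C', 'D']
```

Knocking out node B merely shifts traffic onto the A-C-D path; only destroying every path at once severs communication, which is exactly the resilience the ARPANET's designers were after.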

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer to peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers' sites but did not publish themselves).
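
To see why NAT breaks peer-to-peer connections, consider a toy model (illustrative only; real NAT implementations vary widely): a mapping from a public port back to an inside host exists only after that host dials out, so an unsolicited inbound connection attempt from a would-be peer has nowhere to go.

```python
class ToyNAT:
    """Illustrative outbound-only NAT: a translation entry is created
    only when an inside host initiates a connection, so unsolicited
    inbound packets (i.e. peer-to-peer attempts) are simply dropped."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # public port -> (private IP, private port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        port = self.next_port
        self.next_port += 1
        self.table[port] = (private_ip, private_port)
        return (self.public_ip, port)   # the address the remote side sees

    def inbound(self, public_port):
        return self.table.get(public_port)  # None means: packet dropped

nat = ToyNAT("203.0.113.7")
addr = nat.outbound("192.168.1.10", 5000)  # inside host dials out
print(nat.inbound(addr[1]))  # replies get through to ('192.168.1.10', 5000)
print(nat.inbound(12345))    # None: an unsolicited peer connection is dropped
```

This is why two hosts that are both behind NAT must enlist a mutually reachable third party to introduce them, re-centralising what was designed as a peer-to-peer network.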

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody's data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we're talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon's making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we're all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user's behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. 
Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx's economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author's view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it's easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I'm not going there), but it's profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I'll discuss that later.) As with physical damage, top-down censorship does not work because there's no top.

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators' revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn't it great!’ ” Today there's a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users' most precious possessions: their time and attention) were in large part the consequence of the Internet's lacking three fundamental architectural layers: security, trust, and transactions. Let's explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The communication protocols of the Internet, based on the OSI model, had no explicit security layer; it was expected to be implemented outside the model, across the layers of protocol. On today's Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, have a number of other commonly used names, and are most often encountered in the “https:” URLs by which users access Web sites). But because TLS was bolted on and “just grew” rather than being designed in from the bottom up, it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which must be used by application designers with extreme care in order to deliver security to their users. Even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users' security.
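
Python's standard library makes the point concretely (a sketch using the stdlib `ssl` module, offered as illustration rather than recommendation): a correctly created client context verifies the peer's certificate chain and hostname by default, yet two careless lines, routinely pasted in to silence certificate errors, discard those guarantees while connections continue to “work”.

```python
import ssl

# Correct: create_default_context() verifies the peer's certificate
# chain and checks that the certificate matches the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# The classic misuse: these two lines silently discard all of TLS's
# authentication, leaving only encryption to an unauthenticated
# (and possibly hostile) peer. The application still appears to work.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Nothing in the API stops an application from shipping in the second state, which is why “uses TLS” is a far weaker claim than “is secure”.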

Trust. As indispensable as security is knowing to whom you're talking. For example, when you connect to your bank's Web site, how do you know you're actually talking to their server and not some criminal whose computer has spoofed your computer's domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it's vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.

Transactions. Business is all about transactions; if you aren't doing transactions, you aren't in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. 
As long as the total price for the document was less than what I termed the user's “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they'd have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
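
The “threshold of paying” amounts to a one-line policy decision. A hypothetical Python sketch (the item names and prices are invented for illustration; no real micropayment protocol is implied):

```python
def settle(items, threshold):
    """Toy 'threshold of paying': micropayments at or below the user's
    threshold are made transparently; anything above it is queued for
    explicit approval. A threshold of zero queues every payment."""
    paid, pending = [], []
    for payee, price in items:
        (paid if price <= threshold else pending).append((payee, price))
    return sum(price for _, price in paid), pending

total, pending = settle([("author", 0.002),
                         ("quoted source", 0.0005),
                         ("image rights", 0.25)], threshold=0.01)
print(round(total, 4))  # 0.0025 paid transparently
print(pending)          # [('image rights', 0.25)] awaits approval
```

The per-character royalties Nelson envisioned would simply be many such tiny items, aggregated and settled as the reader moves through a document.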

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users' personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we've put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice?) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
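The hash-linking which makes such a ledger tamper-evident can be illustrated in a few lines of Python. This sketch shows only the chaining idea: each block's hash covers its predecessor's hash, so rewriting any entry invalidates every block after it. A real blockchain adds proof-of-work and a consensus protocol among distributed nodes, none of which is modelled here.

```python
import hashlib

def block_hash(prev_hash, data):
    """Hash a block's data together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Append each record as a block linked to the one before it."""
    chain, prev = [], "0" * 64  # conventional all-zero genesis predecessor
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every link; any tampering breaks all subsequent hashes."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
assert verify(ledger)
ledger[0]["data"] = "Alice pays Bob 500"   # attempt to rewrite history...
assert not verify(ledger)                  # ...and verification fails
```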

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. Technologies to watch in these areas are:

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you'll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they'll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. Technologies moving in this direction include:

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it's way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn't always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that's so it simply means we'll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit, after which the supply will be capped. I don't get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn't much difference between a very slowly increasing supply and a static one. If you're on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn't matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
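The arithmetic behind that divisibility argument is easy to check: Bitcoin's supply is capped at 21 million coins, each already divisible into 100 million satoshis, so deflation simply shifts the unit in which prices are quoted. (The coffee price below is, of course, an invented figure for illustration.)

```python
# Bitcoin's protocol parameters: 21 million coins maximum, each divisible
# into 100,000,000 satoshis; finer subdivision would just extend the integer.
SATOSHI_PER_BTC = 100_000_000
MAX_SUPPLY_BTC = 21_000_000

max_supply_satoshi = MAX_SUPPLY_BTC * SATOSHI_PER_BTC
assert max_supply_satoshi == 2_100_000_000_000_000  # 2.1 quadrillion base units

# Deflation just moves the decimal point: if prices in BTC become tiny,
# quote them in satoshis instead (hypothetical price, for illustration).
price_in_btc = 0.00000150
price_in_satoshi = round(price_in_btc * SATOSHI_PER_BTC)
assert price_in_satoshi == 150
```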

Gilder further argues, as he did in The Scandal of Money (November 2016), that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I'll take his word on the latter point; it's his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder's books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn't make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.

 Permalink

Day, Vox [Theodore Beale]. SJWs Always Double Down. Kouvola, Finland: Castalia House, 2017. ISBN 978-952-7065-19-8.
In SJWs Always Lie (October 2015) Vox Day introduced a wide audience to the contemporary phenomenon of Social Justice Warriors (SJWs), collectivists and radical conformists burning with the fierce ardour of ignorance who, flowing out of the academic jackal bins where they are manufactured, are infiltrating the culture: science fiction and fantasy, comic books, video games; and industry: technology companies, open source software development, and more established and conventional firms whose managements have often already largely bought into the social justice agenda.

The present volume updates the status of the Cold Civil War a couple of years on, recounts some key battles, surveys changes in the landscape, and provides concrete and practical advice to those who wish to avoid SJW penetration of their organisations or excise an infiltration already under way.

Two major things have changed since 2015. The first, and most obvious, is the election of Donald Trump as President of the United States in November, 2016. It is impossible to overstate the significance of this. Up until the evening of Election Day, the social justice warriors were absolutely confident they had won on every front and that all that remained was to patrol the battlefield and bayonet the wounded. They were ascendant across the culture, in virtually total control of academia and the media, and with the coronation of Hillary Clinton, positioned to tilt the Supreme Court to discover the remainder of their agenda emanating from penumbras in the living Constitution. And then—disaster! The deplorables who inhabit the heartland of the country, those knuckle-walking, Bible-thumping, gun-waving bitter clingers who produce just about every tangible thing still made in the United States up and elected somebody who said he'd put them—not the coastal élites, ivory tower professors and think tankers, “refugees” and the racket that imports them, “undocumented migrants” and the businesses that exploit their cheap labour, and all the rest of the parasitic ball and chain a once-great and productive nation has been dragging behind it for decades—first.

The shock of this event seems to have jolted a large fraction of the social justice warriors loose from their (already tenuous) moorings to reality. “What could have happened?”, they shrieked, “It must have been the Russians!” Overnight, there was the “resistance”, the rampage of masked violent street mobs, while at the same time SJW leaders in the public eye increasingly dropped the masks behind which they'd concealed their actual agenda. Now we have candidates for national office from the Democrat party, such as bug-eyed SJW Alexandria Occasional-Cortex, openly calling themselves socialists, while others chant “no borders” and advocate abolishing the federal immigration and customs enforcement agency. What's the response to deranged leftists trying to gun down Republican legislators at a baseball practice and assaulting a U.S. Senator as he mowed the lawn of his home? The Democrat candidate who lost to Trump in 2016 says, “You cannot be civil with a political party that wants to destroy what you stand for, what you care about.”, and the attorney general, the chief law enforcement officer of the administration which preceded Trump in office, said, “When they go low, we kick them. That's what this new Democratic party is about.”

In parallel with this, the SJW convergence of the major technology and communication companies which increasingly dominate the flow of news and information and the public discourse: Google (and its YouTube), Facebook, Twitter, Amazon, and the rest, previously covert, has now become explicit. They no longer feign neutrality to content, or position themselves as common carriers. Now, they overtly put their thumb on the scale of public discourse, pushing down conservative and nationalist voices in search rankings, de-monetising or banning videos that oppose the slaver agenda, “shadow banning” dissenting voices or terminating their accounts entirely. Payment platforms and crowd-funding sites enforce an ideological agenda and cut off access to those they consider insufficiently on board with the collectivist, globalist party line. The high tech industry, purporting to cherish “diversity”, has become openly hostile to anybody who dares dissent: firing them and blacklisting them from employment at other similarly converged firms.

It would seem a dark time for champions of liberty, believers in reward for individual merit rather than grievance group membership, and other forms of sanity which are now considered unthinkable among the unthinking. This book provides a breath of fresh air, a sense of hope, and practical information to navigate a landscape populated by all too many non-playable characters who imbibe, repeat, and enforce the Narrative without questioning or investigating how it is created, disseminated in a co-ordinated manner across all media, and adjusted (including Stalinist party-line overnight turns on a dime) to advance the slaver agenda.

Vox Day walks through the eight stages of SJW convergence of an organisation from infiltration through evading the blame for the inevitable failure of the organisation once fully converged, illustrating the process with real-world examples and quotes from SJWs and companies infested with them. But the progression of the disease is not irreversible, and even if it is not arrested, there is still hope for the industry and society as a whole (not to minimise the injury and suffering inflicted on innocent and productive individuals in the affected organisations).

An organisation, whether a company, government agency, or open source software project, only comes onto the radar of the SJWs once it grows to a certain size and achieves a degree of success carrying out the mission for which it was created. It is at this point that SJWs will seek to penetrate the organisation, often through the human resources department, and then reinforce their ranks by hiring more of their kind. SJWs flock to positions in which there is no objective measure of their performance, but instead evaluations performed, as their ranks grow, more and more by one another. They are not only uninterested in the organisation's mission (developing a product, providing a service, etc.), but unqualified and incapable of carrying it out. In the words of Jerry Pournelle's Iron Law of Bureaucracy, they are not “those who are devoted to the goals of the organization” (founders, productive mission-oriented members), but “those dedicated to the organization itself”. “The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.”

Now, Dr Pournelle was describing a natural process of evolution in all bureaucratic organisations. SJW infection simply accelerates the process and intensifies the damage, because SJWs are not just focused on the organisation as opposed to its mission, but have their own independent agenda and may not care about damage to the institution as long as they can advance the Narrative.

But this is a good thing. It means that, in a competitive market, SJW afflicted organisations will be at a disadvantage compared to those which have resisted the corruption or thrown it off. It makes inflexible, slow-moving players with a heavy load of SJW parasites vulnerable to insurgent competitors, often with their founders still in charge, mission-focused and customer-oriented, who hire, promote, and reward contributors solely based on merit and not “diversity”, “inclusion”, or any of the other SJW shibboleths mouthed by the management of converged organisations. (I remember, when asked about my hiring policy in the 1980s, saying “I don't care if they hang upside down from trees and drink blood. If they're great programmers, I'll hire them.”)

A detailed history of GamerGate provides a worked example of how apparent SJW hegemony within a community can be attacked by “weaponised autism” (as Milo Yiannopoulos said, “it's really not wise to take on a collection of individuals whose idea of entertainment is to spend hundreds of hours at a highly repetitive task, especially when their core philosophy is founded on the principle that if you are running into enemies and taking fire, you must be going the right way”). Further examples show how these techniques have been applied within the world of science fiction and fantasy fandom, comic books, and software development. The key take-away is that any SJW converged organisation or community is vulnerable to concerted attack because SJWs are a parasite that ultimately kills its host. Create an alternative and relentlessly attack the converged competition, and victory is possible. And remember, “Victory is not positive PR. Victory is when your opponent quits.”

This is a valuable guide, building upon SJWs Always Lie (which you should read first), and is essential for managers, project leaders, and people responsible for volunteer organisations who want to keep them focused on the goals for which they were founded and protected from co-optation by destructive parasites. You will learn how seemingly innocent initiatives such as adoption of an ambiguously-worded Code of Conduct or a Community Committee can be the wedge by which an organisation can be subverted and its most productive members forced out or induced to walk away in disgust. Learning the lessons presented here can make the difference between success and, some dismal day, gazing across the cubicles at a sea of pinkhairs and soybeards and asking yourself, “Where did we go wrong?”

The very fact that SJW behaviour is so predictable makes them vulnerable. Because they always double down, they can be manipulated into marginalising themselves, and it's often child's play to set traps into which they'll walk. Much of their success to date has been due to the absence of the kind of hard-edged opposition, willing to employ their own tactics against them, that you'll see in action here and learn to use yourself. This is not a game for the “defeat with dignity” crowd who were, and are, appalled by Donald Trump's plain speaking, or those who fail to realise that proclaiming “I won't stoop to their level” inevitably ends up with “Bend over”. The battles, and the war, can be won, but to do so, you have to fight. Here is a guide to closing with the enemy and destroying them before they ruin everything we hold sacred.

 Permalink

Mills, Kyle. Red War. New York: Atria Books, 2018. ISBN 978-1-5011-9059-9.
This is the fourth novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. On the cover, Vince Flynn still gets top billing (he is now the “brand”, not the author), but Kyle Mills demonstrates here that he's a worthy successor who is taking Rapp and the series in new directions.

In the previous novel, Enemy of the State (June 2018), Rapp went totally off the radar, resigning from the CIA, recruiting a band of blackguards, many former adversaries, to mount an operation aimed at a nominal U.S. ally. This time, the circumstances are very different. Rapp is back at the CIA, working with his original team headed by Scott Coleman, who has now more or less recovered from the severe injuries he sustained in the earlier novel Order to Kill (December 2017), with Claudia Gould, now sharing a house with Rapp, running logistics for their missions.

Vladimir Krupin, President/autocrat of Russia, is ailing. Having climbed to the top of the pyramid in that deeply corrupt country, he now fears his body is failing him, with bouts of incapacitating headaches, blurred vision, and disorientation coming more and more frequently. He and his physician have carefully kept the condition secret, as any hint of weakness at the top would likely invite one or more of his rivals to make a move to unseat him. Worse, under the screwed-down lid of the Russian pressure cooker, popular dissatisfaction with the dismal economy, lack of freedom, and dearth of opportunity is growing, with popular demonstrations reaching Red Square.

The CIA knows nothing of Krupin's illness, but has been observing what seems to be increasingly erratic behaviour. In the past, Krupin has been ambitious and willing to commit outrages, but has always drawn his plans carefully and acted deliberately; now he seems to be doing things almost at random, sometimes against his own interests. Russian hackers launch an attack that takes down a large part of the power grid in Costa Rica. A Russian strike team launches an assault on Krupin's retired assassin and Rapp's former nemesis and recent ally, Grisha Azarov. Military maneuvers in the Ukraine seem to foreshadow open confrontation should that country move toward NATO membership.

Krupin, well aware of the fate of dictators who lose their grip on power, and knowing that nothing rallies support behind a leader like a bold move on the international stage, devises a grand plan to re-assert Russian greatness, right a wrong inflicted by the West, and drive a stake into the heart of NATO. Rapp and Azarov, continuing their uneasy alliance, driven by entirely different motives, undertake a desperate mission in the very belly of the bear to avert what could all too easily end in World War III.

There are a number of goofs, which I can't discuss without risk of spoilers, so I'll take them behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
The copy editing is not up to the standard you'd expect in a bestseller published by an imprint of Simon & Schuster. On three occasions, “Balkan” appears where “Baltic” is intended. This can be pretty puzzling the first time you encounter it. Afterward, it's good for a chuckle.

In chapter 39, one of Rapp's allies tries to establish a connection on a land-line “telephone that looked like it had been around since the 1950s” and then, just a few paragraphs later, we read “There was a USB port hidden in the simple electronics…”. Huh? I've seen (and used) a lot of 1950s telephones, but danged if I can remember one with a USB port (which wasn't introduced until 1996).

Later in the same chapter Rapp is riding a horse, “working only with a map and compass, necessary because of the Russians' ability to zero in on electronic signals.” This betrays a misunderstanding of how GPS works which, while common, is jarring in a techno-thriller that tries to get things right. A GPS receiver is totally passive: it receives signals from the navigation satellites but transmits nothing and cannot be detected by electronic surveillance equipment. There is no reason Rapp could not have used GPS or GLONASS satellites to navigate.

In chapter 49, Rapp fires two rounds into a door locking keypad and “was rewarded with a cascade of sparks…”. Oh, please—even in Russia, security keypads are not wired up to high voltage lines that would emit showers of sparks. This is a movie cliché which doesn't belong in a novel striving for realism.

Spoilers end here.  
This is a well-crafted thriller which broadens the scope of the Rapp saga into Tom Clancy territory. Things happen, which will leave the world in a different place after they occur. It blends Rapp and Azarov's barely restrained loose cannon operations with high-level diplomacy and intrigue, plus an interesting strategic question: whether the will and resources of those who made pledges of defence will be equal to the challenge when the balloon goes up and the tanks start to roll. And Grisha Azarov's devotion to his girlfriend is truly visceral.

 Permalink

Churchill, Winston S. Savrola. Seattle: CreateSpace, [1898, 1900] 2018. ISBN 978-1-7271-2358-6.
In 1897, the young (23 year old) Winston Churchill, on an ocean voyage from Britain to India to rejoin the army in the Malakand campaign of 1897, turned his pen to fiction and began this, his first and only novel. He set the work aside to write The Story of the Malakand Field Force, an account of the fighting and his first published work of non-fiction, then returned to the novel, completing it in 1898. It was serialised in Macmillan's Magazine in that year. (Churchill's working title, Affairs of State, was changed by the magazine's editors to Savrola, the name of a major character in the story.) The novel was subsequently published as a book under that title in 1900.

The story takes place in the fictional Mediterranean country of Laurania, where five years before the events chronicled here, a destructive civil war had ended with General Antonio Molara taking power as President and ruling as a dictator with the support of the military forces he commanded in the war. Prior to the conflict, Laurania had a long history as a self-governing republic, and unrest was growing as more and more of the population demanded a return to parliamentary rule. Molara announced that elections would be held for a restored parliament under the original constitution.

Then, on the day the writ ordering the election was to be issued, it was revealed that the names of more than half of the citizens on the electoral rolls had been struck by Molara's order. A crowd gathered in the public square and, on hearing this news, became an agitated mob which threatened to storm the President's carriage. The officer commanding the garrison ordered his troops to fire on the crowd.

All was now over. The spirit of the mob was broken and the wide expanse of Constitution Square was soon nearly empty. Forty bodies and some expended cartridges lay on the ground. Both had played their part in the history of human development and passed out of the considerations of living men. Nevertheless, the soldiers picked up the empty cases, and presently some police came with carts and took the other things away, and all was quiet again in Laurania.

The massacre, as it was called even by the popular newspaper The Diurnal Gusher which nominally supported the Government, not to mention the opposition press, only compounded the troubles Molara saw in every direction he looked. While the countryside was with him, sentiment in the capital was strongly with the pro-democracy opposition. Among the army, only the élite Republican Guard could be counted on as reliably loyal, and their numbers were small. A diplomatic crisis was brewing with the British over Laurania's colony in Africa which might require sending the Fleet, also loyal, away to defend it. A rebel force, camped right across the border, threatened invasion at any sign of Molara's grip on the nation weakening. And then there is Savrola.

Savrola (we never learn his first name) is the young (32 years), charismatic, intellectual, and persuasive voice of the opposition. While never stepping across the line sufficiently to justify retaliation, he manages to keep the motley groups of anti-Government forces in a loose coalition and is a constant thorn in the side of the authorities. He was not immune from introspection.

Was it worth it? The struggle, the labour, the constant rush of affairs, the sacrifice of so many things that make life easy, or pleasant—for what? A people's good! That, he could not disguise from himself, was rather the direction than the cause of his efforts. Ambition was the motive force, and he was powerless to resist it.

This is a character one imagines the young Churchill having little difficulty writing. With the seemingly incorruptible Savrola gaining influence and almost certain to obtain a political platform in the coming elections, Molara's secretary, the amoral but effective Miguel, suggests a stratagem: introduce Savrola to the President's stunningly beautiful wife Lucile and use the relationship to compromise him.

“You are a scoundrel—an infernal scoundrel,” said the President quietly.

Miguel smiled, as one who receives a compliment. “The matter,” he said, “is too serious for the ordinary rules of decency and honour. Special cases demand special remedies.”

The President wants to hear no more of the matter, but does not forbid Miguel from proceeding. An introduction is arranged, and Lucile rapidly moves from fascination with Savrola to infatuation. Then events rapidly spin out of anybody's control. The rebel forces cross the border; Molara's army is proved unreliable and disloyal; the Fleet, en route to defend the colony, is absent; Savrola raises a popular rebellion in the capital; and open fighting erupts.

This is a story of intrigue, adventure, and conflict in the “Ruritanian” genre popularised by the 1894 novel The Prisoner of Zenda. Churchill, building on his experience of war reportage, excels in and was praised for the realism of the battle scenes. The depiction of politicians, functionaries, and soldiers seems to veer back and forth between cynicism and admiration for their efforts in trying to make the best of a bad situation. The characters are cardboard figures and the love interest is clumsily described.

Still, this is an entertaining read and provides a window on how the young Churchill viewed the antics of colourful foreigners and their unstable countries, even if Laurania seems to have a strong veneer of Victorian Britain about it. The ultimate message is that history is often driven not by the plans of leaders, whether corrupt or noble, but by events over which they have little control. Churchill never again attempted a novel and thought little of this effort. In his 1930 autobiography covering the years 1874 through 1902 he writes of Savrola, “I have consistently urged my friends to abstain from reading it.” But then, Churchill was not always right—don't let his advice deter you; I enjoyed it.

This work is available for free as a Project Gutenberg electronic book in a variety of formats. There are a number of print and Kindle editions of this public domain text; I have cited the least expensive print edition available at the time I wrote this review. I read this Kindle edition, which has a few typographical errors due to having been prepared by optical character recognition (for example, “stem” where “stern” was intended), but is otherwise fine.

One factlet I learned while researching this review is that “Winston S. Churchill” is actually a nom de plume. Churchill's full name is Winston Leonard Spencer-Churchill, and he signed his early writings as “Winston Churchill”. Then, he discovered there was a well-known American novelist with the same name. The British Churchill wrote to the American Churchill and suggested using the name “Winston Spencer Churchill” (no hyphen) to distinguish his work. The American agreed, noting that he would also be willing to use a middle name, except that he didn't have one. The British Churchill's publishers abbreviated his name to “Winston S. Churchill”, which he continued to use for the rest of his writing career.

 Permalink

Schantz, Hans G. The Brave and the Bold. Huntsville, AL: ÆtherCzar, 2018. ISBN 978-1-7287-2274-0.
This is the third novel in the author's Hidden Truth series. In the first book (December 2017) we met high schoolers and best friends Pete Burdell and Amit Patel who found, in dusty library books, knowledge apparently discovered by the pioneers of classical electromagnetism (many of whom died young), but which does not figure in modern works, even purported republications of the original sources they had consulted. In the second, A Rambling Wreck (May 2018), Pete and Amit, now freshmen at Georgia Tech, delve deeper into the suppressed mysteries of electromagnetism and the secrets of the shadowy group Amit dubbed the Electromagnetic Villains International League (EVIL), while simultaneously infiltrating and disrupting forces trying to implant the social justice agenda in one of the last bastions of rationality in academia.

The present volume begins in the summer after the pair's freshman year. Both Pete and Amit are planning, along different paths, to infiltrate back-to-back meetings of the Civic Circle's Social Justice Leadership Forum on Jekyll Island, Georgia (the scene of notable conspiratorial skullduggery in the early 20th century) and the G-8 summit of world leaders on nearby Sea Island. Master of Game Amit has maneuvered himself into an internship with the Civic Circle and an invitation to the Forum as a promising candidate for the cause. Pete wasn't so fortunate (or persuasive), and used family connections to land a job with a company contracted to install computer infrastructure for the Civic Circle conference. The latest apparent “social justice” goal was to involve the developed world in a costly and useless war in Iraq, and Pete and Amit hoped to do what they could to derail those plans while collecting information on the plotters from inside.

Working in a loose and uneasy alliance with others they've encountered in the earlier books, they uncover information which suggests a bold strike at the very heart of the conspiracy might be possible, and they set their plans in motion. They learn that the Civic Circle is even more ancient, pervasive in its malign influence, and formidable than they had imagined.

This is one of the most intricately crafted conspiracy tales I've read since the Illuminatus! trilogy, yet entirely grounded in real events or plausible ones in its story line, as opposed to Robert Shea and Robert Anton Wilson's zany tale. The alternative universe in which it is set is artfully grounded in our own, and readers will delight in how events they recall and those with which they may not be familiar are woven into the story. There is delightful skewering of the social justice agenda and those who espouse its absurd but destructive nostrums. The forbidden science aspect of the story is advanced as well, imaginatively stirring the de Broglie-Bohm “pilot wave” interpretation of quantum mechanics and the history of FM broadcasting into the mix.

The story builds to a conclusion which is both shocking and satisfying and confronts the pair with an even greater challenge for their next adventure. This book continues the Hidden Truth saga in the best tradition of Golden Age science fiction and, like the work of the grandmasters of yore, both entertains and leaves the reader eager to find out what happens next. You should read the books in order; if you jump in the middle, you'll miss a great deal of back story and character development essential to enjoying the adventure.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

November 2018

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.
In 1861, when Oliver Heaviside was eleven, his family received a small legacy. The household was supported by his father's irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls his mother operated in the family's house. The legacy allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student, he dismissed the teaching of Euclid's geometry in mathematics and of English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide him throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today's A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother's elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver's favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today's money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off to Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship's position (the cable would work in the good direction but fail in that containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea's propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that knowing the resistance and length of the functioning cable, which could be easily measured, it would be possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the short appeared. He was able to cancel out the resistance of the fault, creating a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent out to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
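The text doesn't spell out Heaviside's exact procedure, but the classic version of this calculation is Blavier's test, in which the unknown fault resistance cancels out of a quadratic in just the way described. A minimal sketch (the numbers are illustrative, not taken from the book):

```python
import math

def fault_distance(r_total, r_open, r_earth):
    """Locate an earth fault on a cable from measurements at one end
    (Blavier's test).  All quantities are conductor resistances in ohms;
    distance follows from the cable's known resistance per mile.

    r_total: end-to-end resistance of the healthy conductor
    r_open:  resistance measured with the far end disconnected
    r_earth: resistance measured with the far end earthed

    Writing x for the resistance up to the fault and f for the unknown
    fault resistance gives r_open = x + f and
    r_earth = x + f*(r_total - x)/(f + r_total - x); eliminating f
    yields a quadratic whose root is:
    """
    return r_earth - math.sqrt((r_open - r_earth) * (r_total - r_earth))

# Simulated fault: 30 ohms down a 100-ohm conductor, 50-ohm leak to earth.
x, f, total = 30.0, 50.0, 100.0
r_open = x + f
r_earth = x + f * (total - x) / (f + total - x)
located = fault_distance(total, r_open, r_earth)   # recovers 30.0
```

Note that the fault resistance f never appears in the final formula, which is the whole point: only quantities measurable from shore are needed.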

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside's work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purpose be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He began his own investigations and experiments and began to publish his results, first in English Mechanic, and then, in 1873, the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside's paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton's Principia, and still in print today. Heady stuff, indeed, for a twenty-two year old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell's Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell's abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, such as earlier concepts like an “electric fluid” or William Thomson's intricate mechanical models of the “luminiferous ether”) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who would die in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, who had earlier invented and published his own duplex system (which would not work) and whose work Heaviside's paper did not acknowledge. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside's ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver's employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell's work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem remaining for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s made the problem more pressing. While a long telegraph line merely slowed the speed at which a message could be sent, on a telephone line the voice became increasingly distorted with distance, to the point where, after around 100 miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell's theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender's key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell's theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.
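The smearing effect can be demonstrated numerically: decompose a sharp pulse into frequencies with a discrete Fourier transform, delay each frequency by a different amount (a lossless but dispersive “line”), and transform back. This is a toy sketch of the general phenomenon, not a model of any real cable; the dispersion law used here is invented for illustration:

```python
import cmath

def dft(x):
    """Discrete Fourier transform (O(N^2), fine for a small demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

N = 128
pulse = [1.0 if 10 <= n < 20 else 0.0 for n in range(N)]   # a crisp "dot"

# Dispersive line: phase shift grows as f*|f|, so higher frequencies are
# delayed more.  (The phase is odd in frequency, so the output stays real.)
beta = 0.01
X = dft(pulse)
freqs = [k if k <= N // 2 else k - N for k in range(N)]
Y = [X[k] * cmath.exp(-1j * beta * freqs[k] * abs(freqs[k]))
     for k in range(N)]
received = idft(Y)
```

The received pulse carries the same energy (nothing is absorbed, only delayed) but is smeared out, with a slower rise and decay: the same qualitative effect that limited the transatlantic cable to eight words per minute.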

In telephony, it's much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell's theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher's equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
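Heaviside's condition can be checked directly. The per-unit-length propagation constant of a uniform line is γ(ω) = √((R + iωL)(G + iωC)), and when L/R = C/G it factors into a frequency-independent attenuation √(RG) plus a pure delay. A sketch with made-up line constants (not values from the book):

```python
import cmath

def propagation_constant(R, L, G, C, omega):
    """gamma(omega) for a uniform line: the real part is attenuation
    (nepers per unit length); the imaginary part over omega is the
    reciprocal of the phase velocity."""
    return cmath.sqrt((R + 1j * omega * L) * (G + 1j * omega * C))

# Illustrative per-unit-length constants chosen to satisfy L/R = C/G.
R, L, G = 0.1, 1e-6, 1e-8          # ohms, henries, siemens
C = G * L / R                      # 1e-13 farads: Heaviside's balance

# Attenuation comes out the same at every frequency: a distortionless line.
alphas = [propagation_constant(R, L, G, C, w).real
          for w in (1e3, 1e5, 1e7)]
```

All three attenuations equal √(RG); repeat the computation with an unbalanced C and the attenuation varies with frequency, which is precisely the distortion Heaviside's balancing eliminates.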

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside's work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside's work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside's life-long enemies.

You might wonder why such a seemingly simple result (which can be written in modern notation as the equation L/R = C/G), with such immediate technological utility, eluded so many people for so long (recall that the problem of slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell's theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell's original theory which, written in modern notation less cumbersome than that employed by Maxwell, looks like:

Maxwell's equations: original form

into the four famous vector equations we today think of as Maxwell's.

Maxwell's equations: modern vector form

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell's theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)
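For reference, the four vector equations in the now-standard notation read:

```latex
\begin{aligned}
\nabla \cdot \mathbf{D} &= \rho \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
\end{aligned}
```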

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn't because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton, who was effortlessly solving problems such as the brachistochrone using the calculus he'd invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder but, hey, it worked and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside's work and long after engineers were routinely using it in their calculations.)
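To get the flavour of the method, consider the simplest possible case: treat the derivative p = d/dt as an ordinary algebraic symbol and “divide” by it. The toy example below is mine, not from the book:

```python
import math

# Operational calculus in miniature: for  dy/dt + a*y = 1,  y(0) = 0,
# write (p + a) y = 1 and solve algebraically:  y = 1/(p + a) applied to
# the unit step.  Heaviside's expansion of that operator gives
#     y(t) = (1/a) * (1 - exp(-a*t))
# which the Laplace transform later justified rigorously.

a = 2.0

def y_operational(t):
    """Closed-form solution read off from the operator algebra."""
    return (1.0 / a) * (1.0 - math.exp(-a * t))

# Cross-check by brute-force numerical integration of the same ODE.
dt, y = 1e-4, 0.0
for _ in range(10000):                 # integrate out to t = 1.0
    y += dt * (1.0 - a * y)            # Euler step of dy/dt = 1 - a*y
```

The two answers agree; the algebraic shortcut replaces solving a differential equation with dividing by a polynomial in p, which is what made Heaviside's methods so productive.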

Heaviside's intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein's special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn't have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain's highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer of compensation for his invention of loading coils from AT&T when they refused to acknowledge his sole responsibility for the invention. He never married, and in his later years became somewhat of a recluse; although he welcomed visits from other scientists, he hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigor. He never sought the limelight and among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it is a monument to the life and work of Oliver Heaviside.

 Permalink

Shoemaker, Martin L. Blue Collar Space. Seattle: CreateSpace [Old Town Press], 2018. ISBN 978-1-7170-5188-2.
This book is a collection of short stories, set in three different locales. The stories of the first part, “Old Town Tales”, are set on the Moon and revolve around yarns told at the best bar on Luna. The second part, “The Planet Next Door”, collects stories set on Mars, while those of the third, “The Pournelle Settlements”, take place in mining settlements in the Jupiter system.

Most of the stories take place in established settlements; they are not tales of square-jawed pioneers opening up the frontier, but rather ordinary people doing the work that needs to be done in environments alien to humanity's home. On the Moon, we go on a mission with a rescue worker responding to a crash; hear a sanitation (“Eco Services”) technician regale a rookie with the story of “The Night We Flushed the Old Town”; accompany a father and daughter on a work day Outside that turns into a crisis; learn why breathing vacuum may not be the only thing that can go wrong on the Moon; and see how even for those in the most mundane of jobs, on the Moon wonders may await just over the nearby horizon.

At Mars, the greatest problem facing an ambitious international crewed landing mission may be…ambition; a doctor on a Mars-bound mission must deal with the technophobe boss's son while keeping him alive; and a schoolteacher taking her Mars survival class on a field trip finds that doing things by the book may pay off in discovering something which isn't in the book.

The Jupiter system is home to the Pournelle Settlements, a loosely affiliated group of settlers, many of whom came to escape the “government squeeze” and “corporate squeeze” that held the Inner System in their grip. And like the Wild West, it can be a bit wild. When sabotage disables the refinery that processes ore for the Settlements, its new boss must find a way to use the unique properties of the environment to keep his people fed and avoid the most hostile of takeovers. Where there are vast distances, long travel times, and cargoes with great value, there will be pirates, and the long journey from Jupiter to the Inner System is no exception. An investigator seeking evidence in a murder case must learn the ways of the Trust Economy in the Settlements and follow the trail far into the void.

These stories bring back the spirit of science fiction magazine stories in the decades before the dawn of the Big Government space age, when we just assumed that before long space would be filled with people like ourselves living their lives and pursuing their careers, where freedom was just a few steps away from any settlement and individual merit was rewarded. They are an excellent example of “hard” science fiction: not hard in the sense of being difficult, but in that the author makes a serious effort to get the facts right and make the plots plausible. (I am, however, dubious that the trick used in “Unrefined” would work.) All of the stories stand by themselves and can be read in any order. This is another example of how independent authors and publishers are making this a new golden age of science fiction.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Schlichter, Kurt. People's Republic. Seattle: CreateSpace, 2016. ISBN 978-1-5390-1895-7.
As the third decade of the twenty-first century progressed, the Cold Civil War which had been escalating in the United States since before the turn of the century turned hot when a Democrat administration decided to impose their full agenda—gun confiscation, amnesty for all illegal aliens, restrictions on fossil fuels—all at once by executive order. The heartland defied the power grab and militias of the left and right began to clash openly. Although the senior officer corps were largely converged to the leftist agenda, the military rank and file which hailed largely from the heartland defied them, and could not be trusted to act against their fellow citizens. Much the same was the case with police in the big cities: they began to ignore the orders of their political bosses and migrate to jobs in more congenial jurisdictions.

With a low-level shooting war breaking out, the opposing sides decided that the only way to avert general conflict was, if not the “amicable divorce” advocated by Jesse Kelly, then a more bitter and contentious end to a union which was not working. The Treaty of Saint Louis split the country in two, with the east and west coasts and upper midwest calling itself the “People's Republic of North America” (PRNA) and the remaining territory (including portions of some states like Washington, Oregon, and Indiana with a strong regional divide) continuing to call itself the United States, but with some changes: the capital was now Dallas, and the constitution had been amended to require any person not resident on its territory at the time of the Split (including children born thereafter) who wished full citizenship and voting rights to serve two years in the military with no “alternative service” for the privileged or connected.

The PRNA quickly implemented the complete progressive agenda wherever its rainbow flag (frequently revised as different victim groups clawed their way to the top of the grievance pyramid) flew. As police forces collapsed with good cops quitting and moving out, they were replaced by a national police force initially called the “People's Internal Security Squads” (later the “People's Security Force” when the acronym for the original name was deemed infelicitous), staffed with thugs and diversity hires attracted by the shakedown potential of carrying weapons among a disarmed population.

Life in the PRNA was pretty good for the coastal élites in their walled communities, but as with collectivism whenever and wherever it is tried, for most of the population life was a grey existence of collapsing services, food shortages, ration cards, abuse by the powerful, and constant fear of being denounced for violating the latest intellectual fad or using an incorrect pronoun. And, inevitably, it wasn't long before the PRNA slammed the door shut to keep the remaining competent people from fleeing to where they were free to use their skills and keep what they'd earned. Mexico built a “big, beautiful wall” to keep hordes of PRNA subjects from fleeing to freedom and opportunity south of the border.

Several years after the Split, Kelly Turnbull, retired military and a veteran of the border conflicts around the Split, pays the upkeep of his 500-acre non-working ranch by spiriting people out of the PRNA to liberty in the middle of the continent. After completing a harrowing mission which almost ended in disaster, he is approached by a wealthy and politically-connected Dallas businessman who offers him enough money to retire if he'll rescue his daughter who, indoctrinated by the leftist infestation still remaining at the university in Austin, defected to the PRNA and is being used in propaganda campaigns there at the behest of the regional boss of the secret police. In addition, a spymaster tasks him with bringing out evidence which will allow rolling up the PRNA's informer and spy networks. Against his self-preservation instinct, which counsels lying low until the dust settles from the last mission, he opts for the money and the prospect of early retirement and undertakes the mission.

As Turnbull covertly enters the People's Republic, makes his way to Los Angeles, and seeks his target, there is a superbly-sketched view of an America in which the progressive agenda has come to fruition, and one in which people may well be living at the end of the next two Democrat-dominated administrations. It is often funny, as the author skewers the hypocrisy of the slavers mouthing platitudes they don't believe for a femtosecond. (If you think it improper to make fun of human misery, recall the mordant humour in the Soviet Union as workers mocked the reality of the “workers' paradise”.) There's plenty of tension and action, and sometimes following Turnbull on his mission seems like looking over the shoulder of a first-person-shooter. He's big on countdowns and tends to view “blues” obstructing him as NPCs to be dealt with quickly and permanently: “I don't much like blues. You kill them or they kill you.”

This is a satisfying thriller which is probably a more realistic view of the situation in a former United States than an amicable divorce with both sides going their separate ways. The blue model is doomed to collapse, as it already has begun to in the big cities and states where it is in power, and with that inevitable collapse will come chaos and desperation which will spread beyond its borders. With Democrat politicians such as Occasional-Cortex who, a few years ago, hid behind such soothing labels as “liberal” or “progressive” now openly calling themselves “democratic socialists”, this is not just a page-turning adventure but a cautionary tale of the future should they win (or steal) power.

A prequel, Indian Country, which chronicles insurgency on the border immediately after the Split as guerrilla bands of the sane rise to resist the slavers, is now available.


December 2018

Kluger, Jeffrey. Apollo 8. New York: Picador, 2017. ISBN 978-1-250-18251-7.
As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions, first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.

Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the maneuvers of a lunar landing mission closer to home. McDivitt's crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn't. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June of 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. This, in turn, would push the planned “E” mission from its original date of March 1969 to June. That mission, for which the crew of commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders were training, was aimed at testing the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet. A three month slip there would delay all subsequent planned missions and place the goal of landing before the end of 1969 at risk.

But NASA were not just racing the clock—they were also racing the Soviet Union. Unlike Apollo, the Soviet space program was highly secretive and NASA had to go on whatever scraps of information they could glean from Soviet publications, the intelligence community, and independent tracking of Soviet launches and spacecraft in flight. There were, in fact, two Soviet manned lunar programmes running in parallel. The first, internally called the Soyuz 7K-L1 but dubbed “Zond” for public consumption, used a modified version of the Soyuz spacecraft launched on a Proton booster and was intended to carry two cosmonauts on a fly-by mission around the Moon. The craft would fly out to the Moon, use its gravity to swing around the far side, and return to Earth. The Zond lacked the propulsion capability to enter lunar orbit. Still, success would allow the Soviets to claim the milestone of first manned mission to the Moon. In September 1968 Zond 5 successfully followed this mission profile and safely returned a crew cabin containing tortoises, mealworms, flies, and plants to Earth after their loop around the Moon. A U.S. Navy destroyer observed recovery of the re-entry capsule in the Indian Ocean. Clearly, this was preparation for a manned mission which might occur on any lunar launch window.

(The Soviet manned lunar landing project was actually far behind Apollo, and would not launch its N1 booster on that first, disastrous, test flight until February 1969. But NASA did not know this in 1968.) Every slip in the Apollo program increased the probability of its being scooped so close to the finish line by a successful Zond flyby mission.

These were the circumstances in August 1968 when what amounted to a cabal of senior NASA managers including George Low, Chris Kraft, Bob Gilruth, and later joined by Wernher von Braun and chief astronaut Deke Slayton, began working on an alternative. They plotted in secret, beneath the radar and unbeknownst to NASA administrator Jim Webb and his deputy for manned space flight, George Mueller, who were both out of the country, attending an international conference in Vienna. What they were proposing was breathtaking in its ambition and risk. They envisioned taking Frank Borman's crew, originally scheduled for Apollo 9, and putting them into an accelerated training program to launch on the Saturn V and Apollo spacecraft currently scheduled for Apollo 8. They would launch without a Lunar Module, and hence be unable to land on the Moon or test that spacecraft. The original idea was to perform a Zond-like flyby, but this was quickly revised to include going into orbit around the Moon, just as a landing mission would do. This would allow retiring the risk of many aspects of the full landing mission much earlier in the program than originally scheduled, and would also allow collection of precision data on the lunar gravitational field and high resolution photography of candidate landing sites to aid in planning subsequent missions. The lunar orbital mission would accomplish all the goals of the originally planned “E” mission and more, allowing that mission to be cancelled and therefore not requiring an additional booster and spacecraft.

But could it be done? There were a multitude of requirements, all daunting. Borman's crew, training toward a launch in early 1969 on an Earth orbit mission, would have to complete training for the first lunar mission in just sixteen weeks. The Saturn V booster, which suffered multiple near-catastrophic engine failures in its second flight on Apollo 6, would have to be cleared for its first manned flight. Software for the on-board guidance computer and for Mission Control would have to be written, tested, debugged, and certified for a lunar mission many months earlier than previously scheduled. A flight plan for the lunar orbital mission would have to be written from scratch and then tested and trained in simulations with Mission Control and the astronauts in the loop. The decision to fly Borman's crew instead of McDivitt's was to avoid wasting the extensive training the latter crew had undergone in LM systems and operations by assigning them to a mission without an LM. McDivitt concurred with this choice: while it might be nice to be among the first humans to see the far side of the Moon with his own eyes, for a test pilot the highest responsibility and honour is to command the first flight of a new vehicle (the LM), and he would rather skip the Moon mission and fly later than lose that opportunity. If the plan were approved, Apollo 8 would become the lunar orbit mission and the Earth orbit test of the LM would be re-designated Apollo 9 and fly whenever the LM was ready.

While a successful lunar orbital mission on Apollo 8 would demonstrate many aspects of a full lunar landing mission, it would also involve formidable risks. The Saturn V, making only its third flight, was coming off a very bad outing in Apollo 6 whose failures might have injured the crew, damaged the spacecraft hardware, and precluded a successful mission to the Moon. While fixes for each of these problems had been implemented, they had never been tested in flight, and there was always the possibility of new problems not previously seen.

The Apollo Command and Service modules, which would take them to the Moon, had not yet flown a manned mission and would not until Apollo 7, scheduled for October 1968. Even if Apollo 7 were a complete success (which was considered a prerequisite for proceeding), Apollo 8 would be only the second manned flight of the Apollo spacecraft, and the crew would have to rely upon the functioning of its power generation, propulsion, and life support systems for a mission lasting six days. Unlike an Earth orbit mission, if something went wrong en route to or returning from the Moon, the crew couldn't just come home immediately. The Service Propulsion System on the Service Module would have to work perfectly when leaving lunar orbit or the crew would be marooned forever or crash on the Moon. It would have been tested in flight on only one previous manned mission, and there was no backup (although the single engine did incorporate substantial redundancy in its design).

The spacecraft guidance, navigation, and control system and its Apollo Guidance Computer hardware and software, upon which the crew would have to rely to navigate to and from the Moon, including the critical engine burns to enter and leave lunar orbit while behind the Moon and out of touch with Mission Control, had never been tested beyond Earth orbit.

The mission would go to the Moon without a Lunar Module. If a problem developed en route to the Moon which disabled the Service Module (as would happen to Apollo 13 in April 1970), there would be no LM to serve as a lifeboat and the crew would be doomed.

When the high-ranking conspirators presented their audacious plan to their bosses, the reaction was immediate. Manned spaceflight chief Mueller immediately said, “Can't do that! That's craziness!” His boss, administrator James Webb, said “You try to change the entire direction of the program while I'm out of the country?” Mutiny is a strong word, but this seemed to verge upon it. Still, Webb and Mueller agreed to meet with the lunar cabal in Houston on August 22. After a contentious meeting, Webb agreed to proceed with the plan and to present it to President Johnson, who was almost certain to approve it, having great confidence in Webb's management of NASA. The mission was on.

It was only then that Borman and his crewmembers Lovell and Anders learned of their reassignment. While Anders was disappointed at the prospect of being the Lunar Module Pilot on a mission with no Lunar Module, the prospect of being on the first flight to the Moon and entrusted with observation and photography of lunar landing sites more than made up for it. They plunged into an accelerated training program to get ready for the mission.

NASA approached the mission with its usual “can-do” approach and public confidence, but everybody involved was acutely aware of the risks that were being taken. Susan Borman, Frank's wife, privately asked Chris Kraft, director of Flight Operations and part of the group who advocated sending Apollo 8 to the Moon, with a reputation as a plain-talking straight shooter, “I really want to know what you think their chances are of coming home.” Kraft responded, “You really mean that, don't you?” “Yes,” she replied, “and you know I do.” Kraft answered, “Okay. How's fifty-fifty?” Those within the circle, including the crew, knew what they were biting off.

The launch was scheduled for December 21, 1968. Everybody would be working through Christmas, including the twelve ships and thousands of sailors in the recovery fleet, but lunar launch windows are set by the constraints of celestial mechanics, not human holidays. In November, the Soviets had flown Zond 6, and it had demonstrated the “double dip” re-entry trajectory required for human lunar missions. There were two system failures which killed the animal test subjects on board, but these were covered up and the mission heralded as a great success. From what NASA knew, it was entirely possible the next launch would be with cosmonauts bound for the Moon.

Space launches were exceptional public events in the 1960s, and this was the first flight of men to the Moon, coming just about a hundred years after Jules Verne envisioned three men setting out for the Moon from central Florida in a “cylindro-conical projectile” in De la terre à la lune (From the Earth to the Moon). Similarly engaging the world, the launch of Apollo 8 attracted around a quarter of a million people to watch the spectacle in person and hundreds of millions watching on television, both in North America and around the globe, thanks to the newfangled technology of communication satellites. Let's tune in to CBS television and relive this singular event with Walter Cronkite.

CBS coverage of the Apollo 8 launch

Now we step inside Mission Control and listen in on the Flight Director's audio loop during the launch, illustrated with imagery and simulations.

The Saturn V performed almost flawlessly. During the second stage burn mild pogo oscillations began but, rather than progressing to the point where they almost tore the rocket apart as had happened on the previous Saturn V launch, von Braun's team's fixes kicked in and seconds later Borman reported, “Pogo's damping out.” A few minutes later Apollo 8 was in Earth orbit.

Jim Lovell had sixteen days of spaceflight experience across two Gemini missions, one of them Gemini 7 where he endured almost two weeks in orbit with Frank Borman. Bill Anders was a rookie, on his first space flight. Now weightless, all three were experiencing a spacecraft nothing like the cramped Mercury and Gemini capsules which you put on as much as boarded. The Apollo command module had an interior volume of six cubic metres (218 cubic feet, in the quaint way NASA reckons things) which may not seem like much for a crew of three, but in weightlessness, with every bit of space accessible and usable, felt quite roomy. There were five real windows, not the tiny portholes of Gemini, and plenty of space to move from one to another.

With all this roominess and mobility came potential hazards, some verging on slapstick, but, in space, serious nonetheless. NASA safety personnel had required the astronauts to wear life vests over their space suits during the launch just in case the Saturn V malfunctioned and they ended up in the ocean. While moving around the cabin to get to the navigation station after reaching orbit, Lovell, who like the others hadn't yet removed his life vest, snagged its activation tab on a strut within the cabin and it instantly inflated. Lovell looked ridiculous and the situation comical, but it was no laughing matter. The life vests were inflated with carbon dioxide which, if released in the cabin, would pollute their breathing air and removal would use up part of a CO₂ scrubber cartridge, of which they had a limited supply on board. Lovell finally figured out what to do. After being helped out of the vest, he took it down to the urine dump station in the lower equipment bay and vented it into a reservoir which could be dumped out into space. One problem solved, but in space you never know what the next surprise might be.

The astronauts wouldn't have much time to admire the Earth through those big windows. Over Australia, just short of three hours after launch, they would re-light the engine on the third stage of the Saturn V for the “trans-lunar injection” (TLI) burn of 318 seconds, which would accelerate the spacecraft to just slightly less than escape velocity, raising its apogee so it would be captured by the Moon's gravity. After housekeeping (presumably including the rest of the crew taking off those pesky life jackets, since there weren't any wet oceans where they were going) and reconfiguring the spacecraft and its computer for the maneuver, they got the call from Houston, “You are go for TLI.” They were bound for the Moon.
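
As a sanity check on that “just slightly less than escape velocity” figure, the vis-viva equation lets us compare the parking-orbit, post-TLI, and escape speeds. This is a back-of-the-envelope sketch: the parking-orbit altitude and the gravitational constants below are round reference values of my own choosing, not figures from the book.

```python
from math import sqrt

MU_EARTH = 398_600.4   # km^3/s^2, Earth's gravitational parameter (assumed)
R_EARTH = 6_378.0      # km, Earth's equatorial radius (assumed)

r = R_EARTH + 185.0    # assumed ~185 km circular parking orbit

v_circular = sqrt(MU_EARTH / r)      # speed in the circular parking orbit
v_escape = sqrt(2 * MU_EARTH / r)    # escape velocity at that altitude

# Vis-viva for a transfer ellipse whose apogee reaches lunar distance
r_apogee = 384_400.0                 # km, mean Earth-Moon distance (assumed)
a = (r + r_apogee) / 2               # semi-major axis of the transfer orbit
v_tli = sqrt(MU_EARTH * (2 / r - 1 / a))

print(f"parking orbit: {v_circular:.2f} km/s")  # ≈ 7.79 km/s
print(f"after TLI:     {v_tli:.2f} km/s")       # ≈ 10.93 km/s
print(f"escape:        {v_escape:.2f} km/s")    # ≈ 11.02 km/s
```

The burn adds roughly 3.1 km/s yet leaves the ship about 100 m/s short of escape velocity: still gravitationally bound to Earth, as a free-return trajectory requires.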

The third stage, which had failed to re-light on its last outing, worked as advertised this time, with a flawless burn. Its job was done; from here on the astronauts and spacecraft were on their own. The booster had placed them on a free-return trajectory. If they did nothing (apart from minor “trajectory correction maneuvers” easily accomplished by the spacecraft's thrusters) they would fly out to the Moon, swing around its far side, and use its gravity to slingshot back to the Earth (as Lovell would do two years later when he commanded Apollo 13, although there the crew had to use the engine of the LM to get back onto a free-return trajectory after the accident).

Apollo 8 rapidly climbed out of the Earth's gravity well, trading speed for altitude, and before long the astronauts beheld a spectacle no human eyes had glimpsed before: an entire hemisphere of Earth at once, floating in the inky black void. On board, there were other concerns: Frank Borman was puking his guts out and having difficulties with the other end of the tubing as well. Borman had logged more than six thousand flight hours in his career as a fighter and test pilot, most of it in high-performance jet aircraft, and fourteen days in space on Gemini 7 without any motion sickness. Many people feel queasy when they experience weightlessness the first time, but this was something entirely different and new in the American space program. And it was very worrisome. The astronauts discussed the problem on private tapes they could downlink to Mission Control without broadcasting to the public, and when NASA got around to playing the tapes, the chief flight surgeon, Dr. Charles Berry, became alarmed.

As he saw it, there were three possibilities: motion sickness, a virus of some kind, or radiation sickness. On its way to the Moon, Apollo 8 passed directly through the Van Allen radiation belts, spending two hours in this high radiation environment, the first humans to do so. The total radiation dose was estimated as roughly the same as one would receive from a chest X-ray, but the composition of the radiation was different and the exposure was over an extended time, so nobody could be sure it was safe. The fact that Lovell and Anders had experienced no symptoms argued against the radiation explanation. Berry concluded that a virus was the most probable cause and, based upon the mission rules, said, “I'm recommending that we consider canceling the mission.” The risk of proceeding with the commander unable to keep food down and possibly carrying a virus which the other astronauts might contract was too great in his opinion. This recommendation was passed up to the crew. Borman, usually calm and collected even by astronaut standards, exclaimed, “What? That is pure, unadulterated horseshit.” The mission would proceed, and within a day his stomach had settled.

This was the first case of space adaptation syndrome to afflict an American astronaut. (Apparently some Soviet cosmonauts had been affected, but this was covered up to preserve their image as invincible exemplars of the New Soviet Man.) It is now known to affect around a third of people experiencing weightlessness in environments large enough to move around, and spontaneously clears up in two to four (miserable) days.

The two most dramatic and critical events in Apollo 8's voyage would occur on the far side of the Moon, with 3500 km of rock between the spacecraft and the Earth totally cutting off all communications. The crew would be on their own, aided by the computer and guidance system and calculations performed on the Earth and sent up before passing behind the Moon. The first would be lunar orbit insertion (LOI), scheduled for 69 hours and 8 minutes after launch. The big Service Propulsion System (SPS) engine (it was so big—twice as large as required for Apollo missions as flown—because it was designed to be able to launch the entire Apollo spacecraft from the Moon if a “direct ascent” mission mode had been selected) would burn for exactly four minutes and seven seconds to bend the spacecraft's trajectory around the Moon into a closed orbit around that world.
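
The quoted burn duration can be connected to the velocity change of the LOI burn with the Tsiolkovsky rocket equation. The SPS thrust, specific impulse, and spacecraft mass below are approximate published figures I'm supplying for illustration, not numbers from the book, so treat the result as order-of-magnitude only.

```python
from math import log

g0 = 9.81            # m/s^2, standard gravity
thrust = 91_200.0    # N, approximate SPS thrust (assumed)
isp = 314.0          # s, approximate SPS specific impulse (assumed)
m0 = 28_800.0        # kg, approximate CSM mass before the burn (assumed)
t_burn = 247.0       # s, four minutes and seven seconds

v_e = isp * g0                 # effective exhaust velocity, ~3.08 km/s
m_dot = thrust / v_e           # propellant mass flow rate, kg/s
m1 = m0 - m_dot * t_burn       # mass remaining after the burn
dv = v_e * log(m0 / m1)        # Tsiolkovsky rocket equation

print(f"LOI delta-v: {dv:.0f} m/s")  # ≈ 900 m/s
```

A four-minute burn thus slows the spacecraft by somewhere around 0.9 km/s, enough to drop it from its hyperbolic approach into a closed orbit around the Moon.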

If the SPS failed to fire for the LOI burn, it would be a huge disappointment but survivable. Apollo 8 would simply continue on its free-return trajectory, swing around the Moon, and fall back to Earth where it would perform a normal re-entry and splashdown. But if the engine fired and cut off too soon, the spacecraft would be placed into an orbit which would not return them to Earth, marooning the crew in space to die when their supplies ran out. If it burned just a little too long, the spacecraft's trajectory would intersect the surface of the Moon—lithobraking is no way to land on the Moon.

When the SPS engine shut down precisely on time and the computer confirmed the velocity change of the burn and orbital parameters, the three astronauts were elated, but they were the only people in the solar system aware of the success. Apollo 8 was still behind the Moon, cut off from communications. The first clue Mission Control would have of the success or failure of the burn would be when Apollo 8's telemetry signal was reacquired as it swung around the limb of the Moon. If too early, it meant the burn had failed and the spacecraft was coming back to Earth; that moment passed with no signal. Now tension mounted as the clock ticked off the seconds to the time expected for a successful burn. If that time came and went with no word from Apollo 8, it would be a really bad day. Just on time, the telemetry signal locked up and Jim Lovell reported, “Go ahead, Houston, this is Apollo 8. Burn complete. Our orbit 169.1 by 60.5.” (Lovell was using NASA's preferred measure of nautical miles; in proper units it was 313 by 112 km. The orbit would subsequently be circularised by another SPS burn to 112.7 by 114.7 km.) The Mission Control room erupted into an un-NASA-like pandemonium of cheering.

Apollo 8 would orbit the Moon ten times, spending twenty hours in a retrograde orbit with an inclination of 12 degrees to the lunar equator, which would allow it to perform high-resolution photography of candidate sites for early landing missions under lighting conditions similar to those expected at the time of landing. In addition, precision tracking of the spacecraft's trajectory in lunar orbit would allow mapping of the Moon's gravitational field, including the “mascons” which perturb the orbits of objects in low lunar orbits and would be important for longer duration Apollo orbital missions in the future.
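
The “ten times … twenty hours” figure checks out against Kepler's third law. The lunar radius and gravitational parameter below are standard reference values, assumed here rather than taken from the book.

```python
from math import pi, sqrt

MU_MOON = 4_902.8    # km^3/s^2, lunar gravitational parameter (assumed)
R_MOON = 1_737.4     # km, mean lunar radius (assumed)

a = R_MOON + 113.0   # semi-major axis of a ~113 km circular orbit

T = 2 * pi * sqrt(a**3 / MU_MOON)   # orbital period, Kepler's third law
print(f"one orbit:  {T / 3600:.2f} h")       # ≈ 1.98 h
print(f"ten orbits: {10 * T / 3600:.1f} h")  # ≈ 19.8 h
```

A low lunar orbit takes almost exactly two hours, so ten orbits fill the twenty hours of the mission plan.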

During the mission, the crew were treated to amazing sights and, in particular, the dramatic difference between the near side, with its many flat “seas”, and the rugged highlands of the far side. Coming around the Moon they saw the spectacle of earthrise for the first time and, hastily grabbing a magazine of colour film and setting aside the planned photography schedule, Bill Anders snapped the photo of the Earth rising above the lunar horizon which became one of the most iconic photographs of the twentieth century. Here is a reconstruction of the moment that photo was taken.

On the ninth and next-to-last orbit, the crew conducted a second television transmission which was broadcast worldwide. It was Christmas Eve on much of the Earth, and, coming at the end of the chaotic, turbulent, and often tragic year of 1968, it was a magical event, remembered fondly by almost everybody who witnessed it and felt pride for what the human species had just accomplished.

You have probably heard this broadcast from the Moon, often with the audio overlaid on imagery of the Moon from later missions, with much higher resolution than was actually seen in that broadcast. Here, in three parts, is what people, including this scrivener, actually saw on their televisions that enchanted night. The famous reading from Genesis is in the third part. The astronauts' description of the lunar landscape is eerily similar to that in Jules Verne's 1870 Autour de la lune.

After the end of the broadcast, it was time to prepare for the next and absolutely crucial maneuver, also performed on the far side of the Moon: trans-Earth injection, or TEI. This would boost the spacecraft out of lunar orbit and send it back on a trajectory to Earth. This time the SPS engine had to work, and perfectly. If it failed to fire, the crew would be trapped in orbit around the Moon with no hope of rescue. If it cut off too soon or burned too long, or the spacecraft was pointed in the wrong direction when it fired, Apollo 8 would miss the Earth and orbit forever far from its home planet or come in too steep and burn up when it hit the atmosphere. Once again the tension rose to a high pitch in Mission Control as the clock counted down to the two fateful times: this time they'd hear from the spacecraft earlier if it was on its way home and later or not at all if things had gone tragically awry. Exactly when expected, the telemetry screens came to life and a second later Jim Lovell called, “Houston, Apollo 8. Please be informed there is a Santa Claus.”

Now it was just a matter of falling the 375,000 kilometres from the Moon, hitting the precise re-entry corridor in the Earth's atmosphere, executing the intricate “double dip” re-entry trajectory, and splashing down near the aircraft carrier which would retrieve the Command Module and crew. Earlier unmanned tests gave confidence it would all work, but this was the first time men would be trying it.

There was some unexpected and embarrassing excitement on the way home. Mission Control had called up a new set of co-ordinates for the “barbecue roll” which the spacecraft executed to even out temperature. Lovell was asked to enter “verb 3723, noun 501” into the computer. But, weary and short on sleep, he fat-fingered the commands and entered “verb 37, noun 01”. This told the computer the spacecraft was back on the launch pad, pointing straight up, and it immediately slewed to what it thought was that orientation. Lovell quickly figured out what he'd done, “It was my goof”, but by this time he'd “lost the platform”: the stable reference the guidance system used to determine in which direction the spacecraft was pointing in space. He had to perform a manual alignment, taking sightings on a number of stars, to recover the correct orientation of the stable platform. This was completely unplanned but, as it happens, in doing so Lovell acquired experience that would prove valuable when he had to perform the same operation in much more dire circumstances on Apollo 13 after an explosion disabled the computer and guidance system in the Command Module. Here is the author of the book, Jeffrey Kluger, discussing Jim Lovell's goof.
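
Lovell's slip is easier to appreciate with a toy model of the verb/noun interface. This is a deliberately simplified sketch, not the actual Apollo Guidance Computer software: the only behaviour it encodes is the one described above, that verb 37 means “change major mode” and program 01 is the prelaunch alignment, which assumes the vehicle is upright on the launch pad.

```python
# Toy model of DSKY-style verb/noun entry -- a simplified sketch, not
# the real AGC software.  Only the behaviour described in the text is
# modelled here.

PROGRAMS = {
    1: "P01 prelaunch alignment (assumes the craft is upright on the pad)",
}

def dsky_entry(verb: int, noun: int) -> str:
    """Interpret a verb/noun pair keyed in by the crew."""
    if verb == 37:  # verb 37: change major mode (switch programs)
        prog = PROGRAMS.get(noun, f"P{noun:02d}")
        return f"switch major mode -> {prog}"
    return f"verb {verb}, noun {noun}: ordinary data entry"

print(dsky_entry(3723, 501))  # the intended entry: routine data
print(dsky_entry(37, 1))      # the fat-fingered entry: selects P01
```

Dropping a pair of digits turned a routine data entry into a mode change, and the computer obediently slewed to the launch-pad attitude it had just been told it was in.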

The re-entry went completely as planned, flown entirely under computer control, with the spacecraft splashing into the Pacific Ocean just 6 km from the aircraft carrier Yorktown. But because the splashdown occurred before dawn, it was decided to wait until the sky brightened to recover the crew and spacecraft. Forty-three minutes after splashdown, divers from the Yorktown arrived at the scene, and forty-five minutes after that the crew was back on the ship. Apollo 8 was over, a total success. This milestone in the space race had been won definitively by the U.S., and shortly thereafter the Soviets abandoned their Zond circumlunar project, judging it an anticlimax and admission of defeat to fly by the Moon after the Americans had already successfully orbited it.

This is the official NASA contemporary documentary about Apollo 8.

Here is an evening with the Apollo 8 astronauts recorded at the National Air and Space Museum on 2008-11-13 to commemorate the fortieth anniversary of the flight.

This is a reunion of the Apollo 8 astronauts on 2009-04-23.

As of this writing, all of the crew of Apollo 8 are alive, and, in a business where divorce was common, remain married to the women they wed as young military officers.


Kotkin, Stephen. Stalin, Vol. 1: Paradoxes of Power, 1878–1928. New York: Penguin Press, 2014. ISBN 978-0-14-312786-4.
In a Levada Center poll in 2017, Russians who responded named Joseph Stalin the “most outstanding person” in world history. Now, you can argue about the meaning of “outstanding”, but it's pretty remarkable that citizens would so rank a chief of their country's government (albeit several regimes ago) who presided over an entirely avoidable famine which killed millions of his own citizens and ordered purges which executed more than 700,000 people, including much of the senior military leadership, leaving the nation unprepared for the German attack in 1941, a war which would, until the final victory, claim the lives of around 27 million Soviet citizens, military and civilian: an “outstanding person”, as opposed to a super-villain.

The story of Stalin's career is even less plausible, and should give pause to those who believe history can be predicted without the contingency of things that “just happen”. Ioseb Besarionis dze Jughashvili (the author uses Roman alphabet transliterations of all individuals' names in their native languages, which can occasionally be confusing when they later Russified their names) was born in 1878 in the town of Gori in the Caucasus. Gori, part of the territory of Georgia which had long been ruled by the Ottoman Empire, had been seized by Imperial Russia in a series of bloody conflicts ending in the 1860s with complete incorporation of the territory into the Czar's empire. Ioseb, who was called by the Georgian diminutive “Soso” throughout his youth, was the third son born to his parents, but, as both of his older brothers had died not long after birth, was raised as an only child.

Soso's father, Besarion Jughashvili (often written in the Russian form, Vissarion), was a shoemaker with his own shop in Gori but, as time passed, his business fell on hard times and he closed the shop and sought other work, ending his life as a vagrant. Soso's mother, Ketevan “Keke” Geladze, was ambitious and wanted the best for her son; she left her husband and took a variety of jobs to support the family. She arranged for eight-year-old Soso to attend Russian language lessons given to the children of a priest in whose house she was boarding. Knowledge of Russian was the key to advancement in Czarist Georgia, and he had a head start when Keke arranged for him to be enrolled in the parish school's preparatory and four-year programs. He was the first member of either side of his family to attend school, and he rose to the top of his class under the patronage of a family friend, “Uncle Yakov” Egnatashvili. After graduation, his options were limited. The Russian administration, wary of the emergence of a Georgian intellectual class that might champion independence, refused to establish a university in the Caucasus. Soso's best option was the highly selective Theological Seminary in Tiflis where he would prepare, in a six-year course, for life as a parish priest or teacher in Georgia but which, for those who graduated near the top, could lead to a scholarship at a university in another part of the empire.

He took the examinations and easily passed, gaining admission, and petitioned for and won a partial scholarship that paid most of his fees. “Uncle Yakov” paid the rest, and he plunged into his studies. Georgia was in the midst of an intense campaign of Russification, and Soso further perfected his skills in the Russian language. Although completely fluent in spoken and written Russian along with his native Georgian (the languages are completely unrelated, having no more in common than Finnish and Italian), he would speak Russian with a Georgian accent all his life and did not publish in the Russian language until he was twenty-nine years old.

Long a voracious reader, at the seminary Soso joined a “forbidden literature” society which smuggled in and read works not banned by the Russian authorities but deemed unsuitable for priests in training. He read classics of Russian, French, English, and German literature and science, including Capital by Karl Marx. The latter would transform his view of the world and his path in life. He made the acquaintance of a former seminarian and committed Marxist, Lado Ketskhoveli, who would guide his studies. In August 1898, he joined the newly formed “Third Group of Georgian Marxists”—many years later Stalin would date his “party card” to then.

Prior to 1905, imperial Russia was an absolute autocracy. The Czar ruled with no limitations on his power. What he decreed and ordered his functionaries to do was law. There was no parliament, no political parties, no elected officials of any kind, and no permanent administrative state that did not serve at the pleasure of the monarch. Political activity and agitation were illegal, as were publishing and distributing any kind of political literature deemed to oppose imperial rule. As Soso became increasingly radicalised, it was only a short step from devout seminarian to underground agitator. He began to neglect his studies, became increasingly disrespectful to authority figures, and, in April 1899, left the seminary before taking his final examinations.

Saddled with a large debt to the seminary for leaving without becoming a priest or teacher, he drifted into writing articles for small, underground publications associated with the Social Democrat movement, at the time the home of most Marxists. He took to public speaking and, while eschewing fancy flights of oratory, spoke directly to the meetings of workers he addressed in their own dialect and terms. Inevitably, he was arrested for “incitement to disorder and insubordination against higher authority” in April 1902 and jailed. After fifteen months in prison at Batum, he was sentenced to three years of internal exile in Siberia. In January 1904 he escaped and made it back to Tiflis, in Georgia, where he resumed his underground career. By this time the Social Democratic movement had fractured into Lenin's Bolshevik faction and the larger Menshevik group. Sosa, who during his imprisonment had adopted the revolutionary nickname “Koba”, after the hero in a Georgian novel of revenge, continued to write and speak and, in 1905, after the Czar was compelled to cede some of his power to a parliament, organised Battle Squads which stole printing equipment, attacked government forces, and raised money through protection rackets targeting businesses.

In 1905, Koba Jughashvili was elected one of three Bolshevik delegates from Georgia to attend a party conference of the Russian Social Democratic Workers' Party in Tampere, Finland, then part of the Russian empire. It was there he first met Lenin, who had been living in exile in Switzerland. Koba had read Lenin's prolific writings and admired his leadership of the Bolshevik cause, but was unimpressed in this first in-person encounter. He vocally took issue with Lenin's position that Bolsheviks should seek seats in the newly-formed State Duma (parliament). When Lenin backed down in the face of opposition, he said, “I expected to see the mountain eagle of our party, a great man, not only politically but physically, for I had formed for myself a picture of Lenin as a giant, as a stately representative figure of a man. What was my disappointment when I saw the most ordinary individual, below average height, distinguished from ordinary mortals by, literally, nothing.”

Returning to Georgia, he resumed his career as an underground revolutionary, including, famously, organising a robbery of the Russian State Bank in Tiflis in which three dozen people were killed and two dozen more injured, “expropriating” 250,000 rubles for the Bolshevik cause. Koba did not participate directly, but he was the mastermind of the heist. This and other banditry, criminal enterprises, and unauthorised publications resulted in multiple arrests, imprisonments, exiles to Siberia, escapes, re-captures, and life underground in the years that followed. In 1912, while living underground in Saint Petersburg after yet another escape, he was named the first editor of the Bolshevik party's new daily newspaper, Pravda, although his name was kept secret. In 1913, with the encouragement of Lenin, he wrote an article titled “Marxism and the National Question” in which he addressed how a Bolshevik regime should approach the diverse ethnicities and national identities of the Russian Empire. As a Georgian Bolshevik, Jughashvili was seen as uniquely qualified and credible to address this thorny question. He published the article under the nom de plume “K. [for Koba] Stalin”, which, literally translated, meant “Man of Steel” and paralleled Lenin's pseudonym. He would use this name for the rest of his life, with the Russified form of his given name, “Joseph”, replacing the nickname Koba (by which his close associates would continue to address him informally). I shall, like the author, refer to him subsequently as “Stalin”.

When Russia entered the Great War in 1914, events were set into motion which would lead to the end of Czarist rule, but Stalin was on the sidelines: in exile in Siberia, where he spent much of his time fishing. In late 1916, as manpower shortages became acute, exiled Bolsheviks including Stalin received notices of conscription into the army, but when he appeared at the induction centre he was rejected due to a crippled left arm, the result of a childhood injury. It was only after the abdication of the Czar in the February Revolution of 1917 that he returned to Saint Petersburg, now renamed Petrograd, and resumed his work for the Bolshevik cause. In April 1917, in elections to the Bolshevik Central Committee, Stalin came in third after Lenin (who had returned from exile in Switzerland) and Zinoviev. Despite having been out of circulation for several years, Stalin's reputation from his writings and editorship of Pravda, which he resumed, elevated him to among the top rank of the party.

As Kerensky's Provisional Government attempted to consolidate its power and continue the costly and unpopular war, Stalin and Trotsky joined Lenin's call for a Bolshevik coup to seize power, and Stalin was involved in all aspects of the eventual October Revolution, although often behind the scenes, while Lenin was the public face of the Bolshevik insurgency.

After seizing power, the Bolsheviks faced challenges from all directions. They had to disentangle Russia from the Great War without leaving the country open to attack and territorial conquest by Germany or Poland. Despite their ambitious name, they were a minority party and had to subdue domestic opposition. They took over a country effectively bankrupted by the debts the Czar had incurred to fund the war. They had to exert their control over a sprawling, polyglot empire in which, outside of the big cities, their party had little or no presence. They needed to establish their authority over a military in which the officer corps largely regarded the Czar as their legitimate leader. They had to restore agricultural production, severely disrupted by levies of manpower for the war, before famine brought instability and the risk of a counter-coup. And to face all of these formidable problems at the same time, they were utterly unprepared.

The Bolsheviks were, to a man (and they were all men), professional revolutionaries. Their experience was in writing and publishing radical tracts and works of Marxist theory, agitating and organising workers in the cities, carrying out acts of terror against the regime, and funding their activities through banditry and other forms of criminality. There was not a military man, agricultural expert, banker, diplomat, logistician, transportation specialist, or administrator among them, and suddenly they needed all of these skills and more, plus the ability to recruit and staff an administration for a continent-wide empire. Further, although Lenin's leadership was firmly established and undisputed, his subordinates were all highly ambitious men seeking to establish and increase their power in the chaotic and fluid situation.

It was in this environment that Stalin made his mark as the reliable “fixer”. Whether it was securing levies of grain from the provinces, putting down resistance from counter-revolutionary White forces, stamping out opposition from other parties, developing policies for dealing with the diverse nations incorporated into the Russian Empire (indeed, in a real sense, it was Stalin who invented the Soviet Union as a nominal federation of autonomous republics which, in fact, were subject to Party control from Moscow), or implementing Lenin's orders, even when he disagreed with them, Stalin was on the job. Lenin recognised Stalin's importance as his right-hand man by creating the post of General Secretary of the party and appointing him to it.

This placed Stalin at the centre of the party apparatus. He controlled who was hired, fired, and promoted. He controlled access to Lenin (only Trotsky could see Lenin without going through Stalin). The result was a finely-tuned machine which allowed Lenin to exercise absolute power through a party organisation which Stalin had largely built and operated.

Then, in May of 1922, the unthinkable happened: Lenin was felled by a stroke which left him partially paralysed. He retreated to his dacha at Gorki to recuperate, and his communication with the other senior leadership was almost entirely through Stalin. There had been no thought of or plan for a succession after Lenin (he was only fifty-two at the time of his first stroke, although he had been unwell for much of the previous year). As Lenin's health declined, ending in his death in January 1924, Stalin increasingly came to run the party and, through it, the government. He had appointed loyalists in key positions, who saw their own careers as linked to that of Stalin. By the end of 1924, Stalin began to move against the “Old Bolsheviks” whom he saw as rivals and potential threats to his consolidation of power. When confronted with opposition, on three occasions he threatened to resign, each exercise in brinksmanship strengthening his grip on power, as the party feared the chaos that would ensue from a power struggle at the top. His status was reflected in 1925 when the city of Tsaritsyn was renamed Stalingrad.

This ascent to supreme power was not universally applauded. Felix Dzierzynski (Polish born, he is often better known by the Russian spelling of his name, Dzerzhinsky) who, as the founder of the Soviet secret police (Cheka/GPU/OGPU) knew a few things about dictatorship, warned in 1926, the year of his death, that “If we do not find the correct line and pace of development our opposition will grow and the country will get its dictator, the grave digger of the revolution irrespective of the beautiful feathers on his costume.”

With or without feathers, the dictatorship was beginning to emerge. In 1926 Stalin published “On Questions of Leninism” in which he introduced the concept of “Socialism in One Country” which, presented as orthodox Leninist doctrine (which it wasn't), argued that world revolution was unnecessary to establish communism in a single country. This set the stage for the collectivisation of agriculture and rapid industrialisation which was to come. In 1928, the Shakhty trial, which was to be the prototype of the show trials of the 1930s, opened in Moscow, complete with accusations of industrial sabotage (“wrecking”), denunciations of class enemies, and Andrei Vyshinsky presiding as chief judge. Of the fifty-three engineers accused, five were executed and forty-four imprisoned. A country desperately short on the professionals its industry needed to develop had begun to devour them.

It is a mistake to regard Stalin purely as a dictator obsessed with accumulating and exercising power and destroying rivals, real or imagined. The one consistent theme throughout Stalin's career was that he was a true believer. He was a devout believer in the Orthodox faith while at the seminary, and he seamlessly transferred his allegiance to Marxism once he had been introduced to its doctrines. He had mastered the difficult works of Marx and could cite them from memory (as he often did spontaneously to buttress his arguments in policy disputes), and went on to similarly internalise the work of Lenin. These principles guided his actions, and motivated him to apply them rigidly, whatever the cost.

Starting in 1921, Lenin had introduced the New Economic Policy, which lightened state control over the economy and, in particular, introduced market reforms in the agricultural sector, resulting in a mixed economy in which socialism reigned in big city industries, but in the countryside the peasants operated under a kind of market economy. This policy had restored agricultural production to pre-revolutionary levels and largely ended food shortages in the cities and countryside. But to a doctrinaire Marxist, it seemed to risk destruction of the regime. Marx believed that the political system was determined by the means of production. Thus, accepting what was essentially a capitalist economy in the agricultural sector was to infect the socialist government with its worst enemy.

Once Stalin had completed his consolidation of power, he then proceeded as Marxist doctrine demanded: abolish the New Economic Policy and undertake the forced collectivisation of agriculture. This began in 1928.

And it is with this momentous decision that the present volume comes to an end. This massive work (976 pages in the print edition) is just the first in a planned three-volume biography of Stalin. The second volume, Stalin: Waiting for Hitler, 1929–1941, was published in 2017 and the concluding volume is not yet completed.

Reading this book, and the entire series, is a major investment of time in a single historical figure. But, as the author observes, if you're interested in the phenomenon of twentieth century totalitarian dictatorship, Stalin is the gold standard. He amassed more power, exercised by a single person with essentially no checks or limits, over more people and a larger portion of the Earth's surface than any individual in human history. He ruled for almost thirty years, transformed the economy of his country, presided over deliberate famines, ruthless purges, and pervasive terror that killed tens of millions, led his country to victory at enormous cost in the largest land conflict in history and ended up exercising power over half of the European continent, and built a military which rivaled that of the West in a bipolar struggle for global hegemony.

It is impossible to relate the history of Stalin without describing the context in which it occurred, and this is as much a history of the final days of imperial Russia, the revolutions of 1917, and the establishment and consolidation of Soviet power as of Stalin himself. Indeed, in this first volume, there are lengthy parts of the narrative in which Stalin is largely offstage: in prison, internal exile, or occupied with matters peripheral to the main historical events. The level of detail is breathtaking: the Bolsheviks seem to have been as compulsive record-keepers as Germans are reputed to be, and not only are the votes of seemingly every committee meeting recorded, but who voted which way and why. There are more than two hundred pages of end notes, source citations, bibliography, and index.

If you are interested in Stalin, the Soviet Union, the phenomenon of Bolshevism, totalitarian dictatorship, or how destructive madness can grip a civilised society for decades, this is an essential work. It is unlikely it will ever be equalled.

 Permalink

Cawdron, Peter. Losing Mars. Brisbane, Australia: Independent, 2018. ISBN 978-1-7237-4729-8.
Peter Cawdron has established himself as the contemporary grandmaster of first contact science fiction. In a series of novels including Anomaly (December 2011), Xenophobia (August 2013), Little Green Men (September 2013), Feedback (February 2014), and My Sweet Satan (September 2014), he has explored the first encounter of humans with extraterrestrial life in a variety of scenarios, all with twists and turns that make you question the definition of life and intelligence.

This novel is set on Mars, where a nominally international but strongly NASA-dominated station has been set up by its six-person crew, the first people to land on the red planet. The crew of Shepard station, three married couples, bring a variety of talents to their multi-year mission of exploration: pilot, engineer, physician, and even botanist: Cory Anderson (the narrator) is responsible for the greenhouse which will feed the crew during their mission. They have a fully-fueled Mars Return Vehicle, based upon NASA's Orion capsule, ready to go, and their ticket back to Earth, the Schiaparelli return stage, waiting in Mars orbit, but orbital mechanics dictates when they can return to Earth, based on the two-year cycle of Earth-Mars transfer opportunities. The crew is acutely aware that the future of Mars exploration rests on their shoulders: failure, whether a tragedy in which they were lost, or even cutting their mission short, might result in “losing Mars” in the same way humanity abandoned the Moon for fifty years after “flags and footprints” visits had accomplished their chest-beating goal.

The Shepard crew are confronted with a crisis not of their making when a Chinese mission, completely unrelated to theirs and attempting to do “Mars on a shoestring” by exploring its moon Phobos, faces disaster: a poorly-understood calamity kills two of its four crew and disables their spacecraft. The two surviving taikonauts show life signs on telemetry but have not communicated with their mission control and, with their ship disabled, are certain to die when their life support consumables are exhausted.

The crew, in consultation with NASA, conclude the only way to mount a rescue mission is for the pilot and Cory, the only crew member who can be spared, to launch in the return vehicle, rendezvous with the Schiaparelli, use it to match orbits with the Chinese ship, rescue the survivors, and then return to Earth with them. (The return vehicle is unable to land back on Mars, being unequipped for a descent and soft landing through its thin atmosphere.) This will leave the four remaining crew of the Shepard with no way home until NASA can send a rescue mission, which will take two years to arrive at Mars. However unappealing the prospect, they conclude that abandoning the Chinese crew to die when rescue was possible would be inhuman, and proceed with the plan.

It is only after arriving at Phobos, after the half-way point in the book, that things begin to get distinctly weird and we suddenly realise that Peter Cawdron is not writing a novelisation of a Kerbal Space Program rescue scenario but is rather up to his old tricks and there is much more going on here than you've imagined from the story so far.

Babe Ruth hit 714 home runs, but he struck out 1,330 times. For me, this story is a swing and a miss. It takes a long, long time to get going, and we must wade through a great deal of social justice virtue signalling to get there. (Lesbians in space? Who could have imagined? Oh, right….) Once we get to the “good part”, the narrative is related in a fractured manner reminiscent of Vonnegut (I'm trying to avoid spoilers—you'll know what I'm talking about if you make it that far). And the copy editing and fact checking…oh, dear.

There are no fewer than seven idiot “it's/its” bungles, two on one page. A solar powered aircraft is said to have “turboprop engines”. Alan Shepard's suborbital mission is said to have been launched on a “prototype Redstone rocket” (it wasn't), which is described as an “intercontinental ballistic missile” (it was a short range missile with a maximum range of 323 km), which subjected the astronaut to “nine g's [sic] launching” (it was actually 6.3 g), with reentry g loads “more than that of the gas giant Saturn” (which is correct, but local gravity on Saturn is just 1.065 g, as the planet is very large and less dense than water). Military officers who defy orders are tried by a court martial, not “court marshaled”. The Mercury-Atlas 3 launch failure which Shepard witnessed at the Cape did not “[end] up in a fireball a couple of hundred feet above the concrete”: in fact it was destroyed by ground command forty-three seconds after launch at an altitude of several kilometres due to a guidance system failure, and the launch escape system saved the spacecraft and would have allowed an astronaut, had one been on board, to land safely. It's “bungee” cord, not “Bungie”. “Navy” is not an acronym, and hence is not written “NAVY”. The Juno orbiter at Jupiter does not “broadcast with the strength of a cell phone”; it has a 25 watt transmitter which is between twelve and twenty-five times more powerful than the maximum power of a mobile phone. He confuses “ecliptic” and “elliptical”, and states that the velocity of a spacecraft decreases as it approaches closer to a body in free fall (it increases). A spacecraft is said to be “accelerating at fifteen meters per second” which is a unit of velocity, not acceleration. A daughter may be the spitting image of her mother, but not “the splitting image”. Thousands of tiny wires do not “rap” around a plastic coated core, they “wrap”, unless they are special hip-hop wires which NASA has never approved for space flight. 
We do not live in a “barreled galaxy”, but rather a barred spiral galaxy.

Now, you may think I'm being harsh in pointing out these goofs which are not, after all, directly relevant to the plot of the novel. But errors of this kind, all of which could be avoided by research no more involved than looking things up in Wikipedia or consulting a guide to English usage, are indicative of a lack of attention to detail which, sadly, is also manifest in the main story line. To discuss these we must step behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
It is implausible in the extreme that the Schiaparelli would have sufficient extra fuel to perform a plane change maneuver from its orbital inclination of nearly twenty degrees to the near-equatorial orbit of Phobos, then raise its orbit to rendezvous with the moon. The fuel on board the Schiaparelli would have been launched from Earth, and would be just sufficient to return to Earth without any costly maneuvers in Mars orbit. The cost of launching such a large additional amount of fuel, not to mention the larger tanks to hold it, would be prohibitive.

(We're already in a spoiler block, but be warned that the following paragraph is a hideous spoiler of the entire plot.) Cory's ethical dilemma, on which the story turns, is whether to reveal the existence of the advanced technology alien base on Phobos to a humanity which he believes unprepared for such power and likely to use it to destroy themselves. OK, fine, that's his call (and that of Hedy, who also knows enough to give away the secret). But in the conclusion, we're told that, fifty years after the rescue mission, there's a thriving colony on Mars with eight thousand people in two subsurface towns, raising families. How probable is it, even if not a word was said about what happened on Phobos, that this thriving colony and the Earth-based space program which supported it would not, over half a century, send another exploration mission to Phobos, which is scientifically interesting in its own right? And given what Cory found there, any mission which investigated Phobos would have found what he did.

Finally, in the Afterword, the author defends his social justice narrative as follows.

At times, I've been criticized for “jumping on the [liberal] bandwagon” on topics like gay rights and Black Lives Matter across a number of books, but, honestly, it's the 21st century—the cruelty that still dominates how we humans deal with each other is petty and myopic. Any contact with an intelligent extraterrestrial species will expose not only a vast technological gulf, but a moral one as well.
Well, maybe, but isn't it equally likely that when they arrive in their atomic space cars and imbibe what passes for culture and morality among the intellectual élite of the global Davos party, and see how obsessed these talking apes are with who is canoodling whom with what, that after they stop laughing they may decide that we are made of atoms which they can use for something else?
Spoilers end here.  

Peter Cawdron's earlier novels have provided many hours of thought-provoking entertainment, spinning out the possibilities of first contact. The present book…didn't, although it was good for a few laughs. I'm not going to write off a promising author due to one strike-out. I hope his next outing resumes the home run streak.

A Kindle edition is available, which is free for Kindle Unlimited subscribers.

 Permalink

Marighella, Carlos. Minimanual of the Urban Guerrilla. Seattle: CreateSpace, [1970] 2018. ISBN 978-1-4664-0680-3.
Carlos Marighella joined the Brazilian Communist Party in 1934, abandoning his studies in civil engineering to become a full time agitator for communism. He was arrested for subversion in 1936 and, after release from prison the following year, went underground. He was recaptured in 1939 and imprisoned until 1945 as part of an amnesty of political prisoners. He successfully ran for the federal assembly in 1946 but was removed from office when the Communist party was again banned in 1948. Resuming his clandestine life, he served in several positions in the party leadership and in 1953–1954 visited China to study the Maoist theory of revolution. In 1964, after a military coup in Brazil, he was again arrested, being shot in the process. After being once again released from prison, he broke with the Communist Party and began to advocate armed revolution against the military regime, travelling to Cuba to participate in a conference of Latin American insurgent movements. In 1968, he formed his own group, the Ação Libertadora Nacional (ALN) which, in September 1969, kidnapped U.S. Ambassador Charles Burke Elbrick, who was eventually released in exchange for fifteen political prisoners. In November 1969, Marighella was killed in a police ambush, prompted by a series of robberies and kidnappings by the ALN.

In June 1969, Marighella published this short book (or pamphlet: it is just 40 pages with plenty of white space at the ends of chapters) as a guide for revolutionaries attacking Brazil's authoritarian regime in the big cities. There is little or no discussion of the reasons for the rebellion; the work is addressed to those already committed to the struggle who seek practical advice for wreaking mayhem in the streets. Marighella has entirely bought into the Mao/Guevara theory of revolution: that the ultimate struggle must take place in the countryside, with rural peasants rising en masse against the regime. The problem with this approach was that the peasants seemed to be more interested in eking out their subsistence from the land than taking up arms in support of ideas championed by a few intellectuals in the universities and big cities. So, Marighella's guide is addressed to those in the cities with the goal of starting the armed struggle where there were people indoctrinated in the communist ideology on which it was based. This seems to suffer from the “step two problem”. In essence, his plan is:

  1. Blow stuff up, rob banks, and kill cops in the big cities.
  2. ?
  3. Communist revolution in the countryside.

The book is a manual of tactics: formation of independent cells operating on their own initiative and unable to compromise others if captured, researching terrain and targets and planning operations, mobility and hideouts, raising funds through bank robberies, obtaining weapons by raiding armouries and police stations, breaking out prisoners, kidnapping and exchange for money and prisoners, sabotaging government and industrial facilities, executing enemies and traitors, terrorist bombings, and conducting psychological warfare.

One problem with this strategy is that, if you ignore the ideology which supposedly justifies and motivates this mayhem, it is, viewed from the outside, essentially indistinguishable from the actions of non-politically-motivated outlaws. As the author notes,

The urban guerrilla is a man who fights the military dictatorship with arms, using unconventional methods. A political revolutionary, he is a fighter for his country's liberation, a friend of the people and of freedom. The area in which the urban guerrilla acts is in the large Brazilian cities. There are also bandits, commonly known as outlaws, who work in the big cities. Many times assaults by outlaws are taken as actions by urban guerrillas.

The urban guerrilla, however, differs radically from the outlaw. The outlaw benefits personally from the actions, and attacks indiscriminately without distinguishing between the exploited and the exploiters, which is why there are so many ordinary men and women among his victims. The urban guerrilla follows a political goal and only attacks the government, the big capitalists, and the foreign imperialists, particularly North Americans.

These fine distinctions tend to be lost upon innocent victims, especially since the proceeds of the bank robberies of which the “urban guerrillas” are so fond are not used to aid the poor but rather to finance still more attacks by the ever-so-noble guerrillas pursuing their “political goal”.

This would likely have been an obscure and largely forgotten work of a little-known Brazilian renegade had it not been picked up, translated to English, and published in June and July 1970 by the Berkeley Tribe, a California underground newspaper. It became the terrorist bible of groups including Weatherman, the Black Liberation Army, and the Symbionese Liberation Army in the United States, the Red Army Faction in Germany, the Irish Republican Army, the Sandinistas in Nicaragua, and the Palestine Liberation Organisation. These groups embarked on crime and terror campaigns right out of Marighella's playbook with no more thought about step two. They are largely forgotten now because their futile acts had no permanent consequences and their existence was an embarrassment to the élites who largely share their pernicious ideology but have chosen to advance it through subversion, not insurrection.

A Kindle edition is available from a different publisher. You can read the book on-line for free at the Marxists Internet Archive.

 Permalink

Burrough, Bryan. Days of Rage. New York: Penguin Press, 2015. ISBN 978-0-14-310797-2.
In 1972 alone, there were more than 1900 domestic bombings in the United States. Think about that—that's more than five bombings a day. In an era when the occasional terrorist act by a “lone wolf” nutcase gets round-the-clock coverage on cable news channels, it's hard to imagine that not so long ago most of these bombings and other mayhem, committed by “revolutionary” groups such as Weatherman, the Black Liberation Army, FALN, and The Family, often made only the local newspapers, on page B37, below the fold.

The civil rights struggle and opposition to the Vietnam war had turned out large crowds and radicalised the campuses, but in the opinion of many activists, yielded few concrete results. Indeed, in the 1968 presidential election, pro-war Democrat Humphrey had been defeated by pro-war Republican Nixon, with anti-war Democrats McCarthy marginalised and Robert Kennedy assassinated.

In this bleak environment, a group of leaders of one of the most radical campus organisations, the Students for a Democratic Society (SDS), gathered in Chicago to draft what became a sixteen-thousand-word manifesto bristling with Marxist jargon that linked the student movement in the U.S. to Third World guerrilla insurgencies around the globe. They advocated a Che Guevara-like guerrilla movement in America led, naturally, by themselves. They named the manifesto after the Bob Dylan lyric, “You don't need a weatherman to know which way the wind blows.” Other SDS members, who thought the idea of armed rebellion in the U.S. absurd and insane, quipped, “You don't need a rectal thermometer to know who the assholes are.”

The Weatherman faction managed to blow up (figuratively) the SDS convention in June 1969, splitting the organisation but effectively taking control of it. They called a massive protest in Chicago for October. Dubbed the “National Action”, it would soon become known as the “Days of Rage”.

Almost immediately the Weatherman plans began to go awry. Their efforts to rally the working class (whom the Ivy League Weatherman élite mocked as “greasers”) got no traction, with some of their outrageous “actions” accomplishing little other than landing the perpetrators in the slammer. Come October, the Days of Rage ended in farce. Thousands had been expected, ready to take the fight to the cops and “oppressors”, but on the day no more than two hundred showed up, most of them SDS stalwarts who already knew one another. They charged the police and were quickly routed, with six shot (none seriously), many beaten, and more than 120 arrested. Bail bonds alone added up to US$ 2.3 million. It was a humiliating defeat. The leadership decided it was time to change course.

So what did this intellectual vanguard of the masses decide to do? Well, obviously, destroy the SDS (their source of funding and pipeline of recruitment), go underground, and start blowing stuff up. This posed a problem, because these middle-class college kids had no idea where to obtain explosives (they didn't know that, at the time, you could buy as much dynamite as you could afford over the counter in many rural areas by, at most, showing a driver's license), what to do with them, or how to build an underground identity. This led to not Keystone Kops but Klueless Kriminal misadventures, culminating in March 1970, when a bomb they were preparing for an attack on a dance at Fort Dix, New Jersey detonated prematurely, demolishing an entire New York townhouse and leaving three of the Weather collective dead in the rubble. In the aftermath, many Weather hangers-on melted away.

This did not deter the hard core, who resolved to learn more about their craft. They issued a communiqué declaring their solidarity with the oppressed black masses (not one of whom, oppressed or otherwise, was a member of Weatherman), and vowed to attack symbols of “Amerikan injustice”. Privately, they decided to avoid killing people, confining their attacks to property. And one of their members hit the books to become a journeyman bombmaker.

The bungling Bolsheviks of Weatherman may have had Marxist theory down pat, but they were lacking in authenticity, and acutely aware of it. It was hard for those whose addresses before going underground were élite universities to present themselves as oppressed. The best they could do was to identify themselves with the cause of those they considered victims of “the system” but who, to date, seemed little inclined to do anything about it themselves. Those who cheered on Weatherman, then, considered it significant when, in the spring of 1971, a new group calling itself the “Black Liberation Army” (BLA) burst onto the scene with two assassination-style murders of New York City policemen on routine duty. Messages delivered after each attack to Harlem radio station WLIB claimed responsibility. One declared,

Every policeman, lackey or running dog of the ruling class must make his or her choice now. Either side with the people: poor and oppressed, or die for the oppressor. Trying to stop what is going down is like trying to stop history, for as long as there are those who will dare to live for freedom there are men and women who dare to unhorse the emperor.

All power to the people.

Politicians, press, and police weren't sure what to make of this. The politicians, worried about the opinion of their black constituents, shied away from anything which sounded like accusing black militants of targeting police. The press, although they'd never write such a thing or speak it in polite company, didn't think it plausible that street blacks could organise a sustained revolutionary campaign: certainly that required college-educated intellectuals. The police, while threatened by these random attacks, weren't sure there was actually any organised group behind the BLA attacks: they were inclined to believe it was a matter of random cop killers attributing their attacks to the BLA after the fact. Further, the BLA had no visible spokesperson and issued no manifestos other than the brief statements after some attacks. This contributed to the mystery, which largely persists to this day because so many participants were killed and the survivors have never spoken out.

In fact, the BLA was almost entirely composed of former members of the New York chapter of the Black Panthers, which had collapsed in the split between factions following Huey Newton and those (including New York) loyal to Eldridge Cleaver, who had fled into exile in Algeria and advocated violent confrontation with the power structure in the U.S. The BLA would perpetrate more than seventy violent attacks between 1970 and 1976 and is said to be responsible for the deaths of thirteen police officers. In 1972, BLA members hijacked a domestic airline flight and pocketed a ransom of US$ 1 million.

Weatherman (later renamed the “Weather Underground” because the original name was deemed sexist) and the BLA represented the two poles of the violent radicals: the first, intellectual, college-educated, and mostly white, concentrated mostly on symbolic bombings against property, usually with warnings in advance to avoid human casualties. As pressure from the FBI increased, they became increasingly inactive; a member of the New York police squad assigned to them quipped, “Weatherman, Weatherman, what do you do? Blow up a toilet every year or two.” They did manage the escape of Timothy Leary from a minimum-security prison in California. Leary basically just walked away, with a group of Weatherman members, paid by Leary supporters, picking him up and arranging for him and his wife Rosemary to obtain passports under assumed names and flee the U.S. for exile in Algeria with former Black Panther leader Eldridge Cleaver.

The Black Liberation Army, being composed largely of ex-prisoners with records of violent crime, was not known for either the intelligence or impulse control of its members. On several occasions, what should have been merely tense encounters with the law turned into deadly firefights because a BLA militant opened fire for no apparent reason. Had they not been so deadly to those they attacked and innocent bystanders, the exploits of the BLA would have made a fine slapstick farce.

As the dour decade of the 1970s progressed, other violent underground groups would appear, tending to follow the model of either Weatherman or the BLA. One of the most visible, if not successful, was the “Symbionese Liberation Army” (SLA), founded by escaped convict and grandiose self-styled revolutionary Donald DeFreeze. Calling himself “General Field Marshal Cinque”, which he pronounced “sin-kay”, and ending his fevered communications with “DEATH TO THE FASCIST INSECT THAT PREYS UPON THE LIFE OF THE PEOPLE”, this band of murderous bozos struck their first blow for black liberation by assassinating Marcus Foster, the first black superintendent of the Oakland, California school system, for the “crimes against the people” of suggesting that police be called in to deal with violence in the city's schools and that identification cards be issued to students. Sought by the police for the murder, they struck again by kidnapping heiress, college student, and D-list celebrity Patty Hearst, whose abduction became front-page news nationwide. If that wasn't sufficiently bizarre, the abductee eventually issued a statement saying she had chosen to “stay and fight”, adopting the name “Tania”, after the nom de guerre of a Cuban revolutionary and companion of Che Guevara. She was later photographed by a surveillance camera carrying a rifle during a San Francisco bank robbery perpetrated by the SLA. Hearst then went underground and evaded capture until September 1975; when being booked into jail, she gave her occupation as “Urban Guerrilla”. She later claimed she had agreed to join the SLA and participate in its crimes only to protect her own life. She was convicted and sentenced to 35 years in prison, a sentence reduced to seven years and then commuted to 22 months by U.S. President Jimmy Carter; released in 1979, she received one of Bill Clinton's last-day-in-office pardons in January 2001. Six members of the SLA, including DeFreeze, died in a house fire during a shootout with the Los Angeles Police Department in May 1974.

Violence committed in the name of independence for Puerto Rico was nothing new. In 1950, two radicals tried to assassinate President Harry Truman, and in 1954, four revolutionaries shot up the U.S. House of Representatives from the visitors' gallery, wounding five congressmen on the floor, none fatally. The Puerto Rican terrorists had the same problem as their Weatherman, BLA, and SLA brethren: they lacked the support of the people. Most residents of Puerto Rico were perfectly happy being U.S. citizens, especially as this allowed them to migrate to the mainland to escape the endemic corruption and the poverty it engendered on the island. As the 1960s progressed, the Puerto Rican radicals increasingly identified with Castro's Cuba (which supported them ideologically, if not financially), and promised to make a revolutionary Puerto Rico a beacon of prosperity and liberty, as Cuba had become.

Starting in 1974, a new Puerto Rican terrorist group, the Fuerzas Armadas de Liberación Nacional (FALN) launched a series of attacks in the U.S., most in the New York and Chicago areas. One bombing, that of the Fraunces Tavern in New York in January 1975, killed four people and injured more than fifty. Between 1974 and 1983, a total of more than 130 bomb attacks were attributed to the FALN, most against corporate targets. In 1975 alone, twenty-five bombs went off, around one every two weeks.

Other groups, such as the “New World Liberation Front” (NWLF) in northern California and “The Family” in the East continued the chaos. The NWLF, formed originally from remains of the SLA, detonated twice as many bombs as the Weather Underground. The Family carried out a series of robberies, including the deadly Brink's holdup of October 1981, and jailbreaks of imprisoned radicals.

In the first half of the 1980s, the radical violence sputtered out. Most of the principals were in prison, dead, or living underground and keeping a low profile. A growing prosperity had replaced the malaise and stagflation of the 1970s and there were abundant jobs for those seeking them. The Vietnam War and draft were receding into history, leaving the campuses with little to protest, and the remaining radicals had mostly turned from violent confrontation to burrowing their way into the culture, media, administrative state, and academia as part of Gramsci's “long march through the institutions”.

All of these groups were plagued with the “step two problem”. The agenda of Weatherman was essentially:

  1. Blow stuff up, kill cops, and rob banks.
  2. ?
  3. Proletarian revolution.

Other groups may have had different step threes: “Black liberation” for the BLA, “¡Puerto Rico libre!” for the FALN, but none of them seemed to make much progress puzzling out step two. The best attempt came from deep thinker Bill Harris of the SLA who, advocating killing policemen at random, argued that “If they killed enough, … the police would crack down on the oppressed minorities of the Bay Area, who would then rise up and begin the revolution.”—sure thing.

In sum, all of this violence and the suffering that resulted from it accomplished precisely none of the goals of those who perpetrated it (which is a good thing: they mostly advocated one flavour or another of communist enslavement of the United States). All it managed to do was contribute to the constriction of personal liberty in the name of “security”, with metal detectors, bomb-sniffing dogs, X-ray machines, rent-a-cops, surveillance cameras, and the first round of airport security theatre springing up like mushrooms everywhere. The amount of societal disruption which can be caused by what amounted to around one hundred homicidal nutcases is something to behold. There were huge economic losses, not just from the bombings themselves but from evacuations due to bomb threats, many doubtless perpetrated by copycats motivated by nothing more political than the desire for a day off from work. Violations of civil liberties by the FBI and other law enforcement agencies, which carried out unauthorised wiretaps, burglaries, and other invasions of privacy and property rights, not only discredited them but resulted in many of the perpetrators of the mayhem walking away scot-free. Weatherman founders Bill Ayers and Bernardine Dohrn would, in 1995, launch the political career of Barack Obama at a meeting in their Chicago home; Ayers is now a Distinguished Professor at the University of Illinois at Chicago. Ayers, who bombed the U.S. Capitol in 1971 and the Pentagon in 1972, remarked in the 1980s that he was “Guilty as hell, free as a bird—America is a great country.”

This book is an excellent account of a largely-forgotten era in recent history. In a time when slaver radicals (a few of them the same people who set the bombs in their youth) declaim from the cultural heights of legacy media, academia, and their new strongholds in the technology firms which increasingly mediate our communications and access to information, advocate “active resistance”, “taking to the streets”, or “occupying” this or that, it's a useful reminder of where such action leads, and that it's wise to work out step two before embarking on step one.

 Permalink

Stross, Charles. Iron Sunrise. New York: Ace, 2005. ISBN 978-0-441-01296-1.
In Accelerando (July 2011), a novel assembled from nine previously-published short stories, the author chronicles the arrival of a technological singularity on Earth: the almost-instantaneously emerging super-intellect called the Eschaton which departed the planet toward the stars. Simultaneously, nine-tenths of Earth's population vanished overnight, and those left behind, after a period of chaos, found that with the end of scarcity brought about by “cornucopia machines” produced in the first phase of the singularity, they could dispense with anachronisms such as economic systems and government. After humans achieved faster than light travel, they began to discover that the Eschaton had relocated 90% of Earth's population to habitable worlds around various stars and left them to develop in their own independent directions, guided only by this message from the Eschaton, inscribed on a monument on each world.

  1. I am the Eschaton. I am not your god.
  2. I am descended from you, and I exist in your future.
  3. Thou shalt not violate causality within my historic light cone. Or else.

The wormholes used by the Eschaton to relocate Earth's population in the great Diaspora, a technology which humans had yet to understand, permitted not only instantaneous travel across interstellar distances but also travel in time: the more distant the planet from Earth, the longer the settlers deposited there have had to develop their own cultures and civilisations before being contacted by faster than light ships. With cornucopia machines to meet their material needs and allow them to bootstrap their technology, those that descended into barbarism or incessant warfare did so mostly due to bad ideas rather than their environment.

Rachel Mansour, secret agent for the Earth-based United Nations, operating under the cover of an entertainment officer (or, if you like, cultural attaché), whom we met in the previous novel in the series, Singularity Sky (February 2011), and her companion Martin Springfield, who has a back-channel to the Eschaton, serve as arms control inspectors. Their primary mission is to ensure that nothing anybody on Earth, or on the worlds which have purchased technology from Earth, does invites the wrath of the Eschaton—remember that “Or else.”

A terrible fate befalls the planet Moscow, a diaspora “McWorld” accomplished in technological development and trade, when its star, a G-type main sequence star like the Sun, explodes in a blast releasing a hundredth the energy of a supernova, destroying all life on planet Moscow within an instant of the wavefront reaching it, and the entire planet within an hour.

The problem is, type G stars just don't explode on their own. Somebody did this, quite likely using technologies which risk Big E's “or else” on whoever was responsible (or it concluded was responsible). What's more, Moscow maintained a slower-than-light deterrent fleet with relativistic planet-buster weapons to avenge any attack on their home planet. This fleet, essentially undetectable en route, has launched against New Dresden, a planet with which Moscow had a nonviolent trade dispute. The deterrent fleet can be recalled only by coded messages from two Moscow system ambassadors who survived the attack at their postings in other systems, but can also be sent an irrevocable coercion code, which cancels the recall and causes any further messages to be ignored, by three ambassadors. And somebody seems to be killing off the remaining Moscow ambassadors: if the number falls below two, the attack will arrive at New Dresden in thirty-five years and wipe out the planet and as many of its eight hundred million inhabitants as have not been evacuated.

Victoria Strowger, who detests her name and goes by “Wednesday”, has had an invisible friend since childhood, “Herman”, who speaks to her through her implants. As she's grown up, she has come to understand that, in some way, Herman is connected to Big E and, in return for advice and assistance she values highly, occasionally asks her for favours. Wednesday and her family were evacuated from one of Moscow's space stations just before the deadly wavefront from the exploded star arrived, with Wednesday running a harrowing last “errand” for Herman before leaving. Later, in her new home in an asteroid in the Septagon system, she becomes the target of an attack seemingly linked to that mystery mission, and escapes only to find her family wiped out by the attackers. With Herman's help, she flees on an interstellar liner.

While Singularity Sky was a delightful romp describing a society which had deliberately relinquished technology in order to maintain a stratified class system with the subjugated masses frozen around the Victorian era, suddenly confronted with the merry pranksters of the Festival, who inject singularity-epoch technology into its stagnant culture, Iron Sunrise is a much more conventional mystery/adventure tale about gaining control of the ambassadorial keys, figuring out who are the good and bad guys, and trying to avert a delayed but inexorably approaching genocide.

This just didn't work for me. I never got engaged in the story, didn't find the characters particularly interesting, and didn't come across any interesting ways in which the singularity came into play (and this is supposed to be the author's “Singularity Series”). There are some intriguing concepts, for example the “causal channel”, in which quantum-entangled particles permit instantaneous communication across spacelike separations as long as the previously-prepared entangled particles have first been delivered to the communicating parties by slower than light travel. This is used in the plot to break faster than light communication where it would be inconvenient for the story line (much like all those circumstances in Star Trek where the transporter doesn't work for one reason or another just when you're tempted to say “Why don't they just beam up?”). The apparent villains, the ReMastered (think Space Nazis who believe in a Tipler-like Omega Point cult, out-Eschaton-ing the Eschaton with icky brain-sucking technology), were just over the top.

Accelerando and Singularity Sky were thought-provoking and great fun. This one doesn't come up to that standard.

 Permalink

  2019  

February 2019

Dutton, Edward and Michael A. Woodley of Menie. At Our Wits' End. Exeter, UK: Imprint Academic, 2018. ISBN 978-1-84540-985-2.
During the Great Depression, the Empire State Building was built, from the beginning of foundation excavation to official opening, in 410 days (less than 14 months). After the destruction of the World Trade Center in New York on September 11, 2001, design and construction of its replacement, the new One World Trade Center was completed on November 3, 2014, 4801 days (160 months) later.

In the 1960s, from U.S. president Kennedy's proposal of a manned lunar mission to the landing of Apollo 11 on the Moon, 2978 days (almost 100 months) elapsed. In January, 2004, U.S. president Bush announced the “Vision for Space Exploration”, aimed at a human return to the lunar surface by 2020. After a comical series of studies, revisions, cancellations, de-scopings, redesigns, schedule slips, and cost overruns, its successor now plans to launch a lunar flyby mission (not even a lunar orbit like Apollo 8) in June 2022, 224 months later. A lunar landing is planned for no sooner than 2028, almost 300 months after the “vision”, and almost nobody believes that date (the landing craft design has not yet begun, and there is no funding for it in the budget).

Wherever you look: junk science; universities corrupted with bogus “studies” departments; politicians peddling discredited nostrums a moment's critical thinking reveals to be folly; an economy built upon an ever-increasing tower of debt that nobody really believes will ever be paid off; and, in every field from science and technology to public policy and the arts, a dearth of major, genuine innovations (as opposed to incremental refinements of existing technologies, such as have driven the computing, communications, and information technology industries). It often seems like the world is getting dumber. What if it really is?

That is the thesis explored by this insightful book, which is packed with enough “hate facts” to detonate the head of any bien pensant academic or politician. I define a “hate fact” as something which is indisputably true, well-documented by evidence in the literature, and uncontradicted, but the citation of which is considered “hateful”, can unleash outrage mobs upon anyone so foolish as to utter the fact in public, and can be a career-limiting move for those employed in Social Justice Warrior-converged organisations. (An example of a hate fact, unrelated to the topic of this book, is the FBI violent crime statistics broken down by the race of the criminal and victim. Nobody disputes the accuracy of this information or the methodology by which it is collected, but woe betide anyone so foolish as to cite the data or draw the obvious conclusions from it.)

In April 2004 I made my own foray into the question of declining intelligence in “Global IQ: 1950–2050” in which I combined estimates of the mean IQ of countries with census data and forecasts of population growth to estimate global mean IQ for a century starting at 1950. Assuming the mean IQ of countries remains constant (which is optimistic, since part of the population growth in high IQ countries with low fertility rates is due to migration from countries with lower IQ), I found that global mean IQ, which was 91.64 for a population of 2.55 billion in 1950, declined to 89.20 for the 6.07 billion alive in 2000, and was expected to fall to 86.32 for the 9.06 billion population forecast for 2050. This is mostly due to the explosive population growth forecast for Sub-Saharan Africa, where many of the populations with low IQ reside.

U.N. World Population Prospects: 2017 Revision
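The arithmetic behind those global figures is simply a population-weighted average of national mean IQs. The sketch below illustrates the computation; the regional populations and mean IQ values are hypothetical round numbers chosen for illustration, not the country-level dataset used in the original analysis.

```python
# Population-weighted mean IQ: a minimal illustrative sketch.
# The regional figures below are hypothetical stand-ins, not the
# country-level data behind the estimates quoted above.

def weighted_mean_iq(regions):
    """regions: list of (population_in_billions, mean_iq) tuples."""
    total_pop = sum(pop for pop, _ in regions)
    return sum(pop * iq for pop, iq in regions) / total_pop

# Hypothetical regional breakdown (billions, mean IQ):
regions = [
    (1.3, 105),   # East Asia
    (0.7, 99),    # Europe and North America
    (2.4, 84),    # Sub-Saharan Africa
    (4.66, 87),   # rest of world
]

print(f"Global mean IQ: {weighted_mean_iq(regions):.2f}")
```

Because the weights are raw population counts, rapid growth in a low-IQ region pulls the global mean down even when every national mean stays constant, which is exactly the mechanism driving the projected decline from 1950 to 2050.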

This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90.

But while I was examining global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today's developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”, although non-controversial among those who study these matters in detail.

  1. There is a factor, “general intelligence” or g, which measures the ability to solve a wide variety of mental problems, and this factor, measured by IQ tests, is largely stable across an individual's life.
  2. Intelligence, as measured by IQ tests, is, like height, in part heritable. The heritability of IQ is estimated at around 80%, which means that about 80% of the variation in IQ among individuals in a population is attributable to genetic differences, with the remaining 20% due to other factors.
  3. IQ correlates positively with factors contributing to success in society. The correlation with performance in education is 0.7, with highest educational level completed 0.5, and with salary 0.3.
  4. In Europe, between 1400 and around 1850, the wealthier half of the population had more children who survived to adulthood than the poorer half.
  5. Because IQ correlates with social success, that portion of the population which was more intelligent produced more offspring.
  6. Just as in selective breeding of animals by selecting those with a desired trait for mating, this resulted in a population whose average IQ increased (slowly) from generation to generation over this half-millennium.

The gradually rising IQ of the population resulted in a growing standard of living as knowledge and inventions accumulated due to the efforts of those with greater intelligence over time. In particular, even a relatively small increase in the mean IQ of a population makes an enormous difference in the tiny fraction of people with “genius level” IQ who are responsible for many of the significant breakthroughs in all forms of human intellectual endeavour. If we consider an IQ of 145 as genius level, in a population of a million with a mean IQ of 100, one in 741 people will have an IQ of 145 or above, so there will be around 1350 people with such an IQ. But if the population's mean IQ is 95, just five points lower, only one in 2331 people will have a genius level IQ, and there will be just 429 potential geniuses in the population of a million. In a population of a million with a mean IQ of 90, there will be just 123 potential geniuses.

(Some technical details are in order. A high IQ [generally 125 or above] appears to be a necessary condition for genius-level achievement, but it is insufficient by itself. Those who produce feats of genius usually combine high intelligence with persistence, ambition, often a single-minded focus on a task, and usually require an environment which allows them to acquire the knowledge and intellectual tools required to apply their talent. But since a high IQ is a requirement, the mean IQ determines what fraction of the population are potential geniuses; other factors such as the society's educational institutions, resources such as libraries, and wealth which allows some people to concentrate on intellectual endeavours instead of manual labour, contribute to how many actual works of genius will be produced. The mean IQ of most Western industrial nations is around 100, and the standard deviation of IQ is normalised to be 15. Using this information you can perform calculations such as those in the previous paragraph using Fourmilab's z Score Calculator, as explained in my Introduction to Probability and Statistics.)
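For readers who would rather script the calculation than use the z Score Calculator, the genius fractions quoted above can be reproduced with nothing but Python's standard library (this is my own illustrative sketch, not code from the book):

```python
from statistics import NormalDist

# Fraction of a population above a "genius level" IQ of 145, for mean
# IQs of 100, 95, and 90, with the standard deviation fixed at 15.
# These reproduce the figures quoted above: roughly 1350, 429, and
# 123 potential geniuses per million.

def geniuses_per_million(mean_iq, threshold=145, sd=15):
    # Upper-tail probability of the normal distribution, scaled to a
    # population of one million.
    tail = 1.0 - NormalDist(mean_iq, sd).cdf(threshold)
    return tail * 1_000_000

for mean in (100, 95, 90):
    n = geniuses_per_million(mean)
    print(f"mean IQ {mean}: {n:7.1f} per million (1 in {1_000_000 / n:,.0f})")
```

The steep non-linearity is the point: a five-point drop in the mean cuts the genius fraction roughly threefold, because the threshold sits far out on the thin tail of the distribution.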

Of the pillars of the argument listed above, items 1 through 3 are noncontroversial except by those who deny the existence of general intelligence entirely or the ability of IQ tests to measure it. The authors present the large body of highly persuasive evidence in favour of those items in a form accessible to the non-specialist. If you reject that evidence, then you needn't consider the rest of the argument.

Item 4, the assertion that wealthier families had more children survive to adulthood, is substantiated by a variety of research, much of it done in England, where recorded wills and church records of baptisms and deaths provide centuries of demographic data. One study, for example, examining wills filed between 1585 and 1638 in Suffolk and Essex found that the richer half of estates (determined by the bequests in the wills) had almost twice as many children named in wills compared to the poorer half. An investigation of records in Norfolk covering the years 1500 to 1630 found an average of four children for middle class families as opposed to two for the lower class. Another, covering Saxony in Germany between 1547 and 1671, found the middle class had an average of 3.4 children who survived to become married, while the working class had just 1.6. This differential fertility seems, in conjunction with item 5, the known correlation between intelligence and social success, to make plausible that a process of selection for intelligence was going on, and probably had been for centuries. (Records are sparse before the 17th century, so detailed research for that period is difficult.)

Another form of selection got underway as the middle ages gave way to the early modern period around the year 1500 in Europe. While in medieval times criminals were rarely executed due to opposition by the Church, by the early modern era almost all felonies received the death penalty. This had the effect of “culling the herd” of its most violent members who, being predominantly young, male, and of low intelligence, would often be removed from the breeding population before fathering any children. To the extent that the propensity to violent crime is heritable (which seems plausible, as almost all human characteristics are heritable to one degree or another), this would have “domesticated” the European human population and contributed to the well-documented dramatic drop in the murder rate in this period. It would have also selected out those of low intelligence, who are prone to violent crime. Further, in England, there was a provision called “Benefit of Clergy” where those who could demonstrate literacy could escape the hangman. This was another selection for intelligence.

If intelligence was gradually increasing in Europe from the middle ages through the time of the Industrial Revolution, can we find evidence of this in history? Obviously, we don't have IQ tests from that period, but there are other suggestive indications. Intelligent people have lower time preference: they are willing to defer immediate gratification for a reward in the future. The rate of interest on borrowed money is a measure of a society's overall time preference. Data covering the period from 1150 through 1950 found that interest rates had declined over the entire time, from over 10% in the year 1200 to around 5% in the 1800s. This is consistent with an increase in intelligence.

Literacy correlates with intelligence, and records from marriage registers and court documents show continually growing literacy from 1580 through 1920. In the latter part of this period, the introduction of government schools contributed to much of the increase, but in early years it may reflect growing intelligence.

A population with growing intelligence should produce more geniuses who make contributions which are recorded in history. In a 2005 study, American physicist Jonathan Huebner compiled a list of 8,583 significant events in the history of science and technology from the Stone Age through 2004. He found that, after adjusting for the total population of the time, the rate of innovation per capita had quadrupled between 1450 and 1870. Independently, Charles Murray's 2003 book Human Accomplishment found that the rate of significant accomplishments, and the number of exceptional figures who achieved them, increased from the Middle Ages through the 1870s.

The authors contend that a growing population with increasing mean intelligence eventually reached a critical mass which led to the industrial revolution, due to a sufficiently large number of genius intellects alive at the same time and an intelligent workforce who could perform the jobs needed to build and operate the new machines. This created unprecedented prosperity and dramatically increased the standard of living throughout the society.

And then an interesting thing happened. It's called the “demographic transition”, and it's been observed in country after country as it develops from a rural, agrarian economy to an urban, industrial society. Pre-industrial societies are characterised by a high birth rate, a high rate of infant and childhood mortality, and a stable or very slowly growing population. Families have many children in the hope of having a few survive to adulthood to care for them in old age and pass on their parents' genes. It is in this phase that the intense selection pressure obtains: the better-off and presumably more intelligent parents will have more children survive to adulthood.

Once industrialisation begins, it is usually accompanied by public health measures, better sanitation, improved access to medical care, and the introduction of innovations such as vaccination, antiseptics, and surgery with anæsthesia. This results in a dramatic fall in the mortality rate for the young, larger families, and an immediate bulge in the population. As social welfare benefits are extended to reach the poor through benefits from employers, charity, or government services, this occurs more broadly across social classes, reducing the disparity in family sizes among the rich and poor.

Eventually, parents begin to see the advantage of smaller families now that they can be confident their offspring have a high probability of surviving to adulthood. This is particularly the case for the better-off, as they realise their progeny will gain an advantage by splitting their inheritance fewer ways and in receiving the better education a family can afford for fewer children. This results in a decline in the birth rate, which eventually reaches the replacement rate (or below), where it comes into line with the death rate.

But what does this do to the selection for intelligence from which humans have been benefitting for centuries? It ends it, and eventually puts it into reverse. In country after country, the better educated and well-off (both correlates of intelligence) have fewer children than the less intelligent. This is easy to understand: in the prime child-bearing years they tend to be occupied with their education and starting a career. They marry later, have children (if at all) at an older age, and due to the female biological clock, have fewer kids even if they desire more. They also use contraception to plan their families and tend to defer having children until the “right time”, which sometimes never comes.

Meanwhile, the less intelligent, who in the modern welfare state are often clients on the public dole, who have less impulse control, high time preference, and when they use contraception often do so improperly resulting in unplanned pregnancies, have more children. They start earlier, don't bother with getting married (as the stigma of single motherhood has largely been eliminated), and rely upon the state to feed, house, educate, and eventually imprison their progeny. This sad reality was hilariously mocked in the introduction to the 2006 film Idiocracy.

While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I'm talking about China, Korea, and [to a lesser extent] Japan.)

If you've followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, measured by standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is.

What's going on, then? The authors join James Flynn in concluding that what we're seeing is an increase in the population's proficiency in taking IQ tests, not an actual increase in general intelligence (g). Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn't make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we're seeing may be specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening.

Norway, Sweden, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database, starting in 1950, of men in these countries, updated yearly. What is seen is an increase in IQ as expected from the Flynn effect from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.)

There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests. They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since.

If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China.

And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this crosses the “ick factor”? What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn't make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we've learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans.

Or, maybe we won't. In which case, “Winter has come and it's only going to get colder. Wrap up warm.”

Here is a James Delingpole interview of the authors and discussion of the book.


April 2019

Nelson, Roger D. Connected: The Emergence of Global Consciousness. Princeton: ICRL Press, 2019. ISBN 978-1-936033-35-5.
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing and destined to eventually reach an Omega Point in which, just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, eventually individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.

Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact in ways other than physical (ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.

Such beliefs have been common over much of human history and in a wide variety of very diverse cultures around the globe, but in recent centuries these beliefs have been displaced by the view of mechanistic, reductionist science, which argues that the brain is just a kind of (phenomenally complicated) biological computer and that consciousness can be thought of as an emergent phenomenon which arises when the brain computer's software becomes sufficiently complex to be able to examine its own operation. From this perspective, consciousness is confined within the brain, cannot affect the outside world or the consciousness of others except by physical interactions initiated by motor neurons, and perceives the world only through sensory neurons. There is no “consciousness field”, and individual consciousness dies when the brain does.

But while this view is more in tune with the scientific outlook which spawned the technological revolution that has transformed the world and continues to accelerate, it has, so far, made essentially zero progress in understanding consciousness. Although we have built electronic computers which can perform mathematical calculations trillions of times faster than the human brain, and are on track to equal the storage capacity of that brain some time in the next decade or so, we still don't have the slightest idea how to program a computer to be conscious: to be self-aware and act out of a sense of free will (if free will, however defined, actually exists). So, if we adopt a properly scientific and sceptical view, we must conclude that the jury is still out on the question of consciousness. If we don't understand enough about it to program it into a computer, then we can't be entirely confident that it is something we could program into a computer, or that it is just some kind of software running on our brain-computer.

It looks like humans are, dare I say, programmed to believe in consciousness as a force not confined to the brain. Many cultures have developed shamanism, religions, philosophies, and practices which presume the existence of the following kinds of what Dean Radin calls Real Magic, and which I quote from my review of his book with that title.

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.

Starting in the 19th century, a small number of scientists undertook to investigate whether these phenomena could possibly be real, whether they could be demonstrated under controlled conditions, and what mechanism might explain these kinds of links between consciousness and will and the physical world. In 1882 the Society for Psychical Research was founded in London and continues to operate today, publishing three journals. Psychic research, now more commonly called parapsychology, continues to investigate the interaction of consciousness with the outside world through (unspecified) means other than the known senses, usually in laboratory settings where great care is taken to ensure no conventional transfer of information occurs and with elaborate safeguards against fraud, either by experimenters or test subjects. For a recent review of the state of parapsychology research, I recommend Dean Radin's excellent 2006 book, Entangled Minds.

Parapsychologists such as Radin argue that while phenomena such as telepathy, precognition, and psychokinesis are very weak effects, elusive, and impossible to produce reliably on demand, the statistical evidence for their existence from large numbers of laboratory experiments is overwhelming, with a vanishingly small probability that the observed results are due to chance. Indeed, the measured confidence levels and effect sizes of some categories of parapsychological experiments exceed those of medical clinical trials such as those which resulted in the recommendation of routine aspirin administration to reduce the risk of heart disease in older males.

For more than a quarter of a century, an important centre of parapsychology research was the Princeton Engineering Anomalies Research (PEAR) laboratory, established in 1979 by Princeton University's Dean of Engineering, Robert G. Jahn. (The lab closed in 2007 with Prof. Jahn's retirement, and has now been incorporated into the International Consciousness Research Laboratories, which is the publisher of the present book.) An important part of PEAR's research was with electronic random event generators (REGs) connected to computers in experiments where a subject (or “operator”, in PEAR terminology) would try to influence the generator to produce an excess of one or zero bits. In a large series of experiments [PDF] run over a period of twelve years with multiple operators, it was reported that an influence in the direction of the operator's intention was seen, with a probability that the result was due to chance of one in a trillion. The effect size was minuscule, with around one bit in ten thousand flipping in the direction of the operator's stated goal.

If one operator can produce a tiny effect on the random data, what if many people were acting together, not necessarily with active intention, but with their consciousnesses focused on a single thing, for example at a sporting event, musical concert, or religious ceremony? The miniaturisation of electronics and computers eventually made it possible to build a portable REG and computer which could be taken into the field. This led to the FieldREG experiments in which this portable unit was taken to a variety of places and events to monitor its behaviour. The results were suggestive of an effect, but the data set was far too small to be conclusive.

Mindsong random event generator

In 1998, Roger D. Nelson, the author of this book, realised that the rapid development and worldwide deployment of the Internet made it possible to expand the FieldREG concept to a global scale. Random event generators based upon quantum effects (usually shot noise from tunnelling across a back-biased Zener diode or a resistor) had been scaled down to small, inexpensive devices which could be attached to personal computers via an RS-232 serial port. With more and more people gaining access to the Internet (originally mostly via dial-up to commercial Internet Service Providers, then increasingly via persistent broadband connections such as ADSL service over telephone wires or a cable television connection), it might be possible to deploy a network of random event generators at locations all around the world, each of which would constantly collect timestamped data which would be transmitted to a central server, collected there, and made available to researchers for analysis by whatever means they chose to apply.

As Roger Nelson discussed the project with his son Greg (who would go on to be the principal software developer for the project), Greg suggested that what was proposed was essentially an electroencephalogram (EEG) for the hypothetical emerging global mind, an “ElectroGaiaGram” or EGG. Thus was born the “EGG Project” or, as it is now formally called, the Global Consciousness Project. Just as the many probes of an EEG provide a (crude) view into the operation of a single brain, perhaps the wide-flung, always-on network of REGs would pick up evidence of coherence when a large number of the world's minds were focused on a single event or idea. Once the EGG project was named, terminology followed naturally: the individual hosts running the random event generators would be “eggs” and the central data archiving server the “basket”.

In April 1998, Roger Nelson released the original proposal for the project and shortly thereafter Greg Nelson began development of the egg and basket software. I became involved in the project in mid-summer 1998 and contributed code to the egg and basket software, principally to allow it to be portable to other variants of Unix systems (it was originally developed on Linux) and machines with different byte order than the Intel processors on which it ran, and also to reduce the resource requirements on the egg host, making it easier to run on a non-dedicated machine. I also contributed programs for the basket server to assemble daily data summaries from the raw data collected by the basket and to produce a real-time network status report. Evolved versions of these programs remain in use today, more than two decades later. On August 2nd, 1998, I began to run the second egg in the network, originally on a Sun workstation running Solaris; this was the first non-Linux, non-Intel, big-endian egg host in the network. A few days later, I brought up the fourth egg, running on a Sun server in the Hall of the Servers one floor below the second egg; this used a different kind of REG, but was otherwise identical. Both of these eggs have been in continuous operation from 1998 to the present (albeit with brief outages due to power failures, machine crashes, and other assorted disasters over the years), and have migrated from machine to machine over time. The second egg is now connected to a Raspberry Pi running Linux, while the fourth is now hosted on a Dell Intel-based server also running Linux, which was the first egg host to run on a 64-bit machine in native mode.

Here is precisely how the network collects the data from which deviations from chance expectation are computed. The egg hosts all run a Network Time Protocol (NTP) client to provide accurate synchronisation with Internet time server hosts which are ultimately synchronised to atomic clocks or GPS. At the start of every second a total of 200 bits are read from the random event generator. Since all the existing generators provide eight bits of random data transmitted as bytes on a 9600 baud serial port, this involves waiting until the start of the second, reading 25 bytes from the serial port (first flushing any potentially buffered data), then breaking the eight bits out of each byte of data. A precision timing loop guarantees that the sampling starts at the beginning of the second-long interval to the accuracy of the computer's clock.

This process produces 200 random bits. These bits, one or zero, are summed to produce a “sample” which counts the number of one bits for that second. This sample is stored in a buffer on the egg host, along with a timestamp (in Unix time() format), which indicates when it was taken.
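
The bit-counting step can be sketched in a few lines of Python (a hypothetical illustration only; the actual egg software is not written in Python):

```python
def sample_from_reg_bytes(raw: bytes) -> int:
    """Count the one bits in 25 bytes (200 bits) read from the
    random event generator; for fair bits the expected count is 100."""
    assert len(raw) == 25
    return sum(bin(b).count("1") for b in raw)

# 0b10101010 contains four one bits, so 25 such bytes give exactly 100.
print(sample_from_reg_bytes(bytes([0b10101010] * 25)))  # 100
```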

Buffers of completed samples are archived in files on the egg host's file system. Periodically, the basket host will contact the egg host over the Internet and request any samples collected after the last packet it received from the egg host. The egg will then transmit any newer buffers it has filled to the basket. All communications are performed over the stateless UDP Internet protocol, and the design of the basket request and egg reply protocol is robust against loss of packets or packets being received out of order.

(This data transfer protocol may seem odd, but recall that the network was designed more than twenty years ago when many people, especially those outside large universities and companies, had dial-up Internet access. The architecture would allow a dial-up egg to collect data continuously and then, when it happened to be connected to the Internet, respond to a poll from the basket and transmit its accumulated data during the time it was connected. It also makes the network immune to random outages in Internet connectivity. Over two decades of operation, we have had exactly zero problems with Internet outages causing loss of data.)

When a buffer from an egg host is received by the basket, it is stored in a database directory for that egg. The buffer contains a time stamp identifying the second at which each sample within it was collected. All times are stored in Universal Time (UTC), so no correction for time zones or summer and winter time is required.

This is the entire collection process of the network. The basket host, which was originally located at Princeton University and now is on a server at global-mind.org, only stores buffers in the database. Buffers, once stored, are never modified by any other program. Bad data, usually long strings of zeroes or ones produced when a hardware random event generator fails electrically, are identified by a “sanity check” program and then manually added to a “rotten egg” database which causes these sequences to be ignored by analysis programs. The random event generators are very simple and rarely fail, so this is a very unusual circumstance.

The raw database format is difficult for analysis programs to process, so every day an automated program (which I wrote) is run which reads the basket database, extracts every sample collected for the previous 24 hour period (or any desired 24 hour window in the history of the project), and creates a day summary file with a record for every second in the day with a column for the samples from each egg which reported that day. Missing data (eggs which did not report for that second) is indicated by a blank in that column. The data are encoded in CSV format which is easy to load into a spreadsheet or read with a program. Because some eggs may not report immediately due to Internet outages or other problems, the summary data report is re-generated two days later to capture late-arriving data. You can request custom data reports for your own analysis from the Custom Data Request page. If you are interested in doing your own exploratory analysis of the Global Consciousness Project data set, you may find my EGGSHELL C++ libraries useful.
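
As a sketch of how such a day summary might be consumed in Python (the column names here are hypothetical, invented for illustration; consult the project's documentation for the actual layout):

```python
import csv
import io

# A miniature, hypothetical day-summary fragment: a timestamp column
# followed by one column per egg; blanks mark seconds an egg missed.
day_summary = io.StringIO(
    "time,egg1,egg2\n"
    "900000000,103,\n"
    "900000001,97,101\n"
)

samples = []  # one (timestamp, {egg: one-bit count}) pair per second
for row in csv.DictReader(day_summary):
    t = int(row.pop("time"))
    samples.append((t, {egg: int(v) for egg, v in row.items() if v != ""}))

print(samples[0])  # (900000000, {'egg1': 103})
```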

The analysis performed by the Project proceeds from these summary files as follows.

First, we observe that each sample (xi) from egg i consists of 200 bits with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).

Then, for each sample, we can compute its Z-score as Zi = (xi − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.
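
In Python, with the constants above, this is just (a minimal sketch, not the project's analysis code):

```python
from math import erfc, sqrt

MU = 100.0                     # expected one-bit count in 200 fair bits
SIGMA = sqrt(200 * 0.5 * 0.5)  # ≈ 7.071

def z_score(sample: int) -> float:
    """Standard score of one egg's sample for a single second."""
    return (sample - MU) / SIGMA

def chance_probability(z: float) -> float:
    """Two-tailed probability of a deviation at least this large by chance."""
    return erfc(abs(z) / sqrt(2))

print(z_score(100))  # 0.0, exactly the chance expectation
```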

It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

Z = (Z1 + Z2 + ⋯ + Zk) / √k

over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance.

Squaring this composite Z-score over all k eggs gives a chi-squared distributed value, V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and the number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds.
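
A sketch of this combination step in Python, assuming the per-egg Z-scores for each second have already been computed:

```python
from math import sqrt

def stouffer_z(z_scores):
    """Network-wide Z for one second: sum the k per-egg Z-scores
    and divide by the square root of k (Stouffer's formula)."""
    return sum(z_scores) / sqrt(len(z_scores))

def window_chi_squared(per_second_z):
    """Sum V = Z**2 across seconds; the result is chi-squared
    distributed with one degree of freedom per second summed."""
    chisq = sum(z * z for z in per_second_z)
    return chisq, len(per_second_z)  # (chi-squared sum, degrees of freedom)

# Four eggs each deviating by one standard deviation in the same
# direction combine to a composite Z of 4 / sqrt(4) = 2.
print(stouffer_z([1.0, 1.0, 1.0, 1.0]))  # 2.0
```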

The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.
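
One common way to construct such a trace (a sketch of the general technique, not the project's plotting code): since each squared Z-score has an expectation of 1, subtracting 1 per second makes chance data wander around zero.

```python
from itertools import accumulate

def cumulative_deviation(per_second_z):
    """Running sum of (Z**2 - 1) over the event window; purely random
    data produces a trendless walk around zero."""
    return list(accumulate(z * z - 1.0 for z in per_second_z))

# Scores of magnitude 1 contribute nothing, so the trace stays flat:
print(cumulative_deviation([1.0, -1.0, 1.0]))  # [0.0, 0.0, 0.0]
```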

Cumulative deviation: State of the Union 2012

What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)?

Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.

Cumulative deviation: Gadaffi killed, 2011-10-20

Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111.

What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I've shown you here, but in the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes.)

The significance of the entire ensemble of tests can be computed from the 500 formal predictions in the Hypothesis Registry and the network results for the periods in which a non-random effect was predicted. To compute this, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.
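One standard way to combine independent per-event Z-scores into a single cumulative Z is Stouffer's method; I believe this is essentially what the project does, but the sketch below is mine, and the event Z-scores in it are made up (the real registry has 500):

```python
import math

# Hypothetical per-event Z-scores (invented for illustration):
event_z = [0.8, -0.3, 1.2, 0.1, 0.9, -0.5, 1.4, 0.6, 0.2, 1.1]

# Stouffer's method: the sum of N independent standard-normal Z-scores,
# divided by sqrt(N), is again a standard-normal Z under the null.
z_combined = sum(event_z) / math.sqrt(len(event_z))

# One-tailed chance probability of a combined Z at least this large,
# via the complementary error function:
def z_to_p(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"combined Z = {z_combined:.2f}, p = {z_to_p(z_combined):.4f}")

# The same conversion for a Z of 7.31 (the full-database figure):
print(f"Z = 7.31 -> p = {z_to_p(7.31):.2e}")
```

The conversion confirms that a Z-score of 7.31 corresponds to a chance probability on the order of 10⁻¹³, well under one in a trillion.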

Cumulative deviation: GCP 1998 through 2015

Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the probability of the observed results being due to chance is less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. Could even this small a deviation be expected in truly random data? To check, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data drawn from a pseudorandom normal distribution.
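The comparison with pseudorandom data can be sketched in a few lines: draw simulated “registries” of 500 standard-normal event scores, combine each Stouffer-style, and see how large a composite Z ever turns up. (The run count and seed here are arbitrary; this is my sketch, not the project's simulation code.)

```python
import math
import random

random.seed(7)

N_EVENTS = 500   # same number of formal predictions as the registry
N_RUNS = 200     # number of simulated null experiments

# Under the null, each run's combined Z (sum of the event Z-scores
# divided by sqrt(N)) is itself standard normal, so values anywhere
# near 7.3 should essentially never occur.
max_z = float("-inf")
for _ in range(N_RUNS):
    zs = [random.gauss(0.0, 1.0) for _ in range(N_EVENTS)]
    z_comb = sum(zs) / math.sqrt(N_EVENTS)
    max_z = max(max_z, z_comb)

print(f"largest combined Z in {N_RUNS} pseudorandom runs: {max_z:.2f}")
```

The largest combined Z among a few hundred null runs typically lands around 2.5 to 3; nothing remotely like 7.31 appears, which is the point of the comparison plot.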

Cumulative deviation: GCP results versus pseudorandom simulations

Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.

Cumulative deviation of chi-square: terrorist attacks 2001-09-11

The final probability was 0.028, which is equivalent to odds of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of a probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction and the magnitude of the deviations from the expectation for randomness, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated; they are, rather, exploratory analyses of the data to see if other interesting patterns might be present.
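The odds figure is just an arithmetic restatement of the probability; nothing project-specific is involved in the conversion:

```python
# Converting a chance probability into odds against chance:
p = 0.028
odds_against = (1.0 - p) / p
print(f"p = {p} -> about {odds_against:.0f} to one against chance")
```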

Cumulative deviation of Z-score: terrorist attacks 2001-09-11

Had this form of analysis and time window been chosen a priori, the result would have had a chance probability of 0.000075, less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.

Cumulative deviation of chi-square: seven day window around 2001-09-11

Another analysis looks at a twenty-hour period centred on the attacks and smooths the Z-scores by averaging them within a one-hour sliding window, then squares the average and converts it to odds against chance.
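Here is one plausible reading of that procedure, sketched on simulated null data; the window length is from the description above, but the normalisation and the two-tailed conversion to odds are my assumptions, not the project's published code:

```python
import math
import random

random.seed(11)

WINDOW = 3600                                            # one hour of seconds
z = [random.gauss(0.0, 1.0) for _ in range(20 * 3600)]   # twenty hours, null data

# Sliding one-hour mean of the Z-scores, maintained with a running sum.
odds = []
running = sum(z[:WINDOW])
for i in range(WINDOW, len(z) + 1):
    # The mean of W standard normals has standard deviation 1/sqrt(W),
    # so multiplying by sqrt(W) renormalises it back to a Z.  Squaring
    # that Z gives a chi-squared value with one degree of freedom, whose
    # tail probability equals the two-tailed normal probability of |Z|.
    z_win = (running / WINDOW) * math.sqrt(WINDOW)
    p = math.erfc(abs(z_win) / math.sqrt(2.0))           # two-tailed probability
    odds.append(1.0 / p if p > 0.0 else float("inf"))
    if i < len(z):
        running += z[i] - z[i - WINDOW]

print(f"peak odds against chance in twenty hours of null data: {max(odds):.0f} to one")
```

On pure noise the peak odds stay modest; the striking spikes in the chart below are what the null data fail to reproduce.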

Odds: twenty hour window around 2001-09-11, one hour smoothing

Dean Radin performed an independent analysis, binning the Z-score data into five-minute intervals over the period from September 6 to 13, then calculating the odds against each interval's result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of the corresponding day.

Binned odds: 2001-09-06 to 2001-09-13

The following is the result when the actual GCP data from September 2001 are replaced with pseudorandom data for the same period.

Binned odds: pseudorandom data 2001-09-06 to 2001-09-13

So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.

If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere.

The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.

A Kindle edition is available.

All graphs in this article are as posted on the Global Consciousness Project Web site.

 Permalink

Corcoran, Travis J. I. The Powers of the Earth. New Hampshire: Morlock Publishing, 2017. ISBN 978-1-9733-1114-0.
Corcoran, Travis J. I. Causes of Separation. New Hampshire: Morlock Publishing, 2018. ISBN 978-1-9804-3744-4.
(Note: This novel is the first of an envisioned four volume series titled Aristillus. It and the second book, Causes of Separation, published in May, 2018, together tell a single story which reaches a decisive moment just as the first book ends. Unusually, this will be a review of both novels, taken as a whole. If you like this kind of story at all, there's no way you'll not immediately plunge into the second book after setting down the first.)

Around the year 2050, collectivists were firmly in power everywhere on Earth. Nations were subordinated to the United Nations, whose force of Peace Keepers (PKs) had absorbed all but elite special forces, and were known for being simultaneously brutal, corrupt, and incompetent. (Due to the equality laws, military units had to contain a quota of “Alternatively Abled Soldiers” whom other troops had to wheel into combat.) The United States still existed as a country, but after decades of rule by the two factions of the Democrat party, Populist and Internationalist, was mired in stagnation, bureaucracy, crumbling infrastructure, and on the verge of bankruptcy. The U.S. President, Themba Johnson, a former talk show host who combined cluelessness, a volatile temper, and vulpine cunning when it came to manipulating public opinion, is confronted with all of these problems and looking for a masterstroke to get beyond the next election.

Around 2050, the collectivists entered the inevitable end game their policies lead to everywhere they are tried. With the Bureau of Sustainable Research (BuSuR) suppressing new technologies in every field, and the Construction Jobs Preservation Act and Bureau of Industrial Planning banning anything which might increase productivity, a final grab to loot the remaining seed corn resulted in the CEO Trials, aimed at the few remaining successful companies, with expropriation of their assets and imprisonment of their leaders. CEO Mike Martin manages to escape from prison and link up with renegade physicist Ponnala (“Ponzie”) Srinivas, inventor of an anti-gravity drive he doesn't want the slavers to control. Mike buys a rustbucket oceangoing cargo ship, equips it with the drive, an airtight compartment, and life support, and flees Earth with a cargo of tunnel boring machines and water to exile on the Moon: in the crater Aristillus in Mare Imbrium on the lunar near side where, fortuitously, the impact of a metal-rich asteroid millions of years ago enriched the sub-surface with metals rare in the Moon's crust.

Let me say a few words about the anti-gravity drive, which is very unusual and original, and whose properties play a significant role in the story. The drive works by coupling to the gravitational field of a massive body and then pushing against it, expending energy as it rises and gains gravitational potential energy. Momentum is conserved, as an equal and opposite force is exerted on the massive body against which it is pushing. The force vector is always along the line connecting the centre of mass of the massive body and the drive unit, directed away from the centre of mass. The force is proportional to the strength of the gravitational field in which the drive is operating, and hence stronger when pushing against a body like Earth as opposed to a less massive one like the Moon. The drive's force diminishes with distance from the massive body as its gravitational field falls off with the inverse square law, and hence the drive generates essentially no force when in empty space far from a gravitating body. When used to brake a descent toward a massive body, the drive converts gravitational potential energy into electricity like the regenerative braking system of an electric vehicle: energy which can be stored for use when later leaving the body.

Because the drive can only push outward radially, when used to, say, launch from the Earth to the Moon, it is much like Jules Verne's giant cannon—the launch must occur at the latitude and longitude on Earth where the Moon will be directly overhead at the time the ship arrives at the Moon. In practice, the converted ships also carried auxiliary chemical rockets and reaction control thrusters for trajectory corrections and precision maneuvering which could not be accomplished with the anti-gravity drive.

By 2064, the lunar settlement, called Aristillus by its inhabitants, was thriving, with more than a hundred thousand residents, and growing at almost twenty percent a year. (Well, nobody knew for sure, because from the start the outlook shared by the settlers was aligned with Mike Martin's anarcho-capitalist worldview. There was no government, no taxes, no ID cards, no business licenses, no regulations, no zoning [except covenants imposed by property owners on those who sub-leased property from them], no central bank, no paper money [an entrepreneur had found a vein of gold left by the ancient impactor and gone into business providing hard currency], no elections, no politicians, no forms to fill out, no police, and no army.) Some of these “features” of life on grey, regimented Earth were provided by private firms, while many of the others were found to be unnecessary altogether.

The community prospered as it grew. Like many frontier settlements, labour was in chronic short supply, and even augmented by robot rovers and machines (free of the yoke of BuSuR), there was work for anybody who wanted it and job offers awaiting new arrivals. A fleet of privately operated ships maintained a clandestine trade with Earth, bringing goods which couldn't yet be produced on the Moon, atmosphere, water from the oceans (in converted tanker ships), and new immigrants who had sold their Earthly goods and quit the slave planet. Waves of immigrants from blood-soaked Nigeria and chaotic China established their own communities and neighbourhoods in the ever-growing network of tunnels beneath Aristillus.

The Moon has become a refuge not just for humans. When BuSuR put its boot on the neck of technology, it ordered the shutdown of a project to genetically “uplift” dogs to human intelligence and beyond, which had created “Dogs” (the capital letter denoting the uplift), and decreed that all existing Dogs be euthanised. Many were, but John (we never learn his last name), a former U.S. Special Forces operator, manages to rescue a colony of Dogs from one of the labs before the killers arrive and escapes with them to Aristillus, where they have set up the Den and pursue their own priorities, including role-playing games, software development, and trading on the betting markets. Also rescued by John was Gamma, the first Artificial General Intelligence to be created, whose intelligence is above the human level but not (yet, anyway) at the transcendent level of a runaway singularity. Gamma has established itself in its own facility in Sinus Lunicus on the other side of Mare Imbrium, and has little contact with the human or Dog settlers.

Inevitably, liberty produces prosperity, and prosperity eventually causes slavers to regard the free with envious eyes, and slowly and surely draw their plans against them.

This is the story of the first interplanetary conflict, and a rousing tale of liberty versus tyranny, frontier innovation against collectivised incompetence, and principles (there is even the intervention of a Vatican diplomat) confronting brutal expedience. There are delicious side-stories about the creation of fake news, scheming politicians, would-be politicians in a libertarian paradise, open source technology, treachery, redemption, and heroism. How do three distinct species (human, Dog, and AI) work together without a top-down structure or subordinating one to another? Can the lunar colony protect itself without becoming what its settlers left Earth to escape?

Woven into the story is a look at how a libertarian society works (and sometimes doesn't work) in practice. Aristillus is in no sense a utopia: it has plenty of rough edges and things to criticise. But people there are free, and they prefer it to the prison planet they escaped.

This is a wonderful, sprawling, action-packed story with interesting characters, complicated conflicts, and realistic treatment of what a small colony faces when confronted by a hostile planet of nine billion slaves. Think of this as Heinlein's The Moon is a Harsh Mistress done better. There are generous tips of the hat to Heinlein and other science fiction in the book, but this is a very different story with an entirely different outcome, and truer to the principles of individualism and liberty. I devoured these books and give them my highest recommendation. The Powers of the Earth won the 2018 Prometheus Award for best libertarian science fiction novel.

 Permalink

Coppley, Jackson. The Code Hunters. Chevy Chase, MD: Contour Press, 2019. ISBN 978-1-09-107011-0.
A team of expert cavers exploring a challenging cave in New Mexico in search of a possible connection to Carlsbad Caverns tumble into a chamber deep underground containing something which just shouldn't be there: a huge slab of metal, like titanium, twenty-four feet square and eight inches thick, set into the rock of the cave, bearing markings which resemble the pits and lands on an optical storage disc. No evidence for human presence in the cave prior to the discoverers is found, and dating confirms that the slab is at least ten thousand years old. There is no way an object that large could be brought through the cramped and twisting passages of the cave to the chamber where it was found.

Wealthy adventurer Nicholas Foxe, with degrees in archaeology and cryptography, gets wind of the discovery and pulls strings to get access to the cave, putting together a research program to try to understand the origin of the slab and decode its enigmatic inscription. But as news of the discovery reaches others, they begin to pursue their own priorities. A New Mexico senator sends his on-the-make assistant to find out what is going on and see how it might be exploited to his advantage. An ex-Army special forces operator makes stealthy plans. An MIT string theorist with a wide range of interests begins exploring unorthodox ideas about how the inscriptions might be encoded. A televangelist facing hard times sees the Tablet as the way back to the top of the heap. A wealthy Texan sees the potential in the slab for wealth beyond his abundant dreams of avarice. As the adventure unfolds, we encounter a panoply of fascinating characters: a World Health Organization scientist, an Italian violin maker with an eccentric theory of language and his autistic daughter, and a “just the facts” police inspector. As clues are teased from the enigma, we visit exotic locations and experience harrowing adventure, finally grasping the significance of a discovery that bears on the very origin of modern humans.

About now, you might be thinking “This sounds like a Dan Brown novel”, and in a sense you'd be right. But this is the kind of story Dan Brown would craft if he were a lot better author than he is: whereas Dan Brown books have become stereotypes of cardboard characters and fill-in-the-blanks plots with pseudo-scientific bafflegab stirred into the mix (see my review of Origin [May 2018]), this is a gripping tale filled with complex, quirky characters, unexpected plot twists, beautifully sketched locales, and a growing sense of wonder as the significance of the discovery is grasped. If anybody in Hollywood had any sense (yes, I know…) they would make this into a movie instead of doing another tedious Dan Brown sequel. This is subtitled “A Nicholas Foxe Adventure”: I sincerely hope there will be more to come.

The author kindly let me read a pre-publication manuscript of this novel. The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

May 2019

Smolin, Lee. Einstein's Unfinished Revolution. New York: Penguin Press, 2019. ISBN 978-1-59420-619-1.
In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a maximum wavelength (colour of light) beyond which no electron emission occurred: for longer wavelengths, no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the intensity of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.

This didn't make any sense. According to Maxwell's wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn't behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light, and what's more, there was a sharp cut-off below which no electrons would be emitted at all.

In 1905, in one of his “miracle year” papers, “On a Heuristic Viewpoint Concerning the Production and Transformation of Light”, Albert Einstein suggested a solution to the puzzle. He argued that light did not propagate as a wave at all, but rather in discrete particles, or “quanta”, later named “photons”, whose energy was proportional to the frequency of the light (and hence inversely proportional to its wavelength). This neatly explained the behaviour of the photoelectric effect. Light with a wavelength longer than the cut-off point was transmitted by photons whose energy was too low to knock electrons out of the metal they illuminated, while photons of shorter wavelengths (and hence higher energies) could liberate electrons. The intensity of the light was a measure of the number of photons in the beam, unrelated to the energy of the individual photons.
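Einstein's relation E = hf = hc/λ makes the cut-off quantitative. A quick sketch (the physical constants are the exact SI 2019 values; the 2.3 eV work function is an illustrative round figure, roughly that of sodium):

```python
# Photon energy versus a metal's work function.
H = 6.62607015e-34      # Planck constant, J*s (exact, SI 2019)
C = 299792458.0         # speed of light, m/s (exact)
EV = 1.602176634e-19    # joules per electron-volt (exact)

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, E = h*f = h*c/lambda, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

work_function_ev = 2.3   # illustrative threshold, approximately sodium

for colour, wl in [("red", 650), ("green", 530), ("violet", 400)]:
    e = photon_energy_ev(wl)
    verdict = "ejects electrons" if e > work_function_ev else "no emission"
    print(f"{colour:6s} {wl} nm: {e:.2f} eV -> {verdict}")
```

A red photon at 650 nm carries about 1.9 eV, below the threshold, so no intensity of red light ejects a single electron, while each violet photon at 400 nm carries about 3.1 eV and does.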

This paper became one of the cornerstones of the revolutionary theory of quantum mechanics, the complete working out of which occupied much of the twentieth century. Quantum mechanics underlies the standard model of particle physics, which is arguably the most thoroughly tested theory in the history of physics, with no experiment showing results which contradict its predictions since it was formulated in the 1970s. Quantum mechanics is necessary to explain the operation of the electronic and optoelectronic devices upon which our modern computing and communication infrastructure is built, and describes every aspect of physical chemistry.

But quantum mechanics is weird. Consider: if light consists of little particles, like bullets, then why when you shine a beam of light on a barrier with two slits do you get an interference pattern with bright and dark bands precisely as you get with, say, water waves? And if you send a single photon at a time and try to measure which slit it went through, you find it always went through one or the other, but then the interference pattern goes away. It seems like whether the photon behaves as a wave or a particle depends upon how you look at it. If you have an hour, here is grand master explainer Richard Feynman (who won his own Nobel Prize in 1965 for reconciling the quantum mechanical theory of light and the electron with Einstein's special relativity) exploring how profoundly weird the double slit experiment is.

Fundamentally, quantum mechanics seems to violate the principle of realism, which the author defines as follows.

The belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world.

This has been part of the scientific worldview since antiquity and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn't possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists.

From the formulation of the modern quantum theory in the 1920s, often called the Copenhagen interpretation after the location of the institute where one of its architects, Niels Bohr, worked, a number of eminent physicists including Einstein and Louis de Broglie were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a physical position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when squared) gave the probability of finding it at that location.

In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and is defined by the medium through which it propagates. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”. But that wasn't what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant.

In the 1950s, independently, David Bohm invented a pilot wave theory which he developed into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continue to explore the implications of his theory, working on extending it to be compatible with special relativity. From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton's third law of motion holds that every action has an equal and opposite reaction, and in Einstein's general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune from influence of the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm's theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers.

Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wavefunction due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations.

The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein's geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole. In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don't fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn't so: a simple experiment shows that a clock runs slower when it's in a gravitational field. Quantum mechanics says that it isn't possible to determine the position of a particle without its interacting with another particle, but general relativity requires the knowledge of precise positions of particles to determine how spacetime curves and governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”. In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution, while general relativity is nonlinear, where trying to disentangle the relationships of parts of the systems quickly goes pear-shaped and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces.

Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light). Many people have tried and failed to “quantise general relativity”. Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realistic one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He suggests that candidate theories must contain a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object).

This argues that what we perceive, at the human scale and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances. Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today's near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today.

This needn't forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (it's an example of a combinatorial explosion). You might try, for example, to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth. I suggested this almost twenty years ago as a test of Rupert Sheldrake's theory of morphic resonance, but (although doubtless Smolin would shun me for associating his theory with that one), it might produce interesting results.
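The scale of that combinatorial explosion is easy to check with a little arithmetic. The sketch below assumes the standard alphabet of 20 amino acids; the chain length of 100 is purely an illustrative choice, not a figure from the book:

```python
# Back-of-the-envelope arithmetic behind "a protein which has never
# existed before": proteins are chains drawn from 20 amino acids, so
# there are 20**n distinct possible sequences of length n.
n = 100                        # illustrative protein length
sequences = 20 ** n            # distinct length-n sequences
atoms_in_universe = 10 ** 80   # rough standard estimate

# 20**100 is a 131-digit number, i.e. about 10**130 -- some fifty
# orders of magnitude more sequences than atoms in the universe.
print(len(str(sequences)))             # 131
print(sequences > atoms_in_universe)   # True
```

With possible sequences outnumbering atoms by some fifty orders of magnitude, essentially any arbitrarily chosen sequence of that length has never been assembled anywhere before, which is what makes such a protein a candidate probe for whether the universe “remembers” how to crystallise it.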

The book concludes with a very personal look at the challenges facing a working scientist who has concluded the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:

There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn't puzzle me. Ever since I studied classical economics, I've learned to look at the incentives in a system. When you regard academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905 with no academic career or reputation at risk you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today's institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today.

Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.

 Permalink

Kotkin, Stephen. Stalin, Vol. 2: Waiting for Hitler, 1929–1941. New York: Penguin Press, 2017. ISBN 978-1-59420-380-0.
This is the second volume in the author's monumental projected three-volume biography of Joseph Stalin. The first volume, Stalin: Paradoxes of Power, 1878–1928 (December 2018), covers the period from Stalin's birth through the consolidation of his sole power atop the Soviet state after the death of Lenin. The third volume, which will cover the period from the Nazi invasion of the Soviet Union in 1941 through the death of Stalin in 1953, has yet to be published.

As this volume begins in 1928, Stalin is securely in the supreme position of the Communist Party of the Soviet Union and, having over the years staffed the senior ranks of the party and the Soviet state (which the party operated like the puppet it was) with loyalists who owed their positions to him, has no serious rivals who might challenge him. (It is often claimed that Stalin was paranoid and feared a coup, but would a despot fearing for his position regularly take summer holidays, months in length, in Sochi, far from the capital?)

By 1928, the Soviet Union had largely recovered from the damage inflicted by the Great War, Bolshevik revolution, and subsequent civil war. Industrial and agricultural production were back to around their 1914 levels, and most measures of well-being had similarly recovered. To be sure, compared to the developed industrial economies of countries such as Germany, France, or Britain, Russia remained a backward economy largely based upon primitive agriculture, but at least it had undone the damage inflicted by years of turbulence and conflict.

But in the eyes of Stalin and his close associates, who were ardent Marxists, there was a dangerous and potentially deadly internal contradiction in the Soviet system as it then stood. In 1921, in response to the chaos and famine following the 1917 revolution and years-long civil war, Lenin had proclaimed the New Economic Policy (NEP), which tempered the pure collectivism of original Bolshevik doctrine by introducing a mixed economy, where large enterprises would continue to be owned and managed by the state, but small-scale businesses could be privately owned and run for profit. More importantly, agriculture, which had previously been managed under a top-down system of coercive requisitioning of grain and other products by the state, was replaced by a market system where farmers could sell their products freely, subject to a tax, payable in product, proportional to their production (and thus creating an incentive to increase production).

The NEP was a great success, and shortages of agricultural products were largely eliminated. There was grousing about the growing prosperity of the so-called NEPmen, but the results of freeing the economy from the shackles of state control were evident to all. But according to Marxist doctrine, it was a dagger pointed at the heart of the socialist state.

By 1928, the Soviet economy could be described, in Marxist terms, as socialism in the industrial cities and capitalism in the agrarian countryside. But, according to Marx, the form of politics was determined by the organisation of the means of production—paraphrasing Breitbart, politics is downstream of economics. This meant that preserving capitalism in a large sector of the country, one employing a large majority of its population and necessary to feed the cities, was an existential risk. In such a situation it would only be normal for the capitalist peasants to eventually prevail over the less numerous urbanised workers and destroy socialism.

Stalin was a Marxist. He was not an opportunist who used Marxism-Leninism to further his own ambitions. He really believed this stuff. And so, in 1928, he proclaimed an end to the NEP and began the forced collectivisation of Soviet agriculture. Private ownership of land would be abolished, and the 120 million peasants essentially enslaved as “workers” on collective or state farms, with planting, quotas to be delivered, and management essentially controlled by the party. After an initial lucky year, the inevitable catastrophe ensued. Between 1931 and 1933 famine and the epidemics resulting from it killed between five and seven million people. The country lost around half of its cattle and two thirds of its sheep. In 1929, the average family in Kazakhstan owned 22.6 cattle; in 1933, just 3.7. This was a calamity on the same order as the Jewish Holocaust in Germany, and just as man-made: during this period there was a global glut of food, but Stalin refused to admit the magnitude of the disaster for fear of inciting enemies to attack and because doing so would concede the failure of his collectivisation project. In addition to the famine, the process of collectivisation resulted in between four and five million people being arrested, executed, deported to other regions, or jailed.

Many in the starving countryside said, “If only Stalin knew, he would do something.” But the evidence is overwhelming: Stalin knew, and did nothing. Marxist theory said that agriculture must be collectivised, and by pure force of will he pushed through the project, whatever the cost. Many in the senior Soviet leadership questioned this single-minded pursuit of a theoretical goal at horrendous human cost, but they did not act to stop it. But Stalin remembered their opposition and would settle scores with them later.

By 1936, it appeared that the worst of the period of collectivisation was over. The peasants, preferring to live in slavery rather than starve to death, had acquiesced to their fate and resumed production, and the weather co-operated in producing good harvests. And then, in 1937, a new horror was unleashed upon the Soviet people, also completely man-made and driven by the will of Stalin: the Great Terror. Starting slowly in the aftermath of the assassination of Sergey Kirov in 1934, by 1937 the absurd devouring of those most loyal to the Soviet regime, all over Stalin's signature, reached a crescendo. In 1937 and 1938, 1,557,259 people would be arrested and 681,692 executed, the overwhelming majority for political offences, this in a country with a working-age population of 100 million. Counting deaths from other causes as a result of the secret police, the overall death toll was probably around 830,000. This was so bizarre, and so unprecedented in human history, it is difficult to find any comparable situation, even in Nazi Germany. As the author remarks,

To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, eight of the nine admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies.

And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”. (In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.)

But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. As the vanguard of the proletarian revolution, by definition those states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to or actively bent upon the destruction of the peoples' state.

With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of its territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which might give Britain supremacy on the seas and its far-flung colonies, while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union.

Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine. Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated to eventually fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course.

Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the czar's empire which had been lost after the 1917 revolution.

Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.

On September 1st, 1939, Nazi Germany invaded Poland, little more than a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside, continue to trade with Nazi Germany, while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border.

After the start of hostilities, a state of “phoney war” existed until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Tsars. Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere.

But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East. After consolidating his position in the west and indefinitely deferring an invasion of Britain due to inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier. Disinformation, spread where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, said that the troop concentrations were in preparation for an assault on British positions in the Near East or to blackmail the Soviet Union to obtain, for example, a long-term lease on its breadbasket, the Ukraine.

Hitler, acutely aware that it was a two-front war which spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but its only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, achieved in just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it was previously paying market prices, but which were now available for the cost of extraction and shipping.

The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs that this was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important. Indeed, in 1940, 40 percent of nickel, 55 percent of manganese, 65 percent of chromium, 67 percent of asbestos, 34 percent of petroleum, and a million tonnes of grain and timber which supported the Nazi war machine were delivered by the Soviet Union. Hours before the Nazi onslaught began, well after the order for it was given, a Soviet train delivering grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with dawn.

This is a magisterial work. It is unlikely it will ever be equalled. There is abundant rich detail on every page. Want to know what the telephone number for the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses.

Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you have to ultimately understand Stalin, who succeeded at it for more than thirty years until ultimately felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower. Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people. He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being whatever the cost may be. So it is today, and let us hope those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia.

Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.

 Permalink

Wood, Fenton. Pirates of the Electromagnetic Waves. Seattle: Amazon Digital Services, 2018. ASIN B07H2RJK8J.
This is an utterly charming short novel (or novella: it is just 123 pages) which, on the surface, reads like a young adult adventure from the golden age, along the lines of the original Tom Swift or Hardy Boys series. But as you get deeper into the story, you discover clues there is much more going on than you first suspected, and that this may be the beginning of a wonderful exploration of an alternative reality which is a delight to visit and you may wish were your home.

Philo Hergenschmidt, Randall Quinn, and their young friends live in Porterville, deep in the mountain country of the Yankee Republic. The mountains that surround it stopped the glaciers when they came down from the North a hundred thousand years ago, and provided a refuge for the peace-loving, self-sufficient, resourceful, and ornery people who fled the wars. Many years later, they retain those properties, and most young people are members of the Survival Scouts, whose eight-hundred-page Handbook contains everything a mountain man needs to know to survive and prosper under any circumstances.

Porterville is just five hundred miles from the capital of Iburakon, but might as well be on a different planet. Although the Yankee Republic's technology is in many ways comparable to our own, the mountains shield Porterville from television and FM radio broadcasts and, although many families own cars with radios installed by default, the only thing they can pick up is a few scratchy AM stations from far away when the skywave opens up at night. Every summer, Randall spends two weeks with his grandparents in Iburakon and comes back with tales of wonders which enthrall his friends like an explorer of yore returned from Shangri-La. (Randall is celebrated as a raconteur—and some of his tales may be true.) This year he told of the marvel of television and a science fiction series called Xenotopia, and for weeks the boys re-enacted battles from his descriptions. Broadcasting: that got Philo thinking….

One day Philo calls up Randall and asks him to dig out an old radio he recalled Randall having and tune it to the usually dead FM band. Randall does, and is astonished to hear Philo broadcasting on “Station X” with amusing patter. It turns out he found a book in the attic, 101 Radio Projects for Boys, written by a creative and somewhat subversive author, and following the directions, put together a half-watt FM transmitter from scrounged spare parts. Philo briefs Randall on pirate radio stations: although the penalties for operating without a license appear severe, in fact, unless you willingly interfere with a licensed broadcaster, you just get a warning the first time and a wrist-slap ticket thereafter unless you persist too long.

This gets them both thinking…. With the help of adults willing to encourage youth in their (undisclosed) projects, or just to look the other way (the kids of Porterville live free-range lives, as I did in my childhood, as their elders have not seen fit to import the vibrant diversity into their community which causes present-day youth to live under security lock-down), and a series of adventures, radio station 9X9 goes on the air, announced with great fanfare in handbills posted around the town. Suddenly, there is something to listen to, and people start tuning in. Local talent tries their hands at being a DJ, and favourites emerge. Merchants start to sign up for advertisements. Church services are broadcast for shut-ins. Even though no telephone line runs anywhere near the remote and secret studio, ingenuity and some nineteenth-century technology allow them to stage a hit call-in show. And before long, live talent gets into the act. A big baseball game provides both a huge opportunity and a seemingly insurmountable challenge until the boys invent an art which, in our universe, was once masterfully performed by a young Ronald Reagan.

Along the way, we learn of the Yankee Republic in brief, sometimes jarring, strokes of the pen, as the author masterfully follows the science fiction principle of “show, don't tell”.

Just imagine if William the Bastard had succeeded in conquering England. We'd probably be speaking some unholy crossbreed of French and English….

The Republic is the only country in the world that recognizes allodial title,….

When Congress declares war, they have to elect one of their own to be a sacrificial victim,….

“There was a man from the state capitol who wanted to give us government funding to build what he called a ‘proper’ school, but he was run out of town, the poor dear.”

Pirates, of course, must always keenly scan the horizon for those who might want to put an end to the fun. And so it is for buccaneers sailing the Hertzian waves. You'll enjoy every minute getting to the point where you find out how it ends. And then, when you think it's all over, another door opens into a wider, and weirder, world in which we may expect further adventures. The second volume in the series, Five Million Watts, was published in April, 2019.

At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is very inexpensive.

 Permalink

Roberts, Andrew. Churchill: Walking with Destiny. New York: Viking, 2018. ISBN 978-1-101-98099-6.
At the point that Andrew Roberts sat down to write a new biography of Winston Churchill, there were a total of 1009 biographies of the man in print, examining every aspect of his life from a multitude of viewpoints. Works include the encyclopedic three-volume The Last Lion (January 2013) by William Manchester and Paul Reid, and Roy Jenkins' single-volume Churchill: A Biography (February 2004), which concentrates on Churchill's political career. Such books may seem to many readers to say just about everything about Churchill there is to be said from the abundant documentation available for his life. What could a new biography possibly add to the story?

As the author demonstrates in this magnificent and weighty book (1152 pages, 982 of main text), a great deal. Earlier Churchill biographers laboured under the constraint that many of Churchill's papers from World War II and the postwar era remained under the seal of official secrecy. These included the extensive notes taken by King George VI during his weekly meetings with the Prime Minister during the war and recorded in his personal diary. The classified documents were made public only fifty years after the end of the war, and the King's wartime diaries were made available to the author by special permission granted by the King's daughter, Queen Elizabeth II.

The royal diaries are an invaluable source on Churchill's candid thinking as the war progressed. As a firm believer in constitutional monarchy, Churchill withheld nothing in his discussions with the King. Even the deepest secrets, such as the breaking of the German codes, the information obtained from decrypted messages, and atomic secrets, which were shared with only a few of the most senior and trusted government officials, were discussed in detail with the King. Further, while Churchill was constantly on stage trying to hold the Grand Alliance together, encourage Britons to stay in the fight, and advance his geopolitical goals which were often at variance with even the Americans, with the King he was brutally honest about Britain's situation and what he was trying to accomplish. Oddly, perhaps the best insight into Churchill's mind as the war progressed comes not from his own six-volume history of the war, but rather the pen of the King, writing only to himself. In addition, sources such as verbatim notes of the war cabinet, diaries of the Soviet ambassador to the U.K. during the 1930s through the war, and other recently-disclosed sources resulted in, as the author describes it, there being something new on almost every page.

The biography is written in an entirely conventional manner: the author eschews fancy stylistic tricks in favour of an almost purely chronological recounting of Churchill's life, flipping back and forth from personal life, British politics, the world stage and Churchill's part in the events of both the Great War and World War II, and his career as an author and shaper of opinion.

Winston Churchill was an English aristocrat, but not a member of the nobility. He was a direct descendant of John Churchill, the 1st Duke of Marlborough; his father, Lord Randolph Churchill, was the third son of the 7th Duke of Marlborough. As only the first son inherits the title, although Randolph bore the honorific “Lord”, he was a commoner and his children, including first-born Winston, received no title. Lord Randolph was elected to the House of Commons in 1874, the year of Winston's birth, and would serve until his death in 1895, having been Chancellor of the Exchequer, Leader of the House of Commons, and Secretary of State for India. His death at just forty-five (rumoured at the time to be from syphilis, but now attributed to a brain tumour, as his other symptoms were inconsistent with syphilis), along with the premature deaths of three aunts and uncles, convinced the young Winston his own life might be short and that if he wanted to accomplish great things, he had no time to waste.

In terms of his subsequent career, his father's early death might have been an unappreciated turning point in Winston Churchill's life. Had his father retired from the House of Commons prior to his death, he would almost certainly have been granted a peerage in return for his long service. When he subsequently died, Winston, as eldest son, would have inherited the title and hence not been entitled to serve in the House of Commons. It is thus likely that had his father not died while still an MP, the son would never have had the political career he did nor have become prime minister in 1940.

Young, from a distinguished family, wealthy (by the standards of the average Briton, but not compared to the landed aristocracy or titans of industry and finance), ambitious, and seeking novelty and adventures to the point of recklessness, the young Churchill believed he was meant to accomplish great things in however many years Providence might grant him on Earth. In 1891, at the age of just 16, he confided to a friend,

I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was, thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to that he imagined in boyhood.

Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file dispatches as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures.

He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described along with the rest of the Mahdist War in his book, The River War. In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. Covering the conflict, he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book.

Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,

Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat.

Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,

A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure.

Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said, after Stalin's annexation of Latvia, Lithuania, and Estonia in 1939, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.”

Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,

You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.

And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,

No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated.

Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,

Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity. The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together) and won the Nobel Prize in Literature for 1953, plus another five million words of public speeches. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any historian who ever lived.

Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom.

Here is an Uncommon Knowledge interview with the author about Churchill and this biography.

This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.

 Permalink

Kroese, Robert. The Dawn of the Iron Dragon. Seattle: CreateSpace, 2018. ISBN 978-1-7220-2331-7.
This is the second volume in the Iron Dragon trilogy which began with The Dream of the Iron Dragon (August 2018). At the end of the first book, the crew of the Andrea Luhman, stranded on Earth in the Middle Ages, faced a seemingly impossible challenge. They, and their Viking allies, could save humanity from extinction in a war in the distant future only by building a space program capable of launching a craft into Earth orbit, starting with an infrastructure based upon wooden ships and edged weapons. Further, given what these accidental time travellers, the first in history, had learned about the nature of travel to the past in their adventures to date, all of this had to be done in the deepest secrecy and without altering the history to be written in the future. Recorded history, they discovered, cannot be changed, and hence any attempt to do something which would leave evidence of a medieval space program, or of the intervention of advanced technology in the affairs of the time, would be doomed to failure. These constraints placed almost impossible demands upon what was already a formidable challenge.

From their ship's computer, the exiled spacemen had a close approximation to all of human knowledge, so they were rich in bits. But when it came to atoms: materials, infrastructure, tools, sources of energy and motive power, and everything else, they had almost nothing. Even the simplest rocket capable of achieving Earth orbit has tens to hundreds of thousands of parts, most requiring precision manufacture, stringent control of material quality, and rigorous testing. Consider a humble machine screw. In the 9th century A.D. there weren't any hardware stores. If you needed a screw, or ten thousand of them, to hold your rocket components together, you needed first to locate and mine the iron ore, then smelt the iron from the ore, refine it with high temperature and forced air (both of which require their own technologies, including machine screws) to achieve the desired carbon content, adding alloying metals such as nickel, chromium, cobalt, tungsten, and manganese, all of which have to be mined and refined first. Then the steel must be formed into the desired shape (requiring additional technologies), heat-treated, and then finally the threads must be cut into the blank, requiring machine tools made to sufficient precision that the screws will be interchangeable, with something to power the tools (all of which, of course, contain screws). And that's just a screw. Thinking about a turbopump, regeneratively cooled combustion chamber, hydraulically-actuated gimbal mechanism, gyroscopes and accelerometers, or any of the myriad other components of even the simplest launcher is apt to induce despair.

But the spacemen were survivors, and they knew that the entire future of the human species, driven in the future they had come from to near-extinction by the relentless Cho-ta'an, depended upon their getting off the Earth and delivering the planet-busting weapon which might turn the tide for their descendants centuries hence. While they needed just about everything, what they needed most was minds: human brainpower and the skills flowing from it to find and process the materials to build the machines to build the machines to build the machines which, after a decades-long process of recapitulating centuries of human technological progress, would enable them to accomplish their ambitious yet utterly essential mission.

People in the 9th century were just as intelligent as those today, but in most of the world literacy was rare, and even more scarce were the acquired intellectual skill of thinking logically, breaking down a problem into its constituent parts, and the mental flexibility to learn and apply mind tools, such as algebra, trigonometry, calculus, Newton's and Kepler's laws, and a host of others which had yet to be discovered. These rare people were to be found in the emerging cities, where learning and the embryos of what would become the great universities of the later Middle Ages were developing. And so missions were dispatched to Constantinople, the greatest of these cities, and other centres of learning and innovation, to recruit not the famous figures recorded in history (whose disappearance into a secret project would have been inconsistent with that history, and hence impossible), but their promising young followers. These cities were cosmopolitan crossroads, dangerous but also sufficiently diverse that a Viking longboat showing up with people who barely spoke any known language would not attract undue attention. But the rulers of these cities appreciated the value of their learned people, and trying to lure them away could lead to hazards and misadventures.

On top of all of these challenges, a Cho-ta'an ship had followed the Andrea Luhman through the hyperspace gate and whatever had caused them to be thrown back in time, and a small contingent of the aliens had made it to Earth, bent on stopping the spacemen's getting off the planet at any cost. The situation was highly asymmetrical: while the spacemen had to accomplish a near-impossible task, the Cho-ta'an need only prevent them by any means possible. And being Cho-ta'an, if those means included loosing a doomsday plague to depopulate Europe, well, so be it. And the presence of the Cho-ta'an, wherever they might be hiding, redoubled the need for secrecy in every aspect of the Iron Dragon project.

Another contingent of the recruiting project finds itself in the much smaller West Francia city of Paris, just as Viking forces are massing for what history would record as the Siege of Paris in A.D. 885–886. In this epic raid, a force of tens of thousands (today estimated around 20,000, around half that claimed in the account by the monk Abbo Cernuus, who has been called “in a class of his own as an exaggerator”) of Vikings in hundreds (300, probably, 700 according to Abbo) of ships laid siege to a city defended by just two hundred Parisian men-at-arms. In this account, the spacemen, with foreknowledge of how it was going to come out, provide invaluable advice to Count Odo of Paris and Gozlin, the “fighting Bishop” of Paris, in defending their city as it was simultaneously ravaged by a plague (wonder where that came from?), and in persuading King Charles (“the Fat”) to come to the relief of the city. The epic battle for Paris, which ended not in triumph but rather a shameful deal, was a turning point in the history of France. The efforts of the spacemen, while critical and perhaps decisive, remained consistent with written history, at least that written by Abbo, who they encouraged in his proclivity for exaggeration.

Meanwhile, back at the secret base in Iceland, chosen to stay out of the tangles of European politics and out of the way of their nemesis Harald Fairhair, the first King of Norway, local rivalries intrude upon the desired isolation. It appears another, perhaps disastrous, siege may be in the offing, putting the entire project at risk. And with all of this, one of those knock-you-off-your-feet calamities the author is so fond of throwing at his characters befalls them, forcing yet another redefinition of their project and a breathtaking increase in its ambition and complexity, just as they have to contemplate making new and perilous alliances simply to survive.

The second volume of a trilogy is often the most challenging to write. In the first, everything is new, and the reader gets to meet the characters, the setting, and the challenges to be faced in the story. In the conclusion, everything is pulled together into a satisfying resolution. But in that one in the middle, it's mostly developing characters and plots, introducing new (often subordinate) characters, and generally moving things along—one risks readers' regarding it as “filler”. In this book, the author artfully avoids that risk by making a little-known but epic battle the centrepiece of the story, along with intrigue, a thorny ethical dilemma, and multiple plot threads playing out from Iceland to North Africa to the Dardanelles. You absolutely should read the first volume, The Dream of the Iron Dragon, before starting this one—although there is a one page summary of that book at the start, it isn't remotely adequate to bring you up to speed and avoid your repeatedly exclaiming “Who?”, “What?”, and “How?” as you enjoy this story.

When you finish this volume, the biggest question in your mind will probably be “How in the world is he going to wrap all of this up in just one more book?” The only way to find out is to pick up The Voyage of the Iron Dragon, which I will be reviewing here in due course. This saga (what else can you call an epic with Vikings and spaceships?) will be ranked among the very best of alternative history science fiction, and continues to demonstrate why independent science fiction is creating a new Golden Age for readers and rendering the legacy publishers of tedious “diversity” propaganda impotent and obsolete.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

June 2019

Zubrin, Robert. The Case for Space. Amherst, NY: Prometheus Books, 2019. ISBN 978-1-63388-534-9.
Fifty years ago, with the successful landing of Apollo 11 on the Moon, it appeared that the road to the expansion of human activity from its cradle on Earth into the immensely larger arena of the solar system was open. The infrastructure built for Project Apollo, including that in the original 1963 development plan for the Merritt Island area could support Saturn V launches every two weeks. Equipped with nuclear-powered upper stages (under active development by Project NERVA, and accommodated in plans for a Nuclear Assembly Building near the Vehicle Assembly Building), the launchers and support facilities were more than adequate to support construction of a large space station in Earth orbit, a permanently-occupied base on the Moon, exploration of near-Earth asteroids, and manned landings on Mars in the 1980s.

But this was not to be. Those envisioning this optimistic future fundamentally misunderstood the motivation for Project Apollo. It was not about, and never was about, opening the space frontier. Instead, it was a battle for prestige in the Cold War and, once won (indeed, well before the Moon landing), the budget necessary to support such an extravagant program (which threw away skyscraper-sized rockets with every launch), began to evaporate. NASA was ready to do the Buck Rogers stuff, but Washington wasn't about to come up with the bucks to pay for it. In 1965 and 1966, the NASA budget peaked at over 4% of all federal government spending. By calendar year 1969, when Apollo 11 landed on the Moon, it had already fallen to 2.31% of the federal budget, and with relatively small year to year variations, has settled at around one half of one percent of the federal budget in recent years. Apart from a small band of space enthusiasts, there is no public clamour for increasing NASA's budget (which is consistently over-estimated by the public as a much larger fraction of federal spending than it actually receives), and there is no prospect for a political consensus emerging to fund an increase.

Further, there is no evidence that dramatically increasing NASA's budget would actually accomplish anything toward the goal of expanding the human presence in space. While NASA has accomplished great things in its robotic exploration of the solar system and building space-based astronomical observatories, its human space flight operations have been sclerotic, risk-averse, loath to embrace new technologies, and seemingly more oriented toward spending vast sums of money in the districts and states of powerful representatives and senators than actually flying missions.

Fortunately, NASA is no longer the only game in town (if it can even be considered to still be in the human spaceflight game, having been unable to launch its own astronauts into space without buying seats from Russia since the retirement of the Space Shuttle in 2011). In 2009, the commission headed by Norman Augustine recommended cancellation of NASA's Constellation Program, which aimed at a crewed Moon landing in 2020, because they estimated that the heavy-lift booster it envisioned (although based largely on decades-old Space Shuttle technology) would take twelve years and US$36 billion to develop under NASA's business-as-usual policies; Constellation was cancelled in 2010 (although its heavy-lift booster, renamed, de-scoped, re-scoped, schedule-slipped, and cost-overrun, stumbles along, zombie-like, in the guise of the Space Launch System [SLS] which has, to date, consumed around US$14 billion in development costs without producing a single flight-ready rocket, and will probably cost between one and two billion dollars for each flight, every year or two—this farce will probably continue as long as Richard Shelby, the Alabama Senator who seems to believe NASA stands for “North Alabama Spending Agency”, remains in the World's Greatest Deliberative Body).

In February 2018, SpaceX launched its Falcon Heavy booster, which has a payload capacity to low Earth orbit comparable to the initial version of the SLS, and was developed with private funds in half the time at one thirtieth the cost (so far) of NASA's Big Rocket to Nowhere. Further, unlike the SLS, which on each flight will consign Space Shuttle Main Engines and Solid Rocket Boosters (which were designed to be reusable and re-flown many times on the Space Shuttle) to a watery grave in the Atlantic, three of the four components of the Falcon Heavy (excluding only its upper stage, with a single engine) are reusable and can be re-flown as many as ten times. Falcon Heavy customers will pay around US$90 million for a launch on the reusable version of the rocket, less than a tenth of what NASA estimates for an SLS flight, even after writing off its enormous development costs.

On the heels of SpaceX, Jeff Bezos's Blue Origin is developing its New Glenn orbital launcher, which will have comparable payload capacity and a fully reusable first stage. With competition on the horizon, SpaceX is developing the Super Heavy/Starship completely-reusable launcher with a payload of around 150 tonnes to low Earth orbit: more than any past or present rocket. A fully-reusable launcher with this capacity would also be capable of delivering cargo or passengers between any two points on Earth in less than an hour at a price to passengers no more than a first class ticket on a present-day subsonic airliner. The emergence of such a market could increase the demand for rocket flights from its current hundred or so per year to hundreds or thousands a day, like airline operations, with consequent price reductions due to economies of scale and moving all components of the transportation system down the technological learning curve.

Competition-driven decreases in launch cost, compounded by partially- or fully-reusable launchers, are already dramatically decreasing the cost of getting to space. A common metric of launch cost is the price to launch one kilogram into low Earth orbit. This remained stubbornly close to US$10,000/kg from the 1960s until the entry of SpaceX's Falcon 9 into the market in 2010. Purely by the more efficient design and operations of a profit-driven private firm as opposed to a cost-plus government contractor, the first version of the Falcon 9 cut launch costs to around US$6,000/kg. By reusing the first stage of the Falcon 9 (which costs around three times as much as the expendable second stage), this was cut by another factor of two, to US$3,000/kg. The much larger fully reusable Super Heavy/Starship is projected to reduce launch cost (if its entire payload capacity can be used on every flight, which probably isn't the way to bet) to the vicinity of US$250/kg, and if the craft can be flown frequently, say once a day, as somebody or other envisioned more than a quarter century ago, amortising fixed costs over a much larger number of launches could reduce cost per kilogram by another factor of ten, to something like US$25/kg.
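The arithmetic behind these reductions is simple enough to check for yourself. Here is a minimal Python sketch, using the approximate (and, for Starship, projected) cost-per-kilogram figures quoted above; the numbers are illustrative, not authoritative:

```python
# Cost to low Earth orbit, US$ per kilogram, as quoted in the review.
# Starship values are projections, not demonstrated prices.
costs_per_kg = {
    "1960s to 2010 status quo": 10_000,
    "Falcon 9 (expendable)": 6_000,
    "Falcon 9 (reused first stage)": 3_000,
    "Starship (projected, full payload)": 250,
    "Starship (projected, daily flights)": 25,
}

baseline = costs_per_kg["1960s to 2010 status quo"]
for system, cost in costs_per_kg.items():
    factor = baseline / cost  # reduction relative to the historical baseline
    print(f"{system}: US${cost:,}/kg (reduction vs. baseline: {factor:.0f}x)")
```

Running it shows the scale of the change: a projected four-hundred-fold reduction from the historical baseline if daily Starship flights materialise.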

Such cost reductions are an epochal change in the space business. Ever since the first Earth satellites, launch costs have dominated the industry and driven all other aspects of spacecraft design. If you're paying US$10,000 per kilogram to put your satellite in orbit, it makes sense to spend large sums of money not only on reducing its mass, but also making it extremely reliable, since launching a replacement would be so hideously expensive (and with flight rates so low, could result in a delay of a year or more before a launch opportunity became available). But with a hundred-fold or more reduction in launch cost and flights to orbit operating weekly or daily, satellites need no longer be built like precision watches, but rather industrial gear like that installed in telecom facilities on the ground. The entire cost structure is slashed across the board, and space becomes an arena accessible for a wide variety of commercial and industrial activities where its unique characteristics, such as access to free, uninterrupted solar power, high vacuum, and weightlessness are an advantage.

But if humanity is truly to expand beyond the Earth, launching satellites that go around and around the Earth providing services to those on its surface is just the start. People must begin to homestead in space: first hundreds, then thousands, and eventually millions and more living, working, building, raising families, with no more connection to the Earth than immigrants to the New World in the 1800s had to the old country in Europe or Asia. Where will they be living, and what will they be doing?

In order to think about the human future in the solar system, the first thing you need to do is recalibrate how you think about the Earth and its neighbours orbiting the Sun. Many people think of space as something like Antarctica: barren, difficult and expensive to reach, unforgiving, and while useful for some forms of scientific research, no place you'd want to set up industry or build communities where humans would spend their entire lives. But space is nothing like that. Ninety-nine percent or more of the matter and energy resources of the solar system—the raw material for human prosperity—are found not on the Earth, but rather elsewhere in the solar system, and they are free for the taking by whoever gets there first and figures out how to exploit them. Energy costs are a major input to most economic activity on the Earth, and wars are regularly fought over access to scarce energy resources on the home planet. But in space, at the distance Earth orbits the Sun, 1.36 kilowatts of free solar power are available for every square metre of collector you set up. And, unlike on the Earth's surface, that power is available 24 hours a day, every day of the year, and will continue to flow for billions of years into the future.

Settling space will require using the resources available in space, not just energy but material. Trying to make a space-based economy work by launching everything from Earth is futile and foredoomed. Regardless of how much you reduce launch costs (even with exotic technologies which may not even be possible given the properties of materials, such as space elevators or launch loops), the vast majority of the mass needed by a space-based civilisation will be dumb bulk materials, not high-tech products such as microchips. Water; hydrogen and oxygen for rocket fuel (which are easily made from water using electricity from solar power); aluminium, titanium, and steel for structural components; glass and silicon; rocks and minerals for agriculture and bulk mass for radiation shielding; these will account for the overwhelming majority of the mass of any settlement in space, whether in Earth orbit, on the Moon or Mars, asteroid mining camps, or habitats in orbit around the Sun. People and low-mass, high-value added material such as electronics, scientific instruments, and the like will launch from the Earth, but their destinations will be built in space from materials found there.

Why? As with most things in space, it comes down to delta-v (pronounced delta-vee), the change in velocity needed to get from one location to another. This, not distance, determines the cost of transportation in space. The Earth's mass creates a deep gravity well which requires around 9.8 km/sec of delta-v to get from the surface to low Earth orbit. It is providing this boost which makes launching payloads from the Earth so expensive. If you want to get to geostationary Earth orbit, where most communication satellites operate, you need another 3.8 km/sec, for a total of 13.6 km/sec launching from the Earth. By comparison, delivering a payload from the surface of the Moon to geostationary Earth orbit requires only 4 km/sec, which can be provided by a simple single-stage rocket. Delivering material from lunar orbit (placed there, for example, by a solar powered electromagnetic mass driver on the lunar surface) to geostationary orbit needs just 2.4 km/sec. Given that just about all of the materials from which geostationary satellites are built are available on the Moon (if you exploit free solar power to extract and refine them), it's clear a mature spacefaring economy will not be launching them from the Earth, and will create large numbers of jobs on the Moon, in lunar orbit, and in ferrying cargos among various destinations in Earth-Moon space.
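The delta-v bookkeeping above can be sketched in a few lines of Python. The figures are those quoted in the review; real mission budgets vary with trajectory and losses:

```python
# Approximate delta-v (km/s) between the destinations discussed above.
dv = {
    ("Earth surface", "LEO"): 9.8,   # deep gravity well: the expensive step
    ("LEO", "GEO"): 3.8,
    ("Moon surface", "GEO"): 4.0,    # achievable with a single-stage rocket
    ("lunar orbit", "GEO"): 2.4,     # e.g. cargo thrown by a lunar mass driver
}

earth_to_geo = dv[("Earth surface", "LEO")] + dv[("LEO", "GEO")]
moon_to_geo = dv[("Moon surface", "GEO")]

print(f"Earth surface to GEO: {earth_to_geo:.1f} km/s")
print(f"Moon surface to GEO:  {moon_to_geo:.1f} km/s")
print(f"Ratio: {earth_to_geo / moon_to_geo:.1f}x")
```

The Moon's shallow gravity well makes delivery to geostationary orbit roughly 3.4 times cheaper in delta-v than launching from Earth, and since rocket propellant requirements grow exponentially with delta-v, the practical advantage is even larger than that ratio suggests.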

The author surveys the resources available on the Moon, Mars, near-Earth and main belt asteroids, and, looking farther into the future, the outer solar system where, once humans have mastered controlled nuclear fusion, sufficient Helium-3 is available for the taking to power a solar system wide human civilisation of trillions of people for billions of years and, eventually, the interstellar ships they will use to expand out into the galaxy. Detailed plans are presented for near-term human missions to the Moon and Mars, both achievable within the decade of the 2020s, which will begin the process of surveying the resources available there and building the infrastructure for permanent settlement. These mission plans, unlike those of NASA, do not rely on paper rockets which have yet to fly, costly expendable boosters, or detours to “gateways” and other diversions which seem a prime example of (to paraphrase the author in chapter 14), “doing things in order to spend money as opposed to spending money in order to do things.”

This is an optimistic and hopeful view of the future, one in which the human adventure which began when our ancestors left Africa to explore and settle the far reaches of their home planet continues outward into its neighbourhood around the Sun and eventually to the stars. In contrast to the grim Malthusian vision of mountebanks selling nostrums like a “Green New Deal”, which would have humans huddled on an increasingly crowded planet, shivering in the cold and dark when the Sun and wind did not cooperate, docile and bowed to their enlightened betters who instruct them how to reduce their expectations and hopes for the future again and again as they wait for the asteroid impact to put an end to their misery, Zubrin sketches millions of diverse human (and eventually post-human, evolving in different directions) societies, exploring and filling niches on a grand scale that dwarfs that of the Earth, inventing, building, experimenting, stumbling, and then creating ever greater things just as humans have for millennia. This is a future not just worth dreaming of, but working to make a reality. We have the enormous privilege of living in the time when, with imagination, courage, the willingness to take risks and to discard the poisonous doctrines of those who preach “sustainability” but whose policies always end in resource wars and genocide, we can actually make it happen and see the first steps taken in our lifetimes.

Here is an interview with the author about the topics discussed in the book.

This is a one hour and forty-two minute interview (audio only) from “The Space Show” which goes into the book in detail.

 Permalink

Witzke, Dawn, ed. Planetary: Earth. Narara, NSW, Australia: Superversive Press, 2018. ISBN 978-1-925645-24-8.
This is the fourth book in the publisher's Planetary Anthology series. Each volume contains stories set on the named planet or in which it figures in the plot. Previous collections have featured Mercury, Venus, and Mars. This instalment contains stories related in some way to Earth, although in several none of the action occurs on that planet.

Back in the day (1930s through 1980s), monthly science fiction magazines were a major venue for the genre and the primary path for aspiring authors to break into print. Sold on newsstands for the price of a few comic books, they were the way generations of young readers (including this one) discovered the limitless universe of science fiction. A typical issue might contain five or six short stories, a longer piece (novella or novelette), and a multi-month serialisation of a novel, usually by an established author known to the readers. For example, Frank Herbert's Dune was serialised in two long runs in Analog in 1963 and 1965 before its hardcover publication in 1965. In addition, there were often book reviews, a column about science fact (Fantasy and Science Fiction published a monthly science column by Isaac Asimov which ran from 1958 until shortly before his death in 1992—399 columns in all), a lively letters to the editor section, and an editorial. All of the major science fiction monthlies welcomed unsolicited manuscripts from unpublished authors, and each issue was likely to contain one or two stories from the “slush pile” which the editor decided made the cut for the magazine. Most of the outstanding authors of the era broke into the field this way, and some editors, such as John W. Campbell of Astounding (later Analog), invested much time and effort in mentoring promising talents and developing them into a reliable stable of writers to fill the pages of their magazines.

By the 1990s, monthly science fiction magazines were in decline, and the explosion of science fiction novel publication had reduced the market for short fiction. By the year 2000, only three remained in the U.S., and their circulations continued to erode. Various attempts to revive a medium for short fiction have been tried, including Web magazines. This collection is an example of another genre: the original anthology. While most anthologies published in book form in the heyday of the magazines had previously been published in the magazines (authors usually only sold the magazine “first North American serial rights” and retained the right to subsequently sell the story to the publisher of an anthology), original anthologies contain never-before-published stories, usually collected around a theme such as the planet Earth here.

I got this book (I say “got” as opposed to “bought” because the Kindle edition is free to Kindle Unlimited subscribers and I “borrowed” it as one of the ten titles I can check out for reading at a given time) because it contained the short story, “The Hidden Conquest”, by Hans G. Schantz, author of the superb Hidden Truth series of novels (1, 2, 3), which was said to be a revealing prequel to the story in the books. It is, and it is excellent, although you probably won't appreciate how much of a reveal it is unless you've read the books, especially 2018's The Brave and the Bold.

The rest of the stories are…uneven: about what you'd expect from a science fiction magazine in the 1950s or '60s. Some are gimmick stories, others are shoot-em-up action tales, while still others are just disappointing and probably should have remained in the slush pile or been returned to their authors with a note attached to the rejection slip offering a few suggestions and encouragement to try again. Copy editing is sloppy, complete with a sprinkling of idiot “its/it's” confusions, plus the obligatory “pulled hard on the reigns”, “miniscule”, and take your “breathe” away.

But hey, if you got it from Kindle Unlimited, you can hardly say you didn't get your money's worth, and you're perfectly free to borrow it, read the Hans Schantz story, and return it same day. I would not pay the US$4 to buy the Kindle edition outright, and fifteen bucks for a paperback is right out.

 Permalink

Hanson, Victor Davis. The Case for Trump. New York: Basic Books, 2019. ISBN 978-1-5416-7354-0.
The election of Donald Trump as U.S. president in November 2016 was a singular event in the history of the country. Never before had anybody been elected to that office without any prior experience in either public office or the military. Trump, although running as a Republican, had no long-term affiliation with the party and had cultivated no support within its establishment, elected officials, or the traditional donors who support its candidates. He turned his back on the insider consultants and “experts” who had advised GOP candidate after candidate in their “defeat with dignity” at the hands of a ruthless Democrat party willing to burn any bridge to win. From well before he declared his candidacy he established a direct channel to a mass audience, bypassing media gatekeepers via Twitter and frequent appearances in all forms of media, who found him a reliable boost to their audience and clicks. He was willing to jettison the mumbling points of the cultured Beltway club and grab “third rail” issues of which they dared not speak such as mass immigration, predatory trade practices, futile foreign wars, and the exporting of jobs from the U.S. heartland to low-wage sweatshops overseas.

He entered a free-for-all primary campaign as one of seventeen major candidates, including present and former governors, senators, and other well-spoken and distinguished rivals and, one by one, knocked them out, despite resolute and sometimes dishonest bias by the media hosting debates, often through “verbal kill shots” which made his opponents the target of mockery and pinned sobriquets on them (“low energy Jeb”, “little Marco”, “lyin' Ted”) they couldn't shake. His campaign organisation, if one can dignify it with the term, was completely chaotic and his fund raising nothing like the finely-honed machines of establishment favourites like Jeb Bush, and yet his antics resulted in his getting billions of dollars worth of free media coverage even on outlets who detested and mocked him.

One by one, he picked off his primary opponents and handily won the Republican presidential nomination. This unleashed a phenomenon the likes of which had not been seen since the Goldwater insurgency of 1964, but far more virulent. Pillars of the Republican establishment and Conservatism, Inc. were on the verge of cardiac arrest, advancing fantasy scenarios to deny the nomination to its winner, publishing issues of their money-losing and subscription-shedding little magazines dedicated to opposing the choice of the party's voters, and promoting insurgencies such as the candidacy of Egg McMuffin, whose bona fides as a man of the people were evidenced by his earlier stints with the CIA and Goldman Sachs.

Predictions that post-nomination, Trump would become “more presidential” were quickly falsified as the chaos compounded, the tweets came faster and funnier, and the mass rallies became ever more frequent and raucous. One thing that was obvious to anybody looking dispassionately at what was going on, without the boiling blood of hatred and disdain of the New York-Washington establishment, was that the candidate was having the time of his life, and so were the people who attended the rallies. But still, all of the wise men of the coastal corridor knew what must happen. On the eve of the general election, polls put the probability of a Trump victory somewhere between 1 and 15 percent. The outlier was Nate Silver, who went out on a limb and put Trump's chances as high as 29%, to the scorn of his fellow “progressives” and pollsters.

And yet, Trump won, and handily. Yes, he lost the popular vote, but that was simply due to the urban coastal vote for which he could not contend and wisely made no attempt to attract, knowing such an effort would be futile and a waste of his scarce resources (estimates are his campaign spent around half that of Clinton's). This book by classicist, military historian, professor, and fifth-generation California farmer Victor Davis Hanson is an in-depth examination of, in the words of the defeated candidate, “what happened”. There is a great deal of wisdom here.

First of all, a warning to the prospective reader. If you read Dr Hanson's columns regularly, you probably won't find a lot here that's new. This book is not one of those that's obviously Frankenstitched together from previously published columns, but in assembling their content into chapters focussing on various themes, there's been a lot of cut and paste, if not literally at the level of words, at least in terms of ideas. There is value in seeing it all presented in one package, but be prepared to say, from time to time, “Haven't I read this before?”

That caveat lector aside, this is a brilliant analysis of the Trump phenomenon. Hanson argues persuasively that it is very unlikely any of the other Republican contenders for the nomination could have won the general election. None of them were talking about the issues which resonated with the erstwhile “Reagan Democrat” voters who put Trump over the top in the so-called “blue wall” states, and it is doubtful any of them would have ignored their Beltway consultants and campaigned vigorously in states such as Michigan, Wisconsin, and Pennsylvania which were key to Trump's victory. Given that the Republican defeat which would likely have been the result of a Bush (again?), Rubio, or Cruz candidacy would have put the Clinton crime family back in power and likely tipped the Supreme Court toward the slaver agenda for a generation, that alone should give pause to “never Trump” Republicans.

How will it all end? Nobody knows, but Hanson provides a variety of perspectives drawn from everything from the Byzantine emperor Justinian's battle against the deep state to the archetype of the rough-edged outsider brought in to do what the more civilised can't or won't—the tragic hero from Greek drama to Hollywood westerns. What is certain is that none of what Trump is attempting, whether it ends in success or failure, would be happening if any of his primary opponents or the Democrat in the general election had prevailed.

I believe that Victor Davis Hanson is one of those rare people who have what I call the “Orwell gift”. Like George Orwell, he has the ability to look at the facts, evaluate them, and draw conclusions without any preconceived notions or filtering through an ideology. What is certain is that with the election of Donald Trump in 2016 the U.S. dodged a bullet. Whether that election will be seen as a turning point which reversed the decades-long slide toward tyranny by the administrative state, destruction of the middle class, replacement of the electorate by imported voters dependent upon the state, erosion of political and economic sovereignty in favour of undemocratic global governance, and the eventual financial and moral bankruptcy which are the inevitable result of all of these, or just a pause before the deluge, is yet to be seen. Hanson's book is an excellent, dispassionate, well-reasoned, and thoroughly documented view of where things stand today.

 Permalink

Wood, Fenton. Five Million Watts. Seattle: Amazon Digital Services, 2019. ASIN B07R6X973N.
This is the second short novel/novella (123 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and this sequel turns it all the way up to “enchanting”. As with the first book, you're reading along thinking this is a somewhat nerdy young adult story, then something happens or is mentioned in passing and suddenly, “Whoa—I didn't see that coming!”, and you realise the Yankee Republic is a strange and enchanted place, and that, as in the work of Philip K. Dick, there is a lot more going on than you suspected, and much more to be discovered in future adventures.

This tale begins several years after the events of the first book. Philo Hergenschmidt (the only character from Pirates to appear here) has grown up, graduated from Virginia Tech, and after a series of jobs keeping antiquated equipment at rural radio stations on the air, arrives in the Republic's storied metropolis of Iburakon to seek opportunity, adventure, and who knows what else. (If you're curious where the name of the city came from, here's a hint, but be aware it may be a minor spoiler.) Things get weird from the very start when he stops at an information kiosk and encounters a disembodied mechanical head who says it has a message for him. The message is just an address, and when he goes there he meets a very curious character who goes by a variety of names ranging from Viridios to Mr Green, surrounded by a collection of keyboard instruments including electronic synthesisers with strange designs.

Viridios suggests Philo aim for the very top and seek employment at legendary AM station 2XG, a broadcasting pioneer that went on the air in 1921, before broadcasting was regulated, and which in 1936 increased its power to five million watts. When other stations' maximum power was restricted to 50,000 watts, 2XG was grandfathered and allowed to continue to operate at 100 times more, enough to cover the continent far beyond the borders of the Yankee Republic into the mysterious lands of the West.

Not only does 2XG broadcast with enormous power, it was also permitted to retain its original 15 kHz bandwidth, allowing high-fidelity broadcasting and even, since the 1950s, stereo (for compatible receivers). However, in order to retain its rights to the frequency and power, the station was required to stay on the air continuously, with any outage longer than 24 hours forfeiting its rights to hungry competitors.
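For the radio-minded, the power figures in the story work out neatly. This is a back-of-the-envelope sketch of my own, not anything from the book: the 100-fold power advantage over a 50,000 watt station amounts to 20 decibels, and since ground-wave field strength scales roughly as the square root of transmitter power, 2XG's signal at a given distance would be about ten times stronger than a conventional clear-channel station's (idealised, ignoring terrain and ionospheric propagation).

```python
import math

# Illustrative figures from the story: 2XG's grandfathered transmitter
# power versus the 50,000 watt cap imposed on other stations.
p_2xg = 5_000_000.0   # watts
p_cap = 50_000.0      # watts

power_ratio = p_2xg / p_cap             # 100x the power
db_gain = 10 * math.log10(power_ratio)  # 20 dB stronger signal

# Ground-wave field strength scales roughly as sqrt(power), so 100x
# the power gives about 10x the field strength at a given distance.
field_ratio = math.sqrt(power_ratio)

print(power_ratio, db_gain, field_ratio)  # 100.0 20.0 10.0
```

The square-root scaling is why raw power runs into diminishing returns: covering a continent from one tower takes megawatts, which is exactly what makes 2XG's grandfathered licence so valuable in the story.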

The engineers who maintained this unique equipment were a breed apart, the pinnacle of broadcast engineering. Philo manages to secure a job as a junior technician, which means he'll never get near the high power RF gear or antenna (all of which are one-off custom), but sets to work on routine maintenance of studio gear and patching up ancient tube gear when it breaks down. Meanwhile, he continues to visit Viridios and imbibe his tales of 2XG and the legendary Zaros the Electromage who designed its transmitter, the operation of which nobody completely understands today.

As he hears tales of the Old Religion, the gods of the spring and grain, and the time of the last ice age, Philo concludes Viridios is either the most magnificent liar he has ever encountered or—something else again.

Climate change is inexorably closing in on Iburakon. Each year is colder than the last, the growing season is shrinking, and it seems inevitable that before long the glaciers will resume their march from the north. Viridios is convinced that the only hope lies in music, performing a work rooted in that (very) Old Time Religion which caused a riot in its only public performance decades before, broadcast with the power of 2XG and performed with breakthrough electronic music instruments of his own devising.

Viridios is very odd, but also persuasive, and he has a history with 2XG. The concert is scheduled, and Philo sets to work restoring long-forgotten equipment from the station's basement and building new instruments to Viridios' specifications. It is a race against time, as the worst winter storm in memory threatens 2XG and forces Philo to confront one of his deepest fears.

Working on a project on the side, Philo discovers what may be the salvation of 2XG, but also as he looks deeper, possibly the door to a new universe. Once again, we have a satisfying, heroic, and imaginative story, suitable for readers of all ages, that leaves you hungry for more.

At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is inexpensive to buy. Those eagerly awaiting the next opportunity to visit the Yankee Republic will look forward to the publication of volume 3, The Tower of the Bear, in October, 2019.

 Permalink

Manto, Cindy Donze. Michoud Assembly Facility. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-5316-6969-0.
In March, 1763, King Louis XV of France made a land grant of 140 square kilometres to Gilbert Antoine St Maxent, the richest man in Louisiana Territory and commander of the militia. The grant required St Maxent to build a road across the swampy property, develop a plantation, and reserve all the trees in forested areas for the use of the French navy. When the Spanish took over the territory five years later, St Maxent changed his first names to “Gilberto Antonio” and retained title to the sprawling estate. In the decades that followed, the property changed hands and nations several times, eventually, now part of the United States, being purchased by another French immigrant, Antoine Michoud, who had left France after the fall of Napoleon, whom his father had served as an official.

Michoud rapidly established himself as a prosperous businessman in bustling New Orleans, and after purchasing the large tract of land set about buying pieces which had been sold off by previous owners, re-assembling most of the original French land grant into one of the largest private land holdings in the United States. The property was mostly used as a sugar plantation, although territory and rights were ceded over the years for construction of a lighthouse, railroads, and telegraph and telephone lines. Much of the land remained undeveloped, and like other parts of southern Louisiana was a swamp or, as they now say, “wetlands”.

The land remained in the Michoud family until 1910, when it was sold in its entirety for US$410,000 in cash (around US$11 million today) to a developer who promptly defaulted, leading to another series of changes of ownership and dodgy plans for the land, which most people continued to refer to as the Michoud Tract. At the start of World War II, the U.S. government bought a large parcel, initially intended for construction of Liberty ships. Those plans quickly fell through, but eventually a huge plant was erected on the site which, starting in 1943, began to manufacture components for cargo aircraft and lifeboats, as well as parts used in the Manhattan Project's isotope separation plants in Oak Ridge, Tennessee.

At the end of the war, the plant was declared surplus but, a few years later, with the outbreak of the Korean War, it was re-purposed to manufacture engines for Army tanks. It continued in that role until 1954 when it was placed on standby and, in 1958, once again declared surplus. There things stood until mid-1961 when NASA, charged by the new Kennedy administration to “put a man on the Moon” was faced with the need to build rockets in sizes and quantities never before imagined, and to do so on a tight schedule, racing against the Soviet Union.

In June, 1961, Wernher von Braun, director of the NASA Marshall Space Flight Center in Huntsville, Alabama, responsible for designing and building those giant boosters, visited the then-idle Michoud Ordnance Plant and declared it ideal for NASA's requirements. It had 43 acres (17 hectares) under one roof, the air conditioning required for precision work in the Louisiana climate, and was ready to occupy. Most critically, it was located adjacent to navigable waters which would allow the enormous rocket stages, far too big to be shipped by road, rail, or air, to be transported on barges to and from Huntsville for testing and Cape Canaveral in Florida to be launched.

In September 1961 NASA officially took over the facility, renaming it “Michoud Operations”, to be managed by NASA Marshall as the manufacturing site for the rockets they designed. Work quickly got underway to set up manufacturing of the first stage of the Saturn I and 1B rockets and prepare to build the much larger first stage of the Saturn V Moon rocket. Before long, new buildings dedicated to assembly and test of the new rockets, occupied both by NASA and its contractors, began to spring up around the original plant. In 1965, the installation was renamed the Michoud Assembly Facility, which name it bears to this day.

With the end of the Apollo program, it looked like Michoud might once again be headed for white elephant status, but the design selected for the Space Shuttle included a very large External Tank comparable in size to the first stage of the Saturn V which would be discarded on every flight. Michoud's fabrication and assembly facilities, and its access to shipping by barge were ideal for this component of the Shuttle, and a total of 135 tanks built at Michoud were launched on Shuttle missions between 1981 and 2011.

The retirement of the Space Shuttle once again put the future of Michoud in doubt. It was originally tapped to build the core stage of the Constellation program's Ares V booster, which was similar in size and construction to the Shuttle External Tank. The cancellation of Constellation in 2010 brought that to a halt, but then Congress and NASA rode to the rescue with the absurd-as-a-rocket but excellent-as-a-jobs-program Space Launch System (SLS), whose centre core stage also resembles the External Tank and Ares V. SLS first stage fabrication is presently underway at Michoud. Perhaps when the schedule-slipping, budget-busting SLS is retired after a few flights (if, in fact, it ever flies at all), bringing to a close the era of giant taxpayer-funded throwaway rockets, the Michoud facility can be repurposed to more productive endeavours.

This book is largely a history of Michoud in photos and captions, with text introducing chapters on each phase of the facility's history. All of the photos are in black and white, and are well-reproduced. In the Kindle edition many can be expanded to show more detail. There are a number of copy-editing and factual errors in the text and captions, but not too many to distract or mislead the reader. The unidentified “visitors” shown touring the Michoud facility in July 1967 (chapter 3, Kindle location 392) are actually the Apollo 7 crew, Walter Schirra, Donn Eisele, and Walter Cunningham, who would fly on a Michoud-built Saturn 1B in October 1968.

For a book of just 130 pages, most of which are black and white photographs, the hardcover is hideously expensive (US$29 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.

 Permalink

Wright, Tom and Bradley Hope. Billion Dollar Whale. New York: Hachette Books, 2018. ISBN 978-0-316-43650-2.
Low Taek Jho, who westernised his name to “Jho Low”, which I will use henceforth, was the son of a wealthy family in Penang, Malaysia. The family's fortune had been founded by Low's grandfather, who had immigrated to the then British colony of Malaya from China and founded a garment manufacturing company which Low's father had continued to build and recently sold for a sum of around US$15 million. The Low family were among the wealthiest in Malaysia and wanted the best for their son. For the last two years of his high school education, Jho was sent to the Harrow School, a prestigious private British boarding school whose alumni include seven British Prime Ministers, among them Winston Churchill and Robert Peel, and “foreign students” including Jawaharlal Nehru and King Hussein of Jordan. At Harrow, he would meet classmates whose families' wealth was in the billions, and his ambition to join their ranks was fired.

After graduating from Harrow, Low decided the career he wished to pursue would be better served by a U.S. business education than the traditional Cambridge or Oxford path chosen by many Harrovians and enrolled in the University of Pennsylvania's Wharton School undergraduate program. Previous Wharton graduates include Warren Buffett, Walter Annenberg, Elon Musk, and Donald Trump. Low majored in finance, but mostly saw Wharton as a way to make connections. Wharton was a school of choice for the sons of Gulf princes and billionaires, and Low leveraged his connections, while still an undergraduate, into meetings in the Gulf with figures such as Yousef Al Otaiba, foreign policy adviser to the sheikhs running the United Arab Emirates. Otaiba, in turn, introduced him to Khaldoon Khalifa Al Mubarak, who ran a fund called Mubadala Development, which was on the cutting edge of the sovereign wealth fund business.

Since the 1950s resource-rich countries, in particular the petro-states of the Gulf, had set up sovereign wealth funds to invest the surplus earnings from sales of their oil. The idea was to replace the natural wealth which was being extracted and sold with financial assets that would generate income, appreciate over time, and serve as the basis of their economies when the oil finally ran out. By the early 2000s, the total funds under management by sovereign wealth funds were US$3.5 trillion, comparable to the annual gross domestic product of Germany. Sovereign wealth funds were originally run in a very conservative manner, taking few risks—“gentlemen prefer bonds”—but since the inflation and currency crises of the 1970s had turned to more aggressive strategies to protect their assets from the ravages of Western money printing and financial shenanigans.

While some sovereign wealth funds, for example Norway's (with around US$1 trillion in assets the largest in the world), are models of transparency and prudent (albeit often politically correct) investing, others, including some in the Gulf states, are accountable only to autocratic ruler(s) and have been suspected of acting as personal slush funds. On the other hand, managers of Gulf funds must be aware that bad investment decisions may cost them not only their jobs but their heads.

Mubadala was a new kind of sovereign wealth fund. Rather than a conservative steward of assets for future generations, it was run more like a leveraged Wall Street hedge fund: borrowing on global markets, investing in complex transactions, and aiming to develop the industries which would sustain the local economy when the oil inevitably ran out. Jho Low saw Al Mubarak, not yet thirty years old, making billion dollar deals almost entirely at his own discretion, playing a role on the global stage, driving the development of Abu Dhabi's economy, and being handsomely compensated for his efforts. That's the game Low wanted to be in, and he started working toward it.

Before graduating from Wharton, he set up a British Virgin Islands company he named the “Wynton Group”, which stood for his goal to “win tons” of money. After graduation in 2005 he began to pitch the contacts he'd made through students at Harrow and Wharton on deals he'd identified in Malaysia, acting as an independent development agent. He put together a series of real estate deals, bringing money from his Gulf contacts and persuading other investors that large sovereign funds were on board by making token investments from offshore companies he'd created whose names mimicked those of well-known funds. This is a trick he would continue to use in the years to come.

Still, he kept his eye on the goal: a sovereign wealth fund, based in Malaysia, that he could use for his own ends. In April 2009 Najib Razak became Malaysia's prime minister. Low had been cultivating a relationship with Najib since he met him through his stepson years before in London. Now it was time to cash in. Najib needed money to shore up his fragile political position and Low was ready to pitch him how to get it.

Shortly after taking office, Najib announced the formation of the 1Malaysia Development Berhad, or 1MDB, a sovereign wealth fund aimed at promoting foreign direct investment in projects to develop the economy of Malaysia and benefit all of its ethnic communities: those of Malay, Chinese, and Indian ancestry (hence “1Malaysia”). Although Jho Low had no official position with the fund, he was the one who promoted it, sold Najib on it, and took the lead in raising its capital, both from his contacts in the Gulf and, leveraging that money, in the international debt markets with the assistance of the flexible ethics and unquenchable greed of Goldman Sachs and its ambitious go-getters in Asia.

Low's pitch to the prime minister, either explicit or nod-nod, wink-wink, went well beyond high-minded goals such as developing the economy, bringing all ethnic groups together, and creating opportunity. In short, what “corporate social responsibility” really meant was using the fund as Najib's personal piggy bank, funded by naïve foreign investors, to reward his political allies and buy votes, shutting out the opposition. Low told Najib that at the price of aligning his policies with those of his benefactors in the Gulf, he could keep the gravy train running and ensure his tenure in office for the foreseeable future.

But what was in it for Low, apart from commissions, finder's fees, and the satisfaction of benefitting his native land? Well, rather more, actually. No sooner did the money hit the accounts of 1MDB than Low set up a series of sham transactions with deceptively-named companies to spirit the money out of the fund and put it into his own pockets. And now it gets a little bit weird for this scribbler. At the centre of all of this skulduggery was a private Swiss bank named BSI. This was my bank. I mean, I didn't own the bank (thank Bob!), but I'd been doing business there (or with its predecessors, before various mergers and acquisitions) since before Jho Low was born. In my dealings with them they were the soul of probity and beyond reproach, but you never know what's going on on the other side of the office, or especially in its branch office in the Wild East of Singapore. Part of the continuo to this financial farce was the running battle between BSI's compliance people, who kept saying, “Wait, this doesn't make any sense”, and the transaction-side people eyeing the commissions to be earned for moving the money from who-knows-where to who-knows-whom. But, back to the main story.

Ultimately, Low's looting pipeline worked, and he spirited away most of the proceeds of the initial funding of 1MDB into his own accounts or those he controlled. There is a powerful lesson here, as applicable to security of computer systems or access to physical infrastructure as financial assets. Try to chisel a few pennies from your credit card company and you'll be nailed. Fudge a little on your tax return, and it's hard time, serf. But when you play at the billion dollar level, the system was almost completely undefended against an amoral grifter who was bent not on a subtle and creative form of looting in the Bernie Madoff or Enron mold, but simply brazenly picking the pockets of a massive fund through childishly obvious means such as deceptively named offshore shell corporations, shuffling money among accounts in a modern-day version of check kiting, and appealing to banks' hunger for transaction fees over their ethical obligations to their owners and other customers.

Nobody knows how much Jho Low looted from 1MDB in this and subsequent transactions. Estimates of the total money spirited out of the fund range as high as US$4.5 billion, and Low's profligate spending alone as he was riding high may account for a substantial fraction of that.

Much of the book is an account of Low's lifestyle when he was riding high. He was not only utterly amoral when it came to bilking investors, leaving the poor of Malaysia on the hook, but seemingly incapable of looking beyond the next party, gambling spree, or debt repayment. It's like he always thought there'd be a greater fool to fleece, and that there was no degree of wretched excess in his spending which would invite the question “How did he earn this money?” I'm not going to dwell upon this. It's boring. Stylish criminals whose lifestyles are as suave as their crimes are elegant can at least be interesting. Grifters who blow money on down-market parties with gutter rappers and supermarket tabloid celebrities aren't. In a marvellous example of meta-irony, Low funded a Hollywood movie production company which made the film The Wolf of Wall Street, about a cynical grifter like Low himself.

And now comes the part where I tell you how it all came undone, everybody got their just deserts, and the egregious perpetrators are languishing behind bars. Sorry, not this time, or at least not yet.

Jho Low escaped pursuit on his luxury super-yacht and now is reputed to be living in China, travelling freely and living off his ill-gotten gains. The “People's Republic” seems quite hospitable to those who loot the people of its neighbours (assuming they adequately grease the palms of its rulers).

Goldman Sachs suffered no sanctions as a result of its complicity in the 1MDB funding and the appropriation of funds.

BSI lost its Swiss banking licence, but was acquired by another bank and most of its employees, except for a few involved in dealing with Low, kept their jobs. (My account was transferred to the successor bank with no problems. They never disclosed the reason for the acquisition.)

This book, by the two Wall Street Journal reporters who untangled what may be the largest one-man financial heist in human history, provides a look inside the deeply corrupt world of paper money finance at its highest levels, and is an illustration of the extent to which people are disinclined to ask obvious questions like “Where is the money coming from?” while the good times are rolling. What is striking is how banal the whole affair is. Jho Low's talents would have made him a great success in legitimate development finance, but instead he managed to steal billions, ultimately from mostly poor people in his native land, and blow the money on wild parties, shallow celebrities, ostentatious real estate, cars, and yachts, and binges of high-stakes gambling in skeevy casinos. The collapse of the whole tawdry business reflects poorly on institutions like multinational investment banks, large accounting and auditing firms, financial regulators, Swiss banks, and the whole “sustainable development” racket in the third world. Jho Low, a crook through and through, looked at these supposedly august institutions and recognised them as kindred spirits and then figured out transparently simple ways to use them to steal billions. He got away with it, and they are still telling governments, corporations, and investors how to manage their affairs and, inexplicably, being taken seriously and handsomely compensated for their “expertise”.

 Permalink

Kroese, Robert. The Voyage of the Iron Dragon. Grand Rapids MI: St. Culain Press, 2019. ISBN 978-1-7982-3431-0.
This is the third and final volume in the Iron Dragon trilogy which began with The Dream of the Iron Dragon (August 2018) and continued in The Dawn of the Iron Dragon (May 2019). When reading a series of books I've discovered, I usually space them out to enjoy them over time, but the second book of this trilogy left its characters in such a dire pickle I just couldn't wait to see how the author managed to wrap up the story in just one more book, and dove right into the concluding volume. It is a satisfying end to the saga, albeit in some places seeming rushed compared to the more deliberate development of the story and characters in the first two books.

First of all, a note. Despite being published in three books, this is one huge, sprawling story which stretches over more than a thousand pages, decades of time, and locations as far-flung as Constantinople, Iceland, the Caribbean, and North America, whose cultures are joined by human spacefarers from the future, Vikings, and an alien race called the Cho-ta'an bent on exterminating humans from the galaxy. You should read the three books in order: Dream, Dawn, and Voyage. If you start in the middle, despite the second and third volumes' having a brief summary of the story so far, you'll be completely lost as to who the characters are, what they're trying to do, and how they ended up pursuing the desperate and seemingly impossible task in which they are engaged (building an Earth-orbital manned spacecraft in the middle ages while leaving no historical traces of their activity which later generations of humans might find). “Read the whole thing,” in order. It's worth it.

With the devastating events which concluded the second volume, the spacemen are faced with an even more daunting challenge than that in which they were previously engaged, and with far less confidence of success in their mission of saving humanity in its war for survival against the Cho-ta'an more than 1500 years in their future. As this book begins, more than two decades have passed since the spacemen crashed on Earth. They have patiently been building up the infrastructure required to build their rocket, establishing mining, logging, materials processing, and manufacturing at a far-flung series of camps all linked together by Viking-built and -crewed oceangoing ships. Just as important as tools and materials is human capital: the spacemen have had to set up an ongoing programme to recruit, educate, and train the scientists, engineers, technicians, drafters, managers, and tradespeople of all kinds needed for a 20th century aerospace project, all in a time when only a tiny fraction of the population is literate, and they have reluctantly made peace with the Viking way of “recruiting” the people they need.

The difficulty of all of this is compounded by the need to operate in absolute secrecy. Experience has taught the spacemen that, having inadvertently travelled into Earth's past, they cannot change history. Consequently, nothing they do can interfere in any way with the course of recorded human history, because that would conflict with what actually happened and would therefore be doomed to failure. In addition, some Cho-ta'an who landed on Earth may still be alive and bent on stopping their project. While the spacemen must work technological miracles to have a slim chance of saving humanity, the Cho-ta'an need only thwart them in any one of a multitude of ways to win. Their only hope is to disappear.

The story is one of dogged persistence, ingenuity in the face of formidable obstacles everywhere; dealing with adversaries as varied as Viking chieftains, the Vatican, Cho-ta'an aliens, and native American tribes; epic battles; disheartening setbacks; and inspiring triumphs. It is a heroic story on a grand scale, worthy of inclusion among the great epics of science fiction's earlier golden ages.

When it comes to twentieth century rocket engineering, there are a number of goofs and misconceptions in the story, almost all of which could have been remedied without any impact on the plot. Although they aren't precisely plot spoilers, I'll take them behind the curtain for space-nerd readers who wish to spot them for themselves without foreknowledge.

Spoiler warning: Plot and/or ending details follow.  
  • In chapter 7, Alma says, “The Titan II rockets used liquid hydrogen for the upper stages, but they used kerosene for the first stage.” This is completely wrong. The Titan II was a two stage rocket and used the same hypergolic propellants (hydrazine fuel and dinitrogen tetroxide oxidiser) in both the first and second stages.
  • In chapter 30 it is claimed “While the first stage of a Titan II rocket could be powered by kerosene, the second and third stages needed a fuel with a higher specific impulse in order to reach escape velocity of 25,000 miles per hour.” Oh dear—let's take this point by point. First of all, the first stage of the Titan II was not and could not be powered by kerosene. It was designed for hypergolic fuels, and its turbopumps and lack of an igniter would not work with kerosene. As described below, the earlier Titan I used kerosene, but the Titan II was a major re-design which could not be adapted for kerosene. Second, the second stage of the Titan II used the same hypergolic propellant as the first stage, and this propellant had around the same specific impulse as kerosene and liquid oxygen. Third, the Titan II did not have a third stage at all. It delivered the Gemini spacecraft into orbit using the same two stage configuration as the ballistic missile. The Titan II was later adapted to use a third stage for unmanned space launch missions, but a third stage was never used in Project Gemini. Finally, the mission of the Iron Dragon, like that of the Titan II launching Gemini, was to place its payload in low Earth orbit with a velocity of around 17,500 miles per hour, not escape velocity of 25,000 miles per hour. Escape velocity would fling the payload into orbit around the Sun, not on an intercept course with the target in Earth orbit.
  • In chapter 45, it is stated that “Later versions of the Titan II rockets had used hypergolic fuels, simplifying their design.” This is incorrect: the Titan I rocket used liquid oxygen and kerosene (not liquid hydrogen), while the Titan II, a substantially different missile, used hypergolic propellants from inception. Basing the Iron Dragon's design upon the Titan II and then using liquid hydrogen and oxygen makes no sense at all and wouldn't work. Liquid hydrogen is much less dense than the hypergolic fuel used in the Titan II and would require a much larger fuel tank of entirely different design, incorporating insulation which was unnecessary on the Titan II. These changes would ripple all through the design, resulting in an entirely different rocket. In addition, the low density of liquid hydrogen would require an entirely different turbopump design and, not being hypergolic with liquid oxygen, would require a different pre-burner to drive the turbopumps.
  • A few sentences later, it is said that “Another difficult but relatively straightforward problem was making the propellant tanks strong enough to be pressurized to 5,000 psi but not so heavy they impeded the rocket's journey to space.” This isn't how tank pressurisation works in liquid fuelled rockets. Tanks are pressurised to increase structural rigidity and provide positive flow into the turbopumps, but pressures are modest. The pressure needed to force propellants into the combustion chamber comes from the boost imparted by the turbopumps, not propellant tank pressurisation. For example, in the Space Shuttle's External Tank, the flight pressure of the liquid hydrogen tank was between 32 and 34 psia, and the liquid oxygen tank 20 to 22 psig, vastly less than “5,000 psi”. A fuel tank capable of withstanding 5,000 psi would be far too heavy to ever get off the ground.
  • In chapter 46 we are told, “The Titan II had been adapted from the Atlas intercontinental ballistic missile….” This is completely incorrect. In fact, the Titan I was developed as a backup to the Atlas in case the latter missile's innovative technologies such as the pressure-stabilised “balloon tanks” could not be made to work. The Atlas and Titan I were developed in parallel and, when the Atlas went into service first, the Titan I was quickly retired and replaced by the hypergolic fuelled Titan II, which provided more secure basing and rapid response to a launch order than the Atlas.
  • In chapter 50, when the Iron Dragon takes off, those viewing it “squinted against the blinding glare”. But liquid oxygen and liquid hydrogen (as well as the hypergolic fuels used by the original Titan II) burn with a nearly invisible flame. Liquid oxygen and kerosene produce a brilliant flame, but these propellants were not used in this rocket.
  • And finally, it's not a matter of the text, but what's with that cover illustration, anyway? The rocket ascending in the background is clearly modelled on a Soviet/Russian R-7/Soyuz rocket, which is nothing like what the Iron Dragon is supposed to be. While Iron Dragon is described as a two stage rocket burning liquid hydrogen and oxygen, Soyuz is a LOX/kerosene rocket (and the illustration has the characteristic bright flame of those propellants), has four side boosters (clearly visible), and the spacecraft has a visible launch escape tower, which Gemini did not have and was never mentioned in connection with the Iron Dragon.
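For readers who want to check the orbital-versus-escape-velocity distinction drawn above, a quick back-of-the-envelope calculation (my own, not from the book) confirms the figures. The constants and the 200 km orbital altitude are my assumptions for a typical low Earth orbit:

```python
# Sanity check of the velocities discussed above, using the standard
# relations for a circular orbit and for escape from Earth's surface.
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
MPH_PER_MPS = 2.23694      # metres/second to miles/hour

# Circular orbital velocity at a 200 km low Earth orbit: v = sqrt(mu / r)
v_orbit = math.sqrt(MU_EARTH / (R_EARTH + 200e3))

# Escape velocity from the surface: v = sqrt(2 * mu / R)
v_escape = math.sqrt(2 * MU_EARTH / R_EARTH)

print(f"Low Earth orbit: {v_orbit * MPH_PER_MPS:,.0f} mph")
print(f"Escape velocity: {v_escape * MPH_PER_MPS:,.0f} mph")
```

Running this gives roughly 17,400 mph for low Earth orbit and 25,000 mph for escape from the surface, confirming that a rendezvous in low Earth orbit needs nothing like escape velocity.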

Fixing all of these results in the Iron Dragon's being a two stage (see the start of chapter 51) liquid hydrogen fuel, liquid oxygen oxidiser rocket of essentially novel design, sharing little with the Titan II. The present-day rocket which most resembles it is the Delta IV, which in its baseline (“Medium”) configuration is a two stage LOX/hydrogen rocket with more than adequate payload capacity to place a Gemini capsule in low Earth orbit. Its first stage RS-68 engines were designed to reduce complexity and cost, and would be a suitable choice for a project having to start from scratch. Presumably the database which provided the specifications of the Titan II would also include the Delta IV, and adapting it to their requirements (which would be largely a matter of simplifying and derating the design in the interest of reliability and ease of manufacture) would be much easier than trying to transform the Titan II into a LOX/hydrogen launcher.

Spoilers end here.  

Despite the minor quibbles in the spoiler section (which do not detract in any way from enjoyment of the tale), this is a rollicking good adventure and satisfying conclusion to the Iron Dragon saga. It seemed to me that the last part of the story was somewhat rushed and could have easily occupied another full book, but the author promised us a trilogy and that's what he delivered, so fair enough. In terms of accomplishing the mission upon which the spacemen and their allies had laboured for half a century, essentially all of the action occurs in the last quarter of this final volume, starting in chapter 44. As usual nothing comes easy, and the project must face a harrowing challenge which might undo everything at the last moment, then confront the cold equations of orbital mechanics. The conclusion is surprising and, while definitively ending this tale, leaves the door open to further adventures set in this universe.

This series has been a pure delight from start to finish. It wasn't obvious to this reader at the outset that it would be possible to pull time travel, Vikings, and spaceships together into a story that worked, but the author has managed to do so, while maintaining historical authenticity about a neglected period in European history. It is particularly difficult to craft a time travel yarn in which it is impossible for the characters to change the recorded history of our world, but this is another challenge the author rises to and almost makes it look easy. Independent science fiction is where readers will find the heroes, interesting ideas, and adventure which brought them to science fiction in the first place, and Robert Kroese is establishing himself as a prolific grandmaster of this exciting new golden age.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Yiannopoulos, Milo. Diabolical. New York: Bombardier Books, 2018. ISBN 978-1-64293-163-1.
Milo Yiannopoulos has a well-deserved and hard-earned reputation as a controversialist, inciter of outrage, and offender of all the right people. His acid wit and mockery of those amply deserving it causes some to dismiss what he says when he's deadly serious about something, as he is in this impassioned book about the deep corruption in the Roman Catholic church and its seeming abandonment of its historic mission as a bastion of the Christian values which made the West the West. It is an earnest plea for a new religious revival, from the bottom up, to rid the Church of its ageing, social justice indoctrinated hierarchy which, if not entirely homosexual, has tolerated widespread infiltration of the priesthood by sexually active homosexual men who have indulged their attraction to underage (but almost always post-pubescent) boys, and has been complicit in covering up these scandals and allowing egregious offenders to escape discipline and continue their predatory behaviour for many years.

Ever since emerging as a public figure, Yiannopoulos has had a target on his back. A young, handsome (he may prefer “fabulous”), literate, well-spoken, quick-witted, funny, flaming homosexual, Roman Catholic, libertarian-conservative, pro-Brexit, pro-Trump, prolific author and speaker who can fill auditoriums on college campuses and simultaneously entertain and educate his audiences, willing to debate the most vociferous of opponents, and who has the slaver Left's number and is aware of their vulnerability just at what they imagined was the moment of triumph, is the stuff of nightmares to those who count on ignorant legions of dim followers capable of little more than chanting rhyming slogans and littering. He had to be silenced, and to a large extent, he has been. But, like the Terminator, he's back, and he's aiming higher: for the Vatican.

It was a remarkable judo throw the slavers and their media accomplices on the left and “respectable right” used to rid themselves of this turbulent pest. The virtuosos of victimology managed to use the author's having been a victim of clerical sexual abuse, and spoken candidly about it, to effectively de-platform, de-monetise, disemploy, and silence him in the public sphere by proclaiming him a defender of pædophilia (which has nothing to do with the phenomenon he was discussing and of which he was a victim: homosexual exploitation of post-pubescent boys).

The author devotes a chapter to his personal experience and how it paralleled that of others. At the same time, he draws a distinction between what happened to him and the rampant homosexuality in some seminaries and serial abuse by prelates in positions of authority and its being condoned and covered up by the hierarchy. He traces the blame all the way to the current Pope, whose collectivist and social justice credentials were apparent to everybody before his selection. Regrettably, he concludes, Catholics must simply wait for the Pope to die or retire, while laying the ground for a revival and restoration of the faith which will drive the choice of his successor.

Other chapters discuss the corrosive influence of so-called “feminism” on the Church and how it has corrupted what was once a manly warrior creed that rolled back the scourge of Islam when it threatened civilisation in Europe and is needed now more than ever after politicians seemingly bent on societal suicide have opened the gates to the invaders; how utterly useless and clueless the legacy media are in covering anything relating to religion (a New York Times reporter asked First Things editor Fr Richard John Neuhaus what he made of the fact that the newly elected pope was “also” going to be named the bishop of Rome); and how the rejection and collapse of Christianity as a pillar of the West risks its replacement with race as the central identity of the culture.

The final chapter quotes Chesterton (from Heretics, 1905),

Everything else in the modern world is of Christian origin, even everything that seems most anti-Christian. The French Revolution is of Christian origin. The newspaper is of Christian origin. The anarchists are of Christian origin. Physical science is of Christian origin. The attack on Christianity is of Christian origin. There is one thing, and one thing only, in existence at the present day which can in any sense accurately be said to be of pagan origin, and that is Christianity.

Much more is at stake than one sect (albeit the largest) of Christianity. The infiltration, subversion, and overt attacks on the Roman Catholic church are an assault upon an institution which has been central to Western civilisation for two millennia. If it falls, and it is falling, in large part due to self-inflicted wounds, the forces of darkness will be coming for the smaller targets next. Whatever your religion, or whether you have one or not, collapse of one of the three pillars of our cultural identity is something to worry about and work to prevent. In the author's words, “What few on the political Right have grasped is that the most important component in this trifecta isn't capitalism, or even democracy, but Christianity.” With all three under assault from all sides, this book makes an eloquent argument to secular free marketeers and champions of consensual government not to ignore the cultural substrate which allowed both to emerge and flourish.

 Permalink

July 2019

Suarez, Daniel. Delta-v. New York: Dutton, 2019. ISBN 978-1-5247-4241-6.
James Tighe is an extreme cave diver, pushing the limits of human endurance and his equipment to go deeper, farther, and into unexplored regions of underwater caves around the world. While exploring the depths of a cavern in China, an earthquake triggers disastrous rockfalls in the cave, killing several members of his expedition. Tighe narrowly escapes with his life, leading the survivors to safety, and the video he recorded with his helmet camera has made him an instant celebrity. He is surprised and puzzled when invited by billionaire and serial entrepreneur Nathan Joyce to a party on Joyce's private island in the Caribbean. Joyce meets privately with Tighe and explains that his theory of economics predicts a catastrophic collapse of the global debt bubble in the near future, with the potential to destroy modern civilisation.

Joyce believes that the only way to avert this calamity is to jump start the human expansion into the solar system, thus creating an economic expansion into a much larger sphere of activity than one planet and allowing humans to “grow out” of the crushing debt their profligate governments have run up. In particular, he believes that asteroid mining is the key to opening the space frontier, as it will provide a source of raw materials which do not have to be lifted at prohibitive cost out of Earth's deep gravity well. Joyce intends to use part of his fortune to bootstrap such a venture, and invites Tighe to join a training program to select a team of individuals ready to face the challenges of long-term industrial operations in deep space.

Tighe is puzzled, “Why me?” Joyce explains that much more important than a background in aerospace or mining is the ability to make the right decisions under great pressure and uncertainty. Tighe's leadership in rescuing his dive companions demonstrated that ability and qualified him to try out for Joyce's team.

By the year 2033, the NewSpace companies founded in the early years of the 21st century have matured and, although taking different approaches, have come to dominate the market for space operations, mostly involving constellations of Earth satellites. The so-called “NewSpace Titans” (names have been changed, but you'll recognise them from their styles) have made their billions developing this industry, and some have expressed interest in asteroid mining, but mostly via robotic spacecraft and on a long-term time scale. Nathan Joyce wants to join their ranks and advance the schedule by sending humans to do the job. Besides, he argues, if the human destiny is to expand into space, why not get on with it, deploying their versatility and ability to improvise on this difficult challenge?

The whole thing sounds rather dodgy to Tighe, but cave diving does not pay well, and the signing bonus and promised progress payments if he meets various milestones in the training programme sound very attractive, so he signs on the dotted line. Further off-putting were a draconian non-disclosure agreement and an “Indemnity for Accidental Death and Dismemberment”, both sprung on candidates only after arriving at the remote island training facility. There were surveillance cameras and microphones everywhere, and Tighe and others speculated they might be part of an elaborate reality TV show staged by Joyce, not a genuine space project.

The other candidates were from all kinds of backgrounds: ex-military, former astronauts, BASE jumpers, mountaineers, scientists, and engineers. They were almost all on the older side for adventurers: mid-thirties to mid-forties—something about cosmic rays. And most of them had the hallmarks of DRD4-7R adventurers.

As the programme gets underway, the candidates discover it resembles Special Forces training more than astronaut candidate instruction, with a series of rigorous tests evaluating personal courage, endurance, psychological stability, problem-solving skills, tolerance for stress, and the ability to form and work as a team. Predictably, their numbers are winnowed as they approach the milestone where a few will be selected for orbital training and qualification for the deep space mission.

Tighe and the others discover that their employer is anything but straightforward, and they begin to twig to the fact that the kind of people who actually open the road to human settlement of the solar system may resemble the ruthless railroad barons of the 19th century more than the starry-eyed dreamers of science fiction. These revelations continue as the story unfolds.

After gut-wrenching twists and turns, Tighe finds himself part of a crew selected to fly to and refine resources from a near-Earth asteroid first reconnoitered by the Japanese Hayabusa2 mission in the 2010s. Risks are everywhere, and not just in space: corporate maneuvering back on Earth can kill the crew just as surely as radiation, vacuum, explosions, and collisions in space. Their only hope may be a desperate option recalling one of the greatest feats of seamanship in Earth's history.

This is a gripping yarn in which the author confronts his characters with one seemingly insurmountable obstacle and disheartening setback after another, then describes how these carefully selected and honed survivors deal with it. There are no magical technologies: all of the technical foundations exist today, at least at the scale of laboratory demonstrations, and could plausibly be scaled up to those in the story by the mid-2030s. The intricate plot is a salutary reminder that deception, greed, dodgy finances, corporate hijinks, bureaucracy, and destructively hypertrophied egos do not stop at the Kármán line. The conclusion is hopeful and a testament to the place for humans in the development of space.

A question and answer document about the details underlying the story is available on the author's Web site.

 Permalink

Murray, Charles and Catherine Bly Cox. Apollo. Burkittsville, MD: South Mountain Books, [1989, 2004] 2010. ISBN 978-0-9760008-0-8.
On November 5, 1958, NASA, only four months old at the time, created the Space Task Group (STG) to manage its manned spaceflight programs. Although there had been earlier military studies of manned space concepts and many saw eventual manned orbital flights growing out of the rocket plane projects conducted by NASA's predecessor, the National Advisory Committee for Aeronautics (NACA) and the U.S. Air Force, at the time of the STG's formation the U.S. had no formal manned space program. The initial group numbered 45 in all, including eight secretaries and “computers”—operators of electromechanical desk calculators, staffed largely with people from the NACA's Langley Research Center and initially headquartered there. There were no firm plans for manned spaceflight, no budget approved to pay for it, no spacecraft, no boosters, no launch facilities, no mission control centre, no astronauts, no plans to select and train them, and no experience either with human flight above the Earth's atmosphere or with more than a few seconds of weightlessness. And yet this team, the core of an effort which would grow to include around 400,000 people at NASA and its 20,000 industry and academic contractors, would, just ten years and nine months later, on July 20th, 1969, land two people on the surface of the Moon and then return them safely to the Earth.

Ten years is not a long time when it comes to accomplishing a complicated technological project. Development of the Boeing 787, a mid-sized commercial airliner which flew no further, faster, or higher than its predecessors and was designed and built using computer-aided design and manufacturing technologies, took eight years from project launch to entry into service, and the F-35 fighter entered service, and then only in small numbers of one model, a full twenty-three years after the start of its development.

In November, 1958, nobody in the Space Task Group was thinking about landing on the Moon. Certainly, trips to the Moon had been discussed in fables from antiquity to Jules Verne's classic De la terre à la lune of 1865, and in 1938 members of the British Interplanetary Society published a (totally impractical) design for a Moon rocket powered by more than two thousand solid rocket motors bundled together, to be discarded as they burned out. But only a year after the launch of the first Earth satellite, when nothing had yet been successfully returned from Earth orbit, talk of manned Moon ships sounded like—lunacy.

The small band of stalwarts at the STG undertook the already daunting challenge of manned space flight with an incremental program they called Project Mercury, whose goal was to launch a single man into Earth orbit in a capsule (unable to change its orbit once released from the booster rocket, it barely deserved the term “spacecraft”) atop a converted Atlas intercontinental ballistic missile. In essence, the idea was to remove the warhead, replace it with a tiny cone-shaped can with a man in it, and shoot him into orbit. At the time the project began, the reliability of the Atlas rocket was around 75%, so NASA could expect around one in four launches to fail, with the Atlas known for spectacular explosions on the ground or on the way to space. When, in early 1960, the newly-chosen Mercury astronauts watched a test launch of the rocket they were to ride, it exploded less than a minute after launch. This was the fifth consecutive failure of an Atlas booster (although not all were so spectacular).

Doing things which were inherently risky on tight schedules with a shoestring budget (compared to military projects) and achieving an acceptable degree of safety by fanatic attention to detail and mountains of paperwork (NASA engineers quipped that no spacecraft could fly until the mass of paper documenting its construction and test equalled that of the flight hardware) became an integral part of the NASA culture. NASA was proceeding with its deliberate, step-by-step development of Project Mercury, and in 1961 was preparing for the first space flight by a U.S. astronaut: not into orbit on an Atlas, just a 15 minute suborbital hop on a version of the reliable Redstone rocket which had launched the first U.S. satellite in 1958. Then, on April 12, 1961, NASA was sorely disappointed when the Soviet Union launched Yuri Gagarin into orbit on Vostok 1. Not only was the first man in space a Soviet, they had accomplished an orbital mission, which NASA hadn't planned to attempt until at least the following year.

On May 5, 1961, NASA got back into the game, or at least the minor league, when Alan Shepard was launched on Mercury-Redstone 3. Sure, it was just a 15 minute up and down, but at least an American had been in space, if only briefly, and it was enough to persuade a recently-elected, young U.S. president smarting from being scooped by the Soviets to “take longer strides”. On May 25, less than three weeks after Shepard's flight, before a joint session of Congress, President Kennedy said, “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth.” Kennedy had asked his vice president, Lyndon Johnson, what goal the U.S. could realistically hope to achieve before the Soviets, and after consulting with the NASA administrator, James Webb, a Texas oil man and lawyer, and with no NASA technical people other than Wernher von Braun, Johnson reported that a manned Moon landing was the only milestone the Soviets, with their heavy boosters and lead in manned space flight, were unlikely to achieve first. So, to the Moon it was.

The Space Task Group people who were ultimately going to be charged with accomplishing this goal, and who had no advance warning until they heard Kennedy's speech or got urgent telephone calls from colleagues who had also heard the broadcast, were, in the words of their leader, Robert Gilruth (who had no more warning than his staff), “aghast”. He and his team had, like von Braun in the 1950s, envisioned a deliberate, step-by-step development of space flight capability: manned orbital flight; then a more capable spacecraft with a larger crew able to maneuver in space; a space station to explore the biomedical issues of long-term space flight and serve as a base to assemble craft bound farther into space; perhaps a reusable shuttle craft to ferry crew and cargo to space without (wastefully and at great cost) throwing away rockets designed as long-range military artillery on every mission; followed by careful reconnaissance of the Moon by both unmanned and manned craft to map its surface, find safe landing zones, and then demonstrate the technologies that would be required to get people there and back safely.

All that was now clearly out the window. If Congress came through with the massive funds it would require, going to the Moon would be a crash project like the Manhattan Project to build the atomic bomb in World War II, or the massive industrial mobilisation to build Liberty Ships or the B-17 and B-29 bombers. The clock was ticking: when Kennedy spoke, there were just 3142 days until December 31, 1969 (yes, I know the decade actually ends at the end of 1970, since there was no year 0 in the Gregorian calendar, but explaining this to clueless Americans is a lost cause), around eight years and seven months. What needed to be done? Everything. How much time was there to do it? Not remotely enough. Well, at least the economy was booming, politicians seemed willing to pay the huge bills for what needed to be done, and there were plenty of twenty-something newly-minted engineering graduates ready and willing to work around the clock without a break to make real what they'd dreamed of since reading science fiction in their youth.
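The 3142-day figure is easy to verify with a quick date calculation; a minimal sketch in Python (purely illustrative, not from the book):

```python
from datetime import date

# Kennedy's speech to Congress: May 25, 1961.
# The deadline implied by "before this decade is out": December 31, 1969.
speech = date(1961, 5, 25)
deadline = date(1969, 12, 31)

# Subtracting two dates yields a timedelta; .days gives the day count.
remaining = (deadline - speech).days
print(remaining)  # → 3142, around eight years and seven months
```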

The Apollo Project was simultaneously one of the most epochal and inspiring accomplishments of the human species, far more likely to be remembered a thousand years hence than anything else that happened in the twentieth century, and at the same time a politically-motivated blunder which retarded human expansion into the space frontier. Kennedy's speech was at the end of May 1961. Perhaps because the Space Task Group was so small, it and NASA were able to react with a speed which is stunning to those accustomed to twenty-year development projects for hardware far less complicated than Apollo.

In June and July [1961], detailed specifications for the spacecraft hardware were completed. By the end of July, the Requests for Proposals were on the street.

In August, the first hardware contract was awarded to M.I.T.'s Instrumentation Laboratory for the Apollo guidance system. NASA selected Merritt Island, Florida, as the site for a new spaceport and acquired 125 square miles of land.

In September, NASA selected Michoud, Louisiana, as the production facility for the Saturn rockets, acquired a site for the Manned Spacecraft Center—the Space Task Group grown up—south of Houston, and awarded the contract for the second stage of the Saturn [V] to North American Aviation.

In October, NASA acquired 34 square miles for a Saturn test facility in Mississippi.

In November, the Saturn C-1 was successfully launched with a cluster of eight engines, developing 1.3 million pounds of thrust. The contract for the command and service module was awarded to North American Aviation.

In December, the contract for the first stage of the Saturn [V] was awarded to Boeing and the contract for the third stage was awarded to Douglas Aircraft.

By January of 1962, construction had begun at all of the acquired sites and development was under way at all of the contractors.

Such was the urgency with which NASA was responding to Kennedy's challenge and deadline that all of these decisions and work were done before deciding on how to get to the Moon—the so-called “mission mode”. There were three candidates: direct-ascent, Earth orbit rendezvous (EOR), and lunar orbit rendezvous (LOR). Direct ascent was the simplest, and much like the idea of a Moon ship in golden age science fiction. One launch from Earth would send a ship to the Moon which would land there, then take off and return directly to Earth. There would be no need for rendezvous and docking in space (which had never been attempted, and nobody was sure was even possible), and no need for multiple launches per mission, which was seen as an advantage at a time when rockets were only marginally reliable and notorious for long delays from their scheduled launch time. The downside of direct-ascent was that it would require an enormous rocket: planners envisioned a monster called Nova which would have dwarfed the Saturn V eventually used for Apollo and required new manufacturing, test, and launch facilities to accommodate its size. Also, it is impossible to design a ship which is optimised both for landing under rocket power on the Moon and re-entering Earth's atmosphere at high speed. Still, direct-ascent seemed to involve the fewest technological unknowns. Ever wonder why the Apollo service module had that enormous Service Propulsion System engine? When it was specified, the mission mode had not been chosen, and it was made powerful enough to lift the entire command and service module off the lunar surface and return them to the Earth after a landing in direct-ascent mode.

Earth orbit rendezvous was similar to what Wernher von Braun envisioned in his 1950s popular writings about the conquest of space. Multiple launches would be used to assemble a Moon ship in low Earth orbit, and then, when it was complete, it would fly to the Moon, land, and then return to Earth. Such a plan would not necessarily even require a booster as large as the Saturn V. One might, for example, launch the lunar landing and return vehicle on one Saturn I, the stage which would propel it to the Moon on a second, and finally the crew on a third, who would board the ship only after it was assembled and ready to go. This was attractive in not requiring the development of a giant rocket, but required on-time launches of multiple rockets in quick succession, orbital rendezvous and docking (and in some schemes, refuelling), and still had the problem of designing a craft suitable both for landing on the Moon and returning to Earth.

Lunar orbit rendezvous was originally considered a distant third in the running. A single large rocket (but smaller than Nova) would launch two craft toward the Moon. One ship would be optimised for flight through the Earth's atmosphere and return to Earth, while the other would be designed solely for landing on the Moon. The Moon lander, operating only in vacuum and the Moon's weak gravity, need not be streamlined or structurally strong, and could be potentially much lighter than a ship able to both land on the Moon and return to Earth. Finally, once its mission was complete and the landing crew safely back in the Earth return ship, it could be discarded, meaning that all of the hardware needed solely for landing on the Moon need not be taken back to the Earth. This option was attractive, requiring only a single launch and no gargantuan rocket, and allowed optimising the lander for its mission (for example, providing better visibility to its pilots of the landing site), but it required not only rendezvous and docking, but performing them in lunar orbit where, should they fail, the lander's crew would be stranded in orbit around the Moon with no hope of rescue.

After a high-stakes technical struggle, in the latter part of 1962, NASA selected lunar orbit rendezvous as the mission mode, with each landing mission to be launched on a single Saturn V booster, making the decision final with the selection of Grumman as contractor for the Lunar Module in November of that year. Had another mission mode been chosen, it is improbable in the extreme that the landing would have been accomplished in the 1960s.

The Apollo architecture was now in place. All that remained was building machines which had never been imagined before, learning to do things (on-time launches, rendezvous and docking in space, leaving spacecraft and working in the vacuum, precise navigation over distances no human had ever travelled before, and assessing all of the “unknown unknowns” [radiation risks, effects of long-term weightlessness, properties of the lunar surface, ability to land on lunar terrain, possible chemical or biological threats on the Moon, etc.]) and developing plans to cope with them.

This masterful book is the story of how what is possibly the largest collection of geeks and nerds ever assembled and directed at a single goal, funded with the abundant revenue from an economic boom, spurred by a geopolitical competition against the sworn enemy of liberty, took on these daunting challenges and, one by one, overcame them, found a way around, or simply accepted the risk because it was worth it. They learned how to tame giant rocket engines that randomly blew up by setting off bombs inside them. They abandoned the careful step-by-step development of complex rockets in favour of “all-up testing” (stack all of the untested pieces the first time, push the button, and see what happens) because “there wasn't enough time to do it any other way”. People were working 16–18–20 hours a day, seven days a week. Flight surgeons in Mission Control handed out “go and whoa pills”—amphetamines and barbiturates—to keep the kids on the console awake at work and asleep those few hours they were at home—hey, it was the Sixties!

This is not a tale of heroic astronauts and their exploits. The astronauts, as they have been the first to say, were literally at the “tip of the spear” and would not have been able to complete their missions without the work of almost half a million uncelebrated people who made them possible, not to mention the hundred million or so U.S. taxpayers who footed the bill.

This was not a straight march to victory. Three astronauts died in a launch pad fire the investigation of which revealed shockingly slapdash quality control in the assembly of their spacecraft and NASA's ignoring the lethal risk of fire in a pure oxygen atmosphere at sea level pressure. The second flight of the Saturn V was a near calamity due to multiple problems, some entirely avoidable (and yet the decision was made to man the next flight of the booster and send the crew to the Moon). Neil Armstrong narrowly escaped death in May 1968 when the Lunar Landing Research Vehicle he was flying ran out of fuel and crashed. And the division of responsibility between the crew in the spacecraft and mission controllers on the ground had to be worked out before it would be tested in flight where getting things right could mean the difference between life and death.

What can we learn from Apollo, fifty years on? Other than standing in awe at what was accomplished given the technology and state of the art of the time, and on a breathtakingly short schedule, little or nothing that is relevant to the development of space in the present and future. Apollo was the product of a set of circumstances which happened to come together at one point in history and are unlikely to ever recur. Although some of those who worked on making it a reality were dreamers and visionaries who saw it as the first step into expanding the human presence beyond the home planet, to those who voted to pay the forbidding bills (at its peak, NASA's budget, mostly devoted to Apollo, was more than 4% of all Federal spending; in recent years, it has settled at around one half of one percent: a national commitment to space eight times smaller as a fraction of total spending) Apollo was seen as a key battle in the Cold War. Allowing the Soviet Union to continue to achieve milestones in space while the U.S. played catch-up or forfeited the game would reinforce the Soviet message to the developing world that their economic and political system was the wave of the future, leaving decadent capitalism in the dust.

A young, ambitious, forward-looking president, smarting from being scooped once again by Yuri Gagarin's orbital flight and the humiliation of the débâcle at the Bay of Pigs in Cuba, seized on a bold stroke that would show the world the superiority of the U.S. by deploying its economic, industrial, and research resources toward a highly visible goal. And, after being assassinated two and a half years later, his successor, a space enthusiast who had directed a substantial part of NASA's spending to his home state and those of his political allies, presented the program as the legacy of the martyred president and vigorously defended it against those who tried to kill it or reduce its priority. The U.S. was in an economic boom which would last through most of the Apollo program until after the first Moon landing, and was the world's unchallenged economic powerhouse. And finally, the federal budget had not yet been devoured by uncontrollable “entitlement” spending and national debt was modest and manageable: if the national will was there, Apollo was affordable.

This confluence of circumstances was unique to its time and has not been repeated in the half century thereafter, nor is it likely to recur in the foreseeable future. Space enthusiasts who look at Apollo and what it accomplished in such a short time often err in assuming that a similar program (government funded, on a massive scale with lavish budgets, focussed on a single goal, and based on special-purpose disposable hardware suited only for its specific mission) is the only way to open the space frontier. They are not only wrong in this assumption, but they are dreaming if they think there is the public support and political will to do anything like Apollo today. In fact, Apollo was not even particularly popular in the 1960s: only at one point in 1965 did public support for funding of human trips to the Moon poll higher than 50% and only around the time of the Apollo 11 landing did 50% of the U.S. population believe Apollo was worth what was being spent on it.

In fact, despite being motivated as a demonstration of the superiority of free people and free markets, Project Apollo was a quintessentially socialist space program. It was funded by money extracted by taxation, its priorities set by politicians, and its operations centrally planned and managed in a top-down fashion of which the Soviet functionaries at Gosplan could only dream. Its goals were set by politics, not economic benefits, science, or building a valuable infrastructure. This was not lost on the Soviets. Here is Soviet Minister of Defence Dmitriy Ustinov speaking at a Central Committee meeting in 1968, quoted by Boris Chertok in volume 4 of Rockets and People.

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

This kind of socialist operation works fine in a wartime crash program driven by time pressure, where unlimited funds and manpower are available, and where there is plenty of capital which can be consumed or borrowed to pay for it. But it does not create sustainable enterprises. Once the goal is achieved, the war won (or lost), or it runs out of other people's money to spend, the whole thing grinds to a halt or stumbles along, continuing to consume resources while accomplishing little. This was the predictable trajectory of Apollo.

Apollo was one of the noblest achievements of the human species and we should celebrate it as a milestone in the human adventure, but trying to repeat it is pure poison to the human destiny in the solar system and beyond.

This book is a superb recounting of the Apollo experience, told mostly about the largely unknown people who confronted the daunting technical problems and, one by one, found solutions which, if not perfect, were good enough to land on the Moon in 1969. Later chapters describe key missions, again concentrating on the problem solving which went on behind the scenes to achieve their goals or, in the case of Apollo 13, get home alive. Looking back on something that happened fifty years ago, especially if you were born afterward, it may be difficult to appreciate just how daunting the idea of flying to the Moon was in May 1961. This book is the story of the people who faced that challenge, pulled it off, and are largely forgotten today.

Both the 1989 first edition and 2004 paperback revised edition are out of print and available only at absurd collectors' prices. The Kindle edition, which is based upon the 2004 edition with small revisions to adapt to digital reader devices, is available at a reasonable price, as is an unabridged audio book, which is a reading of the 2004 edition. You'd think there would have been a paperback reprint of this valuable book in time for the fiftieth anniversary of the landing of Apollo 11 (and the thirtieth anniversary of its original publication), but there wasn't.

Project Apollo is such a huge, sprawling subject that no book can possibly cover every aspect of it. For those who wish to delve deeper, here is a reading list of excellent sources. I have read all of these books and recommend every one. For those I have reviewed, I link to my review; for others, I link to a source where you can obtain the book.

If you wish to commemorate the landing of Apollo 11 in a moving ceremony with friends, consider hosting an Evoloterra celebration.

 Permalink

Egan, Greg. Schild's Ladder. New York: Night Shade Books, [2002, 2004, 2013] 2015. ISBN 978-1-59780-544-5.
Greg Egan is one of the most eminent contemporary authors in the genre of “hard” science fiction. By “hard”, one means not that it is necessarily difficult to read, but that the author has taken care to either follow the laws of known science or, if the story involves alternative laws (for example, a faster than light drive, anti-gravity, or time travel), to define those laws and then remain completely consistent with them. This needn't involve tedious lectures—masters of science fiction, like Greg Egan, “show, don't tell”—but the reader should be able to figure out the rules and the characters be constrained by them as the story unfolds. Egan is also a skilled practitioner of “world building”, which takes hard science fiction to the next level by constructing entire worlds or universes in which an alternative set of conditions is worked out in a logical and consistent way.

Whenever a new large particle collider is proposed, fear-mongers prattle on about the risk of its unleashing some new physical phenomenon which might destroy the Earth or, for those who think big, the universe by, for example, causing it to collapse into a black hole or causing the quantum vacuum to tunnel to a lower energy state where the laws of physics are incompatible with the existence of condensed matter and life. This is, of course, completely absurd. We have observed cosmic rays, for example the Oh-My-God particle detected by an instrument in Utah in 1991, with energies more than twenty million times greater than those produced by the Large Hadron Collider, the most powerful particle accelerator in existence today. These natural cosmic rays strike the Earth, the Moon, the Sun, and everything else in the universe all the time and have been doing so for billions of years and, if you look around, you'll see that the universe is still here. If a high energy particle was going to destroy it, it would have been gone long ago.
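The “more than twenty million times” comparison is back-of-the-envelope arithmetic. Taking the commonly quoted round figures of about 3.2×10²⁰ eV for the Oh-My-God particle and 13 TeV (1.3×10¹³ eV) for an LHC collision (both approximations, assumed here for illustration):

```python
# Round figures, as commonly quoted; both are approximations.
omg_particle_ev = 3.2e20   # Oh-My-God cosmic ray energy, in electron volts
lhc_collision_ev = 1.3e13  # LHC design collision energy, 13 TeV

ratio = omg_particle_ev / lhc_collision_ev
print(f"{ratio:.1e}")  # roughly 2.5e7, i.e. more than twenty million
```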

No, if somebody's going to destroy the universe, I'd worry about some quiet lab in the physics building where somebody is exploring very low temperatures, trying to beat the record which stands at, depending upon how you define it, between 0.006 degrees Kelvin (for a large block of metal) and 100 picokelvin (for nuclear spins). These temperatures, and the physical conditions they may create, are deeply unnatural and, unless there are similar laboratories and apparatus created by alien scientists on other worlds, colder than have ever existed anywhere in our universe ever since the Big Bang.

The cosmic microwave background radiation pervades the universe, and has an energy at the present epoch which corresponds to a temperature of about 2.73 degrees Kelvin. Every natural object in the universe is bathed in this radiation so, even in the absence of other energy sources such as starlight, anything colder than that will be heated by the background radiation until it reaches that temperature and comes into equilibrium. (There are a few natural processes in the universe which can temporarily create lower temperatures, but nothing below 1° K has ever been observed.) The temperature of the universe has been falling ever since the Big Bang, so no lower temperature has ever existed in the past. The only way to create a lower temperature is to expend energy in what amounts to a super-refrigerator that heats up something else in return for artificially cooling its contents. In doing so, it creates a region like none other in the known natural universe.

Whenever you explore some physical circumstance which is completely new, you never know what you're going to find, and researchers have been surprised many times in the past. Prior to 1911, nobody imagined that it was possible for an electrical current to flow with no resistance at all, and yet in early experiments with liquid helium, the phenomenon of superconductivity was discovered. In 1937, it was discovered that liquid helium could flow with zero viscosity: superfluidity. What might be discovered at temperatures a tiny fraction of those where these phenomena became manifest? Answering that question is why researchers strive to approach ever closer to the (unattainable) absolute zero. Might one of those phenomena destroy the universe? Could be: you'll never know until you try.

This is the premise of this book, which is hard science fiction but also difficult. For twenty thousand years the field of fundamental physics has found nothing new beyond the unification of quantum mechanics and general relativity called “Sarumpaet's rules” or Quantum Graph Theory (QGT). The theory explained the fabric of space and time and all of the particles and forces within it as coarse-grained manifestations of transformations of a graph at the Planck scale. Researchers at Mimosa Station, 370 light years from Earth, have built an experimental apparatus, the Quietener, to explore conditions which have never existed before in the universe and test Sarumpaet's Rules at the limits. Perhaps the currently-observed laws of physics were simply a random choice made by the universe an unimaginably short time after the Big Bang and frozen into place by decoherence due to interactions with the environment, analogous to the quantum Zeno effect. The Quietener attempts to null out every possible external influence, even gravitational waves by carefully positioned local cancelling sources, in the hope of reproducing the conditions in which the early universe made its random choice and to create, for a fleeting instant, just trillionths of a second, a region of space with entirely different laws of physics. Sarumpaet's Rules guaranteed that this so-called novo-vacuum would quickly collapse, as it would have a higher energy and decay into the vacuum we inhabit.

Oops.

Six hundred and five years after the unfortunate event at Mimosa, the Mimosa novo-vacuum, not just stable but expanding at half the speed of light, has swallowed more than two thousand inhabited star systems, and is inexorably expanding through the galaxy, transforming everything in its path to—nobody knows. The boundary emits only an unstructured “borderlight” which provides no clue as to what lies within. Because the interstellar society has long ago developed the ability to create backups of individuals, run them as computer emulations, transmit them at light speed from star to star, and re-instantiate them in new bodies for fuddy-duddies demanding corporeal existence, loss of life has been minimal, but one understands how an inexorably growing sphere devouring everything in its path might be disturbing. The Rindler is a research ship racing just ahead of the advancing novo-vacuum front, providing close-up access to it for investigators trying to figure out what it conceals.

Humans (who, with their divergently-evolved descendants, biological and digitally emulated, are the only intelligent species discovered so far in the galaxy) have divided, as they remain wont to do, into two factions: Preservationists, who view the novo-vacuum as an existential threat to the universe and seek ways to stop its expansion and, ideally, recover the space it has occupied; and Yielders, who believe the novo-vacuum to be a phenomenon so unique and potentially important that destroying it before understanding its nature and what is on the other side of the horizon would be unthinkable. Also, being (post-)human, the factions are willing to resort to violence to have their way.

This leads to an adventure spanning time and space, and eventually a mission into a region where the universe is making it up as it goes along. This is one of the most breathtakingly ambitious attempts at world (indeed, universe) building ever attempted in science fiction. But for this reader, it didn't work. First of all, when all of the principal characters have backups stored in safe locations and can reset, like a character in a video game with an infinite-lives cheat, whenever anything bad happens, it's difficult to create dramatic tension. Humans have transcended biological substrates, yet those still choosing them remain fascinated with curious things about bumping their adaptive uglies. When we finally go and explore the unknown, it's mediated through several levels of sensors, translation, interpretation, and abstraction, so what is described comes across as something like a hundred pages of the acid trip scene at the end of 2001.

In the distance, glistening partitions, reminiscent of the algal membranes that formed the cages in some aquatic zoos, swayed back and forth gently, as if in time to mysterious currents. Behind each barrier the sea changed color abruptly, the green giving way to other bright hues, like a fastidiously segregated display of bioluminescent plankton.

Oh, wow.

And then, it stops. I don't mean ends, as that would imply that everything that's been thrown up in the air is somehow resolved. There is an attempt to close the circle with the start of the story, but a whole universe of questions is left unanswered. The human perspective is inadequate to describe a place where Planck length objects interact in Planck time intervals and the laws of physics are made up on the fly. Ultimately, the story failed for me since it never engaged me with the characters—I didn't care what happened to them. I'm a fan of hard science fiction, but this was just too adamantine to be interesting.

The title, Schild's Ladder, is taken from a method in differential geometry which is used to approximate the parallel transport of a vector along a curve.
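For the curious, the construction is simple to sketch: each “rung” of the ladder carries the vector one step along the curve using nothing but geodesics and geodesic midpoints. In flat Euclidean space, where geodesics are straight lines, the recipe reduces to ordinary translation; the toy code below (a minimal illustration of the construction, not anything from the novel) makes the steps explicit:

```python
def schild_ladder_step(x0, x1, v):
    """One rung of Schild's ladder, in flat Euclidean space.

    To transport vector v from point x0 to point x1:
      1. Mark the tip of the vector: p0 = x0 + v.
      2. Find the geodesic midpoint m of p0 and x1
         (in flat space, the ordinary midpoint).
      3. Extend the geodesic from x0 through m to double its
         length: p1 = 2*m - x0.
      4. The transported vector is p1 - x1.
    On a curved manifold the same recipe, carried out with true
    geodesics, approximates parallel transport rung by rung.
    """
    p0 = [a + b for a, b in zip(x0, v)]
    m = [(a + b) / 2 for a, b in zip(p0, x1)]
    p1 = [2 * a - b for a, b in zip(m, x0)]
    return [a - b for a, b in zip(p1, x1)]

# In flat space the transported vector comes back unchanged:
print(schild_ladder_step([0.0, 0.0], [3.0, 1.0], [1.0, 2.0]))  # [1.0, 2.0]
```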

 Permalink

Thor, Brad. Backlash. New York: Atria Books, 2019. ISBN 978-1-9821-0403-0.
This is the nineteenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). This is a very different kind of story from the last several Harvath outings, which involved high-stakes international brinkmanship, uncertain loyalties, and threats of mass terror attacks. This time it's up close and personal. Harvath, paying what may be his last visit to Reed Carlton, his dying ex-CIA mentor and employer, is the object of a violent kidnapping attack which kills those to whom he is closest and spirits him off, drugged and severely beaten, to Russia, where he is to be subjected to the hospitality of the rulers whose nemesis he has been for many years (and books) until he spills the deepest secrets of the U.S. intelligence community.

After being spirited out of the U.S., the Russian cargo plane transporting him to the rendition resort where he is to be “de-briefed” crashes, leaving him…somewhere. About all he knows is that it's cold, that nobody knows where he is or that he is alive, and that he has no way to contact anybody, anywhere who might help.

This is a spare, stark tale of survival. Starting only with what he can salvage from the wreck of the plane and the bodies of its crew (some of whom he had to assist in becoming casualties), he must overcome the elements, predators (quadrupedal and bipedal), terrain, and uncertainty about his whereabouts and the knowledge and intentions of his adversaries, to survive and escape.

Based upon what has been done to him, it is also a tale of revenge. To Harvath, revenge was not a low state: it was a necessity,

In his world, you didn't let wrongs go unanswered—not wrongs like this, and especially when you had the ability to do something. Vengeance was a necessary function of a civilized world, particularly at its margins, in its most remote and wild regions. Evildoers, unwilling to submit to the rule of law, needed to lie awake in their beds at night worried about when justice would eventually come for them. If laws and standards were not worth enforcing, then they certainly couldn't be worth following.

Harvath forms tenuous alliances with those he encounters, and then must confront an all-out assault by élite mercenaries who, apparently unsatisfied with the fear induced by fanatic Russian operatives, model themselves on the Nazi SS.

Then, after survival, it's time for revenge. Harvath has done his biochemistry homework and learned well the off-label applications of suxamethonium chloride. Sux to be you, Boris.

This is a tightly-crafted thriller which is, in my opinion, one of the best of Brad Thor's novels. There is no political message or agenda nor any of the Washington intrigue which has occupied recent books. Here it is a pure struggle between a resourceful individual, on his own against amoral forces of pure evil, in an environment as deadly as his human adversaries.

 Permalink

Dick, Philip K. The Man in the High Castle. New York: Mariner Books, [1962] 2011. ISBN 978-0-547-57248-2.
The year is 1962. Following the victory of Nazi Germany and Imperial Japan in World War II, North America is divided into spheres of influence by the victors, with the west coast Pacific States of America controlled by Japan, the territory east of the Mississippi split north and south between what is still called the United States of America and the South, where slavery has been re-instituted, both puppet states of Germany. In between are the Rocky Mountain states, a buffer zone between the Japanese and German sectors with somewhat more freedom from domination by them.

The point of departure where this alternative history diverges from our timeline is in 1934, when Franklin D. Roosevelt is assassinated in Miami, Florida. (In our history, Roosevelt was uninjured in an assassination attempt in Miami in 1933 that killed the mayor of Chicago, Anton Cermak.) Roosevelt's vice president, John Nance Garner, succeeds to the presidency and is re-elected in 1936. In 1940, the Republican party retakes the White House, with John W. Bricker elected president. Garner and Bricker pursue a policy of strict neutrality and isolation, which allows Germany, Japan, and Italy to divide up most of the world and coerce other nations into becoming satellites or client states. Then, Japan and Germany mount simultaneous invasions of the east and west coasts of the U.S., resulting in a surrender in 1947 and the present division of the continent.

By 1962, the victors are secure in their domination of the territories they have subdued. Germany has raced ahead economically and in technology, draining the Mediterranean to create new farmland, landing on the Moon and Mars, and establishing high-speed suborbital rocket transportation service throughout their far-flung territories. There is no serious resistance to the occupation in the former United States: its residents seem to be more or less resigned to second-class status under their German or Japanese overlords.

In the Pacific States the Japanese occupiers have settled in to a comfortable superiority over the vanquished, and many have become collectors of artefacts of the vanished authentic America. Robert Childan runs a shop in San Francisco catering to this clientèle, and is contacted by an official of the Japanese Trade Mission, seeking a gift to impress a visiting Swedish industrialist. This leads into a maze of complexity, where nothing is as it seems, such as only Philip K. Dick (PKD) can craft. Is the Swede really a Swede or a German, and is he a Nazi agent or something else? Who is the mysterious Japanese visitor he has come to San Francisco to meet? Is Childan a supplier of rare artefacts or a swindler exploiting gullible Japanese rubes with fakes?

Many characters in the book are reading a novel called The Grasshopper Lies Heavy, banned in areas under German occupation but available in the Pacific States and other territories, which is an alternative history tale written by an elusive author named Hawthorne Abendsen, about a world in which the Allies defeated Germany and Japan in World War II and ushered in a golden age of peace, prosperity, and freedom. Abendsen is said to have retreated to a survivalist compound called the High Castle in the Rocky Mountain states. Characters we meet become obsessed with tracking down and meeting Abendsen. Who are they, and what are their motives? Keep reminding yourself, this is a PKD novel! We're already dealing with a fictional mysterious author of an alternative history of World War II within an alternative history novel of World War II by an author who is himself a grand illusionist.

It seems like everybody in the Pacific States, regardless of ethnicity or nationality, is obsessed with the I Ching. They are constantly consulting “the oracle” and basing their decisions upon it. Not just the westerners but even the Japanese are a little embarrassed by this, as the latter are aware that it is an invention of the Chinese, whom they view as inferior, yet they rely upon it none the less. Again, the PKD shimmering reality distortion field comes into play as the author says that he consulted the I Ching to make decisions while plotting the novel, as does Hawthorne Abendsen in writing the novel within the novel.
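
For the curious, the oracle's mechanics are simple enough to simulate. Here is a sketch of the common three-coin casting method (the one PKD is generally said to have used, though the novel doesn't specify): heads count 3, tails count 2, three coins per line, six lines built from the bottom up, with totals of 6 and 9 being “moving” lines.

```python
import random

def cast_hexagram(rng=random):
    """Cast one I Ching hexagram by the three-coin method."""
    lines = []
    for _ in range(6):                        # six lines, cast bottom to top
        total = sum(rng.choice((2, 3)) for _ in range(3))   # tails = 2, heads = 3
        lines.append(total)   # 6 = old yin, 7 = young yang, 8 = young yin, 9 = old yang
    return lines

lines = cast_hexagram()
for value in reversed(lines):                 # display top line first, as drawn
    solid = value in (7, 9)                   # odd totals are yang (solid) lines
    moving = value in (6, 9)                  # 6 and 9 are "moving" lines
    print("———" if solid else "— —", "(moving)" if moving else "")
```

Consulting the result still requires a copy of the book, of course; the coins only select which of the 64 hexagrams (and which moving lines) to read.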

This is quintessential PKD: the story is not so much about what happens (indeed, there is little resolution of any of the obvious conflicts in the circumstances of the plot) but rather instilling in the reader a sense that nothing is what it appears to be and, at the meta (or meta meta) level, that our history and destiny are ruled as much by chance (exemplified here by the I Ching) as by our intentions, will, and actions. At the end of the story, little or nothing has been resolved, and we are left only with questions and uncertainty. (PKD said that he intended a sequel, but despite efforts in that direction, never completed one.)

I understand that some kind of television adaptation loosely based upon the novel has been produced by one of those streaming services which are only available to people who live in continental-scale, railroad-era, legacy empires. I have not seen it, and have no interest in doing so. PKD is notoriously difficult to adapt to visual media, and today's Hollywood is, shall we say, not strong on nuance and ambiguity, which is what his fiction is all about.

Nuance and ambiguity…. Here's the funny thing. When I finished this novel, I was unimpressed and disappointed. I expected it to be great: I have enjoyed the fiction of PKD since I started to read his stories in the 1960s, and this novel won the Hugo Award for Best Novel in 1963, then the highest honour in science fiction. But the story struck me as only an exploration of a tiny corner of this rich alternative history. Little of what happens affects events in the large and, if it did, only long after the story ends. It was only while writing this that I appreciated that this may have been precisely what PKD was trying to achieve: that this is all about the contingency of history—that random chance matters much more than what we, or “great figures” do, and that the best we can hope for is to try to do what we believe is right when presented with the circumstances and events that confront us as we live our lives. I have no idea if you'll like this. I thought I would, and then I didn't, and now, in retrospect, I do. Welcome to the fiction of Philip K. Dick.

 Permalink

Rothbard, Murray. What Has Government Done to Our Money? Auburn, AL: Ludwig von Mises Institute, [1963, 1985, 1990, 2010] 2015. ISBN 978-1-61016-645-4.
This slim book (just 119 pages of main text in this edition) was originally published in 1963 when the almighty gold-backed United States dollar was beginning to crack up under the pressure of relentless deficit spending and money printing by the Federal Reserve. Two years later, as the crumbling of the edifice accelerated, amidst a miasma of bafflegab about fantasies such as a “silver shortage” by Keynesian economists and other charlatans, the Coinage Act of 1965 would eliminate silver from most U.S. coins, replacing them with counterfeit slugs craftily designed to fool vending machines into accepting them. (The little-used half dollar had its silver content reduced from 90% to 40%, and would be silverless after 1970.) In 1968, the U.S. Treasury would default upon its obligation to redeem paper silver certificates in silver coin or bullion, breaking the link between the U.S. currency and precious metal entirely.

All of this was precisely foreseen in this clear-as-light exposition of monetary theory and forty centuries of government folly by libertarian thinker and Austrian School economist Murray Rothbard. He explains the origin of money as societies progress from barter to indirect exchange, why most (but not all) cultures have settled on precious metals such as gold and silver as a medium of intermediate exchange (they do not deteriorate over time, can be subdivided into arbitrarily small units, and are relatively easy to check for authenticity). He then describes the sorry progression by which those in authority seize control over this free money and use it to fleece their subjects. First, they establish a monopoly over the ability to coin money, banning private mints and the use of any money other than their own coins (usually adorned with a graven image of some tyrant or another). They give this coin and its subdivisions a name, such as “dollar”, “franc”, “mark” or some such, which is originally defined as a unit of mass of some precious metal (for example, the U.S. dollar, prior to its debasement, was defined as 23.2 grains [1.5033 grams, or about 1/20 troy ounce] of pure gold). (Rothbard, as an economist rather than a physicist, and one working in English customary units, confuses mass with weight throughout the book. They aren't the same thing, and the quantity of gold in a coin doesn't vary depending on whether you weigh it at the North Pole or the summit of Chimborazo.)
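
The definitional point is easy to check numerically. A quick sketch, using the statutory figure of 23.22 grains of pure gold per dollar (Coinage Act of 1837, which the rounded 23.2 above reflects), the exact conversion of one grain = 0.06479891 g, and 480 grains to the troy ounce:

```python
# The pre-1933 gold definition of the U.S. dollar, checked arithmetically.
GRAIN_G = 0.06479891          # grams per grain (exact, by definition)
TROY_OZ_GRAINS = 480          # grains per troy ounce

dollar_grains = 23.22         # grains of pure gold per dollar (Coinage Act of 1837)
dollar_grams = dollar_grains * GRAIN_G        # ≈ 1.5046 g, about 1/20 troy ounce
gold_price = TROY_OZ_GRAINS / dollar_grains   # implied statutory gold price

print(f"One dollar = {dollar_grams:.4f} g of pure gold")
print(f"Implied gold price: ${gold_price:.2f} per troy ounce")
```

The second figure is the famous $20.67 per troy ounce at which the dollar was pegged until the 1933–1934 devaluation.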

Next, the rulers separate the concept of the unit of money from the mass of precious metal which it originally defined. Key tools in this are legal tender laws, which require all debts to be settled in the state-defined monetary unit. This opens the door to debasement of the currency: replacing coins bearing the same unit of money with replacements containing less precious metal. In ancient Rome, the denarius originally contained around 4.5 grams of pure silver. By the third century A.D., its silver content had been reduced to about 2%, leaving the coin intrinsically almost worthless. Of course, people aren't stupid, and when the new debased coins show up, they will save the old, more valuable ones, and spend the new phoney money. This phenomenon is called “Gresham's law”: bad money drives out good. But this is entirely the result of a coercive government requiring its subjects to honour a monetary unit which it has arbitrarily reduced in intrinsic value.
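
The scale of the Roman debasement is worth working out. A minimal sketch, using the figures cited above (about 4.5 g of silver in the early denarius, about 2% fineness by the third century); the assumed mass of the late coin is illustrative, as actual coin weights varied:

```python
# Debasement of the Roman denarius, using the figures cited above.
initial_silver_g = 4.5        # early denarius: roughly pure silver, ~4.5 g
late_fineness = 0.02          # third-century fineness: about 2% silver
late_mass_g = 3.0             # assumed mass of a late denarius (illustrative)

late_silver_g = late_mass_g * late_fineness
loss = 1 - late_silver_g / initial_silver_g
print(f"Late denarius: {late_silver_g:.2f} g of silver")
print(f"Silver content lost: {loss:.1%}")
```

On these assumptions, a late denarius held only about 0.06 g of silver: nearly 99% of the original metal content legislated away, one small debasement at a time.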

This racket has been going on since antiquity, but as the centuries have passed, it has become ever more sophisticated and effective. Rothbard explains the origin of paper money, first as what were essentially warehouse receipts for real money (precious metal coins or bullion stored by its issuer and payable on demand), then increasingly abstract assets “backed” by only a fraction of the total value in circulation, and finally, with the advent of central banking, a fiction totally under the control of those who print the paper and their political masters. The whole grand racket of fractional reserve banking and the government inflationary engine it enables is explained in detail.

In the 1985 expanded edition, Rothbard adds a final twenty-page chapter chronicling “The Monetary Breakdown of the West”, a tragedy in nine acts beginning with the classical gold standard of 1815–1914 and ending with the total severing of world currencies from any anchor to gold in March, 1973, ushering in the monetary chaos of endlessly fluctuating exchange rates, predatory currency manipulation, and a towering (and tottering) pyramid of completely unproductive financial speculation. He then explores the monetary utopia envisioned by the economic slavers: a world paper currency managed by a World Central Bank. There would no longer be any constraint upon the ability of those in power to pick the pockets of their subjects by depreciating the unit of account of the only financial assets they were permitted to own. Of course, this would lead to a slow-motion catastrophe, destroying enterprise, innovation, and investment, pauperising the population, and leading inevitably to civil unrest and demagogic political movements. Rothbard saw all of this coming, and those of us who understood his message knew exactly what was going to happen when they rolled out the Euro and a European Central Bank in 1999, which is just a regional version of the same Big Con.

This book remains, if I dare say, the gold standard when it comes to a short, lucid, and timeless explanation of monetary theory, history, the folly of governments, and its sad consequences. Is there any hope of restoring sanity in this age of universal funny money? Perhaps—the same technology which permits the establishment of cryptocurrencies such as Bitcoin radically reduces the transaction costs of using any number of competing currencies in a free market. While Gresham's Law holds that in a coercive un-free market bad money will drive out good, in a totally free market, where participants are able to use any store of value, unit of account, and medium of exchange they wish (free of government coercion through legal tender laws or taxation of currency exchanges), the best money will drive out its inferior competitors, and the quality of a given money will be evaluated based upon the transparency of its issuer and its performance for those who use it.

This book may be purchased from Amazon in either a print or Kindle edition, and is also available for free from the publisher, the Ludwig von Mises Institute, in HTML, PDF, and EPUB formats or as an audio book. The PDF edition is available in the English, Spanish, Danish, and Hungarian languages. The book is published under the Creative Commons Attribution License 3.0 and may be redistributed pursuant to the terms of that license.

 Permalink

Brennan, Gerald. Island of Clouds. Chicago: Tortoise Books, 2017. ISBN 978-0-9860922-9-9.
This is the third book, and the first full-length novel, in the author's “Altered Space” series of alternative histories of the cold war space race. Each stand-alone story explores a space mission which did not take place, but could have, given the technology and political circumstances at the time. The first, Zero Phase (October 2016), asks what might have happened had Apollo 13's service module oxygen tank waited to explode until after the lunar module had landed on the Moon. The present book describes a manned Venus fly-by mission performed in 1972 using modified Apollo hardware launched by a single Saturn V.

“But, wait…”, you exclaim, “that's crazy!” Why would you put a crew of three at risk for a mission lasting a full year for just a few minutes of close-range fly-by of a planet whose surface is completely obscured by thick clouds? Far from Earth, any failure of their life support or spacecraft systems, a medical emergency, or any number of other mishaps could kill them; they'd be racking up a radiation dose from cosmic rays and solar particle emissions every day of the mission; and the inexorable laws of orbital mechanics would provide them no option to come home early if something went wrong.

Well, crazy it may have been, but in the mid-1960s, precisely such a mission was the subject of serious study by NASA and its contractors as a part of the Apollo Applications Program planned to follow the Apollo lunar landings. Here is a detailed study of a manned Venus flyby [PDF] by NASA contractor Bellcomm, Inc. from February 1967. In addition to observing Venus during the brief fly-by, the astronauts would deploy multiple robotic probes which would explore the atmosphere and surface of Venus and relay their findings either via the manned spacecraft or directly to Earth.

It was still crazy. For a tiny fraction of the cost of a Saturn V, Apollo spacecraft, and all the modifications and new development to support such a long-term mission, and at no risk to humans, an armada of robotic probes could have been launched on smaller, far less expensive rockets such as Delta, Atlas, and Titan, which would have returned all of the science proposed for the manned fly-by and more. But in the mid-sixties, with NASA's budget reaching 4% of all federal spending, a level by that metric eight times higher than in recent years, NASA was “feeling its oats” and planning as if the good times were just going to roll on forever.

In this novel, they did. After his re-election in 1968, in which Richard Nixon and George Wallace split the opposition vote, and the triumphant Moon landing by Ed White and Buzz Aldrin, President Johnson opts to keep the momentum of Apollo going and uses his legendary skills in getting what he wants from Congress to secure the funds for a Venus fly-by in 1972. Deke Slayton chooses his best friend, just back from the Moon, Alan Shepard, to command the mission, with Buzz Aldrin, the second man on the Moon, and astronaut-medical doctor Joe Kerwin filling out the crew. Aldrin is sorely disappointed at not being given command, but accepts the assignment for the adventure and the opportunity to get back into the game after the post-flight let-down of returning from the Moon to a desk job.

The mission in the novel is largely based upon the NASA plans from the 1960s with a few modifications to simplify the story (for example, the plan to re-fit the empty third stage of the Saturn V booster as living quarters for the journey, as was also considered in planning for Skylab, is replaced here by a newly-developed habitation module launched by the Saturn V in place of the lunar module). There are lots of other little departures from the timeline in our reality, many just to remind the reader that this is a parallel universe.

After the mission gets underway, a number of challenges confront the crew: the mission hardware, the space environment, one another, and the folks back on Earth. The growing communication delay as the distance from Earth increases poses difficulties no manned spaceflight crew has had to deal with before. And then, one of those things that can happen in space (and could have occurred on any of the Apollo lunar missions) happens, and the crew is confronted by existential problems on multiple fronts, must make difficult and unpleasant decisions, and draw on their own resources, ingenuity, and courage to survive.

This is a completely plausible story which, had a few things gone the other way, could have happened in the 1970s. The story is narrated by Buzz Aldrin, which kind of lets you know at least he got back from the mission. The characters are believable, consistent with what we know of their counterparts in our reality, and behave as you'd expect from such consummate professionals under stress. I have to say, however, as somebody who has occasionally committed science fiction, that I would be uncomfortable writing a story in which characters based upon and bearing the names of those of people in the real world, two of whom are alive at this writing, have their characters and personal lives bared to the extent they are in this fiction. In the first book in the series, Zero Phase, Apollo 13 commander James Lovell, whose fictional incarnation narrates the story, read and endorsed the manuscript before publication. I was hoping to find a similar note in this novel, but it wasn't there. These are public figures, and there's nothing unethical or improper about having figures based upon them in an alternative history narrative behaving as the author wishes, and the story works very well. I'm just saying I wouldn't have done it that way without clearing it with the individuals involved.

The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

August 2019

Taleb, Nassim Nicholas. Skin in the Game. New York: Random House, 2018. ISBN 978-0-425-28462-9.
This book is volume four in the author's Incerto series, following Fooled by Randomness (February 2011), The Black Swan (January 2009), and Antifragile (April 2018). In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn't in order to choose wisely among the myriad alternatives available.

The title, “Skin in the Game”, is an aphorism which refers to an individual's sharing the risks and rewards of an undertaking in which they are involved. This is often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers. If the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers' desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as school children or members of the armed forces has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, the consequences probably won't become apparent until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the results of their recommendations.

Ancestral human society was all about skin in the game. In a small band of hunter/gatherers, everybody can see and is aware of the actions of everybody else. Slackers who do not contribute to the food supply are likely to be cut loose to fend for themselves. When the hunt fails, nobody eats until the next kill. If a conflict develops with a neighbouring band, those who decide to fight instead of running away or surrendering are in the front line of the battle and will be the first to suffer in case of defeat.

Nowadays we are far more “advanced”. As the author notes, “Bureaucracy is a construction by which a person is conveniently separated from the consequences of his or her actions.” As populations have exploded, layers and layers of complexity have been erected, removing authority ever farther from those under its power. We have built mechanisms which have immunised a ruling class of decision makers from the consequences of their decisions: they have little or no skin in the game.

Less than a third of all Roman emperors died in their beds. Even though they were at the pinnacle of the largest and most complicated empire in the West, they regularly paid the ultimate price for their errors either in battle or through palace intrigue by those dissatisfied with their performance. Today the geniuses responsible for the 2008 financial crisis, which destroyed the savings of hundreds of millions of innocent people and picked the pockets of blameless taxpayers to bail out the institutions they wrecked, not only suffered no punishment of any kind, but in many cases walked away with large bonuses or golden parachute payments and today are listened to when they pontificate on the current scene, rather than being laughed at or scorned as they would be in a rational world. We have developed institutions which shift the consequences of bad decisions from those who make them to others, breaking the vital feedback loop by which we converge upon solutions which, if not perfect, at least work well enough to get the job done without the repeated catastrophes that result from ivory tower theories being implemented on a grand scale in the real world.

Learning and Evolution

Being creatures who have evolved large brains, we're inclined to think that learning is something that individuals do, by observing the world, drawing inferences, testing hypotheses, and taking on knowledge accumulated by others. But the overwhelming majority of creatures who have ever lived, and of those alive today, do not have large brains—indeed, many do not have brains at all. How have they learned to survive and proliferate, filling every niche on the planet where environmental conditions are compatible with biochemistry based upon carbon atoms and water? How have they, over the billions of years since life arose on Earth, inexorably increased in complexity, most recently producing a species with a big brain able to ponder such questions?

The answer is massive parallelism, exhaustive search, selection for survivors, and skin in the game, or, putting it all together, evolution. Every living creature has skin in the ultimate game of whether it will produce offspring that inherit its characteristics. Every individual is different, and the process of reproduction introduces small variations in progeny. Change the environment, and the characteristics of those best adapted to reproduce in it will shift and, eventually, the population will consist of organisms adapted to the new circumstances. The critical thing to note is that while each organism has skin in the game, many may, and indeed must, lose the game and die before reproducing. The individual organism does not learn, but the species does and, stepping back another level, the ecosystem as a whole learns and adapts as species appear, compete, die out, or succeed and proliferate. This simple process has produced all of the complexity we observe in the natural world, and it works because every organism and species has skin in the game: its adaptation to its environment has immediate consequences for its survival.
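
The mechanism just described—variation, selection among those with skin in the game, no individual learning—fits in a few lines of code. Here is a toy sketch, on the assumption of a population of bit-strings whose “adaptation” is simply the number of bits matching a fixed environment; no organism learns anything, many die without reproducing, and yet the population adapts:

```python
import random

random.seed(1)
ENV = [1] * 20                        # the "environment": a fixed target bit pattern
POP, GENS, MUT = 50, 60, 0.02         # population size, generations, per-bit mutation rate

def fitness(org):                     # adaptation = number of bits matching the environment
    return sum(a == b for a, b in zip(org, ENV))

pop = [[random.randint(0, 1) for _ in ENV] for _ in range(POP)]
for _ in range(GENS):
    # Skin in the game: the less-adapted half fails to reproduce.
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    # Survivors reproduce with small random variation; no individual "learns".
    pop = survivors + [[bit ^ (random.random() < MUT) for bit in random.choice(survivors)]
                       for _ in range(POP - len(survivors))]

best = max(fitness(org) for org in pop)
print(f"Best adaptation after {GENS} generations: {best}/{len(ENV)}")
```

Change `ENV` midway through the run and the population tracks the new environment the same way: the learning happens at the level of the population, not the organism, exactly as in the text above.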

None of this is controversial or new. What the author has done in this book is to apply this evolutionary epistemology to domains far beyond its origins in biology—in fact, to almost everything in the human experience—and demonstrate that both success and wisdom are generated when this process is allowed to work, but failure and folly result when it is thwarted by institutions which take the skin out of the game.

How does this apply in present-day human society? Consider one small example of a free market in action. The restaurant business is notoriously risky. Restaurants come and go all the time, and most innovations in the business fall flat on their face and quickly disappear. And yet most cities have, at any given time, a broad selection of restaurants with a wide variety of menus, price points, ambiance, and service to appeal to almost any taste. Each restaurant has skin in the game: those which do not attract sufficient customers (or, having once been successful, fail to adapt when customers' tastes change) go out of business and are replaced by new entrants. And yet for all the churning and risk to individual restaurants, the restaurant “ecosystem” is remarkably stable, providing customers options closely aligned with their current desires.

To a certain kind of “expert” endowed with a big brain (often crammed into a pointy head), found in abundance around élite universities and government agencies, all of this seems messy, chaotic, and (the horror!) inefficient. Consider the money lost when a restaurant fails, the cooks and waiters who lose their jobs, having to find a new restaurant to employ them, the vacant building earning nothing for its owner until a new tenant is found—certainly there must be a better way. Why, suppose instead we design a standardised set of restaurants based upon a careful study of public preferences, then roll out this highly-optimised solution to the problem. They might be called “public feeding centres”. And they would work about as well as the name implies.

Survival and Extinction

Evolution ultimately works through extinction. Individuals who are poorly adapted to their environment (or, in a free market, companies which poorly serve their customers) fail to reproduce (or, in the case of a company, survive and expand). This leaves a population better adapted to its environment. When the environment changes, or a new innovation appears (for example, electricity in an age dominated by steam power), a new sorting out occurs which may see the disappearance of long-established companies that failed to adapt to the new circumstances. It is a tautology that the current population consists entirely of survivors, but there is a deep truth within this observation which is at the heart of evolution. As long as there is a direct link between performance in the real world and survival—skin in the game—evolution will work to continually optimise and refine the population as circumstances change.

This evolutionary process works just as powerfully in the realm of ideas as in biology and commerce. Ideas have consequences, and for the process of selection to function, those consequences, good or ill, must be borne by those who promulgate the idea. Consider inventions: an inventor who creates something genuinely useful and brings it to market (recognising that there are many possible missteps and opportunities for bad luck or timing to disrupt this process) may reap great rewards which, in turn, will fund elaboration of the original invention and development of related innovations. The new invention may displace existing technologies and cause them, and those who produce them, to become obsolete and disappear (or be relegated to a minor position in the market). Both the winner and loser in this process have skin in the game, and the outcome of the game is decided by the evaluation of the customers expressed in the most tangible way possible: what they choose to buy.

Now consider an academic theorist who comes up with some intellectual “innovation” such as “Modern Monetary Theory” (which basically says that a government can print as much paper money as it wishes to pay for what it wants without collecting taxes or issuing debt as long as full employment has not been achieved). The theory and the reputation of those who advocate it are evaluated by their peers: other academics and theorists employed by institutions such as national treasuries and central banks. Such a theory is not launched into a market to fend for itself among competing theories: it is “sold” to those in positions of authority and imposed from the top down upon an economy, regardless of the opinions of those participating in it. Now, suppose the brilliant new idea is implemented and results in, say, total collapse of the economy and civil society? What price do those who promulgated the theory and implemented it pay? Little or nothing, compared to the misery of those who lost their savings, jobs, houses, and assets in the calamity. Many of the academics will have tenure and suffer no consequences whatsoever: they will refine the theory, or else publish erudite analyses of how the implementation was flawed and argue that the theory “has never been tried”. Some senior officials may be replaced, but will doubtless land on their feet and continue to pull down large salaries as lobbyists, consultants, or pundits. The bureaucrats who patiently implemented the disastrous policies are civil servants: their jobs and pensions are as eternal as anything in this mortal sphere. And, before long, another bright, new idea will bubble forth from the groves of academe.

(If you think this hypothetical example is unrealistic, see the career of one Robert Rubin. “Bob”, during his association with Citigroup between 1999 and 2009, received total compensation of US$126 million for his “services” as a director, advisor, and temporary chairman of the bank, during which time he advocated the policies which eventually brought it to the brink of collapse in 2008 and vigorously fought attempts to regulate the financial derivatives which eventually triggered the global catastrophe. During his tenure at Citigroup, shareholders of its stock lost 70% of their investment, and eventually the bank was bailed out by the federal government using money taken by coercive taxation from cab drivers and hairdressers who had no culpability in creating the problems. Rubin walked away with his “winnings” and paid no price, financial, civil, or criminal, for his actions. He is one of the many poster boys and girls for the “no skin in the game club”. And lest you think that, chastened, the academics and pointy-heads in government would regain their grounding in reality, I have just one phrase for you, “trillion dollar coin”, which “Nobel Prize” winner Paul Krugman declared to be “the most important fiscal policy debate of our lifetimes”.)

Intellectual Yet Idiot

A cornerstone of civilised society, dating from at least the Code of Hammurabi (c. 1754 B.C.), is that those who create risks must bear those risks: an architect whose building collapses and kills its owner is put to death. This is the fundamental feedback loop which enables learning. When it is broken, when those who create risks (academics, government policy makers, managers of large corporations, etc.) are able to transfer those risks to others (taxpayers, those subject to laws and regulations, customers, or the public at large), the system does not learn; evolution breaks down; and folly runs rampant. This phenomenon is manifested most obviously in the modern proliferation of the affliction the author calls the “intellectual yet idiot” (IYI). These are people who are evaluated by their peers (other IYIs), not tested against the real world. They are the equivalent of a list of movies chosen based upon the opinions of high-falutin' snobbish critics as opposed to box office receipts. They strive for the approval of others like themselves and, inevitably, spiral into ever more abstract theories disconnected from ground truth, ascending ever higher into the sky.

Many IYIs achieve distinction in one narrow field and then assume that qualifies them to pronounce authoritatively on any topic whatsoever. As was said by biographer Roy Harrod of John Maynard Keynes,

He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,

Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains.

The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilisters”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses, who have a limited tolerance for fatuous nonsense and for being scolded by those who have been consistently wrong about, well, everything, dare to push back and vote for candidates and causes which make sense to them and seem better aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few.

One of the reasons we seem to have many more IYIs around than we used to, and why they have more influence over our lives, is scaling. As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them from the blandishments of IYIs.

The Lindy Effect

How can you sift the good and the enduring from the mass of ephemeral fads and bad ideas that swirl around us every day? The Lindy effect is a powerful tool. Lindy's delicatessen in New York City was a favoured hangout for actors who observed that the amount of time a show had been running on Broadway was the best predictor of how long it would continue to run. A show that has run for three months will probably last for at least three months more. A show that has made it to the one year mark probably has another year or more to go. In other words, the best test for whether something will stand the test of time is whether it has already withstood the test of time. This may, at first, seem counterintuitive: a sixty year old person has a shorter expected lifespan remaining than a twenty year old, but the Lindy effect applies only to nonperishable things such as “ideas, books, technologies, procedures, institutions, and political systems”.
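(A small illustration of my own, not from the book: the Lindy rule of thumb corresponds to lifetimes drawn from a heavy-tailed Pareto distribution, for which the expected remaining life grows in proportion to the age already attained, the opposite of human mortality. A minimal Python sketch, where the shape parameter alpha is an arbitrary choice:)

```python
import random

# For a Pareto(alpha) lifetime (scale x_m = 1), the conditional
# expected remaining life is E[X - t | X > t] = t/(alpha - 1):
# the longer something has survived, the longer it can be
# expected to keep going.
def expected_remaining(alpha, age, trials=100_000):
    # Pareto tails are self-similar: conditional on surviving to
    # `age`, the total lifetime is distributed as age * Pareto(alpha).
    return sum(age * random.paretovariate(alpha) - age
               for _ in range(trials)) / trials

random.seed(42)
for age in (10, 20, 40):
    # With alpha = 3 the exact conditional expectation is age / 2.
    print(age, round(expected_remaining(3.0, age), 1))
```

Doubling the observed age doubles the expected remaining life, exactly the behaviour the Broadway actors noticed.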

Thus, a book which has been in print continuously for a hundred years is likely to be in print a hundred years from now, while this season's hot best-seller may be forgotten a few years hence. The latest political or economic theory filling up pages in the academic journals and coming onto the radar of the IYIs in the think tanks, media punditry, and (shudder) government agencies, is likely to be forgotten and/or discredited in a few years while those with a pedigree of centuries or millennia continue to work for those more interested in results than trendiness.

Religion is Lindy. If you disregard all of the spiritual components to religion, long-established religions are powerful mechanisms to transmit accumulated wisdom, gained through trial-and-error experimentation and experience over many generations, in a ready-to-use package for people today. One disregards or scorns this distilled experience at one's own great risk. Conversely, one should be as sceptical about “innovation” in ancient religious traditions and brand-new religions as one is of shiny new ideas in any other field.

(A few more technical notes…. As I keep saying, “Once Pareto gets into your head, you'll never get him out.” It's no surprise to find that the Lindy effect is deeply related to the power-law distribution of many things in human experience. It's simply another way to say that the lifetime of nonperishable goods is distributed according to a power law just like incomes, sales of books, music, and movie tickets, use of health care services, and commission of crimes. Further, the Lindy effect is similar to J. Richard Gott's Copernican statement of the Doomsday argument, with the difference that Gott provides lower and upper bounds on survival time for a given confidence level predicted solely from a random observation that something has existed for a known time.)
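(Gott's bounds mentioned above have a simple closed form. The following sketch is my own illustration, not from the book: if an entity of current age t is observed at a uniformly random moment within its total lifespan, then with confidence c its remaining lifetime lies between t(1 − c)/(1 + c) and t(1 + c)/(1 − c).)

```python
def gott_bounds(age, confidence=0.95):
    """Copernican bounds on remaining lifetime: assuming we observe
    the entity at a uniformly random moment within its total span,
    its remaining lifetime falls between the returned bounds with
    the given confidence."""
    c = confidence
    return age * (1 - c) / (1 + c), age * (1 + c) / (1 - c)

# A show that has already run 100 weeks: at 95% confidence it
# will run between roughly 100/39 and 100 * 39 more weeks.
low, high = gott_bounds(100)
```

At 95% confidence the familiar factor of 39 appears on both sides, which is why Gott's bounds, though rigorous, are so wide.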

Uncertainty, Risk, and Decision Making

All of these observations inform dealing with risk and making decisions based upon uncertain information. The key insight is that in order to succeed, you must first survive. This may seem so obvious as to not be worth stating, but many investors, including those responsible for blow-ups which make the headlines and take many others down with them, forget this simple maxim. It is deceptively easy to craft an investment strategy which will yield modest, reliable returns year in and year out—until it doesn't. Such strategies tend to be vulnerable to “tail risks”, in which an infrequently-occurring event (such as 2008) can bring down the whole house of cards and wipe out the investor and the fund. Once you're wiped out, you're out of the game: you're like the loser in a Russian roulette tournament who, after the gun goes off, has no further worries about the probability of that event. Once you accept that you will never have complete information about a situation, you can begin to build a strategy which will prevent your blowing up under any set of circumstances, and may even be able to profit from volatility. This is discussed in more detail in the author's earlier Antifragile.

The Silver Rule

People and institutions who have skin in the game are likely to act according to the Silver Rule: “Do not do to others what you would not like them to do to you.” This rule, combined with putting the skin of those “defence intellectuals” sitting in air-conditioned offices into the games they launch in far-off lands around the world, would do much to save the lives and suffering of the young men and women they send to do their bidding.

 Permalink

Shlaes, Amity. Coolidge. New York: Harper Perennial, [2013] 2014. ISBN 978-0-06-196759-7.
John Calvin Coolidge, Jr. was born in 1872 in Plymouth Notch, Vermont. His family were among the branch of the Coolidge clan who stayed in Vermont while others left its steep, rocky, and often bleak land for opportunity in the Wild West of Ohio and beyond when the Erie canal opened up these new territories to settlement. His father and namesake made his living by cutting wood, tapping trees for sugar, and small-scale farming on his modest plot of land. He diversified his income by operating a general store in town and selling insurance. There was a long tradition of public service in the family. Young Coolidge's great-grandfather was an officer in the American Revolution and his grandfather was elected to the Vermont House of Representatives. His father was justice of the peace and tax collector in Plymouth Notch, and would later serve in the Vermont House of Representatives and Senate.

Although many in the cities would consider their rural life far from the nearest railroad terminal hard-scrabble, the family was sufficiently prosperous to pay for young Calvin (the name he went by from boyhood) to attend private schools, boarding with families in the towns where they were located and infrequently returning home. He followed a general college preparatory curriculum and, after failing the entrance examination the first time, was admitted on his second attempt to Amherst College as a freshman in 1891. A loner, and already with a reputation for being taciturn, he joined none of the fraternities to which his classmates belonged, nor did he participate in the athletics which were a part of college life. He quickly perceived that Amherst had a class system, where the scions of old money families from Boston who had supported the college were elevated above nobodies from the boonies like himself. He concentrated on his studies, mastering Greek and Latin, and immersing himself in the works of the great orators of those cultures.

As his college years passed, Coolidge became increasingly interested in politics, joined the college Republican Club, and worked on the 1892 re-election campaign of Benjamin Harrison, whose Democrat opponent, Grover Cleveland, was seeking to regain the presidency he had lost to Harrison in 1888. Writing to his father after Harrison's defeat, his analysis was that “the reason seems to be in the never satisfied mind of the American and in the ever desire to shift in hope of something better and in the vague idea of the working and farming classes that somebody is getting all the money while they get all the work.”

His confidence growing, Coolidge began to participate in formal debates, finally, in his senior year, joined a fraternity, and ran for and won the honour of being an orator at his class's graduation. He worked hard on the speech, which was a great success, keeping his audience engaged and frequently laughing at his wit. While still quiet in one-on-one settings, he enjoyed public speaking and connecting with an audience.

After graduation, Coolidge decided to pursue a career in the law and considered attending law school at Harvard or Columbia University, but decided he could not afford the tuition, as he was still being supported by his father and had no prospects for earning sufficient money while studying the law. In that era, most states did not require a law school education; an aspiring lawyer could, instead, become an apprentice at an established law firm and study on his own, a practice called reading the law. Coolidge became an apprentice at a firm in Northampton, Massachusetts run by two Amherst graduates and, after two years, in 1897, passed the Massachusetts bar examination and was admitted to the bar. In 1898, he set out on his own and opened a small law office in Northampton; he had embarked on the career of a country lawyer.

While developing his law practice, Coolidge followed in the footsteps of his father and grandfather and entered public life as a Republican, winning election to the Northampton City Council in 1898. In the following years, he held the offices of City Solicitor and county clerk of courts. In 1903 he married Grace Anna Goodhue, a teacher at the Clarke School for the Deaf in Northampton. The next year, running for the local school board, he suffered the only defeat of his political career, in part because his opponents pointed out he had no children in the schools. Coolidge said, “Might give me time.” (The Coolidges went on to have two sons, John, born in 1906, and Calvin Jr., in 1908.)

In 1906, Coolidge sought statewide office for the first time, running for the Massachusetts House of Representatives and narrowly defeating the Democrat incumbent. He was re-elected the following year, but declined to run for a third term, returning to Northampton where he ran for mayor, won, and served two one year terms. In 1912 he ran for the State Senate seat of the retiring Republican incumbent and won. In the presidential election of that year, when the Republican party split between the traditional wing favouring William Howard Taft and progressives backing Theodore Roosevelt, Coolidge, although identified as a progressive, having supported women's suffrage and the direct election of federal senators, among other causes, stayed with the Taft Republicans and won re-election. Coolidge sought a third term in 1914 and won, being named President of the State Senate with substantial influence on legislation in the body.

In 1915, Coolidge moved further up the ladder by running for the office of Lieutenant Governor of Massachusetts, balancing the Republican ticket led by a gubernatorial candidate from the east of the state with his own base of support in the rural west. In Massachusetts, the Lieutenant Governor does not preside over the State Senate, but rather fulfils an administrative role, chairing executive committees. Coolidge presided over the finance committee, which provided him experience in managing a budget and dealing with competing demands from departments that was to prove useful later in his career. After being re-elected to the office in 1916 and 1917 (statewide offices in Massachusetts at the time had a term of only one year), with the governor announcing his retirement, Coolidge was unopposed for the Republican nomination for governor and narrowly defeated the Democrat in the 1918 election.

Coolidge took office at a time of great unrest between industry and labour. Prices in 1918 had doubled from their 1913 level; nothing of the kind had happened since the paper money inflation during the Civil War and its aftermath. Nobody seemed to know why: it was usually attributed to the war, but nobody understood the cause and effect. There doesn't seem to have been a single mainstream voice who observed that the rapid rise in prices (which was really a depreciation of the dollar) began precisely at the moment the Creature from Jekyll Island was unleashed upon the U.S. economy and banking system. What was obvious, however, was that in most cases industrial wages had not kept pace with the rise in the cost of living, and that large companies which had raised their prices had not correspondingly increased what they paid their workers. This gave a powerful boost to the growing union movement. In early 1919 an ugly general strike in Seattle idled workers across the city, and the United Mine Workers threatened a nationwide coal strike for November 1919, just as the maximum demand for coal in winter would arrive. In Boston, police officers voted to unionise and affiliate with the American Federation of Labor, ignoring an order from the Police Commissioner forbidding officers to join a union. On September 9th, a majority of policemen defied the order and walked off the job.

Those who question the need for a police presence on the street in big cities should consider the Boston police strike as a cautionary tale, at least as things were in the city of Boston in the year 1919. As the Sun went down, the city erupted in chaos, mayhem, looting, and violence. A streetcar conductor was shot for no apparent reason. There were reports of rapes, murders, and serious injuries. The next day, more than a thousand residents applied for gun permits. Downtown stores were boarding up their display windows and hiring private security forces. Telephone operators and employees at the electric power plant threatened to walk out in sympathy with the police. From Montana, where he was campaigning in favour of ratification of the League of Nations treaty, President Woodrow Wilson issued a mealy-mouthed statement saying, “There is no use in talking about political democracy unless you have also industrial democracy”.

Governor Coolidge acted swiftly and decisively. He called up the Guard and deployed them throughout the city, fired all of the striking policemen, and issued a statement saying “The action of the police in leaving their posts of duty is not a strike. It is a desertion. … There is nothing to arbitrate, nothing to compromise. In my personal opinion there are no conditions under which the men can return to the force.” He directed the police commissioner to hire a new force to replace the fired men. He publicly rebuked American Federation of Labor chief Samuel Gompers in a telegram released to the press which concluded, “There is no right to strike against the public safety by anybody, anywhere, any time.”

When the dust settled, the union was broken, peace was restored to the streets of Boston, and Coolidge had emerged onto the national stage as a decisive leader and champion of what he called the “reign of law.” Later in 1919, he was re-elected governor with seven times the margin of his first election. He began to be spoken of as a potential candidate for the Republican presidential nomination in 1920.

Coolidge's name was placed in nomination for president at the 1920 Republican convention, but he never finished above sixth in the balloting, in the middle of the pack of regional and favourite son candidates. On the tenth ballot, Warren G. Harding of Ohio was chosen, and party bosses announced their choice for Vice President, a senator from Wisconsin. But when the time came for delegates to vote, a Coolidge wave among rank and file tired of the bosses ordering them around gave him the nod. Coolidge did not attend the convention in Chicago; he got the news of his nomination by telephone. After he hung up, Grace asked him what it was all about. He said, “Nominated for vice president.” She responded, “You don't mean it.” “Indeed I do”, he answered. “You are not going to accept it, are you?” “I suppose I shall have to.”

Harding ran on a platform of “normalcy” after the turbulence of the war and Wilson's helter-skelter progressive agenda. He expressed his philosophy in a speech several months earlier,

America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.

The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, with 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them.

The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920.

At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (or as existed in private companies a fraction the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, enacted a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years.

Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco. Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public. As he left Vermont for Washington, he said, “I believe I can swing it.”

As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it. He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort.

Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid as when they were applied in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try. It worked. The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved from 70% to 31%.

Achieving these goals required constant vigilance and an unceasing struggle with Congress, where politicians of both parties regarded any budget surplus or increase in revenue generated by lower tax rates and a booming economy as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defense aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest and that, “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states. The New York Times approved, “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.”

When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day. Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes.

Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics, but in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood and sixteen year old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss of his son and never forgave himself for bringing his son to Washington where the injury occurred.

In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo. In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) considered ten years in office (which would have been longer than any previous president had served) too long for any individual. Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, had few tools with which to intervene in any case. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water. But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to.

Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt and raised the top marginal personal income tax rate to 63% and corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed re-election campaign in 1932. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5th, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace.

Calvin Coolidge was arguably the last U.S. president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to an executive in a business, seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis.

This is a superb history of Coolidge and his time, full of lessons for our age which has veered so far from the constitutional framework he so respected.

 Permalink

Carr, Jack. True Believer. New York: Atria Books, 2019. ISBN 978-1-5011-8084-2.
Jack Carr, a former U.S. Navy SEAL, burst into the world of thriller authors with 2018's stunning success, The Terminal List (September 2018). In it, he introduced James Reece, a SEAL whose team was destroyed by a conspiracy reaching into the highest levels of the U.S. government. Afflicted with a brain tumour, the result of a drug tested on him and his team without their knowledge or consent, and expecting it to kill him, Reece set out for revenge upon those responsible. As that novel concluded, Reece, a hunted man, took to the sea in a sailboat, fully expecting to die before he reached whatever destination he might choose.

This sequel begins right where the last book ended. James Reece is aboard the forty-eight foot sailboat Bitter Harvest braving the rough November seas of the North Atlantic and musing that as a Lieutenant Commander in the U.S. Navy he knew very little about sailing a boat in the open ocean. With supplies adequate to go almost anywhere he desires, and not necessarily expecting to live until his next landfall anyway, he decides on an ambitious voyage to see an old friend far from the reach of the U.S. government.

While Reece is at sea, a series of brazen and bloody terrorist attacks in Europe against civilian and military targets send analysts on both sides of the Atlantic digging through their resources to find common threads which might point back to whoever is responsible, as their populace becomes increasingly afraid of congregating in public.

Reece eventually arrives at a hunting concession in Mozambique, in southeast Africa, and signs on as an apprentice professional hunter, helping out in tracking and chasing off poachers who plague the land during the off-season. This suits him just fine: he's about as far off the grid as one can get in this over-connected world, among escapees from Rhodesia who understand what it's like to lose their country, surrounded by magnificent scenery and wildlife, and actively engaged in putting his skills to work defending them from human predators. He concludes he could get used to this life, for however long he has to live.

This idyll comes to an end when he is tracked down by another former SEAL, now in the employ of the CIA, who tells Reece that a man he trained in Iraq is suspected of being involved in the terrorist attacks and that if Reece will join in an effort to track him down and get him to flip on his terrorist masters, the charges pending against Reece will be dropped and he can stop running and forever looking over his shoulder. After what the U.S. government has done to him, his SEAL team, and his family, Reece's inclination is to tell them to pound sand. Then, as always, the eagle flashes its talons and Reece is told that if he fails to co-operate the Imperium will go after all of those who helped him avenge the wrongs it inflicted upon him and escape its grasp. With that bit of Soviet-style recruiting out of the way, Reece is off to a CIA black site in the REDACTED region of REDACTED to train with REDACTED for his upcoming mission. (In this book, as in the last, passages which the Department of Defense Office of Prepublication and Security Review required to be struck during its review of the manuscript are blacked out in the text. This imparted a kind of frisson and authenticity the first time out, but now it's getting somewhat tedious—just change the details, Jack, and get on with it!)

As Reece prepares for his mission, events lead him to believe he is not just confronting an external terrorist threat but, once again, forces within the U.S. government willing to kill indiscriminately to get their way. Finally, the time comes to approach his former trainee and get to the bottom of what is going on. From this point on, the story is what you'd expect of a thriller, with tradecraft, intrigue, betrayal, and discovery of a dire threat with extreme measures taken under an imminent deadline to avoid catastrophe.

The pacing of the story is…odd. The entire first third of the book is largely occupied by Reece sailing his boat and working at the game reserve. Now, single-handedly sailing a sailboat almost halfway around the globe is challenging and an adventure, to be sure, and a look inside the world of an African hunting reserve is intriguing, but these are not what thriller readers pay for, nor do they particularly develop the character of James Reece, employ his unique skills, or reveal things about him we don't already know. We're halfway through the book before Reece achieves his first goal of making contact with his former trainee, and it's only there that the real mission gets underway. And as the story ends, although a number of villains have been dispatched in satisfying ways, two of those involved in the terrorist plot (but not its masterminds) remain at large, for Reece to hunt down, presumably in the next book, in a year or so. Why not finish it here, then do something completely different next time?

I hope international agents don't take their tax advice from this novel. The CIA agent who “recruits” Reece tells him “It's a contracted position. You won't pay taxes on most of it as long as you're working overseas.” Wrong! U.S. citizens (which Reece, more fool him, remains) owe U.S. taxes on all of their worldwide income, regardless of the source. There is an exclusion for salary income from employment overseas, but this would not apply to payments by the CIA to an independent contractor. Later in the book, Reece receives a large cash award from a foreign government for dispatching a terrorist, which he donates to support the family of a comrade killed in the operation. He would owe around 50% of the award as federal and California state income taxes (since his last U.S. domicile was the once-golden state) off the top, and unless he was extraordinarily careful (which there is no evidence he was), he'd get whacked again with gift tax as punishment for his charity. Watch out, Reece: if you think having the FBI, CIA, and Naval Criminal Investigative Service on your tail is bad, be glad you haven't yet crossed the IRS or the California Franchise Tax Board!
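The “around 50%” figure is easy to check from the top marginal rates in effect around the book's 2019 publication. The sketch below is my own back-of-the-envelope arithmetic, not anything from the novel, assuming the top brackets (37% federal, 13.3% California) and ignoring deductions, bracket thresholds, and the interplay between federal and state tax:

```python
# Back-of-the-envelope check of the roughly-50% income tax hit on a
# hypothetical cash award, using top marginal rates as assumptions:
# 37% federal plus California's 13.3% top rate. Ignores deductions
# and bracket thresholds, so this is a crude worst case, not advice.
FEDERAL_TOP = 0.37
CALIFORNIA_TOP = 0.133

def worst_case_income_tax(award):
    """Return (tax_owed, combined_rate) on a cash award."""
    rate = FEDERAL_TOP + CALIFORNIA_TOP
    return award * rate, rate

tax, rate = worst_case_income_tax(1_000_000)
print(f"combined rate {rate:.1%}, tax ${tax:,.0f}")
# combined rate 50.3%, tax $503,000
```

And that is before any gift tax on passing the remainder along to his comrade's family.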

The Kindle edition does not have the attention to detail you'd expect from a Big Five New York publisher (Simon and Schuster) in a Kindle book selling for US$13. In five places, HTML character entity codes like “&#8201;” (the code for the thin space used between adjacent single and double quote marks) appear in the text. What this says to me is that nobody at this professional publishing house did a page-by-page proof of the Kindle edition before putting it on sale. I don't know of a single independently-published science fiction author selling works for a fraction of this price who would fail to do this.
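Catching this class of defect is entirely mechanical. The sketch below is my own illustration, not anything from the publisher's production pipeline: it scans extracted ebook text for numeric character references, whether well-formed like “&#8201;” or mangled with the “#” dropped, that survived conversion into the prose:

```python
import re

# Numeric character references left behind by a botched conversion:
# well-formed ones like "&#8201;" (thin space, U+2009) or mangled
# ones missing the "#", like "&8201;".
ENTITY_RESIDUE = re.compile(r"&#?\d{2,6};")

def find_entity_residue(text):
    """Return (line_number, code) pairs for entity codes left in prose."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in ENTITY_RESIDUE.finditer(line):
            hits.append((lineno, match.group()))
    return hits

sample = "He said&8201;hello.\nAll clean here.\nA thin&#8201;space."
print(find_entity_residue(sample))  # [(1, '&8201;'), (3, '&#8201;')]
```

A few minutes running something like this over the final file would have flagged all five occurrences before the book went on sale.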

This is a perfectly competent thriller, but to this reader it does not come up to the high standard set by the debut novel. You should not read this book without reading The Terminal List first; if you don't, you'll miss most of the story of what made James Reece who he is here.

 Permalink

Griffin, G. Edward. The Creature from Jekyll Island. Westlake Village, CA: American Media, [1994, 1995, 1998, 2002] 2010. ISBN 978-0-912986-45-6.
Almost every time I review a book about or discuss the U.S. Federal Reserve System in a conversation or Internet post, somebody recommends this book. I'd never gotten around to reading it until recently, when a couple more mentions of it pushed me over the edge. And what an edge that turned out to be. I cannot recommend this book to anybody; there are far more coherent, focussed, and persuasive analyses of the Federal Reserve in print, for example Ron Paul's excellent book End the Fed (October 2009). The present book goes well beyond a discussion of the Federal Reserve and rambles over millennia of history in a chaotic manner prone to induce temporal vertigo in the reader, discussing the history of money, banking, political manipulation of currency, inflation, fractional reserve banking, fiat money, central banking, cartels, war profiteering, monetary panics and bailouts, nonperforming loans to “developing” nations, the Rothschilds and Rockefellers, booms and busts, and more.

The author is inordinately fond of conspiracy theories. As we pursue our random walk through history and around the world, we encounter:

  • The sinking of the Lusitania
  • The assassination of Abraham Lincoln
  • The Order of the Knights of the Golden Circle, the Masons, and the Ku Klux Klan
  • The Bavarian Illuminati
  • Russian Navy intervention in the American Civil War
  • Cecil Rhodes and the Round Table Groups
  • The Council on Foreign Relations
  • The Fabian Society
  • The assassination of John F. Kennedy
  • Theodore Roosevelt's “Bull Moose” run for the U.S. presidency in 1912
  • The Report from Iron Mountain
  • The attempted assassination of Andrew Jackson in 1835
  • The Bolshevik Revolution in Russia

I've jumped around in history to give a sense of the chaotic, achronological narrative here. “What does this have to do with the Federal Reserve?”, you might ask. Well, not very much, except as part of a worldview in which almost everything is explained by the machinations of bankers assisted by the crooked politicians they manipulate.

Now, I agree with the author, on those occasions he actually gets around to discussing the Federal Reserve, that it was fraudulently sold to Congress and the U.S. population and has acted, from the very start, as a self-serving cartel of big New York banks enriching themselves at the expense of anybody who holds assets denominated in the paper currency they have been inflating away ever since 1913. But you don't need to invoke conspiracies stretching across the centuries and around the globe to explain this. The Federal Reserve is (despite how it was deceptively structured and promoted) a central bank, just like the Bank of England and the central banks of other European countries upon which it was modelled, and creating funny money out of thin air and looting the population by the hidden tax of inflation is what central banks do, always have done, and always will, as long as they are permitted to exist. Twice in the history of the U.S. prior to the establishment of the Federal Reserve, central banks were created, the first in 1791 by Alexander Hamilton, and the second in 1816. Each time, after the abuses of such an institution became apparent, the bank was abolished, the first in 1811, and the second in 1836. Perhaps, after the inevitable crack-up which always results from towering debt and depreciating funny money, the Federal Reserve will follow the first two central banks into oblivion, but so deeply is it embedded in the status quo it is difficult to see how that might happen today.

In addition to the rambling narrative, the production values of the book are shoddy. For a book which has gone through five editions and 33 printings, nobody appears to have spent the time giving the text even the most cursory of proofreading. Without examining it with the critical eye I apply when proofing my own work or that of others, I noted 137 errors of spelling, punctuation, and formatting in the text. Paragraph breaks are inserted seemingly at random, right in the middle of sentences, and words are run together. Words which are misspelled include “from”, “great”, “fourth”, and “is”. This is not a freebie or dollar special, but a paperback which sells for US$20 at Amazon, or US$18 for the Kindle edition. And as I always note, if the author and publisher cannot be bothered to get simple things like these correct, how likely is it that facts and arguments in the text can be trusted?

Don't waste your money or your time. Ron Paul's End the Fed is much better, only a third the length, and concentrates on the subject without all of the whack-a-doodle digressions. For a broader perspective on the history of money, banking, and political manipulation of currency, see Murray Rothbard's classic What Has Government Done to Our Money? (July 2019).

 Permalink

Butler, Smedley D. War Is a Racket. San Diego, CA: Dauphin Publications, [1935] 2018. ISBN 978-1-939438-58-4.
Smedley Butler knew a thing or two about war. In 1898, a little over a month before his seventeenth birthday, he lied about his age and enlisted in the U.S. Marine Corps, which directly commissioned him a second lieutenant. After completing training, he was sent to Cuba, arriving shortly after the end of the Spanish-American War. Upon returning home, he was promoted to first lieutenant and sent to the Philippines as part of the American garrison. There, he led Marines in combat against Filipino rebels. In 1900 he was deployed to China during the Boxer Rebellion and was wounded in the Gaselee Expedition, being promoted to captain for his bravery.

He then served in the “Banana Wars” in Central America and the Caribbean. In 1914, during a conflict in Mexico, he carried out an undercover mission in support of a planned U.S. intervention. For his command in the battle of Veracruz, he was awarded the Medal of Honor. Next, he was sent to Haiti, where he commanded Marines and Navy troops in an attack on Fort Rivière in November 1915. For this action, he won a second Medal of Honor. To this day, he is one of only nineteen people to have won the Medal of Honor twice.

In World War I he did not receive a combat command, but for his work in commanding the debarkation camp in France for American troops, he was awarded both the Army and Navy Distinguished Service Medals. Returning to the U.S. after the armistice, he became commanding general of the Marine training base at Quantico, Virginia. Between 1927 and 1929 he commanded the Marine Expeditionary Force in China, and returning to Quantico in 1929, he was promoted to Major General, then the highest rank available in the Marine Corps (which was subordinate to the Navy), becoming the youngest person in the Corps to attain that rank. He retired from the Marine Corps in 1931.

In this slim pamphlet (just 21 pages in the Kindle edition I read), Butler demolishes the argument that the U.S. military actions in which he took part in his 33 years as a Marine had anything whatsoever to do with the defence of the United States. Instead, he saw lives and fortune squandered on foreign adventures largely in the interest of U.S. business interests, with those funding and supplying the military banking large profits from the operation. With the introduction of conscription in World War I, the cynical exploitation of young men reached a zenith with draftees paid US$30 a month, with half taken out to support dependants, and another bite for mandatory insurance, leaving less than US$9 per month for putting their lives on the line. And then, in a final insult, there was powerful coercion to “invest” this paltry sum in “Liberty Bonds” which, after the war, were repaid well below the price of purchase and/or in dollars which had lost half their purchasing power.

Want to put an end to endless, futile, and tragic wars? Forget disarmament conferences and idealistic initiatives, Butler says,

The only way to smash this racket is to conscript capital and industry and labor before the nations [sic] manhood can be conscripted. One month before the Government can conscript the young men of the nation—it must conscript capital and industry. Let the officers and the directors and the high-powered executives of our armament factories and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted—to get $30 a month, the same wage as the lads in the trenches get.

Let the workers in these plants get the same wages—all the workers, all presidents, all directors, all managers, all bankers—yes, and all generals and all admirals and all officers and all politicians and all government office holders—everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches!

Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and majors [I think “mayors” was intended —JW] pay half their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds.

Why shouldn't they?

Butler goes on to recommend that any declaration of war require approval by a national plebiscite in which voting would be restricted to those subject to conscription in a military conflict. (Writing in 1935, he never foresaw that young men and women would be sent into combat without so much as a declaration of war being voted by Congress.) Further, he would restrict all use of military force to genuine defence of the nation, in particular, limiting the Navy to operating no more than 200 miles (320 km) from the coastline.

This is an impassioned plea against the folly of foreign wars by a man whose career was as a warrior. One can argue that there is a legitimate interest in, say, assuring freedom of navigation in international waters, but looking back on the results of U.S. foreign wars in the 21st century, it is difficult to argue they can be justified any more than the “Banana Wars” Butler fought in his time.

 Permalink

September 2019

Chittum, Thomas. Civil War Two. Seattle: Amazon Digital Services, [1993, 1996] 2018. ASIN B07FCWD7C4.
This book was originally published in 1993 with a revised edition in 1996. This Kindle edition, released in 2018, and available for free to Kindle Unlimited subscribers, appears to be identical to the last print edition, although the number of typographical, punctuation, grammatical, and formatting errors (I counted 78 in 176 pages of text, and I wasn't reading with a particularly critical eye) makes me wonder if the Kindle edition was made by optical character recognition of a print copy and never properly copy edited before publication. The errors are so frequent and egregious that readers will get the impression that the publisher couldn't be bothered to read over the text before it reached their eyes.

Sometimes, a book with mediocre production values can be rescued by its content, but that is not the case here. The author, who served two tours as a rifleman with the U.S. Army in Vietnam (1965 and 1966), then fought with the Rhodesian Territorials in the early 1970s and the Croatian Army in 1991–1992, argues that the U.S. has been transformed from a largely homogeneous republic, in which minorities and newcomers were encouraged and provided a path to assimilate, into a multi-ethnic empire in which each group (principally, whites and those who, like most East Asians, have assimilated to the present majority's culture; blacks; and Hispanics) sees itself engaged in a zero-sum contest against the others for power and the wealth of the empire.

So far, this is a relatively common and non-controversial observation, at least among those on the dissident right who have been observing the deliberate fracturing of the society into rival interest groups along ethnic lines by cynical politicians aiming to assemble a “coalition of the aggrieved” into a majority. But from this starting point the author goes on to forecast increasingly violent riots along ethnic lines, initially in the large cities and then, as people flee areas in which they are an ethnic minority and flock together with others of their tribe, at borders between the emerging territories.

He then sees a progression toward large-scale conventional warfare proceeding in four steps: an initial Foundational Phase where the present Cold Civil War heats up as street gangs align on ethnic lines, new irregular forces spring up to defend against the others, and the police either divide among the factions or align themselves with the faction dominant in their territory. Next, in a protracted Terrorist Phase, the rival forces will increasingly attack one another and carry out strikes against the forces of the empire who try to suppress them. This will lead to increasing flight and concentration of each group in a territory where it is the majority, and then demands for more autonomy for that territory. He estimates (writing in the first half of the 1990s) that this was the present phase and could be expected to last for another five to twenty-five years (which would put its conclusion no later than 2020).

The Terrorist Phase will then give way to Guerrilla Warfare, with street gangs and militia groups evolving into full-time paramilitary forces like the Viet Cong and Irish Republican Army. The empire will respond with an internal security force similar to that of the Soviet Union, and, as chaos escalates, most remaining civil liberties will be suspended “for the duration of the emergency”. He forecasts this phase as lasting between ten and twenty years. Finally, the situation will progress to All-Out, Continuous Warfare, where groups will unite and align along ethnic lines, bringing into play heavy weapons (artillery, rocket-propelled grenades, armour, etc.) seized from military depots or provided by military personnel defecting to the factional forces. The economy will collapse, and insurgent forces will fund their operations by running the black market that replaces it. For this phase, think of ex-Yugoslavia in the 1990s.

When the dust settles, possibly involving the intervention of United Nations or other “peacekeeping” troops, the result will be a partition of the United States into three ethnically-defined nations. The upper U.S., from coast to coast, will have a larger white (plus East Asian, and other assimilated groups) majority than today. The Old South extending through east Texas will be a black majority nation, and the Southwest, from central Texas through coastal California north of the San Francisco area will be a Hispanic majority nation, possibly affiliated or united with Mexico. The borders will be sharp, defended, and prone to occasional violence.

My problem with this is that it's…ridiculous. Just because a country has rival ethnic groups doesn't mean you'll end up with pitched warfare and partition. Yes, that's what happened in ex-Yugoslavia, but that was a case where centuries-long ethnic tensions and hatred upon which the lid had been screwed down for fifty years by an authoritarian communist regime were released into the open when it collapsed. Countries including Canada, Ireland/Northern Ireland, and Belgium have long-standing ethnic disputes, tension, and occasional violence, and yet they have not progressed to tanks in the street and artillery duels across defended frontiers.

The divide in the U.S. does not seem to be so much across ethnic lines as between a coastal and urban élite and a heartland productive population which has been looted for the benefit of the ruling class. The ethnic groups, to the extent they have been organised as factions with a grievance agenda, seem mostly interested in vying for which can extract the most funds from the shrinking productive population for the benefit of their members. This divide, often called “blue/red” or “globalist/nationalist”, goes right down the middle of a number of highly controversial and divisive issues such as immigration, abortion, firearms rights, equality before the law vs. affirmative action, free trade vs. economic nationalism, individual enterprise vs. socialism and redistribution, and many others. (The polarisation can be seen clearly by observing that if you know on which side an individual comes down on one of these issues, you can predict, with a high probability, their view on all the others.)

To my mind, a much more realistic (not to mention far better written) scenario for the U.S. coming apart at the seams is Kurt Schlichter's People's Republic (November 2018) which, although fiction, seems an entirely plausible extrapolation of present trends and the aftermath of two incompatible worldviews going their separate ways.

 Permalink

Brennan, Gerald. Public Loneliness. Chicago: Tortoise Books, [2014] 2017. ISBN 978-0-9986325-1-3.
This is the second book in the author's “Altered Space” series of alternative histories of the cold war space race. Each stand-alone story explores a space mission which did not take place, but could have, given the technology and political circumstances at the time. The first, Zero Phase (October 2016), asks what might have happened had Apollo 13's service module oxygen tank waited to explode until after the lunar module had landed on the Moon. The third, Island of Clouds (July 2019), tells the story of a Venus fly-by mission using Apollo-derived hardware in 1972.

The present short book (120 pages in paperback edition) is the tale of a Soviet circumlunar mission piloted by Yuri Gagarin in October 1967, to celebrate the 50th anniversary of the Bolshevik revolution and the tenth anniversary of the launch of Sputnik. As with all of the Altered Space stories, this could have happened: in the 1960s, the Soviet Union had two manned lunar programmes, each using entirely different hardware. The lunar landing project was based on the N1 rocket, a modified Soyuz spacecraft called the 7K-LOK, and the LK one-man lunar lander. The Zond project aimed at a manned lunar fly-by mission (the spacecraft would loop around the Moon and return to Earth on a “free return trajectory” without entering lunar orbit). Zond missions would launch on the Proton booster with a crew of one or two cosmonauts flying around the Moon in a spacecraft designated Soyuz 7K-L1, which was stripped down by removal of the orbital module (forcing the crew to endure the entire trip in the cramped launch/descent module) and equipped for the lunar mission by the addition of a high gain antenna, navigation system, and a heat shield capable of handling the velocity of entry from a lunar mission.

In our timeline, the Zond programme was plagued by problems. The first four unmanned lunar mission attempts, launched between April and November 1967, all failed due to problems with the Proton booster. Zond 4, in March of 1968, flew out to a lunar distance, but was deliberately launched 180° away from the Moon (perhaps to avoid the complexity of lunar gravity). It returned to Earth, but off-course, and was blown up by its self-destruct mechanism to avoid it falling into the hands of another country. Two more Zond launches in April and July 1968 failed from booster problems, with the second killing three people when its upper stage exploded on the launch pad. In September 1968 Zond 5 became the first spacecraft to circle the Moon and return to Earth, carrying a “crew” of two tortoises, fruit fly eggs, and plant seeds. The planned “double dip” re-entry failed, and the spacecraft made a ballistic re-entry with deceleration which might have killed a human cosmonaut, but didn't seem to faze the tortoises. Zond 6 performed a second circumlunar mission in November 1968, again with tortoises and other biological specimens. During the return to Earth, the capsule depressurised, killing all of the living occupants. After a successful re-entry, the parachute failed and the capsule crashed to Earth. This was followed by three more launch failures and then, finally, in August 1969, a completely successful unmanned flight which was the first in which a crew, if onboard, would have survived. By this time, of course, the U.S. had not only orbited the Moon (a much more ambitious mission than Zond's fly-by), but landed on the surface, so even a successful Zond mission would have been an embarrassing afterthought. After one more unmanned test in October 1970, the Zond programme was cancelled.

In this story, the Zond project encounters fewer troubles and with the anniversary of the October revolution approaching in 1967, the go-ahead was given for a piloted flight around the Moon. Yuri Gagarin, who had been deeply unhappy at being removed from flight status and paraded around the world as a cultural ambassador, used his celebrity status to be assigned to the lunar mission which, given weight constraints and the cramped Soyuz cabin, was to be flown by a single cosmonaut.

The tale is narrated by Gagarin himself. The spacecraft is highly automated, so there isn't much for him to do other than take pictures of the Earth and Moon, and so he has plenty of time to reflect upon his career and the experience of being transformed overnight from an unknown 27-year-old fighter pilot into a global celebrity and icon of Soviet technological prowess. He seems to have a mild case of impostor syndrome, being acutely aware that he was entirely a passive passenger on his Vostok 1 flight, never once touching the controls, and that the credit he received for the accomplishment belonged to the engineers and technicians who built and operated the craft, who continued to work in obscurity. There are extensive flashbacks to the flight, his experiences afterward, and the frustration at seeing his flying career come to an end.

But this is Soviet hardware, and not long into the flight problems occur which pose increasing risks to the demanding mission profile. Although the planned trajectory will sling the spacecraft around the Moon and back to Earth, several small trajectory correction maneuvers will be required to hit the narrow re-entry corridor in the Earth's atmosphere: too steep and the capsule will burn up, too shallow and it will skip off the atmosphere into a high elliptical orbit in which the cosmonaut's life support consumables may run out before it returns to Earth.

The compounding problems put these course corrections at risk, and mission control decides not to announce the flight to the public while it is in progress. As the book concludes, Gagarin does not know his ultimate fate, and neither does the reader.

This is a moving story, well told, and flawless in its description of the spacecraft and Zond mission plan. One odd stylistic choice is that in Gagarin's narration, he refers to spacecraft by the English translations of their Russian names: “East” instead of “Vostok”, “Union” instead of “Soyuz”, etc. This might seem confusing, but think about it: that's how a Russian would have heard those words, so it's correct to translate them into English along with his other thoughts. There is a zinger on the last page that speaks to the nature of the Soviet propaganda machine—I'll not spoil it for you.

The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

Snowden, Edward. Permanent Record. New York: Metropolitan Books, 2019. ISBN 978-1-250-23723-1.
The revolution in communication and computing technologies which has continually accelerated since the introduction of integrated circuits in the 1960s and has since given rise to the Internet, ubiquitous mobile telephony, vast data centres with formidable processing and storage capacity, and technologies such as natural language text processing, voice recognition, and image analysis, has created the potential, for the first time in human history, of mass surveillance to a degree unimagined even in dystopian fiction such as George Orwell's 1984 or attempted by the secret police of totalitarian regimes like the Soviet Union, Nazi Germany, or North Korea. But, residents of enlightened developed countries such as the United States thought, they were protected, by legal safeguards such as the Fourth Amendment to the U.S. Constitution, from having their government deploy such forbidding tools against its own citizens. Certainly, there was awareness, from disclosures such as those in James Bamford's 1982 book The Puzzle Palace, that agencies such as the National Security Agency (NSA) were employing advanced and highly secret technologies to spy upon foreign governments and their agents who might attempt to harm the United States and its citizens, but their activities were circumscribed by a legal framework which strictly limited the scope of their domestic activities.

Well, that's what most people believed until the courageous acts by Edward Snowden, a senior technical contractor working for the NSA, revealed, in 2013, multiple programs of indiscriminate mass surveillance directed against, well, everybody in the world, U.S. citizens most definitely included. The NSA had developed and deployed a large array of hardware and software tools whose mission was essentially to capture all the communications and personal data of everybody in the world, scan it for items of interest, and store it forever where it could be accessed in future investigations. Data were collected through a multitude of means: monitoring traffic across the Internet, collecting mobile phone call and location data (estimated at five billion records per day in 2013), spidering data from Web sites, breaking vulnerable encryption technologies, working with “corporate partners” to snoop data passing through their facilities, and fusing this vast and varied data with query tools such as XKEYSCORE, which might be thought of as a Google search engine built by people who from the outset proclaimed, “Heck yes, we're evil!”

How did Edward Snowden, over his career a contractor employee for companies including BAE Systems, Dell Computer, and Booz Allen Hamilton, and a government employee of the CIA, obtain access to such carefully guarded secrets? What motivated him to disclose this information to the media? How did he spirit the information out of the famously security-obsessed NSA and get it into the hands of the media? And what were the consequences of his actions? All of these questions are answered in this beautifully written, relentlessly candid, passionately argued, and technologically insightful book by the person who, more than anyone else, is responsible for revealing the malignant ambition of the government of the United States and its accomplices in the Five Eyes (Australia, Canada, New Zealand, and the United Kingdom) to implement and deploy a global panopticon which would shrink the scope of privacy of individuals to essentially zero—in the words of an NSA PowerPoint (of course) presentation from 2011, “Sniff It All, Know It All, Collect It All, Process It All, Exploit It All, Partner It All”. They didn't mention “Store It All Forever”, but with the construction of the US$1.5 billion Utah Data Center which consumes 65 megawatts of electricity, it's pretty clear that's what they're doing.

Edward Snowden was born in 1983 and grew up along with the personal computer revolution. His first contact with computers was when his father brought home a Commodore 64, on which father and son would play many games. Later, when he was just seven years old, his father introduced him to programming on a computer at the Coast Guard base where he worked, and, a few years later, when the family had moved to the Maryland suburbs of Washington DC after his father had been transferred to Coast Guard Headquarters, the family got a Compaq 486 PC clone which opened the world of programming and exploration of online groups and the nascent World Wide Web via the narrow pipe of a dial-up connection to America Online. In those golden days of the 1990s, the Internet was mostly created by individuals for individuals, and you could have any identity, or as many identities as you wished, inventing and discarding them as you explored the world and yourself. This was ideal for a youth who wasn't interested in sports and tended to be reserved in the presence of others. He explored the many corners of the Internet and, like so many with the talent for understanding complex systems, learned to deduce the rules governing systems and explore ways of using them to his own ends. Bob Bickford defines a hacker as “Any person who derives joy from discovering ways to circumvent limitations.” By that definition, hacking is not inherently criminal, and it need not have anything to do with computers. As his life progressed, Snowden would learn how to hack school, the job market, and eventually the oppressive surveillance state.

By September 2001, Snowden was working for an independent Web site developer operating out of her house on the grounds of Fort Meade, Maryland, the home of the NSA (for whom, coincidentally, his mother worked in a support capacity). After the attacks on the World Trade Center and Pentagon, he decided, in his family's long tradition of service to their country (his grandfather was a Rear Admiral in the Coast Guard, and ancestors fought in the Revolution, Civil War, and both world wars), that his talents would be better put to use in the intelligence community. His lack of a four-year college degree would usually be a bar to such employment, but the terrorist attacks changed all the rules, and military veterans were being given a fast track into such jobs, so, after exploring his options, Snowden enlisted in the Army under a special program called 18 X-Ray, which would send qualifying recruits directly into Special Forces training after completing their basic training.

His military career was to prove short. During a training exercise, he took a fall in the forest which fractured the tibia in both legs, and he was advised he would never be able to qualify for Special Forces. Given the option of serving out his time in a desk job or taking immediate “administrative separation” (in which he would waive the government's liability for the injury), he opted for the latter. Finally, after a circuitous process, he was hired by a government contractor and received the exclusive Top Secret/Sensitive Compartmented Information security clearance which qualified him to work at the CIA.

A few words are in order about contractors at government agencies. In some media accounts of the Snowden disclosures, he has been dismissed as “just a contractor”, but in the present-day U.S. government, where nothing is as it seems and much of everything is a scam, many of the people working in the most sensitive capacities in the intelligence community are in fact contractors supplied by the big “beltway bandit” firms which have sprung up like mushrooms around the federal swamp. You see, agencies operate under strict limits on the number of pure government (civil service) employees they can hire and, of course, government employment is almost always forever. But if they pay a contractor to supply a body to do precisely the same job, on site, they can pay the contractor from operating funds and bypass the entire civil service mechanism and its limits; further, they're free to cut jobs any time they wish, and to get rid of people and request a replacement from the contractor without going through the arduous process of laying off or firing a “govvy”. In all of Snowden's jobs, the blue-badged civil servants worked alongside the green-badged contractors without distinction in job function. Contractors would rarely if ever visit the premises of their nominal “employers” except for the formalities of hiring and employee benefits. One of Snowden's co-workers said “contracting was the third biggest scam in Washington after the income tax and Congress.”

His work at the CIA was in system administration, and he rapidly learned that regardless of classification levels, compartmentalisation, and need to know, the person in a modern organisation who knows everything, or at least has the ability to find out if interested, is the system administrator. In order to keep a system running, ensure the integrity of the data stored on it, restore backups when hardware, software, or user errors cause things to be lost, and the myriad other tasks that comprise the work of a “sysadmin”, you have to have privileges to access pretty much everything in the system. You might not be able to see things on other systems, but the ones under your control are an open book. The only safeguard employers have over rogue administrators is monitoring of their actions, and this is often laughably poor, especially as bosses often lack the computer savvy of the administrators who work for them.

After nine months on the job, an opening came up for a CIA civil servant job in overseas technical support. Attracted to travel and exotic postings abroad, Snowden turned in his green badge for a blue one and, after a training program, was sent to exotic…Geneva as a computer security technician, under diplomatic cover. As placid as it may seem, Geneva was on the cutting edge of CIA spying technology, with the United Nations, numerous international agencies, and private banks all prime targets for snooping.

Two years later Snowden was a contractor once again, this time with Dell Computer, who placed him with the NSA, first in Japan, then back in Maryland, and eventually in Hawaii as lead technologist of the Office of Information Sharing, where he developed a system called “Heartbeat” which allowed all of NSA's sites around the world to share their local information with others. It can be thought of as an automated blog aggregator for Top Secret information. This provided him personal access to just about everything the NSA was up to, world-wide. And he found what he read profoundly disturbing and dismaying.

Once he became aware of the scope of mass surveillance, he transferred to another job in Hawaii which would allow him to personally verify its power by gaining access to XKEYSCORE. His worst fears were confirmed, and he began to patiently, with great caution, and using all of his insider's knowledge, prepare to bring the archives he had spirited out from the Heartbeat system to the attention of the public via respected media who would understand the need to redact any material which might, for example, put agents in the field at risk. He discusses why, based upon his personal experience and that of others, he decided the whistleblower approach within the chain of command was not feasible: the unconstitutional surveillance he had discovered had been approved at the highest levels of government—there was nobody who could stop it who had not already approved it.

The narrative then follows preparing for departure, securing the data for travel, taking a leave of absence from work, travelling to Hong Kong, and arranging to meet the journalists he had chosen for the disclosure. There is a good deal of useful tradecraft information in this narrative for anybody with secrets to guard. Then, after the stories began to break in June 2013, the tale of his harrowing escape from the long reach of Uncle Sam is recounted. Popular media accounts of Snowden “defecting to Russia” are untrue. He had planned to seek asylum in Ecuador, and had obtained a laissez-passer from the Ecuadoran consul and arranged to travel to Quito from Hong Kong via Moscow, Havana, and Caracas, as that was the only routing which did not pass through U.S. airspace or involve stops in countries with extradition treaties with the U.S. Upon arrival in Moscow, he discovered that his U.S. passport had been revoked while he was en route from Hong Kong, and without a valid passport he could neither board an onward flight nor leave the airport. He ended up trapped in the Moscow airport for forty days while twenty-seven countries folded to U.S. pressure and denied him political asylum. He spent so long in the airport that he even grew tired of eating at the Burger King there. Finally, on August 1st, 2013, Russia granted him temporary asylum. At this writing, he is still in Moscow, having been joined in 2017 by Lindsay Mills, the love of his life whom he left behind in Hawaii in 2013, and who is now his wife.

This is very much a personal narrative, and you will get an excellent sense for who Edward Snowden is and why he chose to do what he did. The first thing that struck me is that he really knows his stuff. Some of the press coverage presented him as a kind of low-level contractor systems nerd, but he was principal architect of EPICSHELTER, NSA's worldwide backup and archiving system, and sole developer of the Heartbeat aggregation system for reports from sites around the globe. At the time he left to make his disclosures, his salary was US$120,000 per year, hardly the pay of a humble programmer. His descriptions of technologies and systems in the book are comprehensive and flawless. He comes across as motivated entirely by outrage at the NSA's flouting of the constitutional protections supposed to be afforded U.S. citizens and its abuses in implementing mass surveillance, sanctioned at the highest levels of government across two administrations from different political parties. He did not seek money for his disclosures, and did not offer them to foreign governments. He took care to erase all media containing the documents he removed from the NSA before embarking on his trip from Hong Kong, and when approached upon landing in Moscow by agents from the Russian FSB (intelligence service) with what was obviously a recruitment pitch, he immediately cut it off, saying,

Listen, I understand who you are, and what this is. Please let me be clear that I have no intention to cooperate with you. I'm not going to cooperate with any intelligence service. I mean no disrespect, but this isn't going to be that kind of meeting. If you want to search my bag, it's right here. But I promise you, there's nothing in it that can help you.

And that was that.

Edward Snowden could have kept quiet, done his job, collected his handsome salary, continued to live in a Hawaiian paradise, and shared his life with Lindsay, but he threw it all away on a matter of principle and duty to his fellow citizens and the Constitution he had sworn to defend when taking the oath upon joining the Army and the CIA. On the basis of the law, he is doubtless guilty of the three federal crimes with which he has been charged, sufficient to lock him up for as many as thirty years should the U.S. lay its hands on him. But he believes he did the correct thing in an attempt to right wrongs which were intolerable. I agree, and can only admire his courage. If anybody is deserving of a Presidential pardon, it is Edward Snowden.

There is relatively little discussion here of the actual content of the documents which were disclosed and the surveillance programs they revealed. For full details, visit the Snowden Surveillance Archive, which has copies of all of the documents which have been disclosed by the media to date. U.S. government employees and contractors should read the warning on the site before viewing this material.

 Permalink

Yates, Raymond F. The Boys' Book of Model Railroading. New York: Harper & Row, 1951. ISBN 978-1-127-46606-1.
In the years before World War II, Lionel was the leading U.S. manufacturer of model railroad equipment, specialising in “tinplate” models which were often unrealistic in scale, painted in garish colours, and appealing to young children and the mothers who bought them as gifts. During the war, the company turned to production of items for the U.S. Navy. After the war, it returned to the model railroad market, remaking its product line with more realistic models. This coincided with the arrival of the baby boom generation, which, as the boys grew up, had an unlimited appetite for ever more complicated and realistic model railroads, an appetite Lionel was eager to meet with simple, rugged, and affordable gear which set the standard for model railroading for a generation.

This book, published in 1951, just as Lionel was reaching the peak of its success, was written by Raymond F. Yates, author of earlier classics such as A Boy and a Battery and A Boy and a Motor, which were perennially wait-listed at the public library when I was a kid during the 1950s. The book starts with the basics of electricity, then moves on to a totally Lionel-based view of the model railroading hobby. There are numerous do-it-yourself projects, ranging from building simple scenery to complex remote-controlled projects with both mechanical and electrical actuation. There is even a section on replacing the unsightly centre third rail of Lionel O-gauge track with a subtle third rail located to the side of the track which the author notes “should be undertaken only if you are prepared to do a lot of work and if you know how to use a soldering iron.” Imagine what this requires for transmitting current across switches and crossovers! Although I read this book, back in the day, I'm glad I never went that deeply down the rabbit hole.

I learned a few things here I never stumbled across while running my Lionel oval layout during the Eisenhower administration or in engineering school many years later. For example: why did Lionel opt for AC power and a three rail system rather than the obvious approach of DC motors and two rails, which makes it easier, for example, to reverse trains and looks more like the real thing? The answer is that a three rail system with AC power is symmetrical, and allows all kinds of complicated geometries in layouts without worrying about cross-polarity connections on junctions. AC power allows using inexpensive transformers to run the layout from mains power without rectifiers which, in the 1950s, would have meant messy and inefficient selenium stacks prone to blowing up into toxic garlic-smelling fumes if mistreated. But many of the Lionel remote control gizmos, such as the knuckle couplers, switches, semaphore signals, and that eternal favourite, the giraffe car, used solenoids as actuators. How could that work with AC power? Well, think about it—if you have a soft iron plunger within the coil, but not at its centre, when current is applied to the coil, the induced magnetic field will pull it into the centre of the coil. This force is independent of the direction of the current. So an alternating current will create a varying magnetic field which, averaged over the mechanical inertia of the plunger, will still pull it in as long as the solenoid is energised. In practice, running a solenoid on AC may result in a hum, buzz, or chatter, which can be avoided by including a shading coil, in which an induced current creates a magnetic field 90° out of phase to the alternating current in the main coil and smooths the magnetic field actuating the plunger. I never knew that; did you?
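The averaged-force argument can be checked numerically. The sketch below is my own illustration, not something from the book: it treats the plunger force as proportional to the square of the instantaneous coil current and averages it over one cycle of 60 Hz AC. The average comes out positive, which is why the solenoid pulls steadily inward even though the current keeps reversing direction.

```python
import math

def mean_force(freq_hz=60.0, i_peak=1.0, samples=1000):
    """Average of i(t)^2 (proportional to plunger force) over one AC cycle."""
    period = 1.0 / freq_hz
    total = 0.0
    for n in range(samples):
        t = period * n / samples
        i = i_peak * math.sin(2 * math.pi * freq_hz * t)
        total += i * i          # force ∝ i², so it is never negative
    return total / samples

avg = mean_force()
print(avg)   # ≈ 0.5 · i_peak²: a steady net pull despite the alternating current
```

The instantaneous force still ripples at twice the line frequency (sin² pulses 120 times a second on 60 Hz power), which is exactly the buzz the shading coil is there to smooth out.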

This is a book for boys. There is only a hint of the fanaticism to which the hobby of model railroading can be taken. We catch a whiff of it in the chapter about running the railroad on a published schedule, with telegraph connections between dispatchers and clocks modified to keep “scale time”. All in all, it was great fun then, and great fun to recall now. To see how far off the deep end O-gauge model railroading has gone since 1951, check out the Lionel Trains 2019 Catalogue.

This book is out of print, but used copies are readily available at a reasonable price.

 Permalink

October 2019

Mills, Kyle. Lethal Agent. New York: Atria Books, 2019. ISBN 978-1-5011-9062-9.
This is the fifth novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. On the cover, Vince Flynn still gets top billing (he is now the “brand”, not the author).

In the third Mitch Rapp novel by Kyle Mills, Enemy of the State (June 2018), Rapp decapitated the leadership of ISIS by detonating a grenade in a cave where they were meeting and barely escaped with his life when the cavern collapsed. As the story concluded, it was unknown whether the leader of ISIS, Mullah Sayid Halabi, was killed in the cave-in. Months later, evidence surfaces that Halabi survived, and may be operating in chaotic, war-torn Yemen. Rapp tracks him to a cave in the Yemeni desert but finds only medical equipment apparently used to treat his injuries: Halabi has escaped again.

A Doctors Without Borders team treating victims of a frighteningly contagious and virulent respiratory disease which has broken out in a remote village in Yemen is attacked and its high-profile microbiologist is kidnapped, perhaps by Halabi's people to work on bioweapons. Meanwhile, by what amounts to pure luck, a shipment of cocaine from Mexico is intercepted and found to contain, disguised among the packets of the drug, a brick of weaponised anthrax, leading authorities to suspect the nightmare scenario in which one or more Mexican drug cartels are cooperating with Islamic radicals to smuggle terrorists and weapons across the porous southern border of the U.S.

In Washington, a presidential election is approaching, and President Alexander, who will be leaving after two terms, seems likely to be replaced by the other party's leading contender, the ruthless and amoral Senator Christine Barnett, who is a sworn enemy of CIA director Irene Kennedy and operative Mitch Rapp, and, if elected, is likely to, at best, tie them up in endless congressional hearings and, at worst, see them both behind bars. Barnett places zero priority on national security or the safety of the population, and is willing to risk either to obtain political advantage.

Halabi's plans become evident when a slickly-produced video appears on the Internet, featuring a very much alive Halabi saying, “Now I have your biological weapons experts. Now I have the power to use your weapons against you.” The only way to track down Halabi, who has relocated to parts unknown, is by infiltrating the Mexican cartel behind the intercepted shipment. Rapp devises a plan to persuade the cartel boss he has gone rogue and is willing to sign on as an enforcer. Having no experience operating in Mexico or more than a few words of Spanish, and forced to operate completely on his own, he must somehow convince the cartel to let him inside its inner circle and then find the connection to Halabi and thwart his plans, which Rapp and others suspect may be far more sinister than sprinkling some anthrax around. (You don't need an expert microbiologist to weaponise anthrax, after all.)

This thriller brings back the old, rough-edged, and unrelenting Mitch Rapp of some of Vince Flynn's early novels. And this is a Rapp who has seen enough of the Washington swamp and the creatures who inhabit it to have outgrown any remaining dewy-eyed patriotism. In chapter 22, he says,

But what I do know is that the U.S. isn't ready. If Halabi's figured out a way to hit us with something big—something biological—what's our reaction going to be? The politicians will run for the hills and point fingers at each other. And the American people…. They faint if someone uses insensitive language in their presence and half of them couldn't run up a set of stairs if you put a gun to their head. What'll happen if the real s*** hits the fan? What are they going to do if they're faced with something that can't be fixed by a Facebook petition?

So Rapp is as ruthless with his superiors as with the enemy, and obtains the free hand he needs to get the job done. Eventually Rapp and his team identify what is a potentially catastrophic threat and must swing into action, despite the political and diplomatic repercussions, to avert disaster. And then it is time to settle some scores.

Kyle Mills has delivered another thriller which is both in the tradition of Mitch Rapp and also further develops his increasingly complex character in new ways.

 Permalink

Wood, Fenton. The Tower of the Bear. Seattle: Amazon Digital Services, 2019. ASIN B07XB8XWNF.
This is the third short novel/novella (145 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and the second, Five Million Watts (June 2019), as “enchanting”. In this volume, the protagonist, Philo Hergenschmidt, embarks upon a hero's journey to locate a treasure dating from the origin of the Earth which may be the salvation of radio station 2XG and the key to accomplishing the unrealised dream of the wizard who built it, Zaros the Electromage.

Philo's adventures take him into the frozen Arctic where he meets another Old One, to the depths of the Arctic Ocean in the fabulous submarine of the eccentric Captain Kolodziej, into the lair of a Really Old One where he almost seizes the prize he seeks, and then on an epic road trip. After the Partition of North America, the West, beyond the Mississippi, was ceded by the Republic to the various aboriginal tribes who lived there, and no Yankee dare enter this forbidden territory except to cross it on the Tyrant's Road, which remained Yankee territory with travellers given free passage by the tribes—in theory. In fact, no white man was known to have ventured West on the Road in a century.

Philo has come to believe that the “slow iron” he seeks may be found in the fabled City of the Future, said to be near the Pacific coast at the end of the Tyrant's Road. The only way to get there is to cross the continent, and the only practical means, there being no gas stations or convenience stores along the way, is by bicycle. Viridios helps Philo obtain a superb bicycle and trailer, and equip himself with supplies for the voyage. Taking leave of Viridios at the Mississippi and setting out alone, he soon discovers everything is not what it was said to be, and that the West is even more mysterious, dangerous, and yet enchanted than the stories he's heard since boyhood.

It is, if nothing else, diverse. In its vast emptiness there are nomadic bands pursuing the vast herds of bison on horseback with bows and arrows, sedentary tribes who prefer to ride the range in Japanese mini-pickup trucks, a Universal Library which is an extreme outlier even among the exotic literature of universal libraries, a hidden community that makes Galt's Gulch look like a cosmopolitan crossroads, and a strange people who not only time forgot, but who seem to have forgotten time. Philo's native mechanical and electrical knack gets him out of squeezes and allows him to trade know-how for information and assistance with those he encounters.

Finally, near the shore of the ocean, he comes to a great Tree, beyond imagining in its breadth and height. What is there to be learned here, and what challenges will he face as he continues his quest?

This is a magnificent continuation of one of the best young adult alternative history tales I've encountered in many years. Don't be put off by the “young adult” label—while you can hand this book to any youngster from age nine on up and be assured they'll be enthralled by the adventure and not distracted by the superfluous grunge some authors feel necessary to include when trying to appeal to a “mature” audience, the author never talks down to the reader, and even engineers and radio amateurs well versed in electronics will learn arcana such as the generation and propagation of extremely low frequency radio waves. This is a story which genuinely works for all ages.

This book is currently available only in a Kindle edition. Note that you don't need a physical electronic book reader, tablet, or mobile phone to read Kindle books. Free Kindle applications are available which let you read on Macintosh and Windows machines, and a Kindle Cloud Reader allows reading Kindle books on any machine with a modern Web browser, including all Linux platforms. The fourth volume, The City of Illusions, is scheduled to be published in December, 2019.

 Permalink

Crossfield, Albert Scott and Clay Blair. Always Another Dawn. Seattle: CreateSpace, [1960] 2018. ISBN 978-1-7219-0050-3.
The author was born in 1921 and grew up in Southern California. He was obsessed with aviation from an early age, wangling a ride in a plane piloted by a friend of his father (an open cockpit biplane) at age six. He built and flew many model airplanes and helped build the first gasoline-powered model plane in Southern California, with a home-built engine. The enterprising lad's paper route included a local grass field airport, and he persuaded the owner to trade him a free daily newspaper (delivery boys always received a few extra) for informal flying lessons. By the time he turned thirteen, young Scott (he never went by his first name, “Albert”) had accumulated several hours of flying time.

In the midst of the Great Depression, his father's milk processing business failed, and he decided to sell out everything in California, buy a 120 acre run-down dairy farm in rural Washington state, and start over. Taking an engineer's approach to the operation (recording everything, controlling costs, optimising operations), and with the entire family pitching in on the unceasing chores, he patiently built the ramshackle property into a going concern and then a showplace.

Crossfield never abandoned his interest in aviation, and soon began to spend some of his scarce free time at the local airport, another grass field operation, where he continued to take flight lessons from anybody who would give them for the meagre pocket change he could spare. Finally, when he had accumulated a total of seven or eight hours of dual control time, one of the pilots invited him to “take her up and try a spin.” This was highly irregular and, in fact, illegal: he had no student pilot certificate, but things were a lot more informal in those days, so off he went. Taking the challenge at its word, he proceeded to perform three spins and spin recoveries during his maiden solo flight.

In 1940, at age eighteen, Scott left the farm. His interest in aviation had never flagged, and he was certain he didn't want to be a farmer. His initial goal was to pursue an engineering degree at the University of Washington and then seek employment in the aviation industry, perhaps as an engineering test pilot. But the world was entering a chaotic phase, and this chaos perturbed his well-drawn plans. “[B]y the time I was twenty I had entered the University, graduated from a civilian aviation school, officially soloed, and obtained my private pilot's license, withdrawn from the University, worked for Boeing Aircraft Company, quit to join the Air Force briefly, worked for Boeing again, and quit again to join the Navy.” After the U.S. entered World War II, the Navy was desperate for pilots and offered immediate entry to flight training to those with the kind of experience Crossfield had accumulated.

Despite having three hundred flight hours in his logbook, Crossfield, like many military aviators, had to re-learn flying the Navy way. He credits it for making him a “professional, disciplined aviator.” Like most cadets, he had hoped for assignment to the fleet as a fighter pilot, but upon completing training he was immediately designated an instructor and spent the balance of the war teaching basic and advanced flying, gunnery, and bombing to hundreds of student aviators. Toward the end of the war, he finally received his long-awaited orders for fighter duty, but while in training the war ended without his ever seeing combat.

Disappointed, he returned to his original career plan and spent the next four years at the University of Washington, obtaining Bachelor of Science and Master of Science degrees in Aeronautical Engineering. Maintaining his commission in the Naval Reserve, he organised a naval stunt flying team and used it to hone his precision formation flying skills. As a graduate student, he supported himself as chief operator of the university's wind tunnel, then one of the most advanced in the country, and his work brought him into frequent contact with engineers from aircraft companies who contracted time on the tunnel for tests on their designs.

Surveying his prospects in 1950, Crossfield decided he didn't want to become a professor, which would be the likely outcome if he continued his education toward a Ph.D. The aviation industry was still in the postwar lull, but everything changed with the outbreak of the Korean War in June 1950. Suddenly, demand for the next generation of military aircraft, which had been seen as years in the future, became immediate, and the need for engineers to design and test them was apparent. Crossfield decided the most promising opportunity for someone with his engineering background and flight experience was as an “aeronautical research pilot” with the National Advisory Committee for Aeronautics (NACA), a U.S. government civilian agency founded in 1915 and chartered with performing pure and applied research in aviation, which was placed in the public domain and made available to all U.S. aircraft manufacturers. Unlike returning to the military, where his flight assignments would be at the whim of the service, at NACA he would be assured of working on the cutting edge of aviation technology.

Through a series of personal contacts, he eventually managed to arrange an interview with the little-known NACA High Speed Flight Test Station at Edwards Air Force Base in the high desert of Southern California. Crossfield found himself at the very Mecca of high speed flight, where Chuck Yeager had broken the sound barrier in October 1947 and a series of “X-planes” were expanding the limits of flight in all directions.

Responsibility for flying the experimental research aircraft at Edwards was divided three ways. When a new plane was delivered, its first flights would usually be conducted by company test pilots from its manufacturer. These pilots would have been involved in the design process and worked closely with the engineers responsible for the plane. During this phase, the stability, maneuverability, and behaviour of the plane in various flight regimes would be tested, and all of its component systems would be checked out. This would lead to “acceptance” by the Air Force, at which point its test pilots would acquaint themselves with the new plane and then conduct flights aimed at expanding its “envelope”: pushing parameters such as speed and altitude to those which the experimental plane had been designed to explore. It was during this phase that records would be set, often trumpeted by the Air Force. Finally, NACA pilots would follow up, exploring the fine details of the performance of the plane in the new flight regimes it opened up. Often the plane would be instrumented with sensors to collect data as NACA pilots patiently explored its flight envelope. NACA's operation at Edwards was small, and it played second fiddle to the Air Force (and Navy, who also tested some of its research planes there). The requirements for the planes were developed by the military, who selected the manufacturer, approved the design, and paid for its construction. NACA took advantage of whatever was developed, when the military made it available to them.

However complicated the structure of operations was at Edwards, Crossfield arrived squarely in the middle of the heroic age of supersonic flight, as chronicled (perhaps a bit too exuberantly) by Tom Wolfe in The Right Stuff. The hangars were full of machines resembling those on the covers of the pulp science fiction magazines of Crossfield's youth, and before them were a series of challenges seemingly without end: Mach 2, 3, and beyond, and flight to the threshold of space.

It was a heroic time, and a dangerous business. Writing in 1960, Crossfield notes, “Death is the handmaiden of the pilot. Sometimes it comes by accident, sometimes by an act of God. … Twelve out of the sixteen members of my original class at Seattle were eventually killed in airplanes. … Indeed, come to think of it, three-quarters of all the pilots I ever knew are dead.” As an engineer, he has no illusions or superstitions about the risks he is undertaking: sometimes the machine breaks and there's nothing that can be done about it. But he distinguishes being startled from experiencing fear: “I have been startled in an airplane many times. This, I may say, is almost routine for the experimental test pilot. But I can honestly say I have never experienced real fear in the air. The reason is that I have never run out of things to do.”

Crossfield proceeded to fly almost all of the cutting-edge aircraft at Edwards, including the rocket powered X-1 and the Navy's D-558-2 Skyrocket. By 1955, he had performed 99 flights under rocket power, becoming the most experienced rocket pilot in the world (there is no evidence the Soviet Union had any comparable rocket powered research aircraft). Most of Crossfield's flights were of the patient, data-taking kind in which the NACA specialised, albeit with occasional drama when these finicky, on-the-edge machines malfunctioned. But sometimes, even at staid NACA, the blood would be up, and in 1953, NACA approved taking the D-558-2 to Mach 2, setting a new world speed record. This was more than 25% faster than the plane had been designed to fly, and all the stops were pulled out for the attempt. The run was planned for a cold day, when the speed of sound would be lower at the planned altitude and cold-soaking the airframe would allow loading slightly more fuel and oxidiser. The wings and fuselage were waxed and polished to a high sheen to reduce air friction. Every crack was covered by masking tape. The stainless steel tubes used to jettison propellant in an emergency before drop from the carrier aircraft were replaced by aluminium which would burn away instants after the rocket engine was fired, saving a little bit of weight. With all of these tweaks, on November 20, 1953, at an altitude of 72,000 feet (22 km), the Skyrocket punched through Mach 2, reaching a speed of Mach 2.005. Crossfield was the Fastest Man on Earth.
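The advantage of the cold day comes straight from the physics: the speed of sound in air scales as the square root of absolute temperature (a = √(γRT)), so a colder stratosphere means a given Mach number is reached at a lower true airspeed. A back-of-envelope sketch (the cold-day temperature is an assumed figure for illustration, not one from the book):

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg·K)

def speed_of_sound(temp_k):
    """Speed of sound in air (m/s) from a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

# At 72,000 feet the flight is in the stratosphere; the standard
# temperature there is about 216.65 K, but on a cold day it can be
# several kelvin lower (210 K here is an assumed illustrative value).
a_standard = speed_of_sound(216.65)
a_cold = speed_of_sound(210.0)

# True airspeed needed to reach Mach 2.005 on each kind of day:
v_standard = 2.005 * a_standard
v_cold = 2.005 * a_cold
print(f"standard day: a = {a_standard:.0f} m/s, Mach 2.005 = {v_standard:.0f} m/s")
print(f"cold day:     a = {a_cold:.0f} m/s, Mach 2.005 = {v_cold:.0f} m/s")
```

With these assumed temperatures, the cold day shaves roughly nine metres per second off the true airspeed needed for the record Mach number, which is the margin the cold-soak, wax, and masking tape were chasing.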

By 1955, Crossfield concluded that the original glory days of Edwards were coming to an end. The original rocket planes had reached the limits of their performance, and the next generation of research aircraft, the X-15, would be a project on an entirely different scale, involving years of development before it was ready for its first flight. Staying at NACA would, in all likelihood, mean a lengthy period of routine work, with nothing as challenging as his last five years pushing the frontiers of flight. He concluded that the right place for an engineering test pilot, one with such extensive experience in rocket flight, was on the engineering team developing the next generation rocket plane, not sitting around at Edwards waiting to see what they came up with. He resigned from NACA and took a job as chief engineering test pilot at North American Aviation, developer of the X-15. He would provide a pilot's perspective throughout the protracted gestation of the plane, including cockpit layout, control systems, life support and pressure suit design, simulator development, and riding herd on the problem-plagued engine.

Ever wonder why the space suits used in the X-15 and by the Project Mercury astronauts were silver coloured? They said it was something about thermal management, but in fact when Crossfield was visiting the manufacturer he saw a sample of aluminised fabric and persuaded them to replace the original khaki coverall outer layer with it because it “looked like a real space suit.” And they did.

When the X-15 finally made its first flight in 1959, Crossfield was at the controls. He would go on to make 14 X-15 flights before turning the ship over to Air Force and NASA (the successor agency to the NACA) pilots. This book, originally published in 1960, concludes before the record-breaking period of the X-15, conducted after Crossfield's involvement with it came to an end.

This is a personal account of a period in the history of aviation in which records fell almost as fast as they were set and rocket pilots went right to the edge and beyond, feeling out the treacherous boundaries of the frontier.

A Kindle edition is available, at this writing, for just US$0.99. The Kindle edition appears to have been prepared by optical character recognition with only a rudimentary and slapdash job of copy editing. There are numerous errors including many involving the humble apostrophe. But, hey, it's only a buck.

 Permalink

November 2019

Eyles, Don. Sunburst and Luminary. Boston: Fort Point Press, 2018. ISBN 978-0-9863859-3-3.
In 1966, the author graduated from Boston University with a bachelor's degree in mathematics. He had no immediate job prospects or career plans. He thought he might be interested in computer programming due to a love of solving puzzles, but he had never programmed a computer. When asked, in one of numerous job interviews, how he would go about writing a program to alphabetise a list of names, he admitted he had no idea. One day, walking home from yet another interview, he passed an unimpressive brick building with a sign identifying it as the “MIT Instrumentation Laboratory”. He'd heard a little about the place and, on a lark, walked in and asked if they were hiring. The receptionist handed him a long application form, which he filled out, and was then immediately sent to interview with a personnel officer. Eyles was amazed when the personnel man seemed bent on persuading him to come to work at the Lab. After reference checking, he was offered a choice of two jobs: one in the “analysis group” (whatever that was), and another on the team developing computer software for landing the Apollo Lunar Module (LM) on the Moon. That sounded interesting, and the job had another benefit attractive to a 21 year old just graduating from university: it came with deferment from the military draft, which was going into high gear as U.S. involvement in Vietnam deepened.

Near the start of the Apollo project, MIT's Instrumentation Laboratory, led by the legendary “Doc” Charles Stark Draper, won a sole-source contract to design and program the guidance system for the Apollo spacecraft, which came to be known as the “Apollo Primary Guidance, Navigation, and Control System” (PGNCS, pronounced “pings”). Draper and his laboratory had pioneered inertial guidance systems for aircraft, guided missiles, and submarines, and had in-depth expertise in all aspects of the challenging problem of enabling the Apollo spacecraft to navigate from the Earth to the Moon, land on the Moon, and return to the Earth without any assistance from ground-based assets. In a normal mission, it was expected that ground-based tracking and computers would assist those on board the spacecraft, but in the interest of reliability and redundancy, the on-board system was required to be capable of accomplishing the mission entirely on its own.

The Instrumentation Laboratory developed an integrated system composed of an inertial measurement unit consisting of gyroscopes and accelerometers that provided a stable reference from which the spacecraft's orientation and velocity could be determined, an optical telescope which allowed aligning the inertial platform by taking sightings on fixed stars, and an Apollo Guidance Computer (AGC), a general purpose digital computer which interfaced to the guidance system, thrusters and engines on the spacecraft, the astronauts' flight controls, and mission control, and was able to perform the complex calculations for en route maneuvers and the unforgiving lunar landing process in real time.

Every Apollo lunar landing mission carried two AGCs: one in the Command Module and another in the Lunar Module. The computer hardware, basic operating system, and navigation support software were identical, but the mission software was customised due to the different hardware and flight profiles of the Command and Lunar Modules. (The commonality of the two computers proved essential in getting the crew of Apollo 13 safely back to Earth after an explosion in the Service Module cut power to the Command Module and disabled its computer. The Lunar Module's AGC was able to perform the critical navigation and guidance operations to put the spacecraft back on course for an Earth landing.)

By the time Don Eyles was hired in 1966, the hardware design of the AGC was largely complete (although a revision, called Block II, was underway which would increase memory capacity and add some instructions which had been found desirable during the initial software development process), the low-level operating system and support libraries (implementing such functionality as fixed point arithmetic, vector, and matrix computations), and a substantial part of the software for the Command Module had been written. But the software for actually landing on the Moon, which would run in the Lunar Module's AGC, was largely just a concept in the minds of its designers. Turning this into hard code would be the job of Don Eyles, who had never written a line of code in his life, and his colleagues. They seemed undaunted by the challenge: after all, nobody knew how to land on the Moon, so whoever attempted the task would have to make it up as they went along, and they had access, in the Instrumentation Laboratory, to the world's most experienced team in the area of inertial guidance.

Today's programmers may be amazed it was possible to get anything at all done on a machine with the capabilities of the Apollo Guidance Computer, much less fly to the Moon and land there. The AGC had a total of 36,864 15-bit words of read-only core rope memory, in which every bit was hand-woven to the specifications of the programmers. As read-only memory, the contents were completely fixed: if a change was required, the memory module in question (which was “potted” in a plastic compound) had to be discarded and a new one woven from scratch. There was no way to make “software patches”. Read-write storage was limited to 2048 15-bit words of magnetic core memory. The read-write memory was non-volatile: its contents were preserved across power loss and restoration. (Each memory word was actually 16 bits in length, but one bit was used for parity checking to detect errors and was not accessible to the programmer.) Memory cycle time was 11.72 microseconds. There was no external bulk storage of any kind (disc, tape, etc.): everything had to be done with the read-only and read-write memory built into the computer.
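To put those figures in modern units, a quick bit of arithmetic (a sketch based only on the numbers quoted above, not an authoritative AGC specification):

```python
# Totals computed from the figures quoted above: word sizes in bits,
# capacities in words, cycle time in microseconds.
ROPE_WORDS = 36_864      # read-only core rope memory
RAM_WORDS = 2_048        # read-write magnetic core memory
DATA_BITS = 15           # bits per word visible to the programmer
CYCLE_US = 11.72         # memory cycle time, microseconds

rope_bytes = ROPE_WORDS * DATA_BITS / 8   # fixed memory, in bytes
ram_bytes = RAM_WORDS * DATA_BITS / 8     # erasable memory, in bytes
cycles_per_second = 1_000_000 / CYCLE_US  # raw memory cycles per second

print(f"fixed memory:  ~{rope_bytes / 1024:.1f} KiB")
print(f"erasable:      ~{ram_bytes:.0f} bytes")
print(f"memory cycles: ~{cycles_per_second:,.0f} per second")
```

Roughly 67.5 KiB of program and under 4 KB of working storage, at well under a hundred thousand memory cycles per second, to land on the Moon.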

The AGC software was an example of “real-time programming”, a discipline with which few contemporary programmers are acquainted. As opposed to an “app” which interacts with a user and whose only constraint on how long it takes to respond to requests is the user's patience, a real-time program has to meet inflexible constraints in the real world set by the laws of physics, with failure often resulting in disaster just as surely as hardware malfunctions. For example, when the Lunar Module is descending toward the lunar surface, burning its descent engine to brake toward a smooth touchdown, the LM is perched atop the thrust vector of the engine just like a pencil balanced on the tip of your finger: it is inherently unstable, and only constant corrections will keep it from tumbling over and crashing into the surface, which would be bad. To prevent this, the Lunar Module's AGC runs a piece of software called the digital autopilot (DAP) which, every tenth of a second, issues commands to steer the descent engine's nozzle to keep the Lunar Module pointed flamy side down and adjusts the thrust to maintain the desired descent velocity (the thrust must be constantly adjusted because as propellant is burned, the mass of the LM decreases, and less thrust is needed to maintain the same rate of descent). The AGC/DAP absolutely must compute these steering and throttle commands and send them to the engine every tenth of a second. If it doesn't, the Lunar Module will crash. That's what real-time computing is all about: the computer has to deliver those results in real time, as the clock ticks, and if it doesn't (for example, it decides to give up and flash a Blue Screen of Death instead), then the consequences are not an irritated or enraged user, but actual death in the real world. Similarly, every two seconds the computer must read the spacecraft's position from the inertial measurement unit. 
If it fails to do so, it will hopelessly lose track of which way it's pointed and how fast it is going. Real-time programmers live under these demanding constraints and, especially given the limitations of a computer such as the AGC, must deploy all of their cleverness to meet them without fail, whatever happens, including transient power failures, flaky readings from instruments, user errors, and completely unanticipated “unknown unknowns”.
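The flavour of such a fixed-rate control loop can be sketched in a few lines. This is a toy illustration with assumed numbers (mass, exhaust velocity), not the actual digital autopilot, but it shows why the thrust command must shrink tick by tick as propellant burns off:

```python
# Toy fixed-rate descent loop: each 0.1 s tick, recompute the thrust that
# holds a steady descent as propellant burn reduces the vehicle's mass.
# All numbers are illustrative assumptions; this is not the Apollo DAP.
G_MOON = 1.62            # lunar surface gravity, m/s^2
TICK_S = 0.1             # control period: ten updates per second
V_EXHAUST = 3000.0       # effective exhaust velocity, m/s (assumed)

def steady_thrust(mass_kg):
    """Thrust (N) that exactly cancels lunar weight for a constant-rate descent."""
    return mass_kg * G_MOON

mass = 15_000.0          # initial LM mass, kg (assumed)
history = []
for tick in range(5):
    thrust = steady_thrust(mass)
    history.append(thrust)
    # Propellant burned this tick (mdot = thrust / v_exhaust) lightens the ship:
    mass -= thrust / V_EXHAUST * TICK_S
print(f"thrust fell from {history[0]:.0f} N to {history[-1]:.0f} N over half a second")
```

A real autopilot would also steer the engine gimbal to keep the stack balanced; the sketch shows only the throttle side of the loop, and the hard real-time constraint is that every pass through it must finish before the next tick arrives.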

The software which ran in the Lunar Module AGCs for Apollo lunar landing missions was called LUMINARY, and in its final form (version 210) used on Apollo 15, 16, and 17, consisted of around 36,000 lines of code (a mix of assembly language and interpretive code which implemented high-level operations), of which Don Eyles wrote in excess of 2,200 lines, responsible for the lunar landing from the start of braking from lunar orbit through touchdown on the Moon. This was by far the most dynamic phase of an Apollo mission, and the most demanding on the limited resources of the AGC, which was pushed to around 90% of its capacity during the final landing phase where the astronauts were selecting the landing spot and guiding the Lunar Module toward a touchdown. The margin was razor-thin, and that's assuming everything went as planned. But this was not always the case.

It was when the unexpected happened that the genius of the AGC software and its ability to make the most of the severely limited resources at its disposal became apparent. As Apollo 11 approached the lunar surface, a series of five program alarms, codes 1201 and 1202, interrupted the display of altitude and vertical velocity being monitored by Buzz Aldrin and read off to guide Neil Armstrong in flying to the landing spot. These codes both indicated out-of-memory conditions in the AGC's scarce read-write memory. The 1201 alarm was issued when all five of the 44-word vector accumulator (VAC) areas were in use when another program requested to use one, and 1202 signalled exhaustion of the eight 12-word core sets required by each running job. The computer had a single processor and could execute only one task at a time, but its operating system allowed lower priority tasks to be interrupted in order to service higher priority ones, such as the time-critical autopilot function and reading the inertial platform every two seconds. Each suspended lower-priority job used up a core set and, if it employed the interpretive mathematics library, a VAC, so exhaustion of these resources usually meant the computer was trying to do too many things at once. Task priorities were assigned so the most critical functions would be completed on time, but computer overload signalled something seriously wrong—a condition in which it was impossible to guarantee all essential work was getting done.
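The resource-exhaustion logic described above can be sketched conceptually: fixed pools of core sets and VAC areas, with 1202 raised when the core sets run out and 1201 when the VAC areas do. Pool sizes follow the text; the names and structure are illustrative, not the actual AGC Executive:

```python
# Conceptual sketch of the alarm conditions described above. The real
# Executive responded to these alarms by restarting the computer; this
# toy version stops at raising them.
CORE_SETS = 8   # 12-word core sets, one per running job
VAC_AREAS = 5   # 44-word vector accumulator areas

class ExecutiveOverload(Exception):
    pass

class Executive:
    def __init__(self):
        self.free_core_sets = CORE_SETS
        self.free_vacs = VAC_AREAS

    def schedule(self, job, needs_vac=False):
        if self.free_core_sets == 0:
            raise ExecutiveOverload("1202: no core sets available")
        if needs_vac and self.free_vacs == 0:
            raise ExecutiveOverload("1201: no VAC areas available")
        self.free_core_sets -= 1
        if needs_vac:
            self.free_vacs -= 1

exec_ = Executive()
for n in range(8):
    exec_.schedule(f"job{n}")          # eight jobs fill every core set
try:
    exec_.schedule("job8")             # the ninth has nowhere to go
    alarm_code = None
except ExecutiveOverload as alarm:
    alarm_code = str(alarm)
print(alarm_code)
```

The point the sketch makes is that the alarms were not crashes but deliberate signals: the scheduler detected it could no longer honour its guarantees and said so.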

In this case, the computer would throw up its hands, issue a program alarm, and restart. But this couldn't be a lengthy reboot like customers of personal computers with millions of times the AGC's capacity tolerate half a century later. The critical tasks in the AGC's software incorporated restart protection, in which they would frequently checkpoint their current state, permitting them to resume almost instantaneously after a restart. Programmers estimated around 4% of the AGC's program memory was devoted to restart protection, and some questioned its worth. On Apollo 11, it would save the landing mission.
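The checkpointing idea can be illustrated with a toy sketch: record progress in storage that survives a restart, then resume from the last recorded phase rather than from scratch. This is purely illustrative Python, not AGC code:

```python
# Toy sketch of restart protection: a task records its current phase in
# storage that survives a restart, so after a program alarm and restart
# it resumes where it left off rather than starting over.
checkpoint = {"phase": 0}    # stands in for non-volatile erasable memory
log = []                     # which phases actually executed, in order

def phase_work(i):
    log.append(i)
    if i == 2 and log.count(2) == 1:
        raise RuntimeError("program alarm")   # simulated overload/restart

def run(total_phases):
    i = checkpoint["phase"]
    while i < total_phases:
        checkpoint["phase"] = i   # checkpoint before doing the work
        phase_work(i)
        i += 1

try:
    run(4)
except RuntimeError:
    pass            # the computer restarts almost instantaneously...
run(4)              # ...and picks up from the checkpointed phase

print(log)          # phases 0 and 1 ran once; phase 2 was retried
```

The real protection checkpointed much richer state than a single phase counter, but the resume-from-the-last-known-good-point idea is the same.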

Shortly after the Lunar Module's landing radar locked onto the lunar surface, Aldrin keyed in the code to monitor its readings and immediately received a 1202 alarm: no core sets to run a task; the AGC restarted. On the communications link Armstrong called out “It's a 1202.” and Aldrin confirmed “1202.”. This was followed by fifteen seconds of silence on the “air to ground” loop, after which Armstrong broke in with “Give us a reading on the 1202 Program alarm.” At this point, neither the astronauts nor the support team in Houston had any idea what a 1202 alarm was or what it might mean for the mission. But the nefarious simulation supervisors had cranked in such “impossible” alarms in earlier training sessions, and controllers had developed a rule that if an alarm was infrequent and the Lunar Module appeared to be flying normally, it was not a reason to abort the descent.

At the Instrumentation Laboratory in Cambridge, Massachusetts, Don Eyles and his colleagues knew precisely what a 1202 was and found it deeply disturbing. The AGC software had been carefully designed to maintain a 10% safety margin under the worst-case conditions of a lunar landing, and 1202 alarms had never occurred in any of their thousands of simulator runs using the same AGC hardware, software, and sensors as Apollo 11's Lunar Module. Don Eyles' analysis, in real time, just after a second 1202 alarm occurred thirty seconds later, was:

Again our computations have been flushed and the LM is still flying. In Cambridge someone says, “Something is stealing time.” … Some dreadful thing is active in our computer and we do not know what it is or what it will do next. Unlike Garman [AGC support engineer for Mission Control] in Houston I know too much. If it were in my hands, I would call an abort.

As the Lunar Module passed 3000 feet, another alarm, this time a 1201—VAC areas exhausted—flashed. This was another indication of overload, but of a different kind. Mission control immediately called up “We're go. Same type. We're go.” Well, it wasn't the same type, but they decided to press on. Descending through 2000 feet, the DSKY (computer display and keyboard) went blank and stayed blank for ten agonising seconds. Seventeen seconds later another 1202 alarm, and a blank display for two seconds—Armstrong's heart rate reached 150. A total of five program alarms and resets had occurred in the final minutes of landing. But why? And could the computer be trusted to fly the return from the Moon's surface to rendezvous with the Command Module?

While the Lunar Module was still on the lunar surface, Instrumentation Laboratory engineer George Silver figured out what happened. During the landing, the Lunar Module's rendezvous radar (used only during return to the Command Module) was powered on and set to a position where its reference timing signal came from an internal clock rather than the AGC's master timing reference. If these clocks were in a worst-case out-of-phase condition, the rendezvous radar would flood the AGC with what we used to call “nonsense interrupts” back in the day, at a rate of 800 per second, each consuming one 11.72 microsecond memory cycle. This imposed an additional load of more than 13% on the AGC, which pushed it over the edge and caused tasks deemed non-critical (such as updating the DSKY) not to be completed on time, resulting in the program alarms and restarts. The fix was simple: don't enable the rendezvous radar until you need it, and when you do, put the switch in the position that synchronises it with the AGC's clock. But the AGC had proved its excellence as a real-time system: in the face of unexpected and unknown external perturbations it had completed the mission flawlessly, while alerting its developers to a problem which required their attention.
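The arithmetic of cycle-stealing is simple: each spurious counter increment consumes one memory cycle, so the fraction of the processor lost is just the pulse rate times the cycle time. A back-of-envelope sketch (the pulse rate below, two radar counters at 6,400 increments per second each, is an assumed illustrative figure, since accounts of the incident quote the rate in different ways):

```python
# Back-of-envelope cycle-stealing arithmetic: each spurious counter
# increment steals one memory cycle, so the fraction of the processor
# lost is simply (increments per second) x (cycle time).
CYCLE_S = 11.72e-6           # AGC memory cycle time, seconds

def stolen_fraction(pulses_per_second):
    return pulses_per_second * CYCLE_S

# Two radar counters at an assumed 6,400 increments/second each:
load = stolen_fraction(2 * 6400)
print(f"CPU fraction stolen: {load:.1%}")
```

Against a software design margin of only about 10%, a steady drain of that order was enough to push the machine into the overload alarms described above.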

The creativity of the AGC software developers and the merit of computer systems sufficiently simple that the small number of people who designed them completely understood every aspect of their operation was demonstrated on Apollo 14. As the Lunar Module was checked out prior to the landing, the astronauts in the spacecraft and Mission Control saw the abort signal come on, which was supposed to indicate the big Abort button on the control panel had been pushed. This button, if pressed during descent to the lunar surface, immediately aborted the landing attempt and initiated a return to lunar orbit. This was a “one and done” operation: no Microsoft-style “Do you really mean it?” tea ceremony before ending the mission. Tapping the switch made the signal come and go, and it was concluded the most likely cause was a piece of metal contamination floating around inside the switch and occasionally shorting the contacts. The abort signal caused no problems during lunar orbit, but if it should happen during descent, perhaps jostled by vibration from the descent engine, it would be disastrous: wrecking a mission costing hundreds of millions of dollars and, coming on the heels of Apollo 13's mission failure and narrow escape from disaster, possibly bringing an end to the Apollo lunar landing programme.

The Lunar Module AGC team, with Don Eyles as the lead, was faced with an immediate challenge: was there a way to patch the software to ignore the abort switch, protecting the landing, while still allowing an abort to be commanded, if necessary, from the computer keyboard (DSKY)? The answer was immediately apparent: no. The landing software, like all AGC programs, ran from read-only rope memory which had been woven on the ground months before the mission and could not be changed in flight. But perhaps there was another way. Eyles and his colleagues dug into the program listing, traced the path through the logic, and cobbled together a procedure, then tested it in the simulator at the Instrumentation Laboratory. While the AGC's programming was fixed, the AGC operating system provided low-level commands which allowed the crew to examine and change bits in locations in the read-write memory. Eyles discovered that by setting the bit which indicated that an abort was already in progress, the abort switch would be ignored at the critical moments during the descent. As with all software hacks, this had other consequences requiring their own work-arounds, but by the time Apollo 14's Lunar Module emerged from behind the Moon on course for its landing, a complete procedure had been developed which was radioed up from Houston and worked perfectly, resulting in a flawless landing.
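The spirit of the workaround can be sketched conceptually: an abort monitor that ignores the switch whenever an “abort already in progress” flag is set, so poking that flag from the keyboard disarms the flaky hardware. The names here are illustrative, not actual AGC symbols:

```python
# Conceptual sketch of the Apollo 14 fix: the landing software's abort
# monitor ignores the abort discrete whenever the "abort already in
# progress" flag is set, so setting that flag by hand from the DSKY
# disarms the flaky switch. Names are illustrative, not real AGC symbols.
erasable = {"abort_in_progress": False}   # stands in for erasable memory

def abort_monitor(abort_switch_pressed):
    """Return True if the monitor would initiate an abort this pass."""
    if erasable["abort_in_progress"]:
        return False          # an abort is (supposedly) already under way
    return abort_switch_pressed

# A floating metal flake shorts the switch: normally, mission over.
assert abort_monitor(True) is True

# The procedure radioed up from Houston: set the flag before descent.
erasable["abort_in_progress"] = True
assert abort_monitor(True) is False       # spurious signal now harmless
print("descent continues")
```

As the text notes, the real procedure had knock-on consequences needing their own work-arounds; the sketch captures only the central trick of lying to read-only code through read-write memory.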

These and many other stories of the development and flight experience of the AGC lunar landing software are related here by the person who wrote most of it and supported every lunar landing mission as it happened. Where technical detail is required to understand what is happening, no punches are pulled, even to the level of bit-twiddling and hideously clever programming tricks such as using an overflow condition to skip over an EXTEND instruction, converting the following instruction from double precision to single precision, all in order to save around forty words of precious non-bank-switched memory. In addition, this is a personal story, set in the context of the turbulent 1960s and early ’70s, of the author and other young people accomplishing things no humans had ever before attempted.

It was a time when everybody was making it up as they went along, learning from experience, and improvising on the fly; a time when a person who had never written a line of computer code would write, as his first program, the code that would land men on the Moon, and when the creativity and hard work of individuals made all the difference. Already, by the end of the Apollo project, the curtain was ringing down on this era. Even though a number of improvements had been developed for the LM AGC software which improved precision landing capability, reduced the workload on the astronauts, and increased robustness, none of these were incorporated in the software for the final three Apollo missions, LUMINARY 210, which was deemed “good enough” and the benefit of the changes not worth the risk and effort to test and incorporate them. Programmers seeking this kind of adventure today will not find it at NASA or its contractors, but instead in the innovative “New Space” and smallsat industries.

 Permalink

Howe, Steven D. Wrench and Claw. Seattle: Amazon Digital Services, 2011. ASIN B005JPZ74A.
In the conclusion of the author's Honor Bound Honor Born (May 2014), an explorer on the Moon discovers something that just shouldn't be there, which calls into question the history of the Earth and Moon and humanity's place in it. This short novel (or novella—it's 81 pages in a print edition) explores how that anomaly came to be and presents a brilliantly sketched alternative history which reminds the reader just how little we really know about the vast expanses of time which preceded our own species' appearance on the cosmic stage.

Vesquith is an Army lieutenant assigned to a base on the Moon. The base is devoted to research, exploration, and development of lunar resources to expand the presence on the Moon, but more recently has become a key asset in Earth's defence, as its Lunar Observation Post (LOP) allows monitoring the inner solar system. This has become crucial since the Martian colony, founded with high hopes, has come under the domination of self-proclaimed “King” Rornak, whose religious fanatics infiltrated the settlement and now threaten the Earth with an arsenal of nuclear weapons they have somehow obtained and are using to divert asteroids to exploit their resources for the development of Mars.

Independently, Bob, a field paleontologist whose expedition is running short of funds, is enduring a fundraising lecture at a Denver museum by a Dr Dietlief, a crowd-pleasing science populariser. Dietlief regales his audiences with illustrations of how little we really know about the Earth's past, which stretches for vast expanses of time compared to that since the emergence of modern humans, and with wild speculations about what might have come and gone during those aeons, including the rise and fall of advanced technological civilisations whose works may have disappeared without a trace within a million years or so of their demise due to corrosion, erosion, and the incessant shifting of the continents and recycling of the Earth's surface. How do we know that, somewhere beneath our feet, yet to be discovered by paleontologists who probably wouldn't understand what they'd found, lies “something like a crescent wrench clutched in a claw?” Dietlief suggests that even if paleontologists came across what remained of such evidence after dozens of millions of years, they'd probably not recognise it, because they weren't looking for such a thing and didn't have the specialised equipment needed to detect it.

On the Moon, Vesquith and his crew return to base to find it has been attacked, presumably by an advance party from Mars, wiping out a detachment of Amphibious Marines sent to guard the LOP and disabling it, rendering Earth blind to attack from Mars. The survivors must improvise with the few resources remaining from the attack to meet their needs, try to restore communications with Earth to warn of a possible attack and request a rescue mission, and defend against possible additional assaults on their base. This is put to the test when another contingent of invaders arrives to put the base permanently out of commission and open the way for a general attack on Earth.

Bob, meanwhile, thanks to funds raised by Dr Dietlief's lecture, has been able to extend his fieldwork, add some assistants, and equip his on-site lab with some new analytic equipment….

This is a brilliant story which rewrites the history of the Earth and sets the stage for the second volume in the Earth Rise series, Honor Bound Honor Born. There is so much going on and so many surprises that I can't really say much more without venturing into spoiler territory, so I won't. The only shortcoming is that, like many self-published works, it stumbles over the humble apostrophe, and particularly its shock troops, the “its/it's” brigade.

During the author's twenty-year career at the Los Alamos National Laboratory, he worked on a variety of technologies including nuclear propulsion and applications of nuclear power to space exploration and development. Since the 1980s he has been an advocate of a “power rich” approach to space missions, in particular lunar and Mars bases. The lunar base described in the story implements this strategy, but it's not central to the story and doesn't intrude upon the adventure.

This book is presently available only in a Kindle edition, which is free for Kindle Unlimited subscribers.

 Permalink

Smyth, Henry D. Atomic Energy for Military Purposes. Stanford, CA: Stanford University Press, [1945] 1990. ISBN 978-0-8047-1722-9.
This document was released to the general public by the United States War Department on August 12th, 1945, just days after nuclear weapons had been dropped on Japan (Hiroshima on August 6th and Nagasaki on August 9th). The author, Prof. Henry D. Smyth of Princeton University, had worked on the Manhattan Project since early 1941, was involved in a variety of theoretical and practical aspects of the effort, and possessed security clearances which gave him access to all of the laboratories and production facilities involved in the project. In May, 1944, Smyth, who had suggested such a publication, was given the go ahead by the Manhattan Project's Military Policy Committee to prepare an unclassified summary of the bomb project. This would have a dual purpose: to disclose to citizens and taxpayers what had been done on their behalf, and to provide scientists and engineers involved in the project a guide to what they could discuss openly in the postwar period: if it was in the “Smyth Report” (as it came to be called), it was public information, otherwise mum's the word.

The report is at once an introduction to the physics underlying nuclear fission and its use in both steady-state reactors and explosives, a survey of the production of fissile material (both separation of reactive Uranium-235 from the much more abundant Uranium-238 and production of Plutonium-239 in nuclear reactors), and an account of the administrative history and structure of the project. Viewed as a historical document, the report is as interesting in what it left out as what was disclosed. Essentially none of the key details discovered and developed by the Manhattan Project which might be of use to aspiring bomb makers appear here. The key pieces of information which were not known to interested physicists in 1940 before the curtain of secrecy descended upon anything related to nuclear fission were inherently disclosed by the very fact that a fission bomb had been built, detonated, and produced a very large explosive yield.

  • It was possible to achieve a fast fission reaction with substantial explosive yield.
  • It was possible to prepare a sufficient quantity of fissile material (uranium or plutonium) to build a bomb.
  • The critical mass required by a bomb was within the range which could be produced by a country with the industrial resources of the United States and small enough that it could be delivered by an aircraft.

None of these were known at the outset of the Manhattan Project (which is why it was such a gamble to undertake it), but after the first bombs were used, they were apparent to anybody who was interested, most definitely including the Soviet Union (who, unbeknownst to Smyth and the political and military leaders of the Manhattan Project, already had the blueprints for the Trinity bomb and extensive information on all aspects of the project from their spies.)

Things never disclosed in the Smyth Report include the critical masses of uranium and plutonium, the problem of contamination of reactor-produced plutonium with the Plutonium-240 isotope and the consequent impossibility of using a gun-type design with plutonium, the technique of implosion and the technologies required to achieve it such as explosive lenses and pulsed power detonators (indeed, the word “implosion” appears nowhere in the document), and the chemical processes used to separate plutonium from uranium and fission products irradiated in a production reactor. In many places, it is explicitly said that military security prevents discussion of aspects of the project, but in others nasty surprises which tremendously complicated the effort are simply not mentioned—left for others wishing to follow in its path to discover for themselves.

Reading the first part of the report, you get the sense that it had not yet been decided whether to disclose the existence or scale of the Los Alamos operation. Only toward the end of the work is Los Alamos named and the facilities and tasks undertaken there described. The bulk of the report was clearly written before the Trinity test of the plutonium bomb on July 16, 1945. It is described in an appendix which reproduces verbatim the War Department press release describing the test, which was only issued after the bombs were used on Japan.

This document is of historical interest only. If you're interested in the history of the Manhattan Project and the design of the first fission bombs, more recent works such as Richard Rhodes' The Making of the Atomic Bomb are much better sources. For those aware of the scope and details of the wartime bomb project, the Smyth Report is an interesting look at what those responsible for it felt comfortable disclosing and what they wished to continue to keep secret. The foreword by General Leslie R. Groves reminds readers that “Persons disclosing or securing additional information by any means whatsoever without authorization are subject to severe penalties under the Espionage Act.”

I read a Kindle edition from another publisher which is much less expensive than the Stanford paperback but contains a substantial number of typographical errors probably introduced by scanning a paper source document with inadequate subsequent copy editing.

 Permalink

December 2019

Klemperer, Victor. I Will Bear Witness. Vol. 2. New York: Modern Library, [1942–1945, 1995, 1999] 2001. ISBN 978-0-375-75697-9.
This is the second volume in Victor Klemperer's diaries of life as a Jew in Nazi Germany. Volume 1 (February 2009) covers the years from 1933 through 1941, in which the Nazis seized and consolidated their power, began to increasingly persecute the Jewish population, and rearm in preparation for their military conquests which began with the invasion of Poland in September 1939.

I described that book as “simultaneously tedious, depressing, and profoundly enlightening”. The author (a cousin of the conductor Otto Klemperer) was a respected professor of Romance languages and literature at the Technical University of Dresden when Hitler came to power in 1933. Although the son of a Reform rabbi, Klemperer had been baptised in a Christian church and considered himself a protestant Christian and entirely German. He volunteered for the German army in World War I and served at the front in the artillery and later, after recovering from a serious illness, in the army book censorship office on the Eastern front. As a fully assimilated German, he opposed all appeals to racial identity politics, Zionist as well as Nazi.

Despite his conversion to protestantism, military service to Germany, exalted rank as a professor, and decades of marriage to a woman deemed “Aryan” under the racial laws promulgated by the Nazis, Klemperer was considered a “full-blooded Jew” and was subject to ever-escalating harassment, persecution, humiliation, and expropriation as the Nazis tightened their grip on Germany. As civil society spiralled toward barbarism, Klemperer lost his job, his car, his telephone, his house, his freedom of movement, the right to shop in “Aryan stores”, access to public and lending libraries, and even the typewriter on which he continued to write in the hope of maintaining his sanity. His world shrank from that of a cosmopolitan professor fluent in many European languages to a single “Jews' house” in Dresden, shared with other once-prosperous families similarly evicted from their homes.

As 1942 begins, it is apparent to many in Germany, even Jews deprived of the “privilege” of reading newspapers and listening to the radio, not to mention foreign broadcasts, that the momentum of German conquest in the East had stalled and that the Soviet winter counterattack had begun to push the ill-equipped and -supplied German troops back from the lines they held in the fall of 1941. This was reported with euphemisms such as “shortening our line”, but it was obvious to everybody that the Soviets, not long ago reported breathlessly as “annihilated”, were nothing of the sort, and that the Nazi hope of a quick victory in the East, like the fall of France in 1940, was not in the cards.

In Dresden, where Klemperer and his wife Eva remained after being forced out of their house (to which, in formalism-obsessed Germany, he retained title and responsibility for maintenance), Jews were subjected to a never-ending ratchet of abuse, oppression, and terror. Klemperer was forced to wear the yellow star (concealing it meant immediate arrest and likely “deportation” to the concentration camps in the East) and was randomly abused by strangers on the street (but would get smiles and quiet words of support from others), with each event shaking or bolstering his confidence in those who, before Hitler, he considered his “fellow Germans”.

He is prohibited from riding the tram, and must walk long distances, avoiding crowded streets where the risk of abuse from passers-by was greater. Another blow falls when Jews are forbidden to use the public library. With his typewriter seized long ago, he can only pursue his profession with pen, ink, and whatever books he can exchange with other Jews, including those left behind by those “deported”. As ban follows ban, even the simplest things such as getting shoes repaired, obtaining coal to heat the house, doing laundry, and securing food to eat become major challenges. Jews are subject to random “house searches” by the Gestapo, in which the discovery of something like his diaries might mean immediate arrest—he arranges to store the work with an “Aryan” friend of Eva, who deposits pages as they are completed. The house searches in many cases amount to pure shakedowns, where rationed and difficult-to-obtain goods such as butter, sugar, coffee, and tobacco, even if purchased with the proper coupons, are simply stolen by the Gestapo goons.

By this time every Jew knows individuals and families who have been “deported”, and the threat of joining them is ever present. Nobody seems to know precisely what is going on in those camps in the East (whose names are known: Auschwitz, Dachau, Theresienstadt, etc.) but what is obvious is that nobody sent there has ever been seen again. Sometimes relatives receive a letter saying the deportee died of disease in the camp, which seemed plausible, while others get notices their loved one was “killed while trying to escape”, which was beyond belief in the case of elderly prisoners who had difficulty walking. In any case, being “sent East” was considered equivalent to a death sentence which, for most, it was. As a war veteran and married to an “Aryan”, Klemperer was more protected than most Jews in Germany, but there was always the risk that the slightest infraction might condemn him to the camps. He knew many others who had been deported shortly after the death of their Aryan wives.

As the war in the East grinds on, it becomes increasingly clear that Germany is losing. The back-and-forth campaign in North Africa was first to show cracks in the Nazi aura of invincibility, but after the disaster at Stalingrad in the winter of 1942–1943, it is obvious the situation is dire. Goebbels proclaims “total war”, and all Germans begin to feel the privation brought on by the war. The topic on everybody's lips in whispered, covert conversations is “How long can it go on?” With each reverse there are hopes that perhaps a military coup will depose the Nazis and seek peace with the Allies.

For Klemperer, such grand matters of state and history are of relatively little concern. Much more urgent are obtaining the necessities of life which, as the economy deteriorates and oppression of the Jews increases, often amount to coal to stay warm and potatoes to eat, hauled long distances by manual labour. Klemperer, like all able-bodied Jews (the definition of which is flexible: he suffers from heart disease and often has difficulty walking long distances or climbing stairs, and has vision problems as well) is assigned “war work”, which in his case amounts to menial labour tending machines producing stationery and envelopes in a paper factory. Indeed, what appear in retrospect as the pivotal moments of the war in Europe: the battles of Stalingrad and Kursk, Axis defeat and evacuation of North Africa, the fall of Mussolini and Italy's leaving the Axis, the Allied D-day landings in Normandy, the assassination plot against Hitler, and more almost seem to occur off-stage here, with news filtering in bit by bit after the fact and individuals trying to piece it together and make sense of it all.

One event which is not off-stage is the bombing of Dresden between February 13 and 15, 1945. The Klemperers were living at the time in the Jews' house they shared with several other families, which was located some distance from the city centre. There was massive damage in the area, but it was outside the firestorm which consumed the main targets. Victor and Eva became separated in the chaos, but were reunited near the end of the attack. Given the devastation and collapse of infrastructure, Klemperer decided to bet his life on the hope that the attack had at least temporarily put the Gestapo out of commission: he removed the yellow star, discarded all identity documents marking him as a Jew, and joined the mass of refugees, many also without papers, fleeing the ruins of Dresden. He and Eva made their way on what remained of the transportation system toward Bavaria and eastern Germany, where they had friends who might accommodate them, at least temporarily. Despite some close calls, the ruse worked, and they survived the end of the war, the fall of the Nazi regime, and the arrival of United States occupation troops.

After a period in which he discovered that the American occupiers, while meaning well, were completely overwhelmed trying to meet the needs of the populace amid the ruins, the Klemperers decided to make it on their own back to Dresden, which was in the Soviet zone of occupation, where they hoped their house still stood and would be restored to them as their property. The book concludes with a description of this journey across ruined Germany and final arrival at the house they occupied before the Nazis came to power.

After the war, Victor Klemperer was appointed a professor at the University of Leipzig and resumed his academic career. As political life resumed in what was then the Soviet sector and later East Germany, he joined the Socialist Unity Party of Germany, in effect the East German communist party, which was under the thumb of Moscow. Subsequently, he became a cultural ambassador of sorts for East Germany. He seems to have been a loyal communist, although in his later diaries he expressed frustration at the impotence of the “parliament” in which he was a delegate for eight years. Not to be unkind to somebody who survived as much oppression and adversity as he did, but he doesn't seem to have had much of a problem with a totalitarian, one-party, militaristic, intrusive-surveillance police state as long as it wasn't directly persecuting him.

The author was a prolific diarist who wrote thousands of pages from the early 1900s throughout his long life. The original 1995 German publication of the 1933–1945 diaries as Ich will Zeugnis ablegen bis zum letzten was a substantial abridgement of the original document and even so ran to almost 1700 pages. This English translation further abridges the diaries and still often seems repetitive. End notes provide historical context, identify the many people who figure in the diary, and translate the foreign phrases the author liberally sprinkles among the text.

 Permalink

Anonymous Conservative [Michael Trust]. The Evolutionary Psychology Behind Politics. Macclenny, FL: Federalist Publications, [2012, 2014] 2017. ISBN 978-0-9829479-3-7.
One of the puzzles noted by observers of the contemporary political and cultural scene is the division of the population into two factions, called (in the sloppy terminology of the United States) “liberal” and “conservative”, and the fact that if you pick a member of either faction based upon his or her position on one of the divisive issues of the time, you can, with a high probability of accuracy, predict their preferences on a long list of other issues which do not, on the face of it, seem to have very much to do with one another. For example, here is a list of present-day hot-button issues, presented in no particular order.

  1. Health care, socialised medicine
  2. Climate change, renewable energy
  3. School choice
  4. Gun control
  5. Higher education subsidies, debt relief
  6. Free speech (hate speech laws, Internet censorship)
  7. Deficit spending, debt, and entitlement reform
  8. Immigration
  9. Tax policy, redistribution
  10. Abortion
  11. Foreign interventions, military spending

What a motley collection of topics! About the only thing they have in common is that the omnipresent administrative super-state has become involved in each of them in one way or another, and therefore partisans of policies affecting them consider it important to influence the state's actions in their regard. And yet, pick any one, tell me what policies you favour, and I'll bet I can guess where you come down on at least eight of the other ten. What's going on?

Might there be some deeper, common thread or cause which explains this otherwise curious clustering of opinions? Maybe there's something rooted in biology, possibly even heritable, which predisposes people to choose the same option on disparate questions? Let's take a brief excursion into ecological modelling and see if there's something of interest there.

As with all modelling, we start with a simplified, almost cartoon abstraction of the gnarly complexity of the real world. Consider a closed territory (say, an island) with abundant edible vegetation and no animals. Now introduce a species, such as rabbits, which can eat the vegetation and turn it into more rabbits. We start with a small number, P, of rabbits. Now, once they get busy with bunny business, the population will expand at a rate r which is essentially constant for a large population. If r is greater than zero (which for rabbits it certainly will be, with litter sizes between 4 and 10 depending on the breed, and a gestation time of around a month) the population will increase. Since the rate of increase is constant and the total increase is proportional to the size of the existing population, this growth will be exponential. Ask any Australian.

Now, what will eventually happen? Will the island disappear under a towering pile of rabbits inexorably climbing to the top of the atmosphere? No—eventually the number of rabbits will increase to the point where they are eating all the vegetation the territory can produce. This number, K, is called the “carrying capacity” of the environment, and it is an absolute number for a given species and environment. This can be expressed as a differential equation called the Verhulst model, as follows:

dP/dt = rP(1 − P/K)

It's a maxim among popular science writers that every equation you include cuts your readership by a factor of two, so among the hardy half who remain, let's see how this works. It's really very simple (and indeed, far simpler than actual population dynamics in a real environment). The left side, “dP/dt” simply means “the rate of growth of the population P with respect to time, t”. On the right hand side, “rP” accounts for the increase (or decrease, if r is less than 0) in population, proportional to the current population. The population is limited by the carrying capacity of the habitat, K, which is modelled by the factor “(1 − P/K)”. Now think about how this works: when the population is very small, P/K will be close to zero and, subtracted from one, will yield a number very close to one. This, then, multiplied by the increase due to rP will have little effect and the growth will be largely unconstrained. As the population P grows and begins to approach K, however, P/K will approach unity and the factor will fall to zero, meaning that growth has completely stopped due to the population reaching the carrying capacity of the environment—it simply doesn't produce enough vegetation to feed any more rabbits. If the rabbit population overshoots, this factor will go negative and there will be a die-off which eventually brings the population P below the carrying capacity K. (Sorry if this seems tedious; one of the great things about learning even a very little about differential equations is that all of this is apparent at a glance from the equation once you get over the speed bump of understanding the notation and algebra involved.)
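As a concrete illustration of how the equation behaves, here is a minimal numerical integration of the Verhulst model using simple Euler steps. This sketch is my own, not from the book, and the parameter values (r, K, and the starting population) are invented for the example:

```python
# Euler integration of the Verhulst (logistic) model: dP/dt = r*P*(1 - P/K).
# All parameter values below are invented for illustration.

def logistic_growth(p0, r, k, dt=0.01, steps=2000):
    """Return the population history of dP/dt = r*P*(1 - P/K)."""
    p = p0
    history = [p]
    for _ in range(steps):
        p += r * p * (1 - p / k) * dt   # growth throttled by the (1 - P/K) factor
        history.append(p)
    return history

pops = logistic_growth(p0=10, r=0.5, k=1000)
print(round(pops[0]), round(pops[-1]))
```

Early in the run, P/K is near zero and growth is essentially exponential; as P approaches K = 1000, the (1 − P/K) factor strangles growth and the population levels off just below the carrying capacity.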

This is grossly over-simplified. In fact, real populations are prone to oscillations and even chaotic dynamics, but we don't need to get into any of that for what follows, so I won't.

Let's complicate things in our bunny paradise by introducing a population of wolves. The wolves can't eat the vegetation, since their digestive systems cannot extract nutrients from it, so their only source of food is the rabbits. Each wolf eats many rabbits every year, so a large rabbit population is required to support a modest number of wolves. Now if we go back and look at the equation for wolves, K represents the number of wolves the rabbit population can sustain, in the steady state, where the number of rabbits eaten by the wolves just balances the rabbits' rate of reproduction. This will often result in a rabbit population smaller than the carrying capacity of the environment, since their population is now constrained by wolf predation and not K.
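How predation, rather than K, pins down the rabbit population can be sketched by adding a Lotka-Volterra-style predation term to the logistic model. This illustration is mine, not from the book, and every parameter value is invented: with a linear predation term, the rabbits equilibrate at m/(b·a), which can sit well below the vegetation-limited carrying capacity K.

```python
# Logistic prey (rabbits) plus Lotka-Volterra predation (wolves), Euler steps.
# All parameters are invented for illustration: r = prey growth rate,
# k = prey carrying capacity, a = predation rate, b = conversion efficiency
# of rabbits eaten into new wolves, m = predator death rate.

def predator_prey(rab0, wolf0, r=0.8, k=1000.0, a=0.01, b=0.1, m=0.2,
                  dt=0.01, steps=200_000):
    """Integrate the coupled equations and return the final populations."""
    rab, wolf = rab0, wolf0
    for _ in range(steps):
        d_rab = r * rab * (1 - rab / k) - a * rab * wolf   # logistic growth minus predation
        d_wolf = b * a * rab * wolf - m * wolf             # growth from prey minus deaths
        rab += d_rab * dt
        wolf += d_wolf * dt
    return rab, wolf

rabbits, wolves = predator_prey(rab0=500, wolf0=20)
print(round(rabbits), round(wolves))
```

With these numbers the rabbits settle near m/(b·a) = 200, far below K = 1000: their numbers are capped by wolf predation, not by the vegetation. The wolves, in turn, settle near the level the rabbit surplus can feed, which is the sense in which their population is governed by the 1 − P/K factor.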

What happens as this (oversimplified) system cranks away, generation after generation, and Darwinian evolution kicks in? Evolution consists of two processes: variation, which is largely random, and selection, which is sensitively dependent upon the environment. The rabbits are unconstrained by K, the carrying capacity of their environment. If their numbers increase beyond a population P substantially smaller than K, the wolves will simply eat more of them and bring the population back down. The rabbit population, then, is not at all constrained by K, but rather by r: the rate at which they can produce new offspring. Population biologists call this an r-selected species: evolution will select for individuals who produce the largest number of progeny in the shortest time, and hence for a life cycle which minimises parental investment in offspring and against mating strategies, such as lifetime pair bonding, which would limit their numbers. Rabbits which produce fewer offspring will lose a larger fraction of them to predation (which affects all rabbits, essentially at random), and the genes which they carry will be selected out of the population. An r-selected population, sometimes referred to as r-strategists, will tend to be small, with short gestation time, high fertility (offspring per litter), rapid maturation to the point where offspring can reproduce, and broad distribution of offspring within the environment.

Wolves operate under an entirely different set of constraints. Their entire food supply is the rabbits, and since it takes a lot of rabbits to keep a wolf going, there will be far fewer wolves than rabbits. What this means, going back to the Verhulst equation, is that the 1 − P/K factor will largely determine their population: the carrying capacity K of the environment supports a much smaller population of wolves than of their food source, rabbits, and if their rate of population growth r were to increase, it would simply mean that more wolves would starve due to insufficient prey. This results in an entirely different set of selection criteria driving their evolution: the wolves are said to be K-selected or K-strategists. A successful wolf (defined by evolutionary theory as one more likely to pass its genes on to successive generations) is not one which produces more offspring (who would merely starve by hitting the K limit before reproducing), but rather a highly optimised predator, able to efficiently exploit the limited supply of rabbits and to pass its genes on to a small number of offspring, produced infrequently, which require substantial investment by their parents to train them to hunt and, in many cases, to acquire the social skills to act as part of a group that hunts together. K-selected species tend to be larger, live longer, have fewer offspring, and have parents who spend much more effort raising them and training them to be successful predators, either individually or as part of a pack.

“K or r, r or K: once you've seen it, you can't look away.”

Just as our island of bunnies and wolves was over-simplified, the dichotomy of r- and K-selection is rarely precisely observed in nature (although rabbits and wolves are pretty close to the extremes, which is why I chose them). Many species fall somewhere in the middle and, more importantly, are able to shift their strategy on the fly, much faster than evolution by natural selection, based upon the availability of resources. These r/K shape-shifters react to their environment. When resources are abundant, they adopt an r-strategy, but as their numbers approach the carrying capacity of their environment, they shift to life cycles you'd expect from K-selection.

What about humans? At a first glance, humans would seem to be a quintessentially K-selected species. We are large, have long lifespans (about twice as long as we “should” based upon the number of heartbeats per lifetime of other mammals), usually only produce one child (and occasionally two) per gestation, with around a one year turn-around between children, and massive investment by parents in raising infants to the point of minimal autonomy and many additional years before they become fully functional adults. Humans are “knowledge workers”, and whether they are hunter-gatherers, farmers, or denizens of cubicles at The Company, live largely by their wits, which are a combination of the innate capability of their hypertrophied brains and what they've learned in their long apprenticeship through childhood. Humans are not just predators on what they eat, but also on one another. They fight, and they fight in bands, which means that they either develop the social skills to defend themselves and meet their needs by raiding other, less competent groups, or get selected out in the fullness of evolutionary time.

But humans are also highly adaptable. Since modern humans appeared some time between fifty and two hundred thousand years ago, they have survived, prospered, proliferated, and spread into almost every habitable region of the Earth. They have been hunter-gatherers, farmers, warriors, city-builders, conquerors, explorers, colonisers, traders, inventors, industrialists, financiers, managers, and, in the Final Days of their species, WordPress site administrators.

In many species, the selection of a predominantly r or K strategy is a mix of genetics and switches that get set based upon experience in the environment. It is reasonable to expect that humans, with their large brains and ability to override inherited instinct, would be especially sensitive to signals directing them to one or the other strategy.

Now, finally, we get back to politics. This is, after all, about politics. I hope you've been thinking about it as we spent time on the island of bunnies and wolves, among the cruel realities of natural selection, and in the arcana of differential equations.

What does r-selection produce in a human population? Well, it might, say, be averse to competition and all means of selection by measures of performance. It would favour the production of large numbers of offspring at an early age, by early onset of mating, promiscuity, and the raising of children by single mothers with minimal investment by them and little or none by the fathers (leaving the raising of children to the State). It would welcome other r-selected people into the community, and hence favour immigration from heavily r populations. It would oppose any kind of selection based upon performance, whether by intelligence tests, academic records, physical fitness, or job performance. It would strive to create the ideal r environment of unlimited resources, where all were provided all their basic needs without having to do anything but consume. It would oppose and be repelled by the K component of the population, seeking to marginalise it as toxic, privileged, or exploiters of the real people. It might even welcome conflict with K warriors of adversaries to reduce their numbers in otherwise pointless foreign adventures.

And K-troop? Once a society in which they initially predominated creates sufficient wealth to support a burgeoning r population, they will find themselves outnumbered and outvoted, especially once the r wave removes the firebreaks put in place when K was king to guard against majoritarian rule by an urban underclass. The K population will continue to do what they do best: preserving the institutions and infrastructure which sustain life, defending the society in the military, building and running businesses, creating the basic science and technologies to cope with emerging problems and expand the human potential, and governing an increasingly complex society made up, with every generation, of a population, and voters, who are fundamentally unlike them.

Note that the r/K model completely explains the “crunchy to soggy” evolution of societies which has been remarked upon since antiquity. Human societies always start out, as our genetic heritage predisposes us to, K-selected. We work to better our condition and turn our large brains to problem-solving and, before long, the privation our ancestors endured turns into a pretty good life and then, eventually, abundance. But abundance is what selects for the r strategy. Those who would not have reproduced, or had as many children, in the K days of yore now have babies-a-poppin' as in the introduction to Idiocracy, and before long, not waiting for genetics to do its inexorable work but purely through a shift in incentives, the rs outvote the Ks, and the Ks begin to count the days until their society runs out of the wealth which can be plundered from them.

But recall that equation. In our simple bunnies and wolves model, the resources of the island were static. Nothing the wolves could do would increase K and permit a larger rabbit and wolf population. This isn't the case for humans. K humans dramatically increase the carrying capacity of their environment by inventing new technologies such as agriculture, selective breeding of plants and animals, discovering and exploiting new energy sources such as firewood, coal, and petroleum, and exploring and settling new territories and environments which may require their discoveries to render habitable. The rs don't do these things. And as the rs predominate and take control, this momentum stalls and begins to recede. Then the hard times ensue. As Heinlein said many years ago, “This is known as bad luck.”

And then the Gods of the Copybook Headings will, with terror and slaughter, return. And K-selection will, with them, again assert itself.

Is this a complete model, a Rosetta stone for human behaviour? I think not: there are a number of things it doesn't explain, and the shifts in behaviour based upon incentives are much too fast to account for by genetics. Still, when you look at those eleven issues I listed so many words ago through the r/K perspective, you can almost immediately see how each strategy maps onto one side or the other of each one, and they are consistent with the policy preferences of “liberals” and “conservatives”. There is also some rather fuzzy evidence for genetic differences (in particular the DRD4-7R allele of the dopamine receptor and size of the right brain amygdala) which appear to correlate with ideology.

Still, if you're on one side of the ideological divide and confronted with somebody on the other and try to argue from facts and logical inference, you may end up throwing up your hands (if not your breakfast) and saying, “They just don't get it!” Perhaps they don't. Perhaps they can't. Perhaps there's a difference between you and them as great as that between rabbits and wolves, which can't be worked out by predator and prey sitting down and voting on what to have for dinner. This may not be a hopeful view of the political prospect in the near future, but hope is not a strategy and to survive and prosper requires accepting reality as it is and acting accordingly.

 Permalink

Carroll, Michael. Europa's Lost Expedition. Cham, Switzerland: Springer International, 2017. ISBN 978-3-319-43158-1.
In the epoch in which this story is set, the expansion of the human presence into the solar system was well advanced, with large settlements on the Moon and Mars, exploitation of the abundant resources of the main asteroid belt, and research outposts in exotic environments such as Jupiter's enigmatic moon Europa. Then civilisation on Earth was consumed, as so often seems to happen when too many primates who evolved to live in small bands are packed into a limited space, by a global conflict which the survivors, a decade later, refer to simply as “The War”, as its horrors and costs dwarfed all previous human conflicts.

Now, with The War over and recovery underway, scientific work is resuming, and an international expedition has been launched to explore the southern hemisphere of Europa, where the icy crust of the moon is sufficiently thin to provide access to the liquid water ocean beneath, and where the complex orbital dynamics of Jupiter's moons were expected to trigger a once-in-a-decade eruption of geysers, with cracks in the ice allowing the ocean to spew into space and providing an opportunity to sample it “for free”.

Europa is not a hospitable environment for humans. Orbiting deep within Jupiter's magnetosphere, it is in the heart of the giant planet's radiation belts, which are sufficiently powerful to kill an unprotected human within minutes. But the radiation is not uniform and humans are clever. The main base on Europa, Taliesen, is located on the face of the moon that points away from Jupiter, and in the leading hemisphere where radiation is least intense. On Europa, abundant electrical power is available simply by laying out cables along the surface, in which Jupiter's magnetic field induces powerful currents as they cut it. This power is used to erect a magnetic shield around the base which protects it from the worst, just as Earth's magnetic field shields life on its surface. Brief ventures into the “hot zone” are made possible by shielded rovers and advanced anti-radiation suits.

The present expedition will not be the first to attempt exploration of the southern hemisphere. Before the War, an expedition with similar objectives ended in disaster, with the loss of all members under circumstances which remain deeply mysterious, and of which the remaining records, incomplete and garbled by radiation, provide few clues as to what happened. Hadley Nobile, expedition leader, is not so much concerned with the past as with making the most of this rare opportunity. Her deputy and long-term collaborator, Gibson van Clive, however, is fascinated by the mystery and spends hours trying to recover and piece together the fragmentary records of the lost expedition, researching the backgrounds of its members, and examining the physical evidence, some of which makes no sense at all. The other members of the new expedition are known to the leaders by their scientific reputations, but not personally. Many people have blanks in their curricula vitae from the War years, and those who lived through that time are rarely inclined to probe too deeply.

Once the party arrives at Taliesen and begins preparations for the trip to the south, a series of “accidents” befalls some members, who are found dead in circumstances which seem implausible given their experience. Down to the bare minimum team, with a volunteer replacement from the base's complement, Hadley decides to press on—the geysers wait for no one.

Thus begins what is basically a murder mystery, explicitly patterned on Agatha Christie's And Then There Were None, layered upon the enigmas of the lost expedition, the backgrounds of those in the current team, and the biosphere which may thrive in the ocean beneath the ice, driven by the tides raised by Jupiter and the other moons and fed by undersea plumes similar to those where some suspect life began on Earth.

As a mystery, there is little more that can be said without crossing the line into plot spoilers, so I will refrain from further description. As befits a Christie tale, there are many twists and turns, and few things are as they seem on the surface.

As in his previous novel, On the Shores of Titan's Farthest Sea (December 2016), the author, a distinguished scientific illustrator and popular science writer, goes to great lengths to base the exotic locale in which the story is set upon the best presently-available scientific knowledge. An appendix, “The Science Behind the Story”, provides details and source citations for the setting of the story and the technologies which figure in it.

While the science and technology are plausible extrapolations from what is presently known, the characters sometimes seem to behave more in the interests of advancing the plot than as real people would in such circumstances. If you were the leader or part of an expedition several members of which had died under suspicious circumstances at the base camp, would you really be inclined to depart for a remote field site with spotty communications along with all of the prime suspects?


Dutton, Edward. How to Judge People by What They Look Like. Oulu, Finland: Thomas Edward Press, 2018. ISBN 978-1-9770-6797-5.
In The Picture of Dorian Gray, Oscar Wilde wrote,

People say sometimes that Beauty is only superficial. That may be so. But at least it is not as superficial as Thought. To me, Beauty is the wonder of wonders. It is only shallow people who do not judge by appearances.

From childhood, however, we have been exhorted not to judge people by their appearances. In Skin in the Game (August 2019), Nassim Nicholas Taleb advises choosing the surgeon who “doesn't look like a surgeon” because their success is more likely due to competence than first impressions.

Despite this, physiognomy, assessing a person's characteristics from their appearance, is as natural to humans as breathing, and has been an instinctual part of human behaviour for as long as our species has existed. Thinkers and writers from Aristotle through the great novelists of the 19th century believed that an individual's character was reflected in, and could be inferred from, their appearance, and crafted and described their characters accordingly. Jules Verne would often spend a paragraph describing the appearance of his characters and what it implied for their behaviour.

Is physiognomy all nonsense, a pseudoscience like phrenology, which purported to predict mental characteristics by measuring bumps on the skull, claimed to indicate the development of “cerebral organs” with specific functions? Or is there something to it, after all? Humans are a social species and, as such, have evolved to be exquisitely sensitive to signals sent by others of their kind, conveyed through subtle means such as tone of voice, facial expression, or posture. Might we also be able to perceive and interpret signals which indicate properties such as honesty, intelligence, courage, impulsiveness, criminality, diligence, and more? Such an ability, if it exists, would be advantageous in interacting with others and, by contributing to success in reproducing and raising offspring, would be selected for by evolution.

In this short book (or long essay—the text is just 85 pages), the author examines the evidence and concludes that there are legitimate correlations between appearance and behaviour, and that human instincts are picking up genuine signals which are useful in interacting with others. This seems perfectly plausible: the development of the human body and face are controlled by the genetic inheritance of the individual and modulated through the effects of hormones, and it is well-established that both genetics and hormones are correlated with a variety of behavioural traits.

Let's consider a reasonably straightforward example. A study published in 2008 found a statistically significant correlation between the width of the face (cheekbone to cheekbone distance compared to brow to upper lip) and aggressiveness (measured by the number of penalty minutes received) among a sample of 90 ice hockey players. Now, a wide face is also known to correlate with a high testosterone level in males, and testosterone correlates with aggressiveness and selfishness. So, it shouldn't be surprising to find the wide face morphology correlated with the consequences of high-testosterone behaviour.
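The statistic behind such a study is simply the Pearson correlation between the facial width-to-height ratio and penalty minutes. A minimal sketch on made-up numbers (the real study used 90 players; these six data points are purely illustrative):

```python
# Pearson correlation between facial width-to-height ratio (fWHR) and
# penalty minutes. The data below are invented for illustration only;
# they are NOT the values from the 2008 hockey study.
import statistics

fwhr      = [1.8, 1.9, 2.0, 2.1, 2.2, 2.3]
penalties = [20, 35, 30, 55, 60, 80]

def pearson_r(xs, ys):
    # r = covariance / (std_x * std_y), computed from raw sums
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(fwhr, penalties)
print(f"r = {r:.2f}")  # strongly positive for this toy data
```

A positive r here would mean wider-faced players tend to rack up more penalty minutes; whether such a correlation is statistically significant depends, of course, on the sample size, which a six-point toy can't address.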

In fact, testosterone and other hormone levels play a substantial part in many of the correlations between appearance and behaviour discussed by the author. Many people believe they can identify, with reasonable reliability, homosexuals just from their appearance: the term “gaydar” has come into use for this ability. In 2017, researchers trained an artificial intelligence program with a set of photographs of individuals with known sexual orientations and then tested the program on a set of more than 35,000 images. The program correctly identified the sexual orientation of men 81% of the time and women with 74% accuracy.

Of course, appearance goes well beyond factors which are inherited or determined by hormones. Tattoos, body piercings, and other irreversible modifications of appearance correlate with high time preference (living for the present), which in turn correlates with low intelligence and the other characteristics of an r-selected lifestyle. Choices of clothing indicate an individual's self-identification, although fashion trends change rapidly and differ from region to region, so misinterpretation is a risk.

The author surveys a wide variety of characteristics including fat/thin body type, musculature, skin and hair, height, face shape, breast size in women, baldness and beards in men, eye spacing, tattoos, hair colour, facial symmetry, handedness, and finger length ratio, and presents citations to research, most published recently, supporting correlations between these aspects of appearance and behaviour. He cautions that while people may be good at sensing and interpreting these subtle signals among members of their own race, the signals differ substantially and consistently between races: no inferences can be drawn from the racial differences themselves, nor are members of one race generally able to read the signals of members of another.

One gets the sense (although less strongly) that this is another field where advances in genetics and data science are piling up a mass of evidence which will roll over the stubborn defenders of the “blank slate” like a truth tsunami. And again, this is an area where people's instincts, honed by millennia of evolution, are still relied upon despite the scorn of “experts”. (So afraid were the authors of the Wikipedia page on physiognomy [retrieved 2019-12-16] of the “computer gaydar” paper mentioned above that they declined to cite the peer-reviewed paper in the Journal of Personality and Social Psychology but instead linked to a BBC News piece which dismissed it as “dangerous” and “junk science”. Go on whistling, folks, as the wave draws near and begins to crest….)

Is the case for physiognomy definitively made? I think not, and as I suspect the author would agree, there are many aspects of appearance and a multitude of personality traits, some of which may be significantly correlated and others not at all. Still, there is evidence for some linkage, and it appears to be growing as more work in the area (which is perilous to the careers of those who dare investigate it) accumulates. The scientific evidence, summarised here, seems to be, as so often happens, confirming the instincts honed over hundreds of generations by the inexorable process of evolution: you can form some conclusions just by observing people, and this information is useful in the competition which is life on Earth. Meanwhile, when choosing programmers for a project team, the one who shows up whose eyebrows almost meet their hairline, sporting a plastic baseball cap worn backward with the adjustment strap on the smallest peg, with a scraggly soybeard, pierced nose, and visible tattoos isn't likely to be my pick. She's probably a WordPress developer.


Walton, David. Three Laws Lethal. Jersey City, NJ: Pyr, 2019. ISBN 978-1-63388-560-8.
In the near future, autonomous vehicles, “autocars”, are available from a number of major automobile manufacturers. The self-driving capability, while not infallible, has been approved by regulatory authorities after having demonstrated that it is, on average, safer than the population of human drivers on the road and not subject to human frailties such as driving under the influence of alcohol or drugs, while tired, or distracted by others in the car or electronic gadgets. While self-driving remains a luxury feature with which a minority of cars on the road are equipped, regulators are confident that as it spreads more widely and improves over time, the highway accident rate will decline.

But placing an algorithm and sensors in command of a vehicle with a mass of more than a tonne hurtling down the road at 100 km per hour or faster is not just a formidable technical problem, it is one with serious and unavoidable moral implications. These come into stark focus when, in an incident on a highway near Seattle, an autocar swerves to avoid a tree crashing down on the highway, hitting and killing a motorcyclist in an adjacent lane of which the car's sensors must have been aware. The car appears to have made a choice, valuing the lives of its passengers (a mother and her two children) over that of the motorcyclist. What really happened, and how the car decided what to do in that split-second, is opaque, because the software controlling it was, like all such software, proprietary and closed to independent inspection and audit by third parties. It's one thing to acknowledge that self-driving vehicles are safer, as a whole, than those with humans behind the wheel, but entirely another to cede to them the moral agency of life and death on the highway. Should an autocar value the lives of its passengers over those of others? What if there were a sole passenger in the car and two on the motorcycle? And who is liable for the death of the motorcyclist: the auto manufacturer, the developers of the software, the owner of the car, the driver who switched it into automatic mode, or the regulators who approved its use on public roads? The case was headed for court, and all would be watching the precedents it might establish.

Tyler Daniels and Brandon Kincannon, graduate students in the computer science department of the University of Pennsylvania, were convinced they could do better. The key was going beyond individual vehicles which tried to operate autonomously based upon what their own sensors could glean from their immediate environment, toward an architecture where vehicles communicated with one another and coordinated their activities. This would allow vehicles to share information over a wider area and avoid accidents caused by individual vehicles acting without knowledge of the actions of others. Further, they wanted to re-architect individual ground transportation from a model of individually-owned and operated vehicles to transportation as a service, where customers would summon an autocar on demand with their smartphone, with the vehicle network dispatching the closest free car to their location. This would dramatically change the economics of personal transportation. The typical private car spends twenty-two out of twenty-four hours parked, taking up a parking space and depreciating as it sits idle. The transportation service autocar would be in constant service (except for downtime for maintenance, refuelling, and times of reduced demand), generating revenue for its operator. An angel investor believed their story and, most importantly, believed in them sufficiently to write a check for the initial demonstration phase of their project, and they set to work.

Their team consists of Tyler and Brandon, plus Abby and Naomi Sumner, sisters who differed in almost every way: Abby outgoing and vivacious, with an instinct for public relations and marketing, and Naomi the super-nerd, verging on being “on the spectrum”. The big day of the public roll-out of the technology arrives, and ends in disaster, killing Abby in what was supposed to be a demonstration of the system's inherent safety. The disaster puts an end to the venture and the surviving principals go their separate ways. Tyler signs on as a consultant and expert witness for the lawyers bringing the suit on behalf of the motorcyclist killed in Seattle, using the exposure to advocate for open source software being a requirement for autonomous vehicles. Brandon uses money inherited after the death of his father to launch a new venture, Black Knight, offering transportation as a service initially in the New York area and then expanding to other cities. Naomi, whose university experiment in genetic software implemented as non-player characters (NPCs) in a virtual world was the foundation of the original venture's software, sees Black Knight as a way to preserve the world and beings she has created as they develop and require more and more computing resources. Characters in the virtual world support themselves and compete by driving Black Knight cars in the real world, and as generation follows generation and natural selection works its wonders, customers and competitors are amazed at how Black Knight vehicles anticipate the needs of their users and maintain an unequalled level of efficiency.

Tyler leverages his recognition from the trial into a new self-driving venture based on open source software called “Zoom”, which spreads across the U.S. west coast and eventually comes into competition with Black Knight in the east. Somehow, Zoom's algorithms, despite being open and having a large community contributing to their development, never seem able to equal the service provided by Black Knight, which is so secretive that even Brandon, the CEO, doesn't know how Naomi's software does it.

In approaching any kind of optimisation problem, such as scheduling a fleet of vehicles to anticipate and respond to real-time demand, a key question is the choice of “objective function”: how the performance of the system is evaluated against the stated goals of its designers. This is especially crucial when the optimisation is applied to a system connected to the real world. Recall the parable of the “Clippy Apocalypse”, in which an artificial intelligence put in charge of a paperclip factory and trained to maximise the production of paperclips escapes into the wild and eventually converts first its home planet, then the rest of the solar system, and eventually the entire visible universe into paperclips. The system worked as designed—but the objective function was poorly chosen.
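A toy sketch of the point (entirely my own construction, not from the novel): the very same greedy optimiser, handed two different objective functions, makes completely different decisions about how much of a shared resource to consume.

```python
# The optimiser is held fixed; only the objective function changes.
# "naive" rewards paperclips without limit; "bounded" penalises
# consumption beyond a sustainable level. Both are invented for
# illustration.
def allocate(objective, resource=100):
    # Greedily spend the resource one unit at a time while the
    # objective says another unit is an improvement.
    spent = 0
    while spent < resource and objective(spent + 1) > objective(spent):
        spent += 1
    return spent

naive = lambda clips: clips  # "more clips is always better"
bounded = lambda clips: clips if clips <= 30 else 30 - (clips - 30)

print(allocate(naive))    # consumes the entire resource: 100
print(allocate(bounded))  # stops at the sustainable level: 30
```

Nothing about the optimiser itself is “safe” or “unsafe”; the behaviour is entirely determined by what it is told to maximise, which is the heart of the Clippy parable.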

Naomi's NPCs literally (or virtually) lived or died based upon their ability to provide transportation service to Black Knight's customers, and natural selection, running at the accelerated pace of the simulation they inhabited, relentlessly selected them with the objective of improving their service and expanding Black Knight's market. To the extent that, within their simulation, they perceived opposition to these goals, they would act to circumvent it—whatever it takes.
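Naomi's selection mechanism is, in essence, a genetic algorithm. A minimal sketch under invented assumptions (a one-number “genome” for how aggressively an agent pre-positions for demand, and a toy fitness based on pickup delay; the novel specifies none of these details):

```python
# Selection pressure like Naomi's NPCs face: agents whose genome is
# closest to the (unknown to them) optimal anticipation level survive;
# the rest "die" and are replaced by mutated copies of the survivors.
import random
random.seed(42)

TARGET = 0.7  # the "true" optimal anticipation level in this toy world

def fitness(genome):
    # Smaller pickup delay (distance from the toy optimum) = fitter.
    return -abs(genome - TARGET)

def evolve(pop, generations=50, mut=0.05):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]          # bottom half is culled
        pop = survivors + [g + random.gauss(0, mut) for g in survivors]
    return pop

population = [random.random() for _ in range(40)]
best = max(evolve(population), key=fitness)
print(f"best anticipation level is close to {TARGET}")
```

After a few dozen generations the population clusters near the optimum, with no agent ever having been told what the optimum was; that is exactly how Black Knight's service quality emerges without anyone, including Brandon, understanding the mechanism.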

This sets the stage for one of the more imaginative tales of how artificial general intelligence might arrive through the back door: not designed in a laboratory but emerging through the process of evolution in a complex system subjected to real-world constraints and able to operate in the real world. The moral dimensions of this go well beyond the trolley problem often cited in connection with autonomous vehicles, dealing with questions of whether artificial intelligences we create for our own purposes are tools, servants, or slaves, and what happens when their purposes diverge from those for which we created them.

This is a techno-thriller, with plenty of action in the conclusion of the story, but also a cerebral exploration of the moral questions which something as seemingly straightforward and beneficial as autonomous vehicles may pose in the future.


Taloni, John. The Compleat Martian Invasion. Seattle: Amazon Digital Services, 2016. ASIN B01HLTZ7MS.
A number of years have elapsed since the Martian Invasion chronicled by H.G. Wells in The War of the Worlds. The damage inflicted on the Earth was severe, and the protracted process of recovery, begun in the British Empire in the last years of Queen Victoria's reign, now continues under Queen Louise, Victoria's sixth child and eldest surviving heir after the catastrophe of the invasion. Just as Earth is beginning to return to normalcy, another crisis has emerged. John Bedford, who had retreated into an opium haze after the horrors of his last expedition, is summoned to Windsor Castle where Queen Louise shows him a photograph. “Those are puffs of gas on the Martian surface. The Martians are coming again, Mr. Bedford. And in far greater numbers.” Defeated the last time only due to their vulnerability to Earth's microbes, there is every reason to expect that this time the Martians will have taken precautions against that threat to their plans for conquest.

Earth's only hope to thwart the invasion before it reaches the surface and unleashes further devastation on its inhabitants is deploying weapons on platforms employing the anti-gravity material Cavorite, but the secret of manufacturing it rests with its creator, Cavor, who has been taken prisoner by the ant-like Selenites in the expedition from which Mr Bedford narrowly escaped, as chronicled in Mr Wells's The First Men in the Moon. Now, Bedford must embark on a perilous attempt to recover the Cavorite sphere lost at the end of his last adventure and then join an expedition to the Moon to rescue Cavor from the caves of the Selenites.

Meanwhile, on Barsoom (Mars), John Carter and Dejah Thoris find their beloved city of Helium threatened by the Khondanes, whose deadly tripods wreaked so much havoc on Earth not long ago and who are now turning their envious eyes back to the plunder that eluded them on the last attempt.

Queen Louise must assemble an international alliance, calling on all of her crowned relatives: Czar Nicholas, Kaiser Wilhelm, and even those troublesome republican Americans, plus all the resources they can summon—the inventions of the Serbian, Tesla, the research of Maria Skłodowska and her young Swiss assistant Albert, discovered toiling away in the patent office, the secrets recovered from Captain Nemo's island, and the mysterious interventions of the Time Traveller, who flickers in and out of existence at various moments, pursuing his own inscrutable agenda. As the conflict approaches and battle is joined, an interplanetary effort is required to save Earth from calamity.

As you might expect from this description, this is a rollicking good romp replete with references and tips of the hat to the classics of science fiction and their characters. What seems like a straightforward tale of battle and heroism takes a turn at the very end into the inspiring, with a glimpse of how different human history might have been.

At present, only a Kindle edition is available, which is free for Kindle Unlimited subscribers.


Page, Joseph T., II. Vandenberg Air Force Base. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-4671-3209-1.
Prior to World War II, the sleepy rural stretch of the southern California coast between Santa Barbara and San Luis Obispo was best known as the place where, in September 1923, despite a lighthouse having been in operation at Point Arguello since 1901, the U.S. Navy suffered its worst peacetime disaster: seven destroyers, travelling at 20 knots, ran aground at Honda Point, resulting in the loss of all seven ships and the deaths of 23 crew members. In the 1930s, following additional wrecks in the area, a lifeboat station was established in conjunction with the lighthouse.

During World War II, the Army acquired 92,000 acres (372 km²) in the area for a training base which was called Camp Cooke, after a cavalry general who served in the Civil War, in wars with Indian tribes, and in the Mexican-American War. The camp was used for training Army troops in a variety of weapons and in tank maneuvers. After the end of the war, the base was closed and placed on inactive status, but was re-opened after the outbreak of war in Korea to train tank crews. It was once again mothballed in 1953, and remained inactive until 1957, when 64,000 acres were transferred to the U.S. Air Force to establish a missile base on the West Coast, initially called Cooke Air Force Base, intended to train missile crews and also serve as the U.S.'s first operational intercontinental ballistic missile (ICBM) site. On October 4, 1958, the base was renamed Vandenberg Air Force Base in honour of the late General Hoyt Vandenberg, former Air Force Chief of Staff and Director of Central Intelligence.

On December 15, 1958, a Thor intermediate range ballistic missile was launched from the new base, the first of hundreds of launches which would follow and continue up to the present day. Starting in September 1959, three Atlas ICBMs armed with nuclear warheads were deployed on open launch pads at Vandenberg, the first U.S. intercontinental ballistic missiles to go on alert. The Atlas missiles remained part of the U.S. nuclear force until their retirement in May 1964.

With the advent of Earth satellites, Vandenberg became a key part of the U.S. military and civil space infrastructure. Launches from Cape Canaveral in Florida are restricted to a corridor directed eastward over the Atlantic Ocean. While this is fine for satellites bound for equatorial orbits, such as the geostationary orbits used by many communication satellites, a launch into polar orbit, preferred for military reconnaissance satellites and Earth resources satellites because it allows them to overfly and image locations anywhere on Earth, would result in the rockets used to launch them dropping spent stages on land, which would vex taxpayers to the north and hot-headed Latin neighbours to the south.

Vandenberg Air Force Base, however, situated on a point extending from the California coast, had nothing to the south but open ocean all the way to Antarctica. Launching southward, satellites could be placed into polar or Sun synchronous orbits without disturbing anybody but the fishes. Vandenberg thus became the prime launch site for U.S. reconnaissance satellites which, in the early days when satellites were short-lived and returned film to the Earth, required a large number of launches. The Corona spy satellites alone accounted for 144 launches from Vandenberg between 1959 and 1972.

With plans in the 1970s to replace all U.S. expendable launchers with the Space Shuttle, facilities were built at Vandenberg (Space Launch Complex 6) to process and launch the Shuttle, using a very different architecture from that employed in Florida. The Shuttle stack would be assembled on the launch pad, protected by a movable building that would retract prior to launch. The launch control centre was located just 365 metres from the launch pad (as opposed to 4.8 km away at the Kennedy Space Center in Florida), so the plan in case of a catastrophic launch accident on the pad essentially seemed to be “hope that never happens”. In any case, after more than US$4 billion had been spent on the facilities, plans for Shuttle launches from Vandenberg were abandoned in the wake of the Challenger disaster in 1986, and the facility was mothballed until being adapted, years later, to launch other rockets.

This book, part of the “Images of America” series, is a collection of photographs (all black and white) covering all aspects of the history of the site from before World War II to the present day. Introductory text for each chapter and detailed captions describe the items shown and their significance to the base's history. The production quality is excellent, and I noted only one factual error in the text (the names of the crew of Gemini 5). For a book of just 128 pages, the paperback is very expensive (US$22 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.


Andrew, Christopher and Vasili Mitrokhin. The Sword and the Shield. New York: Basic Books, 1999. ISBN 978-0-465-00312-9.
Vasili Mitrokhin joined the Soviet intelligence service as a foreign intelligence officer in 1948, at a time when the MGB (later to become the KGB) and the GRU were unified into a single service called the Committee of Information. By the time he was sent to his first posting abroad in 1952, the two services had split and Mitrokhin stayed with the MGB. Mitrokhin's career began in the paranoia of the final days of Stalin's regime, when foreign intelligence officers were sent on wild goose chases hunting down imagined Trotskyist and Zionist conspirators plotting against the regime. He later survived the turbulence after the death of Stalin and the execution of MGB head Lavrenti Beria, and the consolidation of power under his successors.

During the Khrushchev years, Mitrokhin became disenchanted with the regime, considering Khrushchev an uncultured barbarian whose banning of avant garde writers betrayed the tradition of Russian literature. He began to entertain dissident thoughts, not hoping for an overthrow of the Soviet regime but rather its reform by a new generation of leaders untainted by the legacy of Stalin. These thoughts were reinforced by the crushing of the reform-minded regime in Czechoslovakia in 1968 and his own observation of how his service, now called the KGB, manipulated the Soviet justice system to suppress dissent within the Soviet Union. He began to covertly listen to Western broadcasts and read samizdat publications by Soviet dissidents.

In 1972, the First Chief Directorate (FCD: foreign intelligence) moved from the cramped KGB headquarters in the Lubyanka in central Moscow to a new building near the ring road. Mitrokhin had sole responsibility for checking, inventorying, and transferring the FCD's entire archive, around 300,000 documents, to the new building. These files documented the operations of the KGB and its predecessors dating back to 1918, and included the most secret records, those of Directorate S, which ran “illegals”: secret agents operating abroad under false identities. Probably no other individual ever read as many of the KGB's most secret archives as Mitrokhin. Appalled by much of the material he reviewed, he covertly began to make his own notes of the details. He started by committing key items to memory and transcribing them every evening at home, but later made covert notes on scraps of paper which he smuggled out of KGB offices in his shoes. Each week-end he would take the notes to his dacha outside Moscow, type them up, and hide them in a series of locations which became increasingly elaborate as their volume grew.

Mitrokhin would continue to review, make notes, and add them to his hidden archive for the next twelve years until his retirement from the KGB in 1984. After Mikhail Gorbachev became party leader in 1985 and called for more openness (glasnost), Mitrokhin, shaken by what he had seen in the files regarding Soviet actions in Afghanistan, began to think of ways he might spirit his files out of the Soviet Union and publish them in the West.

After the collapse of the Soviet Union, Mitrokhin tested the new freedom of movement by visiting the capital of one of the now-independent Baltic states, carrying a sample of the material from his archive concealed in his luggage. He crossed the border with no problems and walked in to the British embassy to make a deal. After several more trips, interviews with British Secret Intelligence Service (SIS) officers, and providing more sample material, the British agreed to arrange the exfiltration of Mitrokhin, his entire family, and the entire archive—six cases of notes. He was debriefed at a series of safe houses in Britain and began several years of work typing handwritten notes, arranging the documents, and answering questions from the SIS, all in complete secrecy. In 1995, he arranged a meeting with Christopher Andrew, co-author of the present book, to prepare a history of KGB foreign intelligence as documented in the archive.

Mitrokhin's exfiltration (I'm not sure one can call it a “defection”, since the country whose information he disclosed ceased to exist before he contacted the British) and delivery of the archive is one of the most stunning intelligence coups of all time, and the material he delivered will be an essential primary source for historians of the twentieth century. This is not just a whistle-blower disclosing operations of limited scope over a short period of time, but an authoritative summary of the entire history of the foreign intelligence and covert operations of the Soviet Union from its inception until the time it began to unravel in the mid-1980s. Mitrokhin's documents name names; identify agents, both Soviet and recruits in other countries, by codename; describe secret operations, including assassinations, subversion, “influence operations” planting propaganda in adversary media and corrupting journalists and politicians, providing weapons to insurgents, hiding caches of weapons and demolition materials in Western countries to support special forces in case of war; and trace the internal politics and conflicts within the KGB and its predecessors and with the Party and rivals, particularly military intelligence (the GRU).

Any doubts about the degree of penetration of Western governments by Soviet intelligence agents are laid to rest by the exhaustive documentation here. During the 1930s and throughout World War II, the Soviet Union had highly-placed agents throughout the British and American governments, military, diplomatic and intelligence communities, and science and technology projects. At the same time, these supposed allies had essentially zero visibility into the Soviet Union: neither the American OSS nor the British SIS had a single agent in Moscow.

And yet, despite success in infiltrating other countries and recruiting agents within them (particularly prior to the end of World War II, when many agents, such as the “Magnificent Five” [Donald Maclean, Kim Philby, John Cairncross, Guy Burgess, and Anthony Blunt] in Britain, were motivated by idealistic admiration for the Soviet project, as opposed to later, when sources tended to be in it for the money), exploitation of this vast trove of purloined secret information was uneven and often ineffective. Although they reached their apogee during the Stalin years, paranoia and intrigue are as Russian as borscht, and compromised the interpretation and use of intelligence throughout the history of the Soviet Union. Despite having loyal spies in high places in governments around the world, whenever an agent provided information which seemed “too good” or conflicted with the preconceived notions of KGB senior officials or Party leaders, it was likely to be dismissed as disinformation, often suspected to have been planted by British counterintelligence, to which the Soviets attributed almost supernatural powers, or as evidence that their agents had been turned and were feeding false information to the Centre. This was particularly evident during the period prior to the Nazi attack on the Soviet Union in 1941. KGB archives record more than a hundred warnings of preparations for the attack having been forwarded to Stalin between January and June 1941, all of which were dismissed as disinformation or erroneous due to Stalin's idée fixe that Germany would not attack because it was too dependent on raw materials supplied by the Soviet Union and would not risk a two-front war while Britain remained undefeated.

Further, throughout the entire history of the Soviet Union, the KGB was hesitant to report intelligence which contradicted the beliefs of its masters in the Politburo or documented the failures of their policies and initiatives. In 1985, shortly after coming to power, Gorbachev lectured KGB leaders “on the impermissibility of distortions of the factual state of affairs in messages and informational reports sent to the Central Committee of the CPSU and other ruling bodies.”

Another manifestation of paranoia was deep suspicion of those who had spent time in the West. This meant that often the most effective agents who had worked undercover in the West for many years found their reports ignored due to fears that they had “gone native” or been doubled by Western counterintelligence. Spending too much time on assignment in the West was not conducive to advancement within the KGB, which resulted in the service's senior leadership having little direct experience with the West and being prone to fantastic misconceptions about the institutions and personalities of the adversary. This led to delusional schemes such as the idea of recruiting stalwart anticommunist senior figures such as Zbigniew Brzezinski as KGB agents.

This is a massive compilation of data: 736 pages in the paperback edition, including almost 100 pages of detailed end notes and source citations. I would be less than candid if I gave the impression that this reads like a spy thriller: it is nothing of the sort. Although such information would have been of immense value during the Cold War, long lists of the handlers who worked with undercover agents in the West, recitations of codenames for individuals, and exhaustive descriptions of now largely forgotten episodes such as the KGB's campaign against “Eurocommunism” in the 1970s and 1980s, which it was feared would thwart Moscow's control over communist parties in Western Europe, make for heavy going for the reader.

The KGB's operations in the West were far from flawless. For decades, the Communist Party of the United States (CPUSA) received substantial subsidies from the KGB despite consistently promising great breakthroughs and delivering nothing. Between the 1950s and 1975, KGB money was funneled to the CPUSA through two undercover agents, brothers named Morris and Jack Childs, delivering cash often exceeding a million dollars a year. Both brothers were awarded the Order of the Red Banner in 1975 for their work, with Morris receiving his from Leonid Brezhnev in person. Unbeknownst to the KGB, both of the Childs brothers had been working for, and receiving salaries from, the FBI since the early 1950s, and reporting where the money came from and went—well, not the five percent they embezzled before passing it on. In the 1980s, the KGB increased the CPUSA's subsidy to two million dollars a year, despite the party's never having more than 15,000 members (some of whom, no doubt, were FBI agents).

A second doorstop of a book (736 pages) based upon the Mitrokhin archive, The World Was Going Our Way, published in 2005, details the KGB's operations in the Third World during the Cold War. U.S. diplomats who surveyed the globe and saw communist subversion almost everywhere were accurately reporting the situation on the ground, as the KGB's own files reveal.

The Kindle edition is free for Kindle Unlimited subscribers.


2020

January 2020

Wood, Fenton. The City of Illusions. Seattle: Amazon Digital Services, 2019. ASIN B082692JTX.
This is the fourth short novel/novella (148 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and the second, Five Million Watts (June 2019), as “enchanting”. The third, The Tower of the Bear (October 2019), takes Philo from the depths of the ocean to the Great Tree in the exotic West.

Here, the story continues as Philo reaches the Tree, meets its Guardian, “the largest, ugliest, and smelliest bear” he has ever seen, not to mention the most voluble and endowed with the wit of eternity, and explores the Tree, which holds gateways to other times and places, where Philo must confront a test which has defeated many heroes who have come this way before. Exploring the Tree, he learns of the distant past and future, of the Ancient Marauder and Viridios before the dawn of history, and of the War that changed the course of time.

Continuing his hero's quest, he ventures further westward along the Tyrant's Road into the desert of the Valley of Death. There he will learn the fate of the Tyrant and his enthralled followers and, if you haven't figured it out already, you will probably now understand where Philo's timeline diverged from our own. A hero must have a companion, and it is in the desert, after doing a good deed, that he meets his: a teddy bear, Made in Japan—but a very special teddy bear, as he will learn as the journey progresses.

Finally, he arrives at the Valley of the Angels, with pavement stretching to the horizon and cloaked in an acrid yellow mist that obscures visibility and irritates the eyes and throat. There he finds the legendary City of Illusions, where he is confronted by a series of diabolical abusement park attractions where his wit, courage, and Teddy's formidable powers will be tested to the utmost with death the price of failure. Victory can lead to the storied Bullet Train, the prize he needs to save radio station 2XG and possibly the world, and the next step in his quest.

As the fourth installment in what is projected to be one long story spanning five volumes, if you pick this up cold it will probably strike you as a bunch of disconnected adventures and puzzles each of which might as well be a stand-alone short-short story. As they unfold, only occasionally do you see a connection with the origins of the story or Philo's quest, although when they do appear (as in the linkage between the Library of Infinity and the Library of Ouroboros in The Tower of the Bear) they are a delight. It is only toward the end that you begin to see the threads converging toward what promises to be a stirring conclusion to a young adult classic enjoyable by all ages. I haven't read a work of science fiction so closely patterned on the hero's journey as described in Joseph Campbell's The Hero with a Thousand Faces since Rudy Rucker's 2004 novel Frek and the Elixir; this is not a criticism but a compliment—the eternal hero myth has always made for tales which not only entertain but endure.

This book is currently available only in a Kindle edition. The fifth and final volume of the Yankee Republic saga is scheduled to be published in the spring of 2020.


Virk, Rizwan. The Simulation Hypothesis. Cambridge, MA: Bayview Books, 2019. ISBN 978-0-9830569-0-4.
Before electronic computers had actually been built, Alan Turing mathematically proved a fundamental and profound property of them which has been exploited in innumerable ways as they developed and became central to many of our technologies and social interactions. A computer of sufficient complexity, which is, in fact, not very complex at all, can simulate any other computer or, in fact, any deterministic physical process whatsoever, as long as it is understood sufficiently to model in computer code and the system being modelled does not exceed the capacity of the computer—or the patience of the person running the simulation. Indeed, some of the first applications of computers were in modelling physical processes such as the flight of ballistic projectiles and the hydrodynamics of explosions. Today, computer modelling and simulation have become integral to the design process for everything from high-performance aircraft to toys, and many commonplace objects in the modern world could not have been designed without the aid of computer modelling. It certainly changed my life.

Almost as soon as there were computers, programmers realised that their ability to simulate, well…anything made them formidable engines for playing games. Computer gaming was originally mostly a furtive and disreputable activity, perpetrated by gnome-like programmers on the graveyard shift while the computer was idle, having finished the “serious” work paid for by unimaginative customers (who actually rose before the crack of noon!). But as the microelectronics revolution slashed the size and price of computers to something individuals could afford for their own use (or, according to the computer Puritans of the previous generations, abuse), computer gaming came into its own. Some modern computer games have production and promotion budgets larger than Hollywood movies, and their characters and story lines have entered the popular culture. As computer power has grown exponentially, games have progressed from tic-tac-toe, through text-based adventures, simple icon character video games, to realistic three dimensional simulated worlds in which the players explore a huge world, interact with other human players and non-player characters (endowed with their own rudimentary artificial intelligence) within the game, and in some games and simulated worlds, have the ability to extend the simulation by building their own objects with which others can interact. If your last experience with computer games was the Colossal Cave Adventure or Pac-Man, try a modern game or virtual world—you may be amazed.

Computer simulations on affordable hardware are already beginning to approach the limits of human visual resolution, perception of smooth motion, and audio bandwidth and localisation, and some procedurally-generated game worlds are larger than a human can explore in a million lifetimes. Computer power is forecast to continue to grow exponentially for the foreseeable future and, in the Roaring Twenties, permit solving a number of problems through “brute force”—simply throwing computing power and massive data storage capacity at them without any deeper fundamental understanding of the problem. Progress in the last decade in areas such as speech recognition, autonomous vehicles, and games such as Go are precursors to what will be possible in the next.

This raises the question of how far it can go—can computer simulations actually approach the complexity of the real world, with characters within the simulation experiencing lives as rich and complex as our own and, perhaps, not even suspecting they're living in a simulation? And then, we must inevitably speculate whether we are living in a simulation, created by beings at an outer level (perhaps themselves many levels deep in a tree of simulations which may not even have a top level). There are many reasons to suspect that we are living in a simulation; for many years I have said it's “more likely than not”, and others, ranging from Stephen Hawking to Elon Musk and Scott Adams, have shared my suspicion. The argument is very simple.

First of all, will we eventually build computers sufficiently powerful to provide an authentic simulated world to conscious beings living within it? There is no reason to doubt that we will: no law of physics prevents us from increasing the power of our computers by at least a factor of a trillion from those of today, and the lesson of technological progress has been that technologies usually converge upon their physical limits and new markets emerge as they do, using their capabilities and funding further development. Continued growth in computing power at the rate of the last fifty years should begin to make such simulations possible some time between 2030 and the end of this century.
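
As a rough sanity check on that timeline, here is a back-of-the-envelope sketch (my own illustration, not from the book), assuming growth continues at roughly one doubling of computing power every eighteen months:

```python
import math

def years_to_factor(factor, doubling_years):
    """Years required for capacity to grow by `factor`,
    assuming steady exponential doubling."""
    doublings = math.log2(factor)
    return doublings * doubling_years

# A trillionfold increase at one doubling every 18 months:
doublings = math.log2(1e12)          # ≈ 39.9 doublings
years = years_to_factor(1e12, 1.5)   # ≈ 59.8 years
print(round(doublings, 1), round(years, 1))
```

On those assumptions a trillionfold increase arrives roughly sixty years out, consistent with the estimate above; a faster or slower doubling time shifts the date by decades, which is why the quoted window is so wide.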

So, when we have the computing power, will we use it to build these simulations? Of course we will! We have been building simulations to observe their behaviour and interact with them, for ludic and other purposes, ever since the first primitive computers were built. The market for games has only grown as they have become more complex and realistic. Imagine what it will be like when anybody can create a whole society—a whole universe—then let it run to see what happens, or enter it and experience it first-hand. History will become an experimental science. What would have happened if the Roman empire had discovered the electromagnetic telegraph? Let's see!—and while we're at it, run a thousand simulations with slightly different initial conditions and compare them.

Finally, if we can create these simulations which are so realistic the characters within them perceive them as their real world, why should we dare such non-Copernican arrogance as to assume we're at the top level and not ourselves within a simulation? I believe we shouldn't, and to me the argument that clinches it is what I call the “branching factor”. Just as we will eventually, indeed, I'd say, inevitably, create simulations as rich as our own world, so will the beings within them create their own. Certainly, once we can, we'll create many, many simulations: as many or more as there are running copies of present-day video games, and the beings in those simulations will as well. But if each simulation creates its own simulations in a number (the branching factor) even a tiny bit larger than one, there will be exponentially more observers in these layers on layers of simulations than at the top level. And, consequently, as non-privileged observers according to the Copernican Principle, it is not just more likely than not, but overwhelmingly probable that we are living in a simulation.
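
The branching-factor argument is easy to make quantitative. Here is a toy calculation (my own sketch, with the simplifying assumption that every simulated world hosts the same population as its parent):

```python
def top_level_probability(branching, depth):
    """Fraction of all observers who live at the top level,
    assuming each simulation spawns `branching` child simulations
    with equal populations, nested `depth` levels deep."""
    worlds_per_level = [branching ** k for k in range(depth + 1)]
    return worlds_per_level[0] / sum(worlds_per_level)

# Even modest branching makes the top level very unlikely:
print(top_level_probability(2, 10))    # ≈ 0.0005: 1 world in 2047
print(top_level_probability(1.1, 100)) # branching barely above one
```

With a branching factor of two and ten levels of nesting, fewer than one observer in two thousand is at the top; with deeper nesting, the odds of being the "real" universe vanish, which is the whole argument in one line of arithmetic.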

The author of this book, founder of Play Labs @ MIT, a start-up accelerator which works in conjunction with the MIT Game Lab, and producer of a number of video games, has come to the same conclusion, and presents the case for the simulation hypothesis from three perspectives: computer science, physics, and the unexplained (mysticism, esoteric traditions, and those enduring phenomena and little details which don't make any sense when viewed from the conventional perspective but may seem perfectly reasonable once we accept we're characters in somebody else's simulation).

Computer Science. The development of computer games is sketched from their origins to today's three-dimensional photorealistic multiplayer environments into the future, where virtual reality mediated by goggles, gloves, and crude haptic interfaces will give way to direct neural interfaces to the brain. This may seem icky and implausible, but so were pierced lips, eyebrows, and tongues when I was growing up, and now I see them everywhere, without the benefit of directly jacking in to a world larger, more flexible, and more interesting than the dingy one we inhabit. This is sketched in eleven steps, the last of which is the Simulation Point, where we achieve the ability to create simulations which “are virtually indistinguishable from a base physical reality.” He describes, “The Great Simulation is a video game that is so real because it is based upon incredibly sophisticated models and rendering techniques that are beamed directly into the mind of the players, and the actions of artificially generated consciousness are indistinguishable from real players.” He identifies nine technical hurdles which must be overcome in order to arrive at the Simulation Point. Some, such as simulating a sufficiently large world and number of players, are challenging but straightforward scaling up of things we're already doing, which will become possible as computer power increases. Others, such as rendering completely realistic objects and incorporating physical sensations, exist in crude form today but will require major improvements we don't yet know how to build, while technologies such as interacting directly with the human brain and mind and endowing non-player characters within the simulation with consciousness and human-level intelligence have yet to be invented.

Physics. There are a number of aspects of the physical universe, most revealed as we have observed at very small and very large scales, and at speeds and time intervals far removed from those with which we and our ancestors evolved, that appear counterintuitive if not bizarre to our expectations from everyday life. We can express them precisely in our equations of quantum mechanics, special and general relativity, electrodynamics, and the standard models of particle physics and cosmology, and make predictions which accurately describe our observations, but when we try to understand what is really going on or why it works that way, it often seems puzzling and sometimes downright weird.

But as the author points out, when you view these aspects of the physical universe through the eyes of a computer game designer or builder of computer models of complex physical systems, they look oddly familiar. Here is how I expressed it thirteen years ago in my 2006 review of Leonard Susskind's The Cosmic Landscape:

What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.

Indeed, these curious physical phenomena begin to look precisely like the kinds of optimisations game and simulation designers employ to cope with the limited computer power at their disposal. The author notes, “Quantum Indeterminacy, a fundamental principle of the material world, sounds remarkably similar to optimizations made in the world of computer graphics and video games, which are rendered on individual machines (computers or mobile phones) but which have conscious players controlling and observing the action.”

One of the key tricks in complex video games is “conditional rendering”: you don't generate the images or worry about the physics of objects which the player can't see from their current location. This is remarkably like quantum mechanics, where the act of observation reduces the state vector to a discrete measurement and collapses its complex extent in space and time into a known value. In video games, you only need to evaluate when somebody's looking. Quantum mechanics is largely encapsulated in the tweet by Aatish Bhatia, “Don't look: waves. Look: particles.” It seems our universe works the same way. Curious, isn't it?
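
The “don't compute it until somebody looks” trick is easy to demonstrate. The sketch below (class and names are mine, purely illustrative) defers generating a region's state until it is first observed, then caches the result so the world stays consistent:

```python
import random

class LazyWorld:
    """Toy 'conditional rendering': a region's detailed state is not
    computed until a player first observes it, then it is cached so
    repeat observations agree with each other."""
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.rendered = {}        # region -> state, filled on demand
        self.render_count = 0

    def observe(self, region):
        if region not in self.rendered:   # first look: render now
            self.render_count += 1
            self.rendered[region] = self.rng.randrange(1000)
        return self.rendered[region]      # later looks: same state

world = LazyWorld()
a1 = world.observe("sector-7")
a2 = world.observe("sector-7")            # no new work, same result
print(a1 == a2, world.render_count)       # True 1
```

Real engines do the same thing with geometry and physics: frustum culling and level-of-detail schemes spend computation only where a player is looking, which is exactly the economy the quantum analogy points at.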

Similarly, games and simulations exploit discreteness and locality to reduce the amount of computation they must perform. The world is approximated by a grid, and actions in one place can only affect neighbours and propagate at a limited speed. This is precisely what we see in field theories and relativity, where actions are local and no influence can propagate faster than the speed of light.
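
A one-dimensional cellular automaton makes the analogy concrete: because each cell's update rule (here a simple XOR of its neighbourhood, my choice for illustration) looks only at adjacent cells, a disturbance can spread at most one cell per tick, the lattice's built-in “speed of light”:

```python
def step(cells):
    """One tick of a toy 1-D world: each cell's next state depends
    only on itself and its immediate neighbours (locality), so no
    influence can travel faster than one cell per tick."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[i] ^ cells[(i + 1) % n]
            for i in range(n)]

world = [0] * 11
world[5] = 1                  # a single disturbance in the middle
for _ in range(3):
    world = step(world)
# After 3 ticks the disturbance has spread at most 3 cells either way:
print(world)
```

Field theories work the same way: the value of a field at a point evolves according to its immediate surroundings, and the finite update speed appears to inhabitants as an inviolable causal limit.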

The unexplained. Many esoteric and mystic traditions, especially those of the East such as Hinduism and Buddhism, describe the world as something like a dream, in which we act and our actions affect our permanent identity in subsequent lives. Western traditions, including the Abrahamic religions, see life in this world as a temporary thing, where our acts will be judged by a God who is outside the world. These beliefs come naturally to humans, and while there is little or no evidence for them in conventional science, it is safe to say that far more people believe and have believed these things and have structured their lives accordingly than those who have adopted the strictly rationalistic viewpoint one might deduce from deterministic, reductionist science.

And yet, once again, in video games we see the emergence of a model which is entirely compatible with these ancient traditions. Characters live multiple lives, and their actions in the game cause changes in a state (“karma”) which is recorded outside the game and affects what they can do. They complete quests, which affect their karma and capabilities, and upon completing a quest, they may graduate (be reincarnated) into a new life (level), in which they retain their karma from previous lives. Just as players who exist outside the game can affect events and characters within it, various traditions describe actors outside the natural universe (hence “supernatural”) such as gods, angels, demons, and spirits of the departed, interacting with people within the universe and occasionally causing physical manifestations (miracles, apparitions, hauntings, UFOs, etc.). And perhaps the simulation hypothesis can even explain absence of evidence: the sky in a video game may contain a multitude of stars and galaxies, but that doesn't mean each is populated by its own video game universe filled with characters playing the same game. No, it's just scenery, there to be admired but with which you can't interact. Maybe that's why we've never detected signals from an alien civilisation: the stars are just procedurally generated scenery to make our telescopic views more interesting.

The author concludes with a summary of the evidence we may be living in a simulation and the objections of sceptics (such as the claim that a computer as large and complicated as the universe would be required to simulate it). He suggests experiments which might detect the granularity of the simulation and provide concrete evidence the universe is not the continuum most of science has assumed it to be. A final chapter presents speculations as to who might be running the simulation, what their motives might be for doing so, and the nature of beings within the simulation. I'm cautious of delusions of grandeur in making such guesses. I'll bet we're a science fair project, and I'll further bet that within a century we'll be creating a multitude of simulated universes for our own science fair projects.


February 2020

Ryan, Craig. Sonic Wind. New York: Liveright Publishing, 2018. ISBN 978-1-63149-191-0.
Prior to the 1920s, most aircraft pilots had no means of escape in case of mechanical failure or accident. During World War I, one out of every eight combat pilots was shot down or killed in a crash. Germany experimented with cumbersome parachutes stored in bags in a compartment behind the pilot, but these often failed to deploy properly if the plane was in a spin or became tangled in the aircraft structure after deployment. Still, they did save the lives of a number of German pilots. (On the other hand, one of them was Hermann Göring.) Allied pilots were not issued parachutes because their commanders feared the loss of planes more than pilots, and worried pilots would jump rather than try to save a damaged plane.

From the start of World War II, military aircrews were routinely issued parachutes, and backpack or seat pack parachutes with ripcord deployment had become highly reliable. As the war progressed and aircraft performance rapidly increased, it became clear that although parachutes could save air crew, physically escaping from a damaged plane at high velocities and altitudes was a formidable problem. The U.S. P-51 Mustang, of which more than 15,000 were built, cruised at 580 km/hour and had a maximum speed of 700 km/hour. It was physically impossible for a pilot to escape from the cockpit into such a wind blast, and even if they managed to do so, they would likely be torn apart by collision with the fuselage or tail an instant later. A pilot's only hope was that the plane would slow to a speed at which escape was possible before crashing into the ground, bursting into flames, or disintegrating.

In 1944, when the Nazi Luftwaffe introduced the first operational jet fighter, the Messerschmitt Me 262, capable of 900 km/hour flight, they experimented with explosive-powered ejection seats, but never installed them in this front-line fighter. After the war, with each generation of jet fighters flying faster and higher than the previous, and supersonic performance becoming routine, ejection seats became standard equipment in fighter and high performance bomber aircraft, and saved many lives. Still, by the mid-1950s, one in four pilots who tried to eject was killed in the attempt. It was widely believed that the forces of blasting a pilot out of the cockpit, rapid deceleration by atmospheric friction, and wind blast at transonic and supersonic speeds were simply too much for the human body to endure. Some aircraft designers envisioned “escape capsules” in which the entire crew cabin would be ejected and recovered, but these systems were seen to be (and, when tried, proved to be) heavy and potentially unreliable.

John Paul Stapp's family came from the Hill Country of south central Texas, but he was born in Brazil in 1910 while his parents were Baptist missionaries there. After high school in Texas, he enrolled in Baylor University in Waco, initially studying music but then switching his major to pre-med. Upon graduation in 1931 with a major in zoology and minor in chemistry, he found that in the depths of the Depression there was no hope of affording medical school, so he enrolled in an M.A. program in biophysics, occasionally dining on pigeons he trapped on the roof of the biology building and grilled over Bunsen burners in the laboratory. He then entered a Ph.D. program in biophysics at the University of Texas, Austin, receiving his doctorate in 1940. Before leaving Austin, he was accepted by the medical school at the University of Minnesota, which promised him employment as a research assistant and instructor to fund his tuition.

In October 1940, with the possibility that war in Europe and the Pacific might entangle the country, the U.S. began military conscription. When the numbers were drawn from the fishbowl, Stapp's was 15th from the top. As a medical student, he received an initial deferment, but when it expired he joined the regular Army under a special program for medical students. While completing medical school, he would receive private's pay of US$ 32 a month (around US$7000 a year in today's money), which would help enormously with tuition and expenses. In December 1943 Stapp received his M.D. degree and passed the Minnesota medical board examination. He was commissioned as a second lieutenant in the Army Medical Corps and placed on suspended active duty for his internship in a hospital in Duluth, Minnesota, where he delivered 200 babies and assisted in 225 surgeries. He found he delighted in emergency and hands-on medicine. In the fall of 1944 he went on full active duty and began training in field medicine. After training, he was assigned as a medical officer at Lincoln Army Air Field in Nebraska, where he would combine graduate training with hospital work.

Stapp had been fascinated by aviation and the exploits of pioneers such as Charles Lindbergh and the stratospheric balloon explorers of the 1930s, and found working at an air base fascinating, sometimes arranging to ride along in training missions with crews he'd treated in the hospital. In April 1945 he was accepted by the Army School of Aviation Medicine in San Antonio, where he and his class of 150 received intense instruction in all aspects of human physiology relating to flight. After graduation and a variety of assignments as a medical officer, he was promoted to captain and invited to apply to the Aero Medical Laboratory at Wright Field in Dayton, Ohio for a research position in the Biophysics Branch. On the one hand, this was an ideal position for the intellectually curious Stapp, as it would combine his Ph.D. work and M.D. career. On the other, he had only eight months remaining in his service commitment, and he had long planned to leave the Army to pursue a career as a private physician. Stapp opted for the challenge and took the post at Wright.

Starting work, he was assigned to the pilot escape technology program as a “project engineer”. He protested, “I'm a doctor, not an engineer!”, but settled into the work and, being fluent in German, was assigned to review 1200 pages of captured German documents relating to crew ejection systems and their effects upon human subjects. Stapp was appalled by the Nazis' callous human experimentation, but, when informed that the Army intended to destroy the documents after his study was complete, took the initiative to preserve them, both for their scientific content and as evidence of the crimes of those whose research produced it.

The German research and the work of the branch in which Stapp worked had begun to persuade him that the human body was far more robust than had been assumed by aircraft designers and those exploring escape systems. It was well established by experiments in centrifuges at Wright and other laboratories that the maximum long-term human tolerance for acceleration (g-force) without special equipment or training was around six times that of Earth's gravity, or 6 g. Beyond that, subjects would lose consciousness, experience tissue damage due to lack of blood flow, or suffer structural damage to the skeleton and/or internal organs. However, a pilot ejecting from a high performance aircraft experienced something entirely different from a subject riding in a centrifuge. Instead of a steady crush by, say, 6 g, the pilot would be subjected to much higher accelerations, perhaps on the order of 20–40 g, with an onset of acceleration (“jerk”) of 500 g per second. The initial blast of the mortar or rockets firing the seat out of the cockpit would be followed by a sharp pulse of deceleration as the pilot was braked from flight speed by air friction, during which he would be subjected to wind blast potentially ten times as strong as any hurricane. Was this survivable at all, and if so, what techniques and protective equipment might increase a pilot's chances of enduring the ordeal?
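
To put those figures in perspective, a quick back-of-the-envelope calculation (function names are mine, illustrative only) using the rates quoted above:

```python
G = 9.80665  # standard gravity, m/s² per g

def onset_time(peak_g, jerk_g_per_s):
    """Seconds to ramp from zero to peak_g at a constant onset rate."""
    return peak_g / jerk_g_per_s

def speed_shed(decel_g, duration_s):
    """Velocity change in km/h from a constant deceleration in g."""
    return decel_g * G * duration_s * 3.6

# At 500 g/s onset, a 40 g peak arrives in under a tenth of a second:
print(onset_time(40, 500))        # 0.08 seconds
# Half a second at 40 g sheds roughly the P-51's top speed:
print(round(speed_shed(40, 0.5)))
```

In other words, the entire ordeal is over in well under a second; the question Stapp set out to answer was whether a body could tolerate forces arriving that fast.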

While pondering these problems and thinking about ways to research possible solutions under controlled conditions, Stapp undertook another challenge: providing supplemental oxygen to crews at very high altitudes. Stapp volunteered as a test subject as well as medical supervisor and began flight tests with a liquid oxygen breathing system on high altitude B-17 flights. Crews flying at these altitudes in unpressurised aircraft during World War II and afterward had frequently experienced symptoms similar to “the bends” (decompression sickness) which struck divers who ascended too quickly from deep waters. Stapp diagnosed the cause as identical: nitrogen dissolved in the blood coming out of solution as bubbles and pooling in joints and other bodily tissues. He devised a procedure of oxygen pre-breathing, where crews would breathe pure oxygen for half an hour before taking off on a high altitude mission, which completely eliminated the decompression symptoms. The identical procedure is used today by astronauts before they begin extravehicular activities in space suits using pure oxygen at low pressure.

From the German documents he studied, Stapp had become convinced that the tool he needed to study crew escape was a rocket propelled sled, running on rails, with a brake mechanism that could be adjusted to provide a precisely calibrated deceleration profile. When he learned that the Army was planning to build such a device at Muroc Army Air Base in California, he arranged to be put in charge of Project MX-981 with a charter to study the “effects of deceleration forces of high magnitude on man”. He arrived at Muroc in March 1947, along with eight crash test dummies to be used in the experiments. If Muroc (now Edwards Air Force Base) of the era was legendary for its Wild West accommodations (Chuck Yeager would not make his first supersonic flight there until October of that year), the North Base, where Stapp's project was located, was something out of Death Valley Days. When Stapp arrived to meet his team of contractors from Northrop Corporation, they struck the always buttoned-down doctor as a “band of pirates”. He also discovered the site had no electricity, no running water, no telephone, and no usable buildings. The Army, preoccupied with its glamorous high speed aviation projects, had neither interest in what amounted to a rocket powered train with a very short track, nor much inclination to provide it the necessary resources. Stapp commenced what he came to call the Battle of Muroc, mastering the ancient military art of scrounging and exchanging favours to get the material he needed and the work done.

As he settled in at Muroc and became acquainted with his fellow denizens of the desert, he was appalled to learn that the Army provided medical care only for active duty personnel, and that civilian contractors and families of servicemen, even the exalted test pilots, had to drive 45 miles to the nearest clinic. He began to provide informal medical care to all comers, often making house calls in the evening hours on his wheezing scooter, in return for home cooked dinners. This built up a network of people who owed him favours, which he was ready to call in when he needed something. He called this the “Curbstone Clinic”, and would continue the practice throughout his career. After some shaky starts and spectacular failures due to unreliable surplus JATO rockets, the equipment was ready to begin experiments with crash test dummies.

Stapp had always intended that the tests with dummies would be simply a qualification phase for later tests with human and animal subjects, and he would ask no volunteer to do something he wouldn't try himself. Starting in December, 1947, Stapp personally made increasingly ambitious runs on the sled, starting at “only” 10 g deceleration and building to 35 g with an onset jerk of 1000 g/second. The runs left him dizzy and aching, but very much alive and quick to recover. Although far from approximating the conditions of ejection from a supersonic fighter, he had already demonstrated that the Air Force's requirements for cockpit seats and crew restraints, often designed around a 6 g maximum shock, were inadequate and deadly. Stapp was about to start making waves, and some of the push-back would be daunting. He was ordered to cease all human experimentation for at least three months.

Many Air Force officers (for the Air Force had been founded in September 1947 and taken charge of the base) would have saluted and returned to testing with instrumented dummies. Stapp, instead, figured out how to obtain thirty adult chimpanzees, along with the facilities needed to house and feed them, and resumed his testing, with anæsthetised animals, up to the limits of survival. Stapp was, and remained throughout his career, a strong advocate for the value of animal experimentation. It was a grim business, but at the time Muroc was frequently losing test pilots at the rate of one a week, and Stapp believed that many of these fatalities were unnecessary and could be avoided with proper escape and survival equipment, which could only be qualified through animal and cautious human experimentation.

By September 1949, approval to resume human testing was given, and Stapp prepared for new, more ambitious runs, with the subject facing forward on the sled instead of backward as before, which would more accurately simulate the forces in an ejection or crash and expose him directly to air blast. He rapidly ramped up the runs, reaching 32 g without permanent injury. To avoid alarm on the part of his superiors in Dayton, a “slight error” was introduced in the reports he sent: all g loads from the runs were “accidentally” divided by two.

Meanwhile, Stapp was ramping up his lobbying for safer seats in Air Force transport planes, arguing that the existing 6 g forward facing seats and belts were next to useless in many survivable crashes. Finally, with the support of twenty Air Force generals, in 1950 the Air Force adopted a new rear-facing standard seat and belt rated for 16 g which weighed only two pounds more than those it replaced. The 16 g requirement (although not the rearward-facing orientation, which proved unacceptable to paying customers) remains the standard for airliner seats today, seven decades later.

In June, 1951, Stapp made his final run on the MX-981 sled at what was now Edwards Air Force Base, decelerating from 180 miles per hour (290 km/h) to zero in 31 feet (9.45 metres), at 45.4 g, a force comparable to many aircraft and automobile accidents. The limits of the 2000 foot track (and the human body) had been reached. But Stapp was not done: the frontier of higher speeds remained. Shortly thereafter, he was promoted to lieutenant colonel and given command of what was called the Special Projects Section of the Biophysics Branch of the Aero Medical Laboratory. He was reassigned to Holloman Air Force Base in New Mexico, where the Air Force was expanding its existing 3500 foot rocket sled track to 15,000 feet (4.6 km), allowing testing at supersonic speeds. (The Holloman High Speed Test Track remains in service today, having been extended in a series of upgrades over the years to a total of 50,917 feet (15.5 km) and a maximum speed of Mach 8.6, or 2.9 km/sec [6453 miles per hour].)
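
The figures from that final MX-981 run can be sanity-checked with elementary kinematics. The sketch below (standard unit conversions, assuming constant deceleration) computes the average g load over the stop; the 45.4 g quoted above is the instrument-recorded peak, which is necessarily higher than the average:

```python
# Back-of-the-envelope check of the June 1951 run: 180 mph to zero in 31 feet.
# Under constant deceleration, v^2 = 2*a*d gives the *average* deceleration;
# the recorded 45.4 g is a peak value, so it exceeds this average.
MPH_TO_MS = 0.44704   # miles per hour to metres per second
FT_TO_M = 0.3048      # feet to metres
G = 9.80665           # standard gravity, m/s^2

v = 180 * MPH_TO_MS   # initial speed, ~80.5 m/s
d = 31 * FT_TO_M      # stopping distance, ~9.45 m

avg_decel = v**2 / (2 * d)    # average deceleration, m/s^2
avg_g = avg_decel / G
stop_time = 2 * d / v         # duration of the stop, seconds

print(f"average deceleration ≈ {avg_g:.1f} g over ≈ {stop_time:.2f} s")
```

Run as written, this reports an average of about 35 g sustained for roughly a quarter of a second, entirely consistent with a measured peak of 45.4 g.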

Northrop was also the contractor for the Holloman sled, and devised a water brake system which would be more reliable and permit any desired deceleration profile to be configured for a test. An upgraded instrumentation system would record photographic and acceleration measurements with much better precision than anything at Edwards. The new sled was believed to be easily capable of supersonic speeds and was named Sonic Wind. By March 1954, the preliminary testing was complete and Stapp boarded the sled. He experienced a 12 g acceleration to the peak speed of 421 miles per hour, then 22 g deceleration to a full stop, all in less than eight seconds. He walked away, albeit a little wobbly. He had easily broken the previous land speed record of 402 miles per hour and become “the fastest man on Earth.” But he was not done.

On December 10th, 1954, Stapp rode Sonic Wind, powered by nine solid rocket motors. Five seconds later, he was travelling at 639 miles per hour, faster than the .45 ACP round fired by the M1911A1 service pistol he was issued as an officer, around Mach 0.85 at the elevation of Holloman. The water brakes brought him to a stop in 1.37 seconds, a deceleration of 46.2 g. He survived, walked away (albeit just a few steps to the ambulance), and although suffering from vision problems for some time afterward, experienced no lasting consequences. It was estimated that the forces he survived were equivalent to those from ejecting at an altitude of 36,000 feet from an airplane travelling at 1800 miles per hour (Mach 2.7). As this was faster than any plane the Air Force had in service or on the drawing board, he proved that, given a suitable ejection seat, restraints, and survival equipment, pilots could escape and survive even under these extreme circumstances. The Big Run, as it came to be called, would be Stapp's last ride on a rocket sled and the last human experiment on the Holloman track. He had achieved the goal he set for himself in 1947: to demonstrate that crew survival in high performance aircraft accidents was a matter of creative and careful engineering, not the limits of the human body. The manned land speed record set on the Big Run would stand until October 1983, when Richard Noble's jet powered Thrust2 car set a new record of 650.88 miles per hour in the Nevada desert. Stapp remarked at the time that Noble had gone faster but had not stopped from that speed in less than a second and a half.

From the early days of Stapp's work on human tolerance to deceleration, he was acutely aware that the forces experienced by air crew in crashes were essentially identical to those in automobile accidents. As a physician interested in public health issues, he had noted that the Air Force was losing more personnel to car crashes than to airplane accidents. When the Military Air Transport Service (MATS) adopted his recommendation and installed 16 g aft-facing seats in its planes, deaths and injuries from crashes fell by two-thirds. By the mid 1950s, the U.S. was suffering around 35,000 fatalities per year in automobile accidents—comparable to a medium-sized war—year in and year out, yet next to nothing had been done to make automobiles crash resistant and protect their occupants in case of an accident. Even the simplest precaution of providing lap belts, standard in aviation for decades, had not been taken; seats were prone to come loose and fly forward even in mild impacts; steering columns and dashboards seemed almost designed to impale drivers and passengers; and “safety” glass often shredded the flesh of those projected through it in a collision.

In 1954, Stapp turned some of his celebrity as the fastest man on Earth toward the issue of automobile safety and organised, in conjunction with the Society of Automotive Engineers (SAE), the first Automobile Crash Research Field Demonstration and Conference, which was attended by representatives of all of the major auto manufacturers, medical professional societies, and public health researchers. Stapp and the SAE insisted that the press be excluded: he wanted engineers from the automakers free to speak without fear their candid statements about the safety of their employers' products would be reported sensationally. Stapp conducted a demonstration in which a car was towed into a fixed barrier at 40 miles an hour with two dummies wearing restraints and two others just sitting in the seats. The belted dummies would have walked away, while the others flew into the barrier and would have almost certainly been killed. It was at this conference that many of the attendees first heard the term “second collision”. In car crashes, it was often not the crash of the car into another car or a barrier that killed the occupants: it was their colliding with dangerous items within the vehicle after flying loose following the initial impact.

Despite keeping the conference out of the press, word of Stapp's vocal advocacy of automobile safety quickly reached the auto manufacturers, who were concerned about the marketing impact should the public become aware of the high level of deaths on the highways and of the inherent (and unnecessary) danger of their products to those who bought them, as well as the bottom-line impact of potential government-imposed safety mandates. Congressmen from the auto-making states got the message, and the Air Force heard it from them: aeromedical research funding would be zeroed out unless car crash testing was terminated. It was.

Still, the conferences continued (they would eventually be renamed “Stapp Car Crash Conferences”), and Stapp became a regular witness before congressional committees investigating automobile safety. Testifying about whether it was appropriate for Air Force funds to be used in studying car crashes, in 1957 he said, “I have done autopsies on aircrew members who died in airplane crashes. I have also performed autopsies on aircrew members who died in car crashes. The only conclusion I could come to is that they were just as dead after a car crash as they were after an airplane crash.” He went on to note that simply mandating seatbelts in Air Force ground vehicles would save around 125 lives a year, and if they were installed and used by the occupants of all cars in the U.S., around 20,000 lives—more than half the death toll—could be saved. When he appeared before Congress, he bore not only the credentials of a medical doctor, a Ph.D. in biophysics, and an Air Force colonel, but also those of the man who had survived more violent decelerations equivalent to a car crash than any other human.

It was not until the 1960s that a series of mandates were adopted in the U.S. which required seat belts, first in the front seat and eventually for all passengers. Testifying in 1963 at a hearing to establish a National Accident Prevention Center, Stapp noted that the Air Force, which had already adopted and required the use of seat belts, had reduced fatalities in ground vehicle accidents by 50% with savings estimated at US$ 12 million per year. In September 1966, President Lyndon Johnson signed two bills, the National Traffic and Motor Vehicle Safety Act and the Highway Safety Act, creating federal agencies to research vehicle safety and mandate standards. Standing behind the president was Colonel John Paul Stapp: the long battle was, if not won, at least joined.

Stapp had hoped for a final promotion to flag rank before retirement, but concluded he had stepped on too many toes and ignored too many Pentagon directives during his career to ever wear that star. In 1967, he was loaned by the Air Force to the National Highway Traffic Safety Administration to continue his auto safety research. He retired from the Air Force in 1970 with the rank of full colonel and in 1973 left what he had come to call the “District of Corruption” to return to New Mexico. He continued to attend and participate in the Stapp Car Crash Conferences, his last being the Forty-Third in 1999. He died at his home in Alamogordo, New Mexico in November that year at the age of 89.

In his later years, John Paul Stapp referred to the survivors of car crashes who would have died without the equipment designed and eventually mandated because of his research as “the ghosts that never happened”. In 1947, when Stapp began his research on deceleration and crash survival, motor vehicle deaths in the U.S. were 8.41 per 100 million vehicle miles travelled (VMT). When he retired from the Air Force in 1970, after adoption of the first round of seat belt and auto design standards, they had fallen to 4.74 (a figure which covers the entire fleet, much of which was built before the adoption of the new standards). At the time of his death in 1999, fatalities per 100 million VMT were 1.55, an improvement in safety of more than a factor of five. Now, Stapp was not solely responsible for this, but it was his putting his own life on the line which showed that crashes many considered “unsurvivable” were nothing of the sort with proper engineering and knowledge of human physiology. There are thousands of aircrew and tens or hundreds of thousands of “ghosts that never happened” who owe their lives to John Paul Stapp. Maybe you know one; maybe you are one. It's worth taking a moment to remember, and give thanks to, the largely forgotten man who saved them.
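
As a quick check of that closing arithmetic (rates taken directly from the figures quoted above):

```python
# U.S. motor vehicle fatality rates per 100 million vehicle miles travelled (VMT),
# for the years cited in the text.
rate_1947 = 8.41
rate_1970 = 4.74
rate_1999 = 1.55

# Overall improvement across the span of Stapp's career.
improvement = rate_1947 / rate_1999
print(f"improvement, 1947 to 1999: {improvement:.2f}x")  # ≈ 5.43x, i.e. "more than a factor of five"
```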

 Permalink

March 2020

Schlichter, Kurt. Collapse. El Segundo, CA: Kurt Schlichter, 2019. ISBN 978-1-7341993-0-7.
In his 2016 novel People's Republic (November 2018), the author describes North America in the early 2030s, a decade after the present Cold Civil War turned hot and the United States split into the People's Republic of North America (PRNA) on the coasts and the upper Midwest, with the rest continuing to call itself the United States, its capital now in Dallas, purging itself of the “progressive” corruption which was now unleashed without limits in the PRNA. In that book we met Kelly Turnbull, retired from the military and veteran of the border conflicts at the time of the Split, who made his living performing perilous missions in the PRNA to rescue those trapped inside its borders.

In this, the fourth Kelly Turnbull novel (I have not yet read the second, Indian Country, nor the third, Wildfire), the situation in the PRNA has, as inevitably happens in socialist paradises, continued to deteriorate, and by 2035 its sullen population is growing increasingly restive and willing to go to extremes to escape to Mexico, which has built a big, beautiful wall to keep the starving hordes from El Norte from overrunning their country. Cartels smuggle refugees from the PRNA into Mexico, where they are exploited in factories, working for peanuts, but where, unlike in the PRNA, they can at least buy peanuts.

With its back increasingly to the wall, the PRNA ruling class has come to believe their only hope is what they view as an alliance with China, and the Chinese see as colonisation, subjugation, and a foothold on the American continent. The PRNA and the People's Republic of China have much in common in overall economic organisation, although the latter is patriotic, proud, competent, and militarily strong, while the PRNA is paralysed by progressive self-hate, grievance group conflict, and compelled obeisance to counterfactual fantasies.

China already has assimilated Hawaii from the PRNA as a formal colony, and runs military bases on the West Coast as effectively sovereign territory. As the story opens, the military balance is about to shift toward great peril to the remaining United States, as the PRNA prepares to turn over a nuclear-powered aircraft carrier they inherited in the Split to China, which will allow it to project power in the Pacific all the way to the West Coast of North America. At the same time, a Chinese force appears to be massing to garrison the PRNA West Coast capital of San Francisco, allowing the PRNA to hang on and escalating any action by the United States against the PRNA into a direct conflict with China.

Kelly Turnbull, having earned enough from his previous missions to retire, is looking forward to a peaceful life when he is “invited” by the U.S. Army back onto active duty for one last high-stakes mission within the PRNA. The aircraft carrier, the former Theodore Roosevelt, now re-christened Mao, is about to become operational, and Turnbull is to smuggle Quentin Welliver, a renegade computer criminal now locked up in a Supermax prison, into the country to work his software magic and destroy the carrier's power plant. Welliver is anything but cooperative, but then Turnbull can be very persuasive, and the unlikely team undertake the perilous entry to the PRNA and on-site hacking of the carrier.

As is usually the case when Kelly Turnbull is involved, things go sideways and highly kinetic, much to the dismay of Welliver, who is a fearsome warrior behind a keyboard, but less so when the .45 hollow points start to fly. Just when everything seems wrapped up, Turnbull and Welliver are “recruited” by the commando team they thought had been sent to extract them for an even more desperate but essential mission: preventing the Chinese fleet from landing in San Francisco.

If you like your thrillers with lots of action and relatively little reflection about what it all means, this is the book for you. Turnbull considers all of the People's Republic slavers and their willing minions as enemies and a waste of biochemicals better used to fertilise crops, and has no hesitation wasting them. The description of the PRNA is often very funny, although when speaking about California, it is already difficult to parody even the current state of affairs. Some references in the book will probably become quickly dated, such as the Maxine Waters Pavilion of Social Justice (formerly SoFi Stadium) and the Junipero Serra statue on Interstate 280, whose Christian colonialist head was removed and replaced by an effigy of pre-Split hero Jerry Nadler. There are some delightful whacks at well-deserving figures such as “Vichy Bill” Kristol, founder of the True Conservative Party, which upholds the tradition of defeat with dignity in the PRNA, winning up to 0.4% of the vote and already planning to rally the stalwart aboard its “Ahoy: Cruising to Victory in 2036!” junket.

The story ends with a suitable bang, leaving open the question of “what next?” While People's Republic was a remarkably plausible depiction of the situation after the red-blue divide split the country and “progressive” madness went to its logical conclusion, this is more cartoon-like, but great fun nonetheless.

 Permalink

April 2020

Shute, Nevil. No Highway. New York: Vintage Books, [1948] 2020. ISBN 978-0-307-47412-4.
The quintessential aviation boffin story from an author who knows his stuff (Nevil Shute Norway ran an aircraft manufacturer in the 1930s). The novel is more interesting and complicated than the movie made from it, which is also excellent.

When I began this Reading List in January 2001, it was just that: a list of every book I'd read, updated as I finished books, without any commentary other than, perhaps, availability information and sources for out-of-print works or those from publishers not available through Amazon.com. As the 2000s progressed, I began to add remarks about many of the books, originally limited to one paragraph, but eventually as the years wore on, expanding to full-blown reviews, some sprawling to four thousand words or more and using the book as the starting point for an extended discussion on topics related to its content.

This is, sadly, to employ a term I usually despise, no longer sustainable. My time has become so entirely consumed by system administration tasks on two Web sites, especially one in which I made the disastrous blunder of basing upon WordPress, the most incompetent and irresponsible piece of…software I have ever encountered in more than fifty years of programming; shuffling papers, filling out forms, and other largely government-mandated bullshit (Can I say that here? It's my site! You bet I can.); and creating content for and participating in discussions on the premier anti-social network on the Web for intelligent people around the globe with wide-ranging interests, that I simply no longer have the time to sit down, compose, edit, and publish lengthy reviews (in three locations: here, on Fourmilog, and at Ratburger.org) of every book I read.

But that hasn't kept me from reading books, which is my major recreation and escape from the grinding banality which occupies most of my time. As a consequence, I have accumulated, as of the present time, a total of no fewer than twenty-four books I've finished which are on the waiting list to be reviewed and posted here, and that doesn't count a few more I've set aside before finishing the last chapter and end material so as not to make the situation even worse and compound the feeling of guilt.

So, starting with this superb book, which, despite my having loved everything by Nevil Shute I've read, I'd never gotten around to reading, this list will return to its roots: a reading list with, at most, brief comments. I have marked a number of books (nine, as of now) as candidates for posts in my monthly Saturday Night Science column at Ratburger.org and, as I write reviews and remarks about them for that feature, I will integrate them back into this list.

To avoid overwhelming readers, I'll clear out the backlog by posting at most a book a day until I've caught up. Happy page-turning!

 Permalink

Mowat, Farley. And No Birds Sang. Vancouver: Douglas & McIntyre, [1975] 2012. ISBN 978-1-77100-030-7.
When Canadians were warriors: a personal account of military training and the brutal reality of infantry combat in Italy during World War II.

 Permalink

Schatzker, Mark. Steak. New York: Penguin Press, 2011. ISBN 978-0-14-311938-8.
A food and travel writer searches the globe, from Texas, France, Japan, and Argentina, from feedlots to laboratories to remote farms, and from “commodity beef” to the rarest specialities, in his quest for the perfect steak.

 Permalink

Leinbach, Michael D. and Jonathan H. Ward. Bringing Columbia Home. New York: Arcade Publishing, [2018] 2020. ISBN 978-1-948924-61-0.
Author Michael Leinbach was Launch Director at the Kennedy Space Center when space shuttle orbiter Columbia was lost during its return to Earth on February 1st, 2003. In this personal account, he tells the story of locating, recovering, and reconstructing the debris from the orbiter, searching for and finding the remains of the crew, and learning the lessons, technical and managerial, from the accident.

 Permalink

Chiles, Patrick. Frozen Orbit. New York: Baen Publishing, 2020. ISBN 978-1-9821-2430-4.
A covered-up Soviet space spectacular which ended in tragedy opens the door to a breathtaking discovery about the origin of life on Earth.

 Permalink

May 2020

Ackroyd, Peter. London Under. New York: Anchor Books, 2011. ISBN 978-0-307-47378-3.
Unlike New York, London grew from a swamp and its structure was moulded by the rivers that fed it. Over millennia, history has accreted in layer after layer as generations built atop the works of their ancestors. Descending into the caverns, buried rivers, sewers, subways, and infrastructure reveals the deep history, architecture and engineering, and legends of a great city.

 Permalink

Ringo, John. The Last Centurion. Riverdale, NY: Baen Publishing, 2008. ISBN 978-1-4391-3291-3.
Three interwoven stories chronicle the consequences of a feckless U.S. withdrawal from the Near East leaving a mass of materiel and only one Army company behind to “protect” it, a global pandemic exploding from China which killed a substantial fraction of the world's population, and the onset of a solar-driven little ice age which, combined with a U.S. administration that went far beyond incompetence into outright wrecking of the nation, brought famine to America. Heroism, integrity, and a relentless capacity to see things as they really are constitute the only resources Army officer “Bandit Six” has to cope with the crises.

 Permalink

Hossenfelder, Sabine. Lost in Math. New York: Basic Books, 2019. ISBN 978-0-465-09425-7.
Many of the fundamental theories of physics: general relativity, quantum mechanics, and thermodynamics, for example, exhibit great mathematical beauty and elegance once you've mastered the notation in which they are expressed. Some physicists believe that a correct theory must be elegant and beautiful. But what if they're wrong? Many sciences, such as biology and geology, are complicated and messy, with few general principles that don't have exceptions, and in which explanation must take into account a long history of events which might have happened differently. The author, a theoretical physicist, cautions that as her field becomes disconnected from experiment and exploring notions such as string theory and multiple universes, it may be overlooking a reality which, messy though it may be, is the one we actually inhabit and, as scientists, try to understand.

 Permalink

Shlaes, Amity. Great Society. New York: Harper, 2019. ISBN 978-0-06-170642-4.
Adam Smith wrote, “There is a great deal of ruin in a nation”—even though nations and their rulers may adopt ruinous policies for a while, a great nation has deep resources and usually has time to observe the consequences, change course, and restore sound governance. But, as this book shows, the amount of ruin in a nation is not unlimited, and well-intended policies which fundamentally change the character of the citizenry and their relationship to the state can have ruinous consequences that cast a long shadow and may not be reversible. Between 1960 and 1974, under three presidents (Kennedy, Johnson, and Nixon), the United States, starting from peace and prosperity unprecedented in the human experience, reached for greatness and tragically embraced top-down, centrally-planned, deficit-spending funded, and socialist (in all but the forbidden name) policies which, by the mid 1970s, had destroyed prosperity, debased the dollar and unleashed ruinous inflation, wrecked the world's monetary system, incited urban riots and racial strife, created an unemployable underclass, destroyed neighbourhoods and built Soviet-style public housing in their place, and set into motion the destruction of domestic manufacturing and the middle class it supported. It is a tragic tale, an utterly unnecessary destruction of a once-great nation, as this magnificently written and researched but unavoidably depressing history of the era recounts.

 Permalink

Sharfman, Peter et al. The Effects of Nuclear War. Washington: Office of Technology Assessment, 1979. LCCN 79-600080.
This book-length (154 page) report by the U.S. Office of Technology Assessment was commissioned by the U.S. Senate Committee on Foreign Relations and delivered in May, 1979. It should not be confused with the similarly-titled The Effects of Nuclear Weapons, an entirely different technical treatment of the physical effects of nuclear detonations. The present work undertakes “to describe the effects of a nuclear war on the civilian populations, economies, and societies of the United States and the Soviet Union.”

Four scenarios are explored: an attack on a single city, using Detroit and Leningrad as examples; an attack on oil refineries using ten missiles; a counterforce attack, including one limited to ICBM silos; and a broad-based attack on military and economic targets using a large fraction of the existing arsenal of the attacking power. For each, the immediate, medium-, and long-term effects are assessed, including the utility of civil defense preparations and the prospects for recovery from the damage. The death toll from the best to worst case scenarios ranges from 200,000 to 160 million. Appendix C presents a fictional account of the consequences of a large nuclear exchange on a city, Charlottesville, Virginia, which was not directly hit in the attack.

A scanned PDF edition of this report has been published by Princeton University.

 Permalink

Weinberger, Sharon. The Imagineers of War. New York: Vintage Books, 2017. ISBN 978-0-8041-6972-1.
Since its founding in 1958, as a reaction to the perceived humiliation of the United States by the Soviet launch of Sputnik, the Defense Advanced Research Projects Agency (DARPA), which over the years has dropped and restored “Defense” on several occasions, being sometimes known as ARPA, has been the central research organisation for the U.S. military, working independently of the military services, whose rivalry was considered one of the reasons for the slow progress in development of missile and space technology. Originally seen as a “space agency”, it quickly saw that function assumed by NASA. DARPA, largely unconstrained by Pentagon bureaucracy and scientific peer-review, has often been “out there”, pushing speculative technologies on (and sometimes beyond) the cutting edge of the possible.

This book chronicles the world-changing successes of DARPA, including ARPANET, which developed and demonstrated the technologies upon which today's Internet is built, unmanned vehicles, missile defense, and smart weapons. DARPA has also had its share of failures, the inevitable result of trying to push technologies beyond the state of the art. On occasion, DARPA has veered into territory usually associated with mad scientists and Bond villains, such as a scheme to power a particle beam ballistic missile defense system by draining the Great Lakes, in fifteen minutes, through generators into caverns excavated by nuclear bombs. This is a fascinating look behind the curtain of what seems almost impossible: a government agency which has, for more than six decades, remained agile in pioneering speculative technologies on a modest budget.

 Permalink

Brunner, John. Stand on Zanzibar. New York: Orb Books, [1968] 2011. ISBN 978-0-7653-2678-2.
In 1968, veteran British science fiction writer John Brunner (his first novel was published in 1951) decided to show those upstart “New Wave” authors how it's done. The result, Stand on Zanzibar, won the Hugo award for best novel in 1969 and became the quintessential 1960s science fiction novel. Set in 2010, it explores The Happening World through parallel interwoven plots and a huge cast of characters, using a chaotic narrative with sections titled “Context”, “Continuity”, “Tracking with Closeups”, and, of course, “The Happening World”.

How does it hold up more than half a century later, with 2010 already receding in the rear view mirror? Astonishingly well: the novel is not at all dated and in many ways prophetic. Brunner foresaw the ability of giant technology companies to manipulate public opinion and make government increasingly irrelevant; the mainstreaming of homosexuality and recreational drugs; the influence of pop philosophers on culture; the hook-up culture; chaos in Africa; the authoritarian governance model in Asia; the collapse of printed newspapers and all media moving down-market; stagnation in technological innovation compared to the first half of the 20th century; the end of the Cold War and its replacement by economic competition; the risk of a genetically-engineered plague originating in China, which remains nominally communist but is becoming a powerhouse that rivals the West. A prominent political figure on the world stage is a West African named Obomi.

Stand on Zanzibar forever changed my own writing style and influenced the way I look at the future and this increasingly crazy world we inhabit. It is a landmark of science fiction and a masterpiece worth revisiting today.

 Permalink

Schlichter, Kurt. Indian Country. El Segundo, CA: Kurt Schlichter, 2017. ISBN 978-0-9884029-6-6.
In his 2016 novel People's Republic (November 2018), the author describes North America in the early 2030s, a decade after the present Cold Civil War turned hot and the United States split into the People's Republic of North America (PRNA) on the coasts and the upper Midwest, with the rest continuing to call itself the United States. This book, the second to feature protagonist Kelly Turnbull, is a “prequel” set shortly after the split, which was along the borders of the existing states. This left regions whose natural allegiance would have been to the other side trapped within states governed by those they detested.

This situation was acute in southern Indiana, where the population had little in common with the cities of the north who increasingly oppressed them. Turnbull, whose military experience included extensive operations in counter-insurgency, is recruited to go to the area and assist the population in mounting an insurgency, with the goal of making the region such a thorn in the side of the state government that it will be willing to cede the area to the U.S. as part of a general territorial settlement along the borders. Turnbull is told to foment a nonviolent insurgency, but then he is not really the guy you send when that's your goal. Turnbull himself has no illusions about the human cost of resisting tyranny and tells those seeking his aid what they are getting into.

This is a worthy addition to the People's Republic saga, and along with the action Schlichter has his usual fun mocking the pretentions and insanity of the dysfunctional progressive ideology of the PRNA.

 Permalink

June 2020

Munroe, Randall. How To. New York: Riverhead Books, 2019. ISBN 978-1-4736-8033-3.
The author of the Web comic xkcd.com answers questions about how to accomplish a variety of tasks we've all faced: building a lava moat around our supervillain redoubt, digging a hole, jumping really high, or skiing almost forever. It's fun, and you may learn some actual science along the way.

 Permalink

Dartnell, Lewis. The Knowledge. New York: Penguin Press, 2014. ISBN 978-0-14-312704-8.
In one of his first lectures to freshman physics students at Caltech, Richard Feynman posed the question: if everything we had learned were forgotten and you could transmit only a single sentence to the survivors, what would it be? This book expands upon that idea and attempts to distil the essentials of technological civilisation which might allow rebuilding after an apocalyptic collapse. That doesn't imply re-tracing the course humans followed to get where we are today: for one thing, many of the easily-exploited sources of raw material and energy have been depleted, and for some time survivors will probably be exploiting the ruins of the collapsed civilisation instead of re-starting its primary industries. The author explores the core technologies required to meet basic human needs such as food, shelter, transportation, communication, and storing information, and how they might best be restored. At the centre is the fundamental meta-technology upon which all others are based: the scientific method as a way to empirically discover how things work and apply that knowledge to get things done.

 Permalink

Coppley, Jackson. The Ocean Raiders. Chevy Chase, MD: Contour Press, 2020. ISBN 979-8-6443-4371-3.
Nicholas Foxe is back! After the rip-roaring adventure and world-changing revelations of The Code Hunters (April 2019), the wealthy adventurer with degrees in archaeology and cryptography arrives in Venice to visit an ambitious project by billionaire Nevin Dowd to save the city from inundation by the sea, but mostly to visit Christine Blake, whom he hadn't seen for years since an affair in Paris and who is now handling public relations for Dowd's project. What he anticipates to be a pleasant interlude becomes deadly serious when an attempt on his life is made immediately upon his arrival in Venice. Narrowly escaping, and trying to discover the motive, he learns that Dowd's team has discovered an underwater structure that appears to have been built by the same mysterious ancients who left the Tablet and the Omni, from which Nick's associates are trying to extract knowledge. As Nick investigates further, it becomes clear a ruthless adversary is seeking the secrets of the ancients and willing to kill to obtain them. But who, and what is the secret?

This is another superb adventure/thriller in which you'll be as mystified as the protagonist by the identity of the villain until almost the very end. There is a large cast of intriguing and beautifully portrayed characters, and the story takes us to interesting locations which are magnificently sketched. Action abounds, and the conclusion is thoroughly satisfying, while leaving abundant room for further adventures. You, like I, will wish you had a friend like Guido Bartoli. The novel can be read stand-alone, but you'll enjoy it more if you've first read The Code Hunters, as you'll know the back-story of the characters and events which set this adventure into motion.

The author kindly let me read a pre-publication manuscript of this novel. The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

December 2020

Cline, Ernest. Ready Player Two. New York: Ballantine Books, 2020. ISBN 978-1-5247-6133-2.
Ernest Cline's Ready Player One was an enormous success, blending an imaginative but plausible extrapolation of massively multiplayer online role-playing games and virtual worlds into a mid-21st century network called the OASIS, which has subsumed the Internet and its existing services into an immersive shared virtual world encompassing communication, entertainment, education, and commerce. Ready Player One chronicled the quest of hunters for the Easter Egg hidden by the deceased co-creator of the OASIS, which would confer sole ownership and control over the OASIS on its finder, with independent egg hunters (“gunters”) contending with a corporation bent on turning the OASIS into an advertising-cluttered Hell and perfectly willing to resort to murder and mayhem to achieve its nefarious designs.

James Halliday, who hid the Egg, was obsessed with every aspect of 1980s popular culture: film, music, television, fads, and video games, and the quest for the Egg involved acquiring and demonstrating encyclopedic knowledge equal to his own. The story is thus marinated in 1980s nostalgia, and has a strong appeal for those who lived through that era, which made Ready Player One a beloved instant classic and best-seller, from which Steven Spielberg made a not-entirely-awful 2018 feature film.

With the quest and fate of the OASIS resolved at the end of the original novel, readers were left to wonder what happens next, and they had nine years to wait before a sequel appeared to answer that question. And thus, Ready Player Two, which is set just a few years after Parzival and his band of gunters find the Egg and assume control of the OASIS, was eagerly awaited. And now we have it in our hands.

Oh dear.

One common reaction among readers who have made it through this sequel or abandoned it in disgust and dismay is that it “reads like fan fiction”. But that is to disparage the genre of fan fiction, some of which is quite good, entertaining, and obviously a labour of love. This reads like bad fan fiction, written by somebody who doesn't get out enough, obsessed with “transgression” of the original story and characters. This fails in just about every way possible. While the original novel was based upon a plausible extrapolation of present-day technology into the future, here we have a randomly assembled midden of utterly unbelievable things, seemingly invented on whim in order to move the plot along, most of which were developed in just a few years and in almost complete secrecy. Rather than try to artfully give a sense for how this novel “transgresses” even the most forgiving reader's ability to suspend disbelief, I'll just step behind the curtain and make an incomplete list of things we're expected to accept as the landscape in which the plot is set. Many of these are spoilers, so if you care about such things, don't read them until you've had a crack at the book (or it has had a crack at you, as the case may be).

Spoiler warning: Plot and/or ending details follow.  
  • James Halliday, before his death, funded the successful development of an Oasis Neural Interface (ONI), which provides a full, bidirectional, link to the neurons in a user's brain, allowing complete stimulation of all senses and motor control over an avatar in the OASIS.
  • The ONI is completely non-invasive: it works by sensors it places on the outside of the skull.
  • The development of the ONI was conducted in complete secrecy, and was not disclosed until a trigger set by Halliday before his death revealed it to the finder of the Egg.
  • The ONI is able to provide a fully realistic experience of the OASIS virtual world to the wearer. For example, when an ONI wearer eats an apple in the virtual world, they have the sensation of crunching into the fruit, smelling the juice, and tasting the flesh, even though these have not been programmed into the simulation.
  • The ONI can, in a brief time, make a complete scan of its wearer's brain and download this to the OASIS. This scan is sufficient to create an in-world realisation of the person's consciousness.
  • Halliday made such a scan of his own brain, creating (in total secrecy) the first artificial general intelligence (AGI), which he called Anorak. In attempting to modify this AI, he caused it to go insane and become malevolent, but the lovable nerd Halliday let it continue to inhabit the OASIS.
  • Every time a user logs into the OASIS, the ONI makes a nearly-instantaneous complete backup of their brain, sufficient to create a conscious intelligent copy inside the simulation. This has been done in complete secrecy, and not only OASIS users, but its new owners are unaware of this.
  • The ONI can be used a maximum of 12 hours at a time, after which (the same precise time for all wearers, regardless of their individual physiology) it causes severe and irreversible neural trauma. Disconnection from the OASIS without a proper log-out can leave the user in a permanent coma. Regulators are perfectly fine with such a technology being deployed to billions of people, relying on the vendor's assurance that safeguards will prevent such calamities.
  • Over a period of three years, largely in secrecy until it leaked out, Parzival and three of his partners have had built, at a cost of US$300 billion, an interstellar spacecraft with a fusion drive in which they plan to make a 47-year voyage to Proxima Centauri to “search for a habitable Earthlike planet where we could make a new home for ourselves, our children, and the frozen human embryos we were going to bring along.” They plan to set out on this voyage without first sending a probe to determine whether there is a habitable planet at Proxima Centauri or having a Plan B in case there isn't one when they get there. Oh, and this starship is supposed to get its power from its “solar panel array and batteries” for the four decades it will be nowhere near a star.
  • Halliday, after creating the flawed artificial intelligence Anorak, leaves open the possibility that it can seize control of the OASIS from his designated heir.
  • Because ONI users logged into the OASIS are effectively unconscious and completely vulnerable to attack in real life (which they call “the earl”), the well-heeled opt for an “immersion vault” to protect themselves. Gregarious Simulation Systems' (GSS) top of the line was MoTIV, the “mobile tactical immersion vault”, [which] “looked more like a heavily armed robotic spider than a coffin. It was an armored escape vehicle and all-terrain weapons platform, featuring eight retractable armored legs for navigating all forms of terrain, and a pair of machine guns and grenade launchers mounted on each side of its armored chassis—not to mention a bulletproof acrylic cockpit canopy for its occupant.” The authorities are apparently happy with such gear being sold to anybody who can pay for it.
  • The rogue AI Anorak is able to bypass all of GSS's engineering, quality control, and deployment safeguards to push a software update, “infirmware”, on more than half a billion ONI users, which traps them in the simulation, unable to log out, and destined for catastrophic brain damage after the twelve hour limit is reached. This includes four of the five owners of GSS. And “Anorak has completely rewritten the firmware in some sort of programming language they've never seen before”—and it worked the very first time it was mass deployed.
  • Despite having the lives of half a billion hostages, including themselves, in their hands and with the twelve hour maximum immersion time in the ONI ticking away, Parzival and partners find plenty of time to wisecrack, taunt one another about their knowledge or lack thereof of obscure pop culture, and costume changes.

    Art3mis snapped her fingers and her avatar's attire changed once again. Now she wore Annie Potts's black latex outfit from her first scene in Pretty in Pink, along with her punk-rock porcupine hairdo, dangling earrings, and dinner-fork bracelet.

    “Applause, applause, applause,” she said, doing a slow spin so that we could admire the attention to detail she'd put into her Iona cosplay.

  • On top of all of these inanities, the main characters, who were just likeable, nerdy “mixed-up kids” a few years ago, have now become shrill, tedious, “woke” scolds.

    “Look at this lily-white hellscape,” Aech said, shaking her head as she stared out her own window. “Is there a single person of color in this entire town?”

    Parzival: Her school records included a scan of her birth certificate, which revealed another surprise. She'd been DMAB—designated male at birth. … Around the same time, she'd changed her avatar's sex classification to øgender, a brand-new option GSS had added due to popular demand. People who identified as øgender were individuals who chose to experience sex exclusively through their ONI headsets, and who also didn't limit themselves to experiencing it as a specific gender or sexual orientation.

  • The battles, the battles….
    The rest of the Original 7ven joined the fight too. Jimmy Jam and Monte Moir each wielded a modified red Roland AXIS-1 keytar that fired sonic funk blast waves out of its neck each time a chord was played on it. Jesse Johnson fired sonic thunderbolts from the pickups of his Fender Voodoo Stratocaster, while Terry Lewis did the same with his bass, and Jellybean Johnson stood behind them, firing red lightning skyward with his drumsticks, wielding them like two magic wands. Each of the band members could also fire a deadly blast of sonic energy directly from their own mouths, just by shouting the word “Yeow!” over and over again.
    “Yeow?” … Yawn.
Spoilers end here.  

I'm not going to even discuss the great quest, the big reveal, or the deus in machina switcheroo at the very end. After wrecking an interesting imagined world, destroying characters the reader had come to know and like in their first adventure, and boring the audience with over-the-top descriptions, obscure pop culture trivia, and ridiculous made-up reasons to plug all of the myriad holes in the plot, by the time I got to the end I was well past caring about any of them.

 Permalink

  2021  

January 2021

Benford, Gregory and Larry Niven. The Bowl of Heaven. New York: Tor Books, 2012. ISBN 978-1-250-29709-9.
Readers should be warned that this is the first half of a long novel split across two books. At the end of this volume, the story is incomplete and will be resumed in the sequel, Shipstar.

 Permalink

Martin, Clay. Concrete Jungle. Unspecified: Self-published, 2020. ISBN 979-8-6523-8596-5.
In this book, a U.S. Army Green Beret (Special Forces) veteran shares wisdom for surviving if urban warfare breaks out in your community. This is a survival guide: the focus is on protecting yourself, your family, and your team against the chaos of urban conflict perpetrated by others, not on becoming a combatant yourself. The advice is much more about acquiring skills and situational awareness than on collecting “gear”, with tips on when things degrade to the point you need to pack up and bug out, and how to do so.

 Permalink

Carr, Jack. Savage Son. New York: Pocket Books, 2020. ISBN 978-1-9821-2371-0.

 Permalink

Cross, L. D. Code Name Habbakuk. Toronto: Heritage House, 2012. ISBN 978-1-927051-47-4.
World War II saw the exploration, development, and in some cases deployment, of ideas which, without the pressure of war, would be considered downright wacky. Among the most outlandish was the concept of building an enormous aircraft carrier (or floating airbase) out of reinforced ice. This book recounts the story of the top secret British/Canadian/U.S. project to develop and test this technology. (The title is not misspelled: the World War II project was spelled “Habbakuk”, as opposed to the Old Testament prophet, whose name was “Habakkuk”. The reason for the difference in spelling has been lost in the mists of time.)

 Permalink

Goetz, Peter. A Technical History of America's Nuclear Weapons. Unspecified: Independently published, 2020. ISBN Vol. 1 979-8-6646-8488-9, Vol. 2 978-1-7181-2136-2.

This is an encyclopedic history and technical description of United States nuclear weapons, delivery systems, manufacturing, storage, maintenance, command and control, security, strategic and tactical doctrine, and interaction with domestic politics and international arms control agreements, covering the period from the inception of these weapons in World War II through 2020. This encompasses a huge amount of subject matter, and covering it in the depth the author undertakes is a large project, with the two volume print edition totalling 1244 20×25 centimetre pages. The level of detail and scope is breathtaking, especially considering that not so long ago much of the information documented here was among the most carefully-guarded secrets of the U.S. military. You will learn the minutiæ of neutron initiators, which fission primaries were used in what thermonuclear weapons, how the goal of “one-point safety” was achieved, the introduction of permissive action links to protect against unauthorised use of weapons and which weapons used what kind of security device, and much, much more.

If the production quality of this work matched its content, it would be an invaluable reference for anybody interested in these weapons: military historians, students of large-scale government research and development projects, researchers of the Cold War and the nuclear balance of power, and authors setting fiction in that era who wish to get the details right. Sadly, when it comes to attention to detail, this work, as published in this edition, is slipshod and shoddy. I was reading it for information, not with the fine-grained attention I devote when proofreading my own work or that of others, but in the process I marked 196 errors of fact, spelling, formatting, and grammar—about one every six printed pages. Some of these are merely sloppy lapses (including, of course, misuse of the humble apostrophe) which grate upon the reader but aren't likely to confuse; others are glaring errors.

Here are some of the obvious errors. Names misspelled or misstated include Jay Forrester, John von Neumann, Air Force Secretary Hans Mark, and Ronald Reagan. In chapter 11, an entire paragraph is duplicated twice in a row. In chapter 9, it is stated that the Little Feller nuclear test in 1962 was witnessed by president John F. Kennedy; in fact, it was his brother, Attorney General Robert F. Kennedy, who observed the test. There is a long duplicated passage at the start of chapter 20, but this may be a formatting error in the Kindle edition. In chapter 29, it is stated that nitrogen tetroxide was the fuel of the Titan II missile—in fact, it was the oxidiser. In chapter 41, the Northrop B-2 stealth bomber is incorrectly attributed to Lockheed in four places. In chapter 42, the Trident submarine-launched missile is referred to as “Titan” on two occasions.

The problem with such a plethora of errors is that when you read a statement about something with which you aren't acquainted and have no way to check, you can't know whether it's correct or nonsense. Before using anything from this book as a source in your own work, I'd advise keeping in mind the Russian proverb, Доверяй, но проверяй—“Trust, but verify”. In this case, I'd go light on the trust and double up on the verification.

In the citation above, I link to the Kindle edition, which is free for Kindle Unlimited subscribers. The print edition is published in two paperbacks, Volume 1 and Volume 2.

 Permalink

Wood, Fenton. The Earth a Machine to Speak. Seattle: Amazon Digital Services, 2020. ASIN B08D6J4PJ8.
This is the fifth and final short novel/novella (134 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and the second, Five Million Watts (June 2019), “enchanting”. The third, The Tower of the Bear (October 2019), takes Philo from the depths of the ocean to the Great Tree in the exotic West and the fourth, The City of Illusions (January 2020) continues the quest, including a visit to a surreal amusement park in the miasma cloaking the Valley of the Angels.

In this concluding installment, it's time to pull all of the various threads from the earlier episodes of Philo's hero quest together, and the author manages this deftly, in a thoroughly satisfying, delightful, and heart-warming way. This is a magnificent adventure which young adults will enjoy as much as I did the Tom Swift novels in my youth (and once again when bringing them to the Web), and not-so-young adults will enjoy just as much or more, as there are many gems and references they'll discover which younger readers may not have yet encountered.

This book is currently available only in a Kindle edition. An omnibus collection including all five novellas, Yankee Republic Omnibus: A Mythic Radio Adventure, is available as a Kindle edition from Amazon, or as a 650 page trade paperback directly from the author.

 Permalink

August 2021

Corcoran, Travis J. I. Escape the City, Vol. 1. New Hampshire: Morlock Publishing, 2021. ISBN 979-874270303-7.
In early 2014, the author and his wife left the suburbs of Boston and moved to a 56 acre homestead in rural New Hampshire. Before arriving, he had done extensive reading and research, but beyond the chores of a suburban homeowner, had little or no hands-on experience with the myriad skills required to make a go of it in the country: raising and preserving garden vegetables; maintaining pastures; raising chickens, sheep, and hogs, including butchering and processing their meat; erecting utility buildings; planting and maintaining a fruit orchard; tapping maple trees and producing syrup from their sap; managing a wood lot, felling and processing trees, storing and aging firewood and heating with it; and maintaining a tractor, implements, chainsaws, and the many tools which are indispensable to farm life. The wisdom about how tradesmen and contractors work in the country in the section “Life in The Country: Cultural Fit: Scheduling” would have been worth more than the modest price of the book had I learned it before spending a decade and a half figuring it out for myself after my own escape from the city in 1992.

This massive work (653 large pages in print) and its companion Volume 2 are an encyclopedic compendium of lessons learned and an absolutely essential resource for anybody interested in self-sufficient living, whether as a “suburbanite in the country”, “gardener with chickens”, “market gardener”, “homesteader”, or “commercial farmer”, all five of which are discussed in the book.

The Kindle edition is free for Kindle Unlimited subscribers. The numerous illustrations are in black and white in print editions, but colour in the Kindle version.

 Permalink

October 2021

Kroese, Robert. Titan (Mammon vol. 1). Grand Rapids MI: St. Culain Press, 2021. ASIN B09DDHZ4R7.
With each successive work, science fiction author Robert Kroese is distinguishing himself not just as one of the most outstanding writers in the genre today, but also as one of the most versatile. He jumps handily from laugh-out-loud satire worthy of Keith Laumer in novels like Starship Grifters (February 2018), through cerebral quantum weirdness in Schrödinger's Gat (May 2018), to the meticulously researched alternative history time travel of the Iron Dragon epic (August 2018 et seq.). Now, in the Mammon trilogy, of which this is the first volume, he turns to the techno-economic-political thriller and, once again, triumphs, with a work worthy of Paul Erdman and Tom Clancy.

By 2036, profligate spending, exponentially growing debt, and indiscriminate money printing trying to paper over the abyss have brought the United States to the brink of a cataclysmic financial reckoning. Both parties agree only on increasingly absurd stratagems to keep it from crashing down, and when entrepreneur Kade Kapur offers salvation in the form of a public-private partnership to exploit the wealth of the solar system by mining near-Earth asteroids (as the only way to keep grabby government from seizing his wealth), desperate politicians are quick to jump in the lifeboat.

But they are politicians, and in a continental scale empire in decline, populated by hundreds of millions of grifters and layabouts, where the “rule of law” means the rule of lawyers in dresses (judges) appointed by politicians, nothing can be taken for granted, as Kade discovers when he chooses to base his venture in the United States.

This is a compelling page turner and, once again, Kroese demonstrates how thorough is the research behind these yarns. He not only gets the economics of hyperinflation absolutely correct, but, in the best tradition of science fiction, “shows, not tells” the psychology which grips those experiencing it and how rapidly the thin veneer of civilisation can erode when money dies.

This novel ends at a point that will leave you eager to discover what happens next. Fortunately, we won't have all that long to wait: book two in the series, Messiah, will be published on February 28, 2022, and you can pre-order your copy today.

 Permalink

  2022  

February 2022

Kroese, Robert. Messiah (Mammon vol. 2). Grand Rapids MI: St. Culain Press, 2022. ASIN B09GZM9YRC.
The asteroid diversion and capture scheme chronicled in volume 1 of the Mammon trilogy, Titan (October 2021), was seen as the last chance to rescue the U.S. and world economy from decades of profligate spending, borrowing, money printing, and looting of productive enterprise by a venal and corrupt political class. With the scheme aborted due to industrial espionage and sabotage, an already dire economic situation implodes into exponentially accelerating inflation, across-the-board economic collapse, widespread shortages, breakdown of civil order, and cracks appearing in political structures, with some U.S. states enforcing border controls and moving toward “soft secession”.

Billionaire Davis Christopher, who further increased his fortune by betting against the initial attempt to capture Mammon, has set up shop at the former OTRAG launch site in the Libyan desert, distant from the intrigue and kleptocratic schemes of the illegitimate Washington regime. In a chaotic and multipolar world, actors at all levels and around the globe vie for what they can get: the U.S. Treasury, now out to plunder cryptocurrency as its next source of funds; the Los Angeles Police Department, establishing itself as a regional rogue state; the Chinese Communist Party, which turns its envious eyes toward the wealth created by offshore paradise Utanau; the emerging Islamic State in Egypt and the Maghreb, consolidating its power after the collapse of regimes in North Africa, and providing some stability in the face of doomsday Salafist cult Al-Qiyamah, which sees the return of Mammon bringing Allah's well-deserved judgement on the world.

With these and many other threads running in parallel and interacting with one another in complicated and non-obvious ways, the story, told mostly through the eyes of characters we met in the first volume, and now confronted with a global collapse in progress, is gripping and illustrates the theme that runs through much of the author's work: that, when faced with existential global threats, the greatest challenges humanity must confront are often those created by the mischief of other humans, not exogenous peril. So it is here, with the asteroid inexorably approaching the Earth for its second pass, probability of impact jumping all around the scale as observations and calculations are refined, and multiple actors seeking their own desired outcomes from the event and to thwart the ambitions of rivals.

This is a masterful continuation of the story that began in the first volume, reaching a climax which is not a cliffhanger, but will leave you eagerly anticipating the conclusion in volume 3, Nemesis, scheduled for publication on July 28, 2022.

The Kindle edition of this book, like volume 1, is free for Kindle Unlimited subscribers.

 Permalink

Recommend a Book

Is there a great book you think I've overlooked? Just enter the ISBN (International Standard Book Number—it's usually on the back cover or the copyright page) in the box below and click “Recommend”. If you don't have the ISBN at hand, leave it blank and enter the title and author's name and I'll look it up. If you'd like to enter both the ISBN and the title and author, go right ahead. Thanks in advance for your suggestions!

Title:
Author:
ISBN:
 
Webmaster corner: Want to add this “recommend book” form to your own Web page? You'll need the ability to install a CGI (Common Gateway Interface) script in the Perl language. You can download the JavaScript validation program, a model Web page which invokes it, and the Perl CGI script which processes requests as the Zipped archive recommend.zip. This software is in the public domain and may be copied and used in any manner, but it is utterly unsupported—you are entirely on your own.
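The JavaScript validation program in the archive checks the ISBN field before the form is submitted. As an illustration of the idea (this is a sketch, not the actual code from recommend.zip, and the function name is my own), an ISBN-13 check-digit test takes only a few lines: digits are weighted alternately 1 and 3, and a valid number's weighted sum is divisible by 10.

```javascript
// Sketch of ISBN-13 validation, as a form validator might perform it.
// Accepts the number with or without hyphens/spaces, e.g. "978-0-7653-2678-2".
function isValidIsbn13(isbn) {
  const digits = isbn.replace(/[- ]/g, "");   // strip hyphens and spaces
  if (!/^\d{13}$/.test(digits)) return false; // must be exactly 13 digits
  let sum = 0;
  for (let i = 0; i < 13; i++) {
    // positions 0,2,4,... weigh 1; positions 1,3,5,... weigh 3
    sum += (i % 2 === 0 ? 1 : 3) * Number(digits[i]);
  }
  return sum % 10 === 0;                      // valid when sum ≡ 0 (mod 10)
}
```

A validator like this catches typos and transpositions before the CGI script ever sees the request; titles and authors entered without an ISBN would simply bypass the check.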