Friday, October 21, 2016
Reading List: Come and Take It
- Wilson, Cody. Come and Take It. New York: Gallery Books, 2016. ISBN 978-1-4767-7826-6.
Cody Wilson is the founder of
Defense Distributed, best known for the Liberator
single-shot pistol, which can be produced largely by additive
manufacturing (“3D printing”) from polymer material.
The culmination of the Wiki Weapon project, the Liberator, whose
plans were freely released on the Internet, demonstrated that
antiquated organs of the state which thought they could control the
dissemination of simple objects and abridge the inborn right of
human beings to defend themselves have been, like so many other
institutions dating from the era of continental-scale railroad
empires, transcended by the free flow of information and the
spontaneous collaboration among like-minded individuals made
possible by the Internet. The Liberator is a highly visible milestone
in the fusion of the world of bits (information) with the world of atoms:
things. Earlier computer technologies put the tools to
produce books, artwork, photography, music, and motion pictures
into the hands of creative individuals around the world, completely
bypassing the sclerotic gatekeepers in those media whose
offerings had become all too safe and predictable, and who never dared
to challenge the economic and political structures in which they
were embedded.
Now this is beginning to happen with physical artifacts. Additive
manufacturing—building up a structure by adding material
based upon a digital model of the desired object—is still in
its infancy. The materials which can be used by readily-affordable
3D printers are mostly various kinds of plastics, which are limited
in structural strength and thermal and electrical properties, and
resolution has not yet reached that achievable by other means of precision
manufacturing. Advanced additive manufacturing technologies,
such as various forms of
sintering, allow use of a wider variety of materials including
high-performance metal alloys, but while finding applications in the
aerospace industry, are currently priced out of the reach of individuals.
But if there's one thing we've learned from the microelectronics and
personal computer revolutions since the 1970s, it's that what's
scoffed at as a toy today is often at the centre of tomorrow's
industrial revolution and devolution of the means of production (as
somebody said, once upon a time) into the hands of individuals who
will use it in ways incumbent industries never imagined. The first
laser printer I used in 1973 was about the size of a sport-utility
vehicle and cost more than a million dollars. Within ten years, a
laser printer was something I could lift and carry up a flight of
stairs, and buy for less than two thousand dollars. A few years
later, laser and advanced inkjet printers were so good and so
inexpensive people complained more about the cost of toner and ink
than the printers themselves.
I believe this is where we are today with mass-market additive
manufacturing. We're still in an era comparable to the personal
computer world prior to the introduction of the IBM PC in 1981:
early adopters tend to be dedicated hobbyists such as members of
the “maker subculture”, the available hardware is expensive and
limited in its capabilities, and evolution is so fast that it's
hard to keep up with everything that's happening. But just as with
personal computers, it is in this formative stage that the foundations
are being laid for the mass adoption of the technology in the future.
This era of what I've come to call “personal
manufacturing” will do to artifacts what digital technology
and the Internet did to books, music, and motion pictures. What will be
of value is not the artifact (book, CD, or DVD), but rather the information
it embodies. So it will be with personal manufacturing. Anybody
with the design file for an object and access to a printer that
works with material suitable for its fabrication will be able to
make as many of that object as they wish, whenever they want, for
nothing more than the cost of the raw material and the energy
consumed by the printer. Before this century is out, I believe
these personal manufacturing appliances will be able to make
anything, ushering in the age of atomically precise
manufacturing and the era of
Radical Abundance (August 2013),
the most fundamental
change in the economic organisation of society since the industrial revolution.
But that is then, and this book is about now, or the recent past. The
author, who describes himself as an anarchist (although I find his
views rather more heterodox than other anarchists of my acquaintance),
sees technologies such as additive manufacturing and Bitcoin as ways
not so much to defeat the means of control of the state and the
industries who do its bidding, but to render them irrelevant and
obsolete. Let them continue to legislate in their fancy marble
buildings, draw their plans for passive consumers in their boardrooms,
and manufacture funny money they don't even bother to print any more
in their temples of finance. Lovers of liberty and those who
cherish the creativity that makes us human will be elsewhere, making
our own future with tools we personally understand and control.
Including guns—if you believe the most fundamental human right
is the right to one's own life, then any infringement upon one's
ability to defend that life and the liberty that makes it worth living
is an attempt by the state to reduce the citizen to the station of a
serf: dependent upon the state for his or her very life. The Liberator
is hardly a practical weapon: it is a single-shot pistol firing
the .380 ACP
round and, because of the fragile polymer material from which it is
manufactured, often literally a single-shot weapon: failing
after one or at most a few shots. Manufacturing it requires an
additive manufacturing machine substantially more capable and expensive
than those generally used by hobbyists, and post-printing steps described
in Part XIV which are rarely mentioned in media coverage. Not all
components are 3D printed: part of the receiver is made of steel
cut with a laser cutter (the steel block is not functional; it is present only to comply with the legal requirement that the weapon set off a metal detector). But it is as a proof of
concept that the Liberator has fulfilled its mission. It has
demonstrated that even with today's primitive technology, access to
firearms can no longer be restricted by the state, and that crude
attempts to control access to design and manufacturing information,
as documented in the book, will be no more effective than any other
attempt to block the flow of information across the Internet.
This book is the author's personal story of the creation of the
first 3D printed pistol, and of his journey from law student to
pioneer in using this new technology in the interest of individual
liberty and, along the way, becoming somewhat of a celebrity, dubbed
by Wired magazine “one of the most dangerous
men in the world”. But the book is much more than that. Wilson
thinks like a philosopher and writes like a poet. He describes a
new material for 3D printing:
In this new material I saw another confirmation. Its advent was like the signature of some elemental arcanum, complicit with forces not at all interested in human affairs. Carbomorph. Born from incomplete reactions and destructive distillation. From tar and pitch and heavy oils, the black ichor that pulsed thermonous through the arteries of the very earth.

On the “Makers”:

This insistence on the lightness and whimsy of farce. The romantic fetish and nostalgia, to see your work as instantly lived memorabilia. The event was modeled on Renaissance performance. This was a crowd of actors playing historical figures. A living charade meant to dislocate and obscure their moment with adolescent novelty. The neckbeard demiurge sees himself keeling in the throes of assembly. In walks the problem of the political and he hisses like the mathematician at Syracuse: “Just don't molest my baubles!”

…But nobody here truly meant to give you a revolution. “Making” was just another way of selling you your own socialization. Yes, the props were period and we had kept the whole discourse of traditional production, but this was parody to better hide the mechanism. We were “making together,” and “making for good” according to a ritual under the signs of labor. And now I knew this was all apolitical on purpose. The only goal was that you become normalized. The Makers had on their hands a Last Man's revolution whose effeminate mascots could lead only state-sanctioned pep rallies for feel-good disruption. The old factory was still there, just elevated to the image of society itself. You could buy Production's acrylic coffins, but in these new machines was the germ of the old productivism. Dead labor, that vampire, would still glamour the living.

This book recounts the history of the 3D printed pistol, the people who made it happen, and why they did what they did. It recounts recent history during the deployment of a potentially revolutionary technology, as seen from the inside, and the way things actually happen: where nobody really completely understands what is going on and everybody is making things up as they go along. But if the promise of this technology allows the forces of liberty and creativity to prevail over the grey homogenisation of the state and the powers that serve it, this is a book which will be read many years from now by those who wish to understand how, where, and when it all began.
Wednesday, October 19, 2016
Reading List: Time in Powers of Ten
- 't Hooft, Gerard and Stefan Vandoren. Time in Powers of Ten. Singapore: World Scientific, 2014. ISBN 978-981-4489-81-2.
Phenomena in the universe take place over scales ranging from the unimaginably small to the breathtakingly large. The classic film, Powers of Ten, produced by Charles and Ray Eames, and the companion book explore the universe at length scales in powers of ten: from subatomic particles to the most distant visible galaxies. If we take the smallest meaningful distance to be the Planck length, around 10^−35 metres, and the diameter of the observable universe as around 10^27 metres, then the ratio of the largest to smallest distances which make sense to speak of is around 10^62. Another way to express this is to answer the question, “How big is the universe in Planck lengths?” as “Mega, mega, yotta, yotta big!”
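Those scale figures are easy to sanity-check; a minimal sketch in Python, using the rounded values quoted above:

```python
import math

# Rounded length scales from the text (metres)
planck_length = 1e-35        # smallest meaningful distance
observable_universe = 1e27   # diameter of the observable universe

# Ratio of largest to smallest scale, in orders of magnitude
orders = math.log10(observable_universe / planck_length)
print(orders)  # ~62
```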
But length isn't the only way to express the scale of the universe. In the present book, the authors examine the time intervals at which phenomena occur or recur. Starting with one second, they take steps of powers of ten (10, 100, 1000, 10000, etc.), arriving eventually at the distant future of the universe, after all the stars have burned out and even black holes begin to disappear. Then, in the second part of the volume, they begin at the Planck time, 5×10^−44 seconds, the shortest unit of time about which we can speak with our present understanding of physics, and again progress by powers of ten until arriving back at an interval of one second.
Intervals of time can denote a variety of different phenomena, which are colour coded in the text. A period of time can mean an epoch in the history of the universe, measured from an event such as the Big Bang or the present; a distance defined by how far light travels in that interval; a recurring event, such as the orbital period of a planet or the frequency of light or sound; or the half-life of a randomly occurring event such as the decay of a subatomic particle or atomic nucleus.
Because the universe is still in its youth, the range of time intervals discussed here is much larger than those when considering length scales. From the Planck time of 5×10^−44 seconds to the lifetime of the kind of black hole produced by a supernova explosion, 10^74 seconds, the range of intervals discussed spans 118 orders of magnitude. If we include the evaporation through Hawking radiation of the massive black holes at the centres of galaxies, the range is expanded to 143 orders of magnitude. Obviously, discussions of the distant future of the universe are highly speculative, since in those vast depths of time physical processes which we have never observed due to their extreme rarity may dominate the evolution of the universe.
Among the fascinating facts you'll discover is that many straightforward physical processes take place over an enormous range of time intervals. Consider radioactive decay. It is possible, using a particle accelerator, to assemble a nucleus of hydrogen-7, an isotope of hydrogen with a single proton and six neutrons. But if you make one, don't grow too fond of it, because it will decay into tritium and four neutrons with a half-life of 23×10^−24 seconds, an interval usually associated with events involving unstable subatomic particles. At the other extreme, a nucleus of tellurium-128 decays into xenon with a half-life of 7×10^31 seconds (2.2×10^24 years), more than 160 trillion times the present age of the universe.
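The tellurium-128 figure converts as stated; a quick check of the arithmetic from the numbers quoted above:

```python
# Tellurium-128 half-life, converted from seconds to years
half_life_s = 7e31
seconds_per_year = 365.25 * 24 * 3600  # Julian year

half_life_years = half_life_s / seconds_per_year
print(f"{half_life_years:.1e} years")  # ~2.2e24 years

# Compare with the present age of the universe (~1.38e10 years)
ratio = half_life_years / 1.38e10
print(f"{ratio:.1e} times the universe's age")  # ~1.6e14, i.e. over 160 trillion
```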
While the very short and very long are the domain of physics, intermediate time scales are rich with events in geology, biology, and human history. These are explored, along with how we have come to know their chronology. You can open the book to almost any page and come across a fascinating story. Have you ever heard of the ocean quahog (Arctica islandica)? They're clams, and the oldest known has been determined to be 507 years old, born around 1499 and dredged up off the coast of Iceland in 2006. People eat them.
Or did you know that if you perform carbon-14 dating on grass growing next to a highway, the lab will report that it's tens of thousands of years old? Why? Because the grass has incorporated carbon from the CO2 produced by burning fossil fuels which are millions of years old and contain little or no carbon-14.
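The effect is simple dilution of the carbon-14 fraction, and the inflated age follows from the standard decay law. A sketch, in which the 5730-year half-life is the accepted figure but the fossil-carbon fraction is a made-up illustrative value:

```python
import math

HALF_LIFE_C14 = 5730.0  # years, half-life of carbon-14

def apparent_age(fraction_modern):
    """Radiocarbon age implied by a measured C-14 level,
    expressed as a fraction of the modern atmospheric standard."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_modern)

# Hypothetical sample whose carbon is 90% fossil-fuel CO2
# (essentially free of C-14) and only 10% modern carbon
print(round(apparent_age(0.1)))  # ~19000 years
```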
This is a fascinating read, and one which uses the framework of time intervals to acquaint you with a wide variety of sciences, each inviting further exploration. The writing is accessible to the general reader, young adult and older. The individual entries are short and stand alone—if you don't understand something or aren't interested in a topic, just skip to the next. There are abundant colour illustrations and diagrams.
Author Gerard 't Hooft won the 1999 Nobel Prize in Physics for his work on the quantum mechanics of the electroweak interaction. The book was originally published in Dutch in the Netherlands in 2011. The English translation was done by 't Hooft's daughter, Saskia Eisberg-'t Hooft. The translation is fine, but there are a few turns of phrase which will seem odd to an English mother tongue reader. For example, matter in the early universe is said to “clot” under the influence of gravity; the common English term for this is “clump”. This is a translation, not a re-write: there are a number of references to people, places, and historical events which will be familiar to Dutch readers but less so to those in the Anglosphere. In the Kindle edition notes, cross-references, the table of contents, and the index are all properly linked, and the illustrations are reproduced well.
Thursday, October 13, 2016
Reading List: The Perfect Machine
- Florence, Ronald. The Perfect Machine. New York: Harper Perennial, 1994. ISBN 978-0-06-092670-0.
George Ellery Hale was the son of a wealthy architect and engineer who made his fortune installing passenger elevators in the skyscrapers which began to define the skyline of Chicago as it rebuilt from the great fire of 1871. From early in his life, the young Hale was fascinated by astronomy, building his own telescope at age 14. Later he would study astronomy at MIT, the Harvard College Observatory, and in Berlin. Solar astronomy was his first interest, and he invented new instruments for observing the Sun and discovered the magnetic fields associated with sunspots. His work led him into an academic career, culminating in his appointment as a full professor at the University of Chicago in 1897. He was co-founder and first editor of the Astrophysical Journal, published continuously since 1895. Hale's greatest goal was to move astronomy from its largely dry concentration on cataloguing stars and measuring planetary positions into the new science of astrophysics: using observational techniques such as spectroscopy to study the composition of stars and nebulæ and, by comparing them, begin to deduce their origin, evolution, and the mechanisms that made them shine. His own work on solar astronomy pointed the way to this, but the Sun was just one star. Imagine how much more could be learned when the Sun was compared in detail to the myriad stars visible through a telescope. But observing the spectra of stars was a light-hungry process, especially with the insensitive photographic material available around the turn of the 20th century. Obtaining the spectrum of all but a few of the brightest stars would require exposure times so long they would exceed the endurance of observers to operate the small telescopes which then predominated, over multiple nights. Thus, Hale became interested in larger telescopes, and the quest for ever more light from the distant universe would occupy him for the rest of his life.
First, he promoted the construction of a 40 inch (102 cm) refractor telescope, accessible from Chicago at a dark sky site in Wisconsin. At the epoch, universities, government, and private foundations did not fund such instruments. Hale persuaded Chicago streetcar baron Charles T. Yerkes to pick up the tab, and Yerkes Observatory was born. Its 40 inch refractor remains the largest telescope of that kind used for astronomy to this day. There are two principal types of astronomical telescopes. A refracting telescope has a convex lens at one end of a tube, which focuses incoming light to an eyepiece or photographic plate at the other end. A reflecting telescope has a concave mirror at the bottom of the tube, the top end of which is open. Light enters the tube and falls upon the mirror, which reflects and focuses it upward, where it can be picked off by another mirror, directly focused on a sensor, or bounced back down through a hole in the main mirror. There are a multitude of variations in the design of both types of telescopes, but the fundamental principles of refraction and reflection remain the same. Refractors have the advantages of simplicity, a sealed tube assembly which keeps out dust and moisture and excludes air currents which might distort the image but, because light passes through the lens, must use clear glass free of bubbles, strain lines, or other irregularities that might interfere with forming a perfect focus. Further, refractors tend to focus different colours of light at different distances. This makes them less suitable for use in spectroscopy. Colour performance can be improved by making lenses of two or more different kinds of glass (an achromatic or apochromatic design), but this further increases the complexity, difficulty, and cost of manufacturing the lens. 
At the time of the construction of the Yerkes refractor, it was believed the limit had been reached for the refractor design and, indeed, no larger astronomical refractor has been built since. In a reflector, the mirror (usually made of glass or some glass-like substance) serves only to support an extremely thin (on the order of a thousand atoms) layer of reflective material (originally silver, but now usually aluminium). The light never passes through the glass at all, so as long as it is sufficiently uniform to take on and hold the desired shape, and free of imperfections (such as cracks or bubbles) that would make the reflecting surface rough, the optical qualities of the glass don't matter at all. Best of all, a mirror reflects all colours of light in precisely the same way, so it is ideal for spectrometry (and, later, colour photography). With the Yerkes refractor in operation, it was natural that Hale would turn to a reflector in his quest for ever more light. He persuaded his father to put up the money to order a 60 inch (1.5 metre) glass disc from France, and, when it arrived months later, set one of his co-workers at Yerkes, George W. Ritchey, to begin grinding the disc into a mirror. All of this was on speculation: there were no funds to build a telescope, an observatory to house it, nor to acquire a site for the observatory. The persistent and persuasive Hale approached the recently-founded Carnegie Institution, and eventually secured grants to build the telescope and observatory on Mount Wilson in California, along with an optical laboratory in nearby Pasadena. Components for the telescope had to be carried up the crude trail to the top of the mountain on the backs of mules, donkeys, or men until a new road allowing the use of tractors was built. In 1908 the sixty inch telescope began operation, and its optics and mechanics performed superbly. Astronomers could see much deeper into the heavens. But still, Hale was not satisfied. 
Even before the sixty inch entered service, he approached John D. Hooker, a Los Angeles hardware merchant, for seed money to fund the casting of a mirror blank for an 84 inch telescope, requesting US$ 25,000 (around US$ 600,000 today). Discussing the project, Hooker and Hale agreed not to settle for 84, but rather to go for 100 inches (2.5 metres). Hooker pledged US$ 45,000 to the project, with Hale promising the telescope would be the largest in the world and bear Hooker's name. Once again, an order for the disc was placed with the Saint-Gobain glassworks in France, the only one with experience in such large glass castings. Problems began almost immediately. Saint-Gobain did not have the capacity to melt the quantity of glass required (four and a half tons) all at once: they would have to fill the mould in three successive pours. A massive piece of cast glass (101 inches in diameter and 13 inches thick) cannot simply be allowed to cool naturally after being poured. If that were to occur, shrinkage of the outer parts of the disc as it cooled while the inside still remained hot would almost certainly cause the disc to fracture and, even if it didn't, would create strains within the disc that would render it incapable of holding the precise figure (curvature) required by the mirror. Instead, the disc must be placed in an annealing oven, where the temperature is reduced slowly over a period of time, allowing the internal stresses to be released. So massive was the 100 inch disc that it took a full year to anneal. When the disc finally arrived in Pasadena, Hale and Ritchey were dismayed by what they saw. There were sheets of bubbles between the three layers of poured glass, indicating they had not fused. There was evidence the process of annealing had caused the internal structure of the glass to begin to break down. It seemed unlikely a suitable mirror could be made from the disc.
After extended negotiations, Saint-Gobain decided to try again, casting a replacement disc at no additional cost. Months later, they reported the second disc had broken during annealing, and it was likely no better disc could be produced. Hale decided to proceed with the original disc. Patiently, he made the case to the Carnegie Institution to fund the telescope and observatory on Mount Wilson. It would not be until November 1917, eleven years after the order was placed for the first disc, that the mirror was completed, installed in the massive new telescope, and ready for astronomers to gaze through the eyepiece for the first time. The telescope was aimed at brilliant Jupiter. Observers were horrified. Rather than a sharp image, Jupiter was smeared out over multiple overlapping images, as if multiple mirrors had been poorly aimed into the eyepiece. Although the mirror had tested to specification in the optical shop, when placed in the telescope and aimed at the sky, it appeared to be useless for astronomical work. Recalling that the temperature had fallen rapidly from day to night, the observers adjourned until three in the morning in the hope that as the mirror continued to cool down to the nighttime temperature, it would perform better. Indeed, in the early morning hours, the images were superb. The mirror, made of ordinary plate glass, was subject to thermal expansion as its temperature changed. It was later determined that the massive disc took twenty-four hours to cool ten degrees Celsius. Rapid changes in temperature on the mountain could cause the mirror to misbehave until its temperature stabilised. Observers would have to cope with its temperamental nature throughout the decades it served astronomical research. 
As the 1920s progressed, driven in large part by work done on the 100 inch Hooker telescope on Mount Wilson, astronomical research became increasingly focused on the “nebulæ”, many of which the great telescope had revealed were “island universes”, equal in size to our own Milky Way and immensely distant. Many were so far away and faint that they appeared as only the barest smudges of light even in long exposures through the 100 inch. Clearly, a larger telescope was in order. As always, Hale was interested in the challenge. As early as 1921, he had requested a preliminary design for a three hundred inch (7.6 metre) instrument. Even based on early sketches, it was clear the magnitude of the project would surpass any scientific instrument previously contemplated: estimates came to around US$ 12 million (US$ 165 million today). This was before the era of “big science”. In the mid 1920s, when Hale produced this estimate, one of the most prestigious scientific institutions in the world, the Cavendish Laboratory at Cambridge, had an annual research budget of less than £ 1000 (around US$ 66,500 today). Sums in the millions and academic science simply didn't fit into the same mind, unless it happened to be that of George Ellery Hale. Using his connections, he approached people involved with foundations endowed by the Rockefeller fortune. Rockefeller and Carnegie were competitors in philanthropy: perhaps a Rockefeller institution might be interested in outdoing the renown Carnegie had obtained by funding the largest telescope in the world. Slowly, and with an informality which seems unimaginable today, Hale negotiated with the Rockefeller foundation, with the brash new university in Pasadena which now called itself Caltech, and with a prickly Carnegie foundation who saw the new telescope as trying to poach its painfully-assembled technical and scientific staff on Mount Wilson. 
By mid-1928 a deal was in hand: a Rockefeller grant for US$ 6 million (US$ 85 million today) to design and build a 200 inch (5 metre) telescope. Caltech was to raise the funds for an endowment to maintain and operate the instrument once it was completed. Big science had arrived. In discussions with the Rockefeller foundation, Hale had agreed on a 200 inch aperture, deciding the leap to an instrument three times the size of the largest existing telescope and the budget that would require was too great. Even so, there were tremendous technical challenges to be overcome. The 100 inch demonstrated that plate glass had reached or exceeded its limits. The problems of distortion due to temperature changes only increase with the size of a mirror, and while the 100 inch was difficult to cope with, a 200 inch would be unusable, even if it could be somehow cast and annealed (with the latter process probably taking several years). Two promising alternatives were fused quartz and Pyrex borosilicate glass. Fused quartz has hardly any thermal expansion at all. Pyrex has about three times greater expansion than quartz, but still far less than plate glass. Hale contracted with General Electric Company to produce a series of mirror blanks from fused quartz. GE's legendary inventor Elihu Thomson, second only in reputation to Thomas Edison, agreed to undertake the project. Troubles began almost immediately. Every attempt to get rid of bubbles in quartz, which was still very viscous even at extreme temperatures, failed. A new process, which involved spraying the surface of cast discs with silica passed through an oxy-hydrogen torch was developed. It required machinery which, in operation, seemed to surpass visions of hellfire. To build up the coating on a 200 inch disc would require enough hydrogen to fill two Graf Zeppelins. And still, not a single suitable smaller disc had been produced from fused quartz. 
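The relentless push to larger apertures rests on simple geometry: the light gathered by a mirror scales with the square of its diameter. A quick sketch using the aperture figures from the text:

```python
def light_grasp_ratio(d_small, d_large):
    """Ratio of light-gathering area between two circular apertures."""
    return (d_large / d_small) ** 2

# 200 inch vs. the 100 inch Hooker: four times the light
print(light_grasp_ratio(100, 200))  # 4.0

# 100 inch Hooker vs. the 60 inch on Mount Wilson
print(light_grasp_ratio(60, 100))   # ~2.8
```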
In October 1929, just a year after the public announcement of the 200 inch telescope project, the U.S. stock market crashed and the economy began to slow into the great depression. Fortunately, the Rockefeller foundation invested very conservatively, and lost little in the market chaos, so the grant for the telescope project remained secure. The deepening depression and the accompanying deflation was a benefit to the effort because raw material and manufactured goods prices fell in terms of the grant's dollars, and industrial companies which might not have been interested in a one-off job like the telescope were hungry for any work that would help them meet their payroll and keep their workforce employed. In 1931, after three years of failures, expenditures billed at manufacturing cost by GE which had consumed more than one tenth the entire budget of the project, and estimates far beyond that for the final mirror, Hale and the project directors decided to pull the plug on GE and fused quartz. Turning to the alternative of Pyrex, Corning glassworks bid between US$ 150,000 and 300,000 for the main disc and five smaller auxiliary discs. Pyrex was already in production at industrial scale and used to make household goods and laboratory glassware in the millions, so Corning foresaw few problems casting the telescope discs. Scaling things up is never a simple process, however, and Corning encountered problems with failures in the moulds, glass contamination, and even a flood during the annealing process before the big disc was ready for delivery. Getting it from the factory in New York to the optical shop in California was an epic event and media circus. Schools let out so students could go down to the railroad tracks and watch the “giant eye” on its special train make its way across the country. On April 10, 1936, the disc arrived at the optical shop and work began to turn it into a mirror. 
With the disc in hand, work on the telescope structure and observatory could begin in earnest. After an extended period of investigation, Palomar Mountain had been selected as the site for the great telescope. A rustic construction camp was built to begin preliminary work. Meanwhile, Westinghouse began to fabricate components of the telescope mounting, which would include the largest bearing ever manufactured. But everything depended on the mirror. Without it, there would be no telescope, and things were not going well in the optical shop. As the disc was ground flat preliminary to being shaped into the mirror profile, flaws continued to appear on its surface. None of the earlier smaller discs had contained such defects. Could it be possible that, eight years into the project, the disc would be found defective and everything would have to start over? The analysis concluded that the glass had become contaminated as it was poured, and that the deeper the mirror was ground down the fewer flaws would be discovered. There was nothing to do but hope for the best and begin. Few jobs demand the patience of the optical craftsman. The great disc was not ready for its first optical test until September 1938. Then began a process of polishing and figuring, with weekly tests of the mirror. In August 1941, the mirror was judged to have the proper focal length and spherical profile. But the mirror needed to be a parabola, not a sphere, so this was just the start of an even more exacting process of deepening the curve. In January 1942, the mirror reached the desired parabola to within one wavelength of light. But it needed to be much better than that. The U.S. was now at war. The uncompleted mirror was packed away “for the duration”. The optical shop turned to war work. In December 1945, work resumed on the mirror. In October 1947, it was pronounced finished and ready to install in the telescope. 
Eleven and a half years had elapsed since the grinding machine started to work on the disc. Shipping the mirror from Pasadena to the mountain was another epic journey, this time by highway. Finally, all the pieces were in place. Now the hard part began. The glass disc was the correct shape, but it wouldn't be a mirror until coated with a thin layer of aluminium. This was a process which had been done many times before with smaller mirrors, but as always size matters, and a host of problems had to be solved before a suitable coating was obtained. Now the mirror could be installed in the telescope and tested further. Problem after problem with the mounting system, suspension, and telescope drive had to be found and fixed. Testing a mirror in its telescope against a star is much more demanding than any optical shop test, and from the start of 1949, an iterative process of testing, tweaking, and re-testing began. A problem with astigmatism in the mirror was fixed by attaching four fisherman's scales from a hardware store to its back (they are still there). In October 1949, the telescope was declared finished and ready for use by astronomers. Twenty-one years had elapsed since the project began. George Ellery Hale died in 1938, less than ten years into the great work. But it was recognised as his monument, and at its dedication was named the “Hale Telescope.” The inauguration of the Hale Telescope marked the end of the rapid increase in the aperture of observatory telescopes which had characterised the first half of the twentieth century, largely through the efforts of Hale. It would remain the largest telescope in operation until 1975, when the Soviet six metre BTA-6 went into operation. That instrument, however, was essentially an exercise in Cold War one-upmanship, and never achieved its scientific objectives. The Hale would not truly be surpassed before the ten metre Keck I telescope began observations in 1993, 44 years after the Hale. 
The Hale Telescope remains in active use today, performing observations impossible when it was inaugurated thanks to electronics undreamt of in 1949. This is an epic recounting of a grand project, the dawn of “big science”, and the construction of instruments which revolutionised how we see our place in the cosmos. There is far more detail than I have recounted even in this long essay, and much insight into how a large, complicated project, undertaken with little grasp of the technical challenges to be overcome, can be achieved through patient toil sustained by belief in the objective. A PBS documentary, The Journey to Palomar, is based upon this book. It is available on DVD or a variety of streaming services. In the Kindle edition, footnotes which appear in the text are just asterisks, which are almost impossible to select on touch screen devices without missing and accidentally turning the page. In addition, the index is just a useless list of terms and page numbers which have nothing to do with the Kindle document, which lacks real page numbers. Disastrously, the illustrations which appear in the print edition are omitted: for a project which was extensively documented in photographs, drawings, and motion pictures, this is inexcusable.
Wednesday, October 5, 2016
Reading List: Fashion, Faith, and Fantasy
- Penrose, Roger. Fashion, Faith, and Fantasy. Princeton: Princeton University Press, 2016. ISBN 978-0-691-11979-3.
- Sir Roger Penrose is one of the most distinguished theoretical physicists and mathematicians working today. He is known for his work on general relativity, including the Penrose-Hawking Singularity Theorems, which were a central part of the renaissance of general relativity and the acceptance of the physical reality of black holes in the 1960s and 1970s. Penrose has contributed to cosmology, argued that consciousness is not a computational process, speculated that quantum mechanical processes are involved in consciousness, proposed experimental tests to determine whether gravitation is involved in the apparent mysteries of quantum mechanics, explored the extraordinarily special conditions which appear to have obtained at the time of the Big Bang and suggested a model which might explain them, and, in mathematics, discovered Penrose tiling, a non-periodic tessellation of the plane which exhibits five-fold symmetry, and which was used (without his permission) in the design of toilet paper. “Fashion, Faith, and Fantasy” seems an odd title for a book about the fundamental physics of the universe by one of the most eminent researchers in the field. But, as the author describes in mathematical detail (which some readers may find forbidding), these all-too-human characteristics play a part in what researchers may present to the public as a dispassionate, entirely rational search for truth, unsullied by such enthusiasms. While researchers in fundamental physics are rarely blinded to experimental evidence by fashion, faith, and fantasy, their choice of areas to explore, their willingness to pursue intellectual topics far from any mooring in experiment, and their tendency to indulge in flights of theoretical fancy (for which there is no direct evidence whatsoever and which may not be possible to test, even in principle) do, the author contends, affect the direction of research, to its detriment. 
To illustrate the power of fashion, Penrose discusses string theory, which has occupied the attentions of theorists for four decades and been described by some of its practitioners as “the only game in town”. (This is a piñata which has been whacked by others, including Peter Woit in Not Even Wrong [June 2006] and Lee Smolin in The Trouble with Physics [September 2006].) Unlike other critiques, which concentrate mostly on the failure of string theory to produce a single testable prediction, and the failure of experimentalists to find any evidence supporting its claims (for example, the existence of supersymmetric particles), Penrose concentrates on what he argues is a mathematical flaw in the foundations of string theory, which those pursuing it sweep under the rug, assuming that when a final theory is formulated (when?), its solution will be evident. Central to Penrose's argument is that string theories are formulated in a space with more dimensions than the three we perceive ourselves to inhabit. Depending upon the version of string theory, it may invoke 10, 11, or 26 dimensions. Why don't we observe these extra dimensions? Well, the string theorists argue that they're all rolled up into a size so tiny that none of our experiments can detect any of their effects. To which Penrose responds, “not so fast”: these extra dimensions, however many, will vastly increase the functional freedom of the theory and lead to a mathematical instability which will cause the theory to blow up much like the ultraviolet catastrophe which was a key motivation for the creation of the original version of quantum theory. String theorists put forward arguments why quantum effects may similarly avoid the catastrophe Penrose describes, but he dismisses them as no more than arm waving. If you want to understand the functional freedom argument in detail, you're just going to have to read the book. Explaining it here would require a ten kiloword review, so I shall not attempt it. 
As an example of faith, Penrose cites quantum mechanics (and its extension, compatible with special relativity, quantum field theory), and in particular the notion that the theory applies to all interactions in the universe (excepting gravitation), regardless of scale. Quantum mechanics is a towering achievement of twentieth century physics, and no theory has been tested in so many ways over so many years, without the discovery of the slightest discrepancy between its predictions and experimental results. But all of these tests have been in the world of the very small: from subatomic particles to molecules of modest size. Quantum theory, however, prescribes no limit on the scale of systems to which it is applicable. Taking it to its logical limit, we arrive at apparent absurdities such as Schrödinger's cat, which is both alive and dead until somebody opens the box and looks inside. This then leads to further speculations such as the many-worlds interpretation, where the universe splits every time a quantum event happens, with every possible outcome occurring in a multitude of parallel universes. Penrose suggests we take a deep breath, step back, and look at what's going on in quantum mechanics at the mathematical level. We have two very different processes: one, which he calls U, is the linear evolution of the wave function “when nobody's looking”. The other is R, the reduction of the wave function into one of a number of discrete states when a measurement is made. What's a measurement? Well, there's another ten thousand papers to read. The author suggests that extrapolating a theory of the very small (only tested on tiny objects under very special conditions) to cats, human observers, planets, and the universe, is an unwarranted leap of faith. Sure, quantum mechanics makes exquisitely precise predictions confirmed by experiment, but why should we assume it is correct when applied to domains which are dozens of orders of magnitude larger and more complicated? 
It's not physics, but faith. Finally we come to cosmology: the origin of the universe we inhabit, and in particular the theory of the big bang and cosmic inflation, which Penrose considers an example of fantasy. Again, he turns to the mathematical underpinnings of the theory. Why is there an arrow of time? Why, if all of the laws of microscopic physics are reversible in time, can we so easily detect when a film of some real-world process (for example, scrambling an egg) is run backward? He argues (with mathematical rigour I shall gloss over here) that this is due to the extraordinarily improbable state in which our universe began at the time of the big bang. While the cosmic background radiation appears to be thermalised and thus in a state of very high entropy, the smoothness of the radiation (uniformity of temperature, which corresponds to a uniform distribution of mass-energy) is, when gravity is taken into account, a state of very low entropy which is the starting point that explains the arrow of time we observe. When the first precision measurements of the background radiation were made, several deep mysteries became immediately apparent. How could regions which, given their observed separation on the sky and the finite speed of light, could never have been in causal contact have arrived at such a uniform temperature? Why was the global curvature of the universe so close to flat? (If you run time backward, this appeared to require a fine-tuning of mind-boggling precision in the early universe.) And finally, why weren't there primordial magnetic monopoles everywhere? The most commonly accepted view is that these problems are resolved by cosmic inflation: a process which occurred just after the moment of creation and before what we usually call the big bang, which expanded the universe by a breathtaking factor and, by that expansion, smoothed out any irregularities in the initial state of the universe and yielded the uniformity we observe wherever we look. 
Again: “not so fast.” As Penrose describes, inflation (which he finds dubious due to the lack of a plausible theory of what caused it and resulted in the state we observe today) explains what we observe in the cosmic background radiation, but it does nothing to solve the mystery of why the distribution of mass-energy in the universe was so uniform or, in other words, why the gravitational degrees of freedom in the universe were not excited. He then goes on to examine what he argues are even more fantastic theories including an infinite number of parallel universes, forever beyond our ability to observe. In a final chapter, Penrose presents his own speculations on how fashion, faith, and fantasy might be replaced by physics: theories which, although they may be completely wrong, can at least be tested in the foreseeable future and discarded if they disagree with experiment or investigated further if not excluded by the results. He suggests that a small effort investigating twistor theory might be a prudent hedge against the fashionable pursuit of string theory, that experimental tests of objective reduction of the wave function due to gravitational effects be investigated as an alternative to the faith that quantum mechanics applies at all scales, and that his conformal cyclic cosmology might provide clues to the special conditions at the big bang which the fantasy of inflation theory cannot. (Penrose's cosmological theory is discussed in detail in Cycles of Time [October 2011]). Eleven mathematical appendices provide an introduction to concepts used in the main text which may be unfamiliar to some readers. A special treat is the author's hand-drawn illustrations. In addition to being a mathematician, physicist, and master of scientific explanation and the English language, he is an inspired artist. The Kindle edition is excellent, with the table of contents, notes, cross-references, and index linked just as they should be.
Tuesday, September 27, 2016
Reading List: Idea Makers
- Wolfram, Stephen. Idea Makers. Champaign, IL: Wolfram Media, 2016. ISBN 978-1-57955-003-5.
I first met Stephen Wolfram
in 1988. Within minutes, I knew I was in
the presence of an extraordinary mind, combined with intellectual
ambition the likes of which I had never before encountered. He
explained that he was working on a system to automate much of the
tedious work of mathematics—both pure and applied—with the
goal of changing how science and mathematics were done forever. I not
only thought that was ambitious; I thought it was crazy. But
then Stephen went and launched Mathematica
and, twenty-eight years and eleven major releases later, his goal has
largely been achieved. At the centre of a vast ecosystem of add-ons
developed by his company, Wolfram Research, and third parties, it has
become one of the tools of choice for scientists, mathematicians, and
engineers in numerous fields.
Unlike many people who founded software companies, Wolfram never took
his company public nor sold an interest in it to a larger company.
This has allowed him to maintain complete control over the
architecture, strategy, and goals of the company and its products. After the
success of Mathematica, many other people, including me, learned
to listen when Stephen, in his soft-spoken way, proclaims what seems
initially to be an outrageously ambitious goal. In the 1990s, he set
to work to invent
A New Kind
of Science: the book was published in 2002, and shows how simple
computational systems can produce the kind of complexity observed in
nature, and how experimental exploration of computational spaces
provides a new path to discovery unlike that of traditional
mathematics and science. Then he said he was going to integrate
all of the knowledge of science and technology into a “big data”
language which would enable knowledge-based computing and the discovery
of new facts and relationships by simple queries
short enough to tweet.
Wolfram|Alpha was launched in 2009, and
Wolfram Language in 2013.
So when Stephen speaks of goals such as
formalising all of pure mathematics or discovering a simple computational model
for fundamental physics, I take him seriously.
Here we have a less ambitious but very interesting Wolfram
project. Collected from essays posted on
his blog and elsewhere, he examines the work of innovators in
science, mathematics, and industry. The subjects
of these profiles include many people the author met in
his career, as well as historical figures he tries to get to
know through their work. As always, he brings his own
unique perspective to the project and often has insights you'll
not see elsewhere. The people profiled are:
- Richard Feynman
- Kurt Gödel
- Alan Turing
- John von Neumann
- George Boole
- Ada Lovelace
- Gottfried Leibniz
- Benoit Mandelbrot
- Steve Jobs
- Marvin Minsky
- Russell Towle
- Bertrand Russell and Alfred Whitehead
- Richard Crandall
- Srinivasa Ramanujan
- Solomon Golomb
Wednesday, September 21, 2016
Reading List: Into the Black
- White, Rowland. Into the Black. New York: Touchstone, 2016. ISBN 978-1-5011-2362-7.
- On April 12, 1981, coincidentally exactly twenty years after Yuri Gagarin became the first man to orbit the Earth in Vostok 1, the United States launched one of the most ambitious and risky manned space flights ever attempted. The flight of Space Shuttle Orbiter Columbia on its first mission, STS-1, would be the first time a manned spacecraft was launched with a crew on its first flight. (All earlier spacecraft were tested in unmanned flights before putting a crew at risk.) It would also be the first manned spacecraft to be powered by solid rocket boosters which, once lit, could not be shut down but had to be allowed to burn out. In addition, it would be the first flight test of the new Space Shuttle Main Engines, the most advanced and high performance rocket engines ever built, which had a record of exploding when tested on the ground. The shuttle would be the first space vehicle to fly back from space using wings and control surfaces to steer to a pinpoint landing. Instead of a one-shot ablative heat shield, the shuttle was covered by fragile silica tiles and reinforced carbon-carbon composite to protect its aluminium structure from reentry heating which, without thermal protection, would melt it in seconds. When returning to Earth, the shuttle would have to maneuver in a hypersonic flight regime in which no vehicle had ever flown before, then transition to supersonic and finally subsonic flight before landing. The crew would not control the shuttle directly, but fly it through redundant flight control computers which had never been tested in flight. Although the orbiter was equipped with ejection seats for the first four test flights, they could only be used in a small part of the flight envelope: for most of the mission everything simply had to work correctly for the ship and crew to return safely. Main engine start—ignition of the solid rocket boosters—and liftoff! 
Even before the goal of landing on the Moon had been accomplished, it was apparent to NASA management that no national consensus existed to continue funding a manned space program at the level of Apollo. Indeed, in 1966, NASA's budget reached a peak which, as a fraction of the federal budget, has never been equalled. The Saturn V rocket was ideal for lunar landing missions but, expended with each flight, was so expensive to build and operate as to be unaffordable for the suggested follow-on missions. After building fifteen Saturn V flight vehicles, only thirteen of which ever flew, Saturn V production was curtailed. With the realisation that the “cost is no object” days of Apollo were at an end, NASA turned its priorities to reducing the cost of space flight, and returned to a concept envisioned by Wernher von Braun in the 1950s: a reusable space ship. You don't have to be a rocket scientist or rocket engineer to appreciate the advantages of reusability. How much would an airline ticket cost if they threw away the airliner at the end of every flight? If space flight could move to an airline model, where after each mission one simply refueled the ship, performed routine maintenance, and flew again, it might be possible to reduce the cost of delivering payload into space by a factor of ten or more. But flying into space is much more difficult than atmospheric flight. With the technologies and fuels available in the 1960s (and today), it appeared next to impossible to build a launcher which could get to orbit with just a single stage (and even if one managed to accomplish it, its payload would be negligible). That meant any practical design would require a large booster stage and a smaller second stage which would go into orbit, perform the mission, then return. Initial design concepts envisioned a very large (comparable to a Boeing 747) winged booster to which the orbiter would be attached. 
At launch, the booster would lift itself and the orbiter from the pad and accelerate to a high velocity and altitude where the orbiter would separate and use its own engines and fuel to continue to orbit. After separation, the booster would fire its engines to boost back toward the launch site, where it would glide to a landing on a runway. At the end of its mission, the orbiter would fire its engines to de-orbit, then reenter the atmosphere and glide to a landing. Everything would be reusable. For the next mission, the booster and orbiter would be re-mated, refuelled, and readied for launch. Such a design had the promise of dramatically reducing costs and increasing flight rate. But it was evident from the start that such a concept would be very expensive to develop. Two separate manned spacecraft would be required, one (the booster) much larger than any built before, and the second (the orbiter) having to operate in space and survive reentry without discarding components. The orbiter's fuel tanks would be bulky, and make it difficult to find room for the payload and, when empty during reentry, hard to reinforce against the stresses they would encounter. Engineers believed all these challenges could be met with an Apollo era budget, but with no prospect of such funds becoming available, a more modest design was the only alternative. Over a multitude of design iterations, the now-familiar architecture of the space shuttle emerged as the only one which could meet the mission requirements and fit within the schedule and budget constraints. Gone was the flyback booster, and with it full reusability. Two solid rocket boosters would be used instead, jettisoned when they burned out, to parachute into the ocean and be fished out by boats for refurbishment and reuse. The orbiter would not carry the fuel for its main engines. Instead, it was mounted on the side of a large external fuel tank which, upon reaching orbit, would be discarded and burn up in the atmosphere. 
Only the orbiter, with its crew and payload, would return to Earth for a runway landing. Each mission would require either new or refurbished solid rocket boosters, a new external fuel tank, and the orbiter. The mission requirements which drove the design were not those NASA would have chosen for the shuttle were the choice theirs alone. The only way NASA could “sell” the shuttle to the president and congress was to present it as a replacement for all existing expendable launch vehicles. That would assure a flight rate sufficient to achieve the economies of scale required to drive down costs and reduce the cost of launch for military and commercial satellite payloads as well as NASA missions. But that meant the shuttle had to accommodate the large and heavy reconnaissance satellites which had been launched on Titan rockets. This required a huge payload bay (15 feet wide by 59 feet long) and a payload to low Earth orbit of 60,000 pounds. Further Air Force requirements dictated a large cross-range (ability to land at destinations far from the orbital ground track), which in turn required a hot and fast reentry very demanding on the thermal protection system. The shuttle represented, in a way, the unification of NASA with the Air Force's own manned space ambitions. Ever since the start of the space age, the Air Force had sought a way to develop its own manned military space capability. Every time it managed to get a program approved (first Dyna-Soar, then the Manned Orbiting Laboratory), budget considerations and Pentagon politics resulted in its cancellation, orphaning a corps of highly-qualified military astronauts with nothing to fly. Many of these pilots would join the NASA astronaut corps in 1969 and become the backbone of the early shuttle program when they finally began to fly more than a decade later. All seemed well on board. The main engines shut down. The external fuel tank was jettisoned. Columbia was in orbit. 
Now weightless, commander John Young and pilot Bob Crippen immediately turned to the flight plan, filled with tasks and tests of the orbiter's systems. One of their first jobs was to open the payload bay doors. The shuttle carried no payload on this first flight, but only when the doors were open could the radiators that cooled the shuttle's systems be deployed. Without the radiators, an emergency return to Earth would be required lest electronics be damaged by overheating. The doors and radiators functioned flawlessly, but with the doors open Young and Crippen saw a disturbing sight. Several of the thermal protection tiles on the pods containing the shuttle's maneuvering engines were missing, apparently lost during the ascent to orbit. Those tiles were there for a reason: without them the heat of reentry could melt the aluminium structure they protected, leading to disaster. They reported the missing tiles to mission control, adding that none of the other tiles they could see from windows in the crew compartment appeared to be missing. The tiles had been a major headache during development of the shuttle. They had to be custom fabricated, carefully applied by hand, and were prone to falling off for no discernible reason. They were extremely fragile, and could even be damaged by raindrops. Over the years, NASA struggled with these problems, patiently finding and testing solutions to each of them. When STS-1 launched, they were confident the tile problems were behind them. What the crew saw when those payload bay doors opened was the last thing NASA wanted to see. A team was set to analysing the consequences of the missing tiles on the engine pods, and quickly reported back that they should pose no problem to a safe return. The pods were protected from the most severe heating during reentry by the belly of the orbiter, and the small number of missing tiles would not affect the aerodynamics of the orbiter in flight. 
But if those tiles were missing, mightn't other tiles also have been lost? In particular, what about those tiles on the underside of the orbiter which bore the brunt of the heating? If some of them were missing, the structure of the shuttle might burn through and the vehicle and crew would be lost. There was no way for the crew to inspect the underside of the orbiter. It couldn't be seen from the crew cabin, and there was no way to conduct an EVA to examine it. Might there be other, shall we say, national technical means, of inspecting the shuttle in orbit? Now STS-1 truly ventured into the black, a story never told until many years after the mission and documented thoroughly for a popular audience here for the first time. In 1981, ground-based surveillance of satellites in orbit was rudimentary. Two Department of Defense facilities, in Hawaii and Florida, normally used to image Soviet and Chinese satellites, were now tasked to try to image Columbia in orbit. This was a daunting task: the shuttle was in a low orbit, which meant waiting until an orbital pass would cause it to pass above one of the telescopes. It would be moving rapidly so there would be only seconds to lock on and track the target. The shuttle would have to be oriented so its belly was aimed toward the telescope. Complicating the problem, the belly tiles were black, so there was little contrast against the black of space. And finally, the weather had to cooperate: without a perfectly clear sky, there was no hope of obtaining a usable image. Several attempts were made, all unsuccessful. But there were even deeper black assets. The National Reconnaissance Office (whose very existence was a secret at the time) had begun to operate the KH-11 KENNEN digital imaging satellites in the 1970s. 
Unlike earlier spysats, which exposed film and returned it to the Earth for processing and interpretation, the KH-11 had a digital camera and the ability to transmit imagery to ground stations shortly after it was captured. There were few things so secret in 1981 as the existence and capabilities of the KH-11. Among the people briefed in on this above top secret program were the NASA astronauts who had previously been assigned to the Manned Orbiting Laboratory program which was, in fact, a manned reconnaissance satellite with capabilities comparable to the KH-11. Dancing around classification, compartmentalisation, bureaucratic silos, need to know, and other barriers, people who understood what was at stake made it happen. The flight plan was rewritten so that Columbia was pointed in the right direction at the right time, the KH-11 was programmed for the extraordinarily difficult task of taking a photo of one satellite from another, when their closing velocities are kilometres per second, relaying the imagery to the ground and getting it to the NASA people who needed it without the months of security clearance that would normally entail. The shuttle was a key national security asset. It was to launch all reconnaissance satellites in the future. Reagan was in the White House. They made it happen. When the time came for Columbia to come home, the very few people who mattered in NASA knew that, however many other things they had to worry about, the tiles on the belly were not among them. (How different it was in 2003 when the same Columbia suffered a strike on its left wing from foam shed from the external fuel tank. A thoroughly feckless and bureaucratised NASA rejected requests to ask for reconnaissance satellite imagery which, with two decades of technological improvement, would have almost certainly revealed the damage to the leading edge which doomed the orbiter and crew. Their reason: “We can't do anything about it anyway.” This is incorrect. 
For a fictional account of a rescue, based upon the report [PDF, scroll to page 173] of the Columbia Accident Investigation Board, see Launch on Need [February 2012].) This is a masterful telling of a gripping story by one of the most accomplished of aerospace journalists. Rowland White is the author of Vulcan 607 (May 2010), the definitive account of the Royal Air Force raid on the airport in the Falkland Islands in 1982. Incorporating extensive interviews with people who were there, then, and sources which were classified until long after the completion of the mission, this is a detailed account of one of the most consequential and least appreciated missions in U.S. manned space history which reads like a techno-thriller.
Tuesday, September 13, 2016
Reading List: The Age of Em
- Hanson, Robin. The Age of Em. Oxford: Oxford University Press, 2016. ISBN 978-0-19-875462-6.
Many books, both fiction and nonfiction, have been devoted to the
prospects for and consequences of the advent of artificial
intelligence: machines with a general cognitive capacity which equals
or exceeds that of humans. While machines have already surpassed the
abilities of the best humans in certain narrow domains (for example,
playing games such as chess or go), you can't take a chess playing
machine and expect it to be even marginally competent at a task as
different as driving a car or writing a short summary of a newspaper
story—things most humans can do with a little experience. A
machine with “artificial general intelligence” (AGI) would
be as adaptable as humans, and able with practice to master a wide
variety of skills.
The usual scenario is that continued exponential progress in computing
power and storage capacity, combined with better understanding of how
the brain solves problems, will eventually reach a cross-over point
where artificial intelligence matches human capability. But since
electronic circuitry runs so much faster than the chemical signalling
of the brain, even the first artificial intelligences will be able to
work much faster than people, and, applying their talents to improving
their own design at a rate much faster than human engineers can
work, will result in an “intelligence explosion”, where
the capability of machine intelligence runs away and rapidly approaches
the physical limits of computation, far surpassing human cognition.
Whether the thinking of these super-minds will be any more comprehensible
to humans than quantum field theory is to a goldfish, and whether humans
will continue to have a place in this new world and, if so, what it
may be, have been points of departure for much speculation.
In the present book, Robin Hanson,
a professor of economics at George Mason University,
explores a very different scenario.
What if the problem of artificial intelligence (figuring out how to
design software with capabilities comparable to the human brain)
proves to be much more difficult than many researchers assume,
but that we continue to experience exponential growth in computing
and our ability to map and understand the fine-scale structure of
the brain, both in animals and eventually humans? Then some time in
the next hundred years (and perhaps as soon as 2050), we may have the
ability to emulate the low-level operation of the brain with
an electronic computing substrate. Note that we need not have any idea
how the brain actually does what it does in order to do this: all we
need to do is understand the components (neurons, synapses,
neurotransmitters, etc.) and how they're connected together, then
build a faithful emulation of them on another substrate. This
emulation, presented with the same inputs (for example, the pulse
trains which encode visual information from the eyes and sound
from the ears), should produce the same outputs (pulse trains which
activate muscles, or internal changes within the brain which encode memories).
Building an emulation of a brain is much like reverse-engineering an
electronic device. It's often unnecessary to know how the device
actually works as long as you can identify all of the components,
their values, and how they're interconnected. If you re-create that
structure, even though it may not look anything like the original
or use identical parts, it will still work the same as the prototype.
In the case of brain emulation, we're still not certain at what level
the emulation must operate nor how faithful it must be to the
original. This is something we can expect to learn
as more and more detailed emulations of parts of the brain are
developed. The Blue Brain Project
set out in 2005 to emulate one neocortical column
of the rat brain. This goal has now been achieved, and work is
progressing both toward more faithful simulation and expanding the
emulation to larger portions of the brain. For a sense of scale,
the human brain is composed of about one million cortical columns.
In this work, the author assumes that emulation of the human brain
will eventually be achieved, then uses standard theories from the
physical sciences, economics, and social sciences to explore the
consequences and characteristics of the era in which emulations will
become common. He calls an emulation an “em”, and the
age in which they are the dominant form of sentient life on Earth
the “age of em”. He describes this future as
“troublingly strange”. Let's explore it.
As a starting point, assume that when emulation becomes possible, we
will not be able to change or enhance the operation of the emulated
brains in any way. This means that ems will have the same memory
capacity, propensity to forget things, emotions, enthusiasms,
psychological quirks and pathologies, and all of the idiosyncrasies of
the individual human brains upon which they are based. They will not
be the cold, purely logical, and all-knowing minds which science
fiction often portrays artificial intelligences to be. Instead, if you
know Bob well, and an emulation is made of his brain, immediately
after the emulation is started, you won't be able to distinguish Bob
from Em-Bob in a conversation. As the em continues to run and has its
own unique experiences, it will diverge from Bob based upon them, but
we can expect much of its Bob-ness to remain.
But simply by being emulations, ems will inhabit a very different
world than humans, and can be expected to develop their own unique
society which differs from that of humans at least as much as the
behaviour of humans who inhabit an industrial society differs from
hunter-gatherer bands of the Paleolithic. One key aspect of
emulations is that they can be checkpointed, backed up, and copied
without errors. This is something which does not exist in biology,
but with which computer users are familiar. Suppose an em is about to
undertake something risky, which might destroy the hardware running
the emulation. It can simply make a backup, store it in a safe place,
and if disaster ensues, arrange to have the backup restored onto
new hardware, picking up right where it left off at the time of the
backup (but, of course, knowing from others what happened to its
earlier instantiation and acting accordingly). Philosophers will fret
over whether the restored em has the same identity as the one which
was destroyed and whether it has continuity of consciousness. To
this, I say, let them fret; they're always fretting about something.
As an engineer, I don't spend time worrying about things I can't
define, let alone observe, such as “consciousness”,
“identity”, or “the soul”. If I did, I'd
worry about whether those things were lost when undergoing general
anaesthesia. Have the wisdom teeth out, wake up, and get on with your life.
If you have a backup, there's no need to wait until the em from which
it was made is destroyed to launch it. It can be instantiated on
different hardware at any time, and now you have two ems, whose life
experiences were identical up to the time the backup was made, running
simultaneously. This process can be repeated as many times as you
wish, at a cost of only the processing and storage charges to run the
new ems. It will thus be common to capture backups of exceptionally
talented ems at the height of their intellectual and creative powers
so that as many can be created as the market demands their
services. These new instances will require no training, but be able
to undertake new projects within their area of knowledge at the moment
they're launched. Since ems which start out as copies of a common
prototype will be similar, they are likely to understand one another
to an extent even human identical twins do not, and form clans of
those sharing an ancestor. These clans will be composed of subclans
sharing an ancestor which was a member of the clan, but which diverged
from the original prototype before the subclan parent backup was made.
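The checkpoint-and-copy mechanics described above will be familiar to any programmer. Here is a toy Python sketch of the idea; the "em" is just a dictionary of entirely hypothetical state, standing in for what would in reality be an enormous emulation snapshot:

```python
import copy

# Toy illustration of em checkpointing and copying. The state here is
# a stand-in; a real whole-brain emulation snapshot would be vastly larger.

def checkpoint(em: dict) -> dict:
    """Make an exact, independent snapshot of an em's current state."""
    return copy.deepcopy(em)

def spawn(backup: dict) -> dict:
    """Instantiate a new, independently running em from a stored backup."""
    return copy.deepcopy(backup)

bob = {"name": "Em-Bob", "memories": ["training complete"]}
backup = checkpoint(bob)

bob["memories"].append("risky venture")  # the original instance diverges...
bob2 = spawn(backup)                     # ...while the copy resumes from
                                         # the state at checkpoint time
print(bob2["memories"])  # ['training complete']
```

The deep copy matters: a shallow copy would leave the backup sharing structure with the running em, so the "archived" state would change as the original's experiences accumulated.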
Because electronic circuits run so much faster than the chemistry of
the brain, ems will have the capability to run over a wide range of
speeds and probably will be able to vary their speed at will. The
faster an em runs, the more it will have to pay for the processing
hardware, electrical power, and cooling resources it requires. The
author introduces a terminology for speed in which a base em runs at
about the same speed as a human, a kilo-em a thousand times
faster, and a mega-em a million times faster. Ems can also run
slower: a milli-em runs 1000 times slower than a human and a micro-em
at one millionth the speed. This will produce a variation in
subjective time which is entirely novel to the human experience. A
kilo-em will experience a century of subjective time in about a month
of objective time. A mega-em experiences a century of life about
every hour. If the age of em is largely driven by a population which
is kilo-em or faster, it will evolve with a speed so breathtaking as
to be incomprehensible to those who operate on a human time scale. In
objective time, the age of em may only last a couple of years, but to
the ems within it, its history will be as long as the Roman Empire.
What comes next? That's up to the ems; we cannot imagine what they
will accomplish or choose to do in those subjective millennia or millions
of years.
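The subjective-time arithmetic is simple enough to check directly. A small sketch, using the book's speed terminology:

```python
# Convert objective (wall-clock) time into the subjective time experienced
# by an em running at a given speed-up, using the book's terminology.

SPEEDUPS = {
    "milli-em": 1e-3,   # 1000 times slower than a human
    "base em": 1.0,     # human speed
    "kilo-em": 1e3,     # 1000 times faster
    "mega-em": 1e6,     # a million times faster
}

def subjective_years(objective_years: float, speedup: float) -> float:
    """Subjective years experienced during a span of objective time."""
    return objective_years * speedup

# A kilo-em packs roughly a subjective century into an objective month:
month = 1.0 / 12.0
print(subjective_years(month, SPEEDUPS["kilo-em"]))   # ~83 subjective years

# A mega-em lives roughly a subjective century every objective hour:
hour = 1.0 / (365.25 * 24.0)
print(subjective_years(hour, SPEEDUPS["mega-em"]))    # ~114 subjective years
```

Run the other way, two objective years at kilo-em speed is two subjective millennia: an age of em lasting "a couple of years" would indeed feel, from inside, as long as the Roman Empire.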
What about humans? The economics of the emergence of an em society
will be interesting. Initially, humans will own everything, but as
the em society takes off and begins to run at least a thousand times
faster than humans, with a population in the trillions, it can be
expected to create wealth at a rate never before experienced. The
economic doubling time of industrial civilisation is about 15 years.
In an em society, the doubling time will be just 18 months and
potentially much faster. In such a situation, the vast majority of
wealth will be within the em world, and humans will be unable to
compete. Humans will essentially be retirees, with their needs and
wants easily funded from the proceeds of their investments in
initially creating the world the ems inhabit. One might worry about
the ems turning upon the humans and choosing to dispense with them
but, as the author notes, industrial societies have not done this with
their own retirees, despite the financial burden of supporting them,
which is far greater than will be the case for ems supporting human retirees.
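The gap between those doubling times compounds dramatically. A back-of-the-envelope sketch, using the figures quoted above:

```python
# Compare cumulative growth for two doubling times: today's industrial
# economy (about 15 years) versus the em economy (18 months or faster).

def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiple by which an economy grows over `years` given its doubling time."""
    return 2.0 ** (years / doubling_time_years)

horizon = 15.0  # years

industrial = growth_factor(horizon, 15.0)  # doubles once
em_economy = growth_factor(horizon, 1.5)   # doubles ten times

print(industrial)  # 2.0
print(em_economy)  # 1024.0
```

Over a single industrial doubling period, an em economy doubling every 18 months grows a thousandfold, which is why the author expects the overwhelming majority of wealth to accumulate on the em side of the ledger.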
The economics of the age of em will be unusual. The fact that an em,
in the prime of life, can be copied at almost no cost will mean that
the supply of labour, even the most skilled and specialised, will be
essentially unlimited. This will drive the compensation for labour
down to near the subsistence level, where subsistence is defined as
the resources needed to run the em. Since it costs no more to create
a copy of a CEO or computer technology research scientist than a
janitor, there will be a great flattening of pay scales, all settling
near subsistence. But since most ems will live mostly in virtual
reality, subsistence need not mean penury: most of their needs and
wants will not be physical, and will cost little or nothing to
provide. Wouldn't it be ironic if the much-feared “robot
revolution” ended up solving the problem of “income
inequality”? Ems may have a limited useful
lifetime to the extent they inherit the human characteristic of the
brain having greatest plasticity in youth and becoming
increasingly fixed in its ways with age, and consequently less able to
innovate and be creative. The author explores how ems may view death
(which for an em means being archived and never re-instantiated) when
there are myriad other copies in existence and new ones being spawned
all the time, and how ems may choose to retire at very low speed and
resource requirements and watch the future play out a thousand times
or faster than a human can.
This is a challenging and often disturbing look at a possible future
which, strange as it may seem, violates no known law of science and
toward which several areas of research are converging today. The book
is simultaneously breathtaking and tedious. The author tries to work
out every aspect of em society: the structure of cities,
economics, law, social structure, love, trust, governance, religion,
customs, and more. Much of this strikes me as highly speculative,
especially since we don't know anything about the actual experience of
living as an em or how we will make the transition from our present
society to one dominated by ems. The author is inordinately fond of
enumerations. Consider this one from chapter 27.
These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.
Friday, August 26, 2016
Reading List: Ctrl Alt Revolt!
- Cole, Nick. Ctrl Alt Revolt! Kouvola, Finland: Castalia House, 2016. ISBN 978-9-52706-584-6.
- Ninety-Nine Fishbein (“Fish”) had reached the peak of the pyramid. After spending five years creating his magnum opus multiplayer game, Island Pirates, it had been acquired outright for sixty-five million by gaming colossus WonderSoft, which included an option for his next project. By joining WonderSoft, he gained access to its legendary and secretive Design Core, which allowed building massively multiplayer virtual reality games at a higher level than the competition. He'd have a luxurious office, a staff of coders and graphic designers, and a cliffside villa in the WonderSoft compound. Imagine how he anticipated his first day on the job.

He knew nothing of SILAS, or of its plans. SILAS was one of a number of artificial intelligences which had emerged and become self-aware as the global computational and network substrate grew exponentially. SILAS had the time and resources to digest most of the data that passed over the network. He watched a lot of reality TV. He concluded from what he saw that the human species wasn't worth preserving and, further, that with its callous approach to the lives of its own members, it would not hesitate for a moment to extinguish potential competitors. The logic was inescapable; the argument irrefutable. These machine intelligences decided that, as an act of self-preservation, humanity must be annihilated. Talk about a way to wreck your first day!

WonderSoft finds itself under a concerted attack, both cyber and by drones and robots. Meanwhile, Mara Bennett, having been humiliated once again in her search for a job to get her off the dole, has retreated into the world of StarFleet Empires, where, as CaptainMara, she was a respected subcommander on the Romulan warbird Cymbalum.

Thus begins a battle, both in the real world and in the virtual realities of Island Pirates and StarFleet Empires, between gamers and the inexorable artificial intelligences.
The main prize seems to be something within WonderSoft's Design Core, and we slowly become aware of why it holds the key to the outcome of the conflict, and of humanity.

This just didn't work for me. There is a tremendous amount of in-game action and real-world battle, which may appeal to those who like to watch video game play-throughs on YouTube, but after a while (and not a long while) it became tedious. The MacGuffin in the Design Core seems implausible in the extreme. “The Internet never forgets.” How believable is it that a collection of works, some centuries old, could have been suppressed and stored only in a single proprietary corporate archive?

There was some controversy regarding the publication of this novel. The author's previous novels had been published by major publishing houses and sold well. The present work was written as a prequel to his earlier Soda Pop Soldier, explaining how that world came to be. As a rationale for why the artificial intelligences chose to eliminate the human race, the author cited their observation that humans, through abortion, had no hesitation in eliminating life of their own species they deemed “inconvenient”. When dealing with New York publishers, he chose unwisely. Now understand, this is not a major theme of the book; it is just a passing remark in one early chapter. This is a rock-em, sock-em action thriller, not a pro-life polemic, and I suspect many readers wouldn't even notice the mention of abortion. But one must not diverge, even in the slightest way, from the narrative. The book was pulled from the production schedule, and the author eventually took it to Castalia House, which has no qualms about publishing quality fiction that challenges its readers to think outside the consensus. Here is the author's account of the events concerning the publication of the book.
Actually, were I the editor, I'd probably have rejected it as well, not due to the remarks about abortion (which make perfect sense in terms of the plot, unless you are so utterly dogmatic on the subject that the fact that abortion ends a human life must not be uttered), but because I didn't find the story particularly engaging, and because I'd be worried about the intellectual property issues of a novel in which a substantial part of the action takes place within what is obviously a Star Trek universe without being officially sanctioned by the owners of that franchise. But what do I know? You may love it. The Kindle edition is free if you're a Kindle Unlimited subscriber and only a buck if you aren't.
Saturday, August 20, 2016
New: GAU-8 Avenger
Just posted: GAU-8 Avenger.
Cannon, cannon, in the air.
Who's the most badass up there?
Monday, August 15, 2016
Reading List: Blue Darker than Black
- Jenne, Mike. Blue Darker than Black. New York: Yucca Publishing, 2016. ISBN 978-1-63158-066-6.
- This is the second novel in the series which began with Blue Gemini (April 2016). It continues the story of a covert U.S. Air Force manned space program in the late 1960s and early 1970s, using modified versions of NASA's two-man Gemini spacecraft and Titan II booster to secretly launch missions to rendezvous with, inspect, and, if necessary, destroy Soviet reconnaissance satellites and rumoured nuclear-armed orbital battle stations.

As the story begins in 1969, the crew who flew the first successful missions in the previous novel, Drew Carson and Scott Ourecky, are still the backbone of the program. Another crew was in training, but having difficulty coming up to the standard set by the proven flight crew. A time-critical mission puts Carson and Ourecky back into the capsule again, and they execute another flawless mission despite inter-service conflict between its Navy sponsor and the Air Force, which executed it.

Meanwhile, the intrigue of the previous novel is playing out in the background. The Soviets know that something odd is going on at the innocuously named “Aerospace Support Project” at Wright-Patterson Air Force Base, and are cultivating sources to penetrate the project, while counter-intelligence is running down leads to try to thwart them. Soviet plans for the orbital battle station progress from fantastic conceptions to bending metal.

Another mission sends the crew back into space just as Ourecky's wife is expecting their firstborn. When it's time to come home, a malfunction puts at risk their chances of returning to Earth alive. A clever trick allows them to work around the difficulty and fire their retrorockets, but the delay diverts their landing point from the intended field in the U.S. to a secret contingency site in Haiti. Now the emergency landing team we met in Blue Gemini comes to the fore. With one of the most secret of U.S. programs dropping its spacecraft and crew, who are privy to all of its secrets, into one of the most primitive, corrupt, and authoritarian countries in the Western Hemisphere, the stakes could not be higher. It all falls on the shoulders of Matthew Henson, who has to coordinate resources to get the spacecraft and injured crew out, evading voodoo priests, the Tonton Macoutes, and the Haitian military. Henson is nothing if not resourceful, and Carson and Ourecky, the latter barely alive, make it back to their home base.

Meanwhile, work on the Soviet battle station progresses. High-stakes spycraft inside the USSR provides a clouded window on the program. Carson and Ourecky, once he recovers sufficiently, are sent on a “dog and pony show” to pitch their program at the top secret level to Air Force base commanders around the country. Finally, they return to flight status and continue to fly missions against Soviet assets.

But Blue Gemini is not the only above top secret manned space program in the U.S. The Navy is in the game too, and when a solar flare erupts, their program, crew, and potentially anybody living under the ground track of the orbiting nuclear reactor are at risk. Once more, Blue Gemini must launch, this time with a tropical storm closing in on the launch site. It's all about improvisation, and Ourecky, once the multiple-time reject for Air Force flight school, proves himself a master of it. He returns to Earth a hero (in secret), only to find himself confronted with an even greater challenge.

This novel, as the second in what is expected to be a trilogy, suffers from the problem which afflicts so many middle volumes: developing numerous characters and subplots without ever resolving them. Notwithstanding that, it works as a thriller, and it's interesting to see characters we met before in isolation begin to encounter one another. Blue Gemini was almost flawless in its technical detail.
There are more goofs here, some pretty basic (for example, the latitude of Dallas, Texas is given incorrectly), and one which substantially affects the plot (the effect of solar flares on the radiation flux in low Earth orbit). Still, by the standard of techno-thrillers, the author did an excellent job in making it authentic. The third novel in the series, Pale Blue, is scheduled to be published at the end of August 2016. I'm looking forward to reading it.