Computing

Albrecht, Katherine and Liz McIntyre. Spychips. Nashville: Nelson Current, 2005. ISBN 0-452-28766-9.
Imagine a world in which every manufactured object, and even living creatures such as pets, livestock, and eventually people, had an embedded tag with a 96-bit code which uniquely identified it among all macroscopic objects on the planet and beyond. Further, imagine that these tiny, unobtrusive, and non-invasive tags could be interrogated remotely, at a distance of up to several metres, by safe radio frequency queries which would provide power for them to transmit their identity. What could you do with this? Well, a heck of a lot. Imagine, for example, a refrigerator which sensed its entire contents, and was able to automatically place an order on the Internet for home delivery of whatever was running short, or warned you that the item you'd just picked up had passed its expiration date. Or think about breezing past the checkout counter at the Mall-Mart with a cart full of stuff without even slowing down—all of the goods would be identified by the portal at the door, and the total charged to the account designated by the tag in your customer fidelity card. When you're shopping, you could be automatically warned when you pick up a product which contains an ingredient to which you or a member of your family is allergic. And if a product is recalled, you'll be able to instantly determine whether you have one of the affected items, if your refrigerator or smart medicine cabinet hasn't already done so. The benefits just go on and on…imagine.
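To make the 96-bit code concrete, here is a sketch in Python of unpacking one common tag format, the SGTIN-96 Electronic Product Code. The field widths follow the published EPC layout, but the fixed 24/20-bit split between company prefix and item reference assumes one particular partition value, so treat this as an illustration rather than a complete decoder.

```python
# Illustrative sketch: split a 96-bit SGTIN-96 EPC into its major fields.
# Widths: 8-bit header, 3-bit filter, 3-bit partition, 44 bits shared by
# company prefix and item reference, 38-bit serial. The 24/20 split below
# assumes partition value 5; a full decoder would vary it by partition.

def unpack_sgtin96(epc: int) -> dict:
    """Unpack a 96-bit EPC integer, least significant field first."""
    if not 0 <= epc < 1 << 96:
        raise ValueError("EPC must fit in 96 bits")
    fields = {}
    fields["serial"] = epc & ((1 << 38) - 1)          # item's serial number
    epc >>= 38
    fields["item_reference"] = epc & ((1 << 20) - 1)  # product type
    epc >>= 20
    fields["company_prefix"] = epc & ((1 << 24) - 1)  # manufacturer
    epc >>= 24
    fields["partition"] = epc & 0b111
    epc >>= 3
    fields["filter"] = epc & 0b111
    epc >>= 3
    fields["header"] = epc & 0xFF                     # 0x30 marks SGTIN-96
    return fields
```

Two otherwise identical cans of soup differ only in the 38-bit serial field, which is precisely what makes each object individually trackable.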

This is the vision of an “Internet of Things”, in which all tangible objects are, in a real sense, on-line in real-time, with their position and status updated by ubiquitous and networked sensors. This is not a utopian vision. In 1994 I sketched Unicard, a unified personal identity document, and explored its consequences; people laughed: “never happen”. But just five years later, the Auto-ID Labs were formed at MIT, dedicated to developing a far more ubiquitous identification technology. With the support of major companies such as Procter & Gamble, Philip Morris, Wal-Mart, Gillette, and IBM, and endorsement by organs of the United States government, technology has been developed and commercialised to tag everything and track its every movement.

As I alluded to obliquely in Unicard, this has its downsides. In particular, the utter and irrevocable loss of all forms of privacy and anonymity. From the moment you enter a store, or your workplace, or any public space, you are tracked. When you pick up a product, the amount of time you look at it before placing it in your shopping cart or returning it to the shelf is recorded (and don't even think about leaving the store without paying for it and having it logged to your purchases!). Did you pick the bargain product? Well, you'll soon be getting junk mail and electronic coupons on your mobile phone promoting the premium alternative with a higher profit margin to the retailer. Walk down the street, and any miscreant with a portable tag reader can “frisk” you without your knowledge, determining the contents of your wallet, purse, and shopping bag, and whether you're wearing a watch worth snatching. And even when you discard a product, that's a public event: garbage voyeurs can drive down the street and correlate what you throw out by the tags of items in your trash and the tags on the trashbags they're in.

“But we don't intend to do any of that”, the proponents of radio frequency identification (RFID) protest. And perhaps they don't, but if it is possible and the data are collected, who knows what will be done with it in the future, particularly by governments already installing surveillance cameras everywhere. If they don't have the data, they can't abuse them; if they do, they may; who do you trust with a complete record of everywhere you go, and everything you buy, sell, own, wear, carry, and discard?

This book presents, in a form that non-specialists can understand, the RFID-enabled future which manufacturers, retailers, marketers, academics, and government are co-operating to foist upon their consumers, clients, marks, coerced patrons, and subjects respectively. It is not a pretty picture. Regrettably, this book could be much better than it is. It's written in a kind of breathy muckraking rant style, with numerous paragraphs like (p. 105):

Yes, you read that right, they plan to sell data on our trash. Of course. We should have known that BellSouth was just another megacorporation waiting in the wings to swoop down on the data revealed once its fellow corporate cronies spychip the world.

I mean, I agree entirely with the message of this book, having warned of modest steps in that direction eleven years before its publication, but prose like this makes me feel like I'm driving down the road in a 1964 Vance Packard getting all righteously indignant about things we'd be better advised to coldly and deliberately draw our plans against. This shouldn't be so difficult, in principle: polls show that once people grasp the invasion of privacy RFID makes possible, between 2/3 and 3/4 oppose it. The problem is that it's being deployed via stealth, starting with bulk pallets in the supply chain and, once proven there, migrating down to the individual product level.

Visibility is a precious thing, and one of the most insidious properties of RFID tags is their very invisibility. Is there a remotely-powered transponder sandwiched into the sole of your shoe, linked to the credit card number and identity you used to buy it, which “phones home” every time you walk near a sensor which activates it? Who knows? See how the paranoia sets in? But it isn't paranoia if they're really out to get you. And they are—for our own good, naturally, and for the children, as always.

In the absence of a policy fix for this (and the extreme unlikelihood of any such being adopted given the natural alliance of business and the state in tracking every move of their customers/subjects), one extremely handy technical fix would be a broadband receiver, perhaps a software-defined radio, which listened on the frequency bands used by RFID tag readers and snooped on the transmissions of tags back to them. Passing the data stream to a package like RFDump would allow decoding the visible information in the RFID tags which were detected. First of all, this would allow people to know if they were carrying RFID-tagged products unbeknownst to them. Second, a portable sniffer connected to a PDA would identify tagged products in stores, which clients could take to customer service desks and ask to be returned to the shelves because they were unacceptable for privacy reasons. After this happens several tens of thousands of times, it may have an impact, given the razor-thin margins in retailing. Finally, there are “active measures”. These RFID tags have large antennas which are connected to a super-cheap and hence fragile chip. Once we know the frequency it's talking on, why we could…. But you can work out the rest, and since these are all unlicensed radio bands, there may be nothing wrong with striking an electromagnetic blow for privacy.

EMP,
EMP!
Don't you put,
your tag on me!

November 2007 Permalink

Awret, Uziel, ed. The Singularity. Exeter, UK: Imprint Academic, 2016. ISBN 978-1-845409-07-4.
For more than half a century, the prospect of a technological singularity has been part of the intellectual landscape of those envisioning the future. In 1965, in a paper titled “Speculations Concerning the First Ultraintelligent Machine”, statistician I. J. Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, largely because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment.

Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.
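The arithmetic behind those straight lines on semi-log paper is simple enough to sketch. Every number below is an illustrative assumption, not a measurement: suppose a machine delivers 10⁹ operations per second today, capacity doubles every two years, and one (contested) estimate puts the brain at 10¹⁶ operations per second. The point is only that any fixed target falls to a sustained exponential on a timescale of decades.

```python
import math

# How long until exponential growth in computing power crosses a fixed
# threshold? All figures used here are illustrative assumptions, not data.

def years_to_cross(current_ops, target_ops, doubling_years=2.0):
    """Years until capacity doubling every `doubling_years` passes target."""
    if current_ops >= target_ops:
        return 0.0
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_years

# 1e9 ops/s today vs. a 1e16 ops/s brain estimate: about 46 years.
print(round(years_to_cross(1e9, 1e16), 1))
```

Shrink the doubling time or raise today's baseline and the crossing moves earlier; even an estimate of brain capacity that is wrong by a factor of a thousand only shifts the date by about twenty years under these assumptions.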

  1. Progress in computing comes to a halt before reaching parity with human brain power, due to technological limits, economics (inability to afford the new technologies required, or lack of applications to fund the intermediate steps), or intervention by authority (for example, regulation motivated by a desire to avoid the risks and displacement due to super-human intelligence).
  2. Computing continues to advance, but we find that the human brain is either far more complicated than we believed it to be, or that something is going on in there which cannot be modelled or simulated by a deterministic computational process. The goal of human-level artificial intelligence recedes into the distant future.
  3. Blooie! Human level machine intelligence is achieved, successive generations of machine intelligences run away to approach the physical limits of computation, and before long machine intelligence exceeds that of humans to the degree humans surpass the intelligence of mice (or maybe insects).

Now, the thing about this is that many people will dismiss such speculation as science fiction having nothing to do with the “real world” they inhabit. But there's no more conservative form of forecasting than observing a trend which has been in existence for a long time (in the case of growth in computing power, more than a century, spanning multiple generations of very different hardware and technologies), and continuing to extrapolate it into the future and then ask, “What happens then?” When you go through this exercise and an answer pops out which seems to indicate that within the lives of many people now living, an event completely unprecedented in the history of our species—the emergence of an intelligence which far surpasses that of humans—might happen, the prospects and consequences bear some serious consideration.

The present book, based upon two special issues of the Journal of Consciousness Studies, attempts to examine the probability, nature, and consequences of a singularity from a variety of intellectual disciplines and viewpoints. The volume begins with an essay by philosopher David Chalmers originally published in 2010: “The Singularity: a Philosophical Analysis”, which attempts to trace various paths to a singularity and evaluate their probability. Chalmers does not attempt to estimate the time at which a singularity may occur—he argues that if it happens any time within the next few centuries, it will be an epochal event in human history which is worth thinking about today. Chalmers contends that the argument for artificial intelligence (AI) is robust because there appear to be multiple paths by which we could get there, and hence AI does not depend upon a fragile chain of technological assumptions which might break at any point in the future. We could, for example, continue to increase the performance and storage capacity of our computers, to such an extent that the “deep learning” techniques already used in computing applications, combined with access to a vast amount of digital data on the Internet, may cross the line of human intelligence. Or, we may continue our progress in reverse-engineering the microstructure of the human brain and apply our ever-growing computing power to emulating it at a low level (this scenario is discussed in detail in Robin Hanson's The Age of Em [September 2016]). Or, since human intelligence was produced by the process of evolution, we might set our supercomputers to simulate evolution itself (which we're already doing to some extent with genetic algorithms) in order to evolve super-human artificial intelligence (not only would computer-simulated evolution run much faster than biological evolution, it would not be random, but rather directed toward desired results, much like selective breeding of plants or livestock).

Regardless of the path or paths taken, the outcome will be one of the three discussed above, and they reduce to two cases: either a singularity or no singularity. Assume, arguendo, that the singularity occurs, whether before 2050 as some optimists project or many decades later. What will it be like? Will it be good or bad? Chalmers writes,

I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or to solitary confinement, or to deceive it about the nature of the world outside its computing environment?

What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself? And might machines have qualia of their own?

Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry?

Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact as its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what?

Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced. After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter?

Deep questions, these. The book presents Chalmers' paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised. A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”.

Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing since societies became sufficiently wealthy to indulge in them, without any notable resolution in more than two millennia, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't. They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments—they're a risk to the job security of a profession which has been disputing the same questions at least since Athens.

Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument with evidence that simulating the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future, even if computing power continues to grow at the rate of the last century. There is even a case made that the feasibility of a singularity makes the probability that we're living, not in a top-level physical universe, but in a simulation run by post-singularity super-intelligences, overwhelming, and that they may be motivated to turn off our simulation before we reach our own singularity, which may threaten them.

This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:

If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный [uncultured] and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.

March 2017 Permalink

Barrat, James. Our Final Invention. New York: Thomas Dunne Books, 2013. ISBN 978-0-312-62237-4.
As a member of that crusty generation who began programming mainframe computers with punch cards in the 1960s, the phrase “artificial intelligence” evokes an almost visceral response of scepticism. Since its origin in the 1950s, the field has been a hotbed of wildly over-optimistic enthusiasts, predictions of breakthroughs which never happened, and some outright confidence men preying on investors and institutions making research grants. John McCarthy, who organised the first international conference on artificial intelligence (a term he coined), predicted at the time that computers would achieve human-level general intelligence within six months of concerted research toward that goal. In 1970 Marvin Minsky said “In from three to eight years we will have a machine with the general intelligence of an average human being.” And these were serious scientists and pioneers of the field; the charlatans and hucksters were even more absurd in their predictions.

And yet, and yet…. The exponential growth in computing power available at constant cost has allowed us to “brute force” numerous problems once considered within the domain of artificial intelligence. Optical character recognition (machine reading), language translation, voice recognition, natural language query, facial recognition, chess playing at the grandmaster level, and self-driving automobiles were all once thought to be things a computer could never do unless it vaulted to the level of human intelligence, yet now most have become commonplace or are on the way to becoming so. Might we, in the foreseeable future, be able to brute force human-level general intelligence?

Let's step back and define some terms. “Artificial General Intelligence” (AGI) means a machine with intelligence comparable to that of a human across all of the domains of human intelligence (and not limited, say, to playing chess or driving a vehicle), with self-awareness and the ability to learn from mistakes and improve its performance. It need not be embodied in a robot form (although some argue it would have to be to achieve human-level performance), but could certainly pass the Turing test: a human communicating with it over whatever channels of communication are available (in the original formulation of the test, a text-only teleprinter) would not be able to determine whether he or she were communicating with a machine or another human. “Artificial Super Intelligence” (ASI) denotes a machine whose intelligence exceeds that of the most intelligent human. Since a self-aware intelligent machine will be able to modify its own programming, with immediate effect, as opposed to biological organisms which must rely upon the achingly slow mechanism of evolution, an AGI might evolve into an ASI in an eyeblink: arriving at intelligence a million times or more greater than that of any human, a process which I. J. Good called an “intelligence explosion”.
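Good's runaway dynamic can be caricatured in a few lines of code. In this toy model (every parameter is invented purely for illustration), each generation's redesign multiplies intelligence by a factor that itself grows with current intelligence, so growth is faster than exponential and runs away after only a handful of generations.

```python
# Toy model of an "intelligence explosion": each generation redesigns
# itself, and smarter designers achieve proportionally bigger gains.
# The starting level, gain factor, and generation count are all
# illustrative assumptions, not claims about real AI systems.

def intelligence_explosion(level=1.0, gain=0.1, generations=15):
    """Return the intelligence level after each round of self-improvement."""
    history = [level]
    for _ in range(generations):
        level *= 1.0 + gain * level   # improvement scales with intelligence
        history.append(level)
    return history

levels = intelligence_explosion()
# Early generations creep upward; the last few dwarf everything before them.
```

Contrast this with biological evolution, where the "gain" term stays fixed: the model's hyperbolic blow-up is precisely what distinguishes recursive self-improvement from ordinary compounding.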

What will it be like when, for the first time in the history of our species, we share the planet with an intelligence greater than our own? History is less than encouraging. All members of genus Homo which were less intelligent than modern humans (inferring from cranial capacity and artifacts, although one can argue about Neanderthals) are extinct. Will that be the fate of our species once we create a super intelligence? This book presents the case that not only will the construction of an ASI be the final invention we need to make, since it will be able to anticipate anything we might invent long before we can ourselves, but also our final invention because we won't be around to make any more.

What will be the motivations of a machine a million times more intelligent than a human? Could humans understand such motivations any more than brewer's yeast could understand ours? As Eliezer Yudkowsky observed, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Indeed, when humans plan to construct a building, do they take into account the wishes of bacteria in soil upon which the structure will be built? The gap between humans and ASI will be as great. The consequences of creating ASI may extend far beyond the Earth. A super intelligence may decide to propagate itself throughout the galaxy and even beyond: with immortality and the ability to create perfect copies of itself, even travelling at a fraction of the speed of light it could spread itself into all viable habitats in the galaxy in a few hundreds of millions of years—a small fraction of the billions of years life has existed on Earth. Perhaps ASI probes from other extinct biological civilisations foolish enough to build them are already headed our way.

People are presently working toward achieving AGI. Some are in the academic and commercial spheres, with their work reasonably transparent and reported in public venues. Others are “stealth companies” or divisions within companies (does anybody doubt that Google's achieving an AGI level of understanding of the information it Hoovers up from the Web would be an overwhelming competitive advantage?). Still others are funded by government agencies or operate within the black world: certainly players such as NSA dream of being able to understand all of the information they intercept and cross-correlate it. There is a powerful “first mover” advantage in developing AGI and ASI. The first who obtains it will be able to exploit its capability against those who haven't yet achieved it. Consequently, notwithstanding the worries about loss of control of the technology, players will be motivated to support its development for fear their adversaries might get there first.

This is a well-researched and extensively documented examination of the state of artificial intelligence and assessment of its risks. There are extensive end notes including references to documents on the Web which, in the Kindle edition, are linked directly to their sources. In the Kindle edition, the index is just a list of “searchable terms”, not linked to references in the text. There are a few goofs, as you might expect for a documentary film maker writing about technology (“Newton's second law of thermodynamics”), but nothing which invalidates the argument made herein.

I find myself oddly ambivalent about the whole thing. When I hear “artificial intelligence” what flashes through my mind remains that dielectric material I step in when I'm insufficiently vigilant crossing pastures in Switzerland. Yet with the pure increase in computing power, many things previously considered AI have been achieved, so it's not implausible that, should this exponential increase continue, human-level machine intelligence will be achieved either through massive computing power applied to cognitive algorithms or direct emulation of the structure of the human brain. If and when that happens, it is difficult to see why an “intelligence explosion” will not occur. And once that happens, humans will be faced with an intelligence that dwarfs that of their entire species; which will have already penetrated every last corner of its infrastructure; read every word available online written by every human; and which will deal with its human interlocutors after gaming trillions of scenarios on cloud computing resources it has co-opted.

And still we advance the cause of artificial intelligence every day. Sleep well.

December 2013 Permalink

Blum, Andrew. Tubes. New York: HarperCollins, 2012. ISBN 978-0-06-199493-7.
The Internet has become a routine fixture in the lives of billions of people, the vast majority of whom have hardly any idea how it works or what physical infrastructure allows them to access and share information almost instantaneously around the globe, abolishing, in a sense, the very concept of distance. And yet the Internet exists—if it didn't, you wouldn't be able to read this. So, if it exists, where is it, and what is it made of?

In this book, the author embarks upon a quest to trace the Internet from that tangle of cables connected to the router behind his couch to the hardware which enables it to communicate with its peers worldwide. The metaphor of the Internet as a cloud—simultaneously everywhere and nowhere—has become commonplace, and yet as the author begins to dig into the details, he discovers the physical Internet is nothing like a cloud: it is remarkably centralised (a large Internet exchange or “peering location” will tend to grow ever larger, since networks want to connect to a place where the greatest number of other networks connect), often grungy (when pulling fibre optic cables through century-old conduits beneath the streets of Manhattan, one's mind turns more to rats than clouds), and anything but decoupled from the details of geography (undersea cables must choose a route which minimises risk of breakage due to earthquakes and damage from ship anchors in shallow water, while taking the shortest route and connecting to the backbone at a location which will provide the lowest possible latency).
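The latency stakes are easy to quantify. Light in optical fibre propagates at roughly two-thirds of its vacuum speed, so route length translates directly into delay; the figures below are rough approximations for illustration only.

```python
# Why cable routes matter: every extra 1,000 km of fibre adds roughly
# 5 ms of one-way propagation delay. Figures are approximations.

C_VACUUM_KM_S = 299_792.458     # speed of light in vacuum, km/s
FIBRE_VELOCITY_FACTOR = 0.67    # typical slowdown from fibre's refractive index

def one_way_latency_ms(route_km):
    """One-way propagation delay in milliseconds over a fibre route."""
    return route_km / (C_VACUUM_KM_S * FIBRE_VELOCITY_FACTOR) * 1000.0

# New York to London great circle is about 5,570 km; real cables run longer,
# and switching and routing add delay on top of pure propagation.
print(round(one_way_latency_ms(5570), 1))
```

This is why a cable that shaves a few hundred kilometres off a route can command premium prices from latency-sensitive customers such as financial traders.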

The author discovers that while much of the Internet's infrastructure is invisible to the layman, it is populated, for the most part, with people and organisations open and willing to show it off to visitors. As an amateur anthropologist, he surmises that to succeed in internetworking, those involved must necessarily be skilled in networking with one another. A visit to a NANOG gathering introduces him to this subculture and the retail politics of peering.

Finally, when non-technical people speak of “the Internet”, it isn't just the interconnectivity they're thinking of but also the data storage and computing resources accessible via the network. These also have a physical realisation in the form of huge data centres, sited based upon the availability of inexpensive electricity and cooling (a large data centre such as those operated by Google and Facebook may consume on the order of 50 megawatts of electricity and dissipate that amount of heat). While networking people tend to be gregarious bridge-builders, data centre managers view themselves as defenders of a fortress and closely guard the details of their operations from outside scrutiny. When Google was negotiating to acquire the site for their data centre in The Dalles, Oregon, they operated through an opaque front company called “Design LLC”, and required all parties to sign nondisclosure agreements. To this day, if you visit the facility, there's nothing to indicate it belongs to Google; on the second ring of perimeter fencing, there's a sign, in Gothic script, that says “voldemort industries”—don't be evil! (p. 242) (On p. 248 it is claimed that the data centre site is deliberately obscured in Google Maps. Maybe it once was, but as of this writing it is not. From above, apart from the impressive power substation, it looks no more exciting than a supermarket chain's warehouse hub.) The author finally arranges to cross the perimeter, get his retina scanned, and be taken on a walking tour around the buildings from the outside. To cap the visit, he is allowed inside to visit—the lunchroom. The food was excellent. He later visits Facebook's under-construction data centre in the area and encounters an entirely different culture, so perhaps not all data centres are Morlock territory.

The author comes across as a quintessential liberal arts major (which he was) who is alternately amused by the curious people he encounters who understand and work with actual things as opposed to words, and enthralled by the wonder of it all: transcending space and time, everywhere and nowhere, “free” services supported by tens of billions of dollars of power-gobbling, heat-belching infrastructure—oh, wow! He is also a New York collectivist whose knee-jerk reaction is “public, good; private, bad” (notwithstanding that the build-out of the Internet has been almost exclusively a private sector endeavour). He waxes poetic about the city-sponsored (paid for by grants funded by federal and state taxpayers plus loans) fibre network that The Dalles installed which, he claims, lured Google to site its data centre there. The slightest acquaintance with economics or, for that matter, arithmetic, demonstrates the absurdity of this. If you're looking for a site for a multi-billion dollar data centre, what matters is the cost of electricity and the climate (which determines cooling expenses). Compared to the price tag for the equipment inside the buildings, the cost of running a few (or a few dozen) kilometres of fibre is lost in the round-off. In fact, we know, from p. 235, that the 27 kilometre city fibre run cost US$1.8 million, while Google's investment in the data centre is several billion dollars.

These quibbles aside, this is a fascinating look at the physical substrate of the Internet. Even software people well-acquainted with the intricacies of TCP/IP may have only the fuzziest comprehension of where a packet goes after it leaves their site, and how it gets to the ultimate destination. This book provides a tour, accessible to all readers, of where the Internet comes together, and how counterintuitive its physical realisation is compared to how we think of it logically.

In the Kindle edition, end-notes are bidirectionally linked to the text, but the index is just a list of page numbers. Since the Kindle edition does include real page numbers, you can type in the number from the index, but that's hardly as convenient as books where items in the index are directly linked to the text. Citations of Internet documents in the end notes are given as URLs, but not linked; the reader must copy and paste them into a browser's address bar in order to access the documents.

September 2012 Permalink

Bostrom, Nick. Superintelligence. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.

Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities to an extent greater than that by which the capabilities of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.

Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.

As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet by self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.

As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, and not be self-aware and have its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

September 2014 Permalink

Brin, David. The Transparent Society. Cambridge, MA: Perseus Books, 1998. ISBN 0-7382-0144-8.
Having since spent some time pondering The Digital Imprimatur, I find the alternative Brin presents here rather more difficult to dismiss out of hand than when I first encountered it.

October 2003 Permalink

Carr, Nicholas G. Does IT Matter? Boston: Harvard Business School Press, 2004. ISBN 1-59139-444-9.
This is an expanded version of the author's May 2003 Harvard Business Review paper titled “IT Doesn't Matter”, which sparked a vituperous ongoing debate about the rôle of information technology (IT) in modern business and its potential for further increases in productivity and competitive advantage for companies who aggressively adopt and deploy it. In this book, he provides additional historical context, attempts to clear up common misperceptions of readers of the original article, and responds to its critics. The essence of Carr's argument is that information technology (computer hardware, software, and networks) will follow the same trajectory as other technologies which transformed business in the past: railroads, machine tools, electricity, the telegraph and telephone, and air transport. Each of these technologies combined high risk with the potential for great near-term competitive advantage for their early adopters, but eventually became standardised “commodity inputs” which all participants in the market employ in much the same manner. Each saw a furious initial period of innovation, emergence of standards to permit interoperability (which, at the same time, made suppliers interchangeable and the commodity fungible), followed by a rapid “build-out” of the technological infrastructure, usually accompanied by over-optimistic hype from its boosters and an investment bubble and the inevitable crash. Eventually, the infrastructure is in place, standards have been set, and a consensus reached as to how best to use the technology in each industry, at which point it's unlikely any player in the market will be able to gain advantage over another by, say, finding a clever new way to use railroads, electricity, or telephones. At this point the technology becomes a commodity input to all businesses, and largely disappears off the strategic planning agenda.
Carr believes that with the emergence of low-cost commodity computers adequate for the overwhelming majority of business needs, and the widespread adoption of standard vendor-supplied software such as office suites, enterprise resource planning (ERP), and customer relationship management (CRM) packages, corporate information technology has reached this level of maturity, where senior management should focus on cost-cutting, security, and maintainability rather than seeking competitive advantage through innovation. Increasingly, companies adapt their own operations to fit the ERP software they run, as opposed to customising the software for their particular needs. While such procrusteanism was decried in the IBM mainframe era, today it's touted as deploying “industry best practices” throughout the economy, tidily packaged as a “company in a box”. (Still, one worries about the consequences for innovation.) My reaction to Carr's argument is, “How can anybody find this remotely controversial?” Not only do we have a dozen or so historical examples of the adoption of new technologies, the evidence for the maturity of corporate information technology is there for anybody to see. In fact, in February 1997, I predicted that Microsoft's ability to grow by adding functionality to its products was about to reach the limit, and looking back, it was with Office 97 that customers started to push back, feeling the added “features” (such as the notorious talking paper clip) and initial lack of downward compatibility with earlier versions was for Microsoft's benefit, not their own. How can one view Microsoft's giving back half its cash hoard to shareholders in a special dividend in 2004 (and doubling its regular dividend, along with massive stock buybacks), as anything other than acknowledgement of this reality? You only give your cash back to the investors (or buy your own stock), when you can't think of anything else to do with it which will generate a better return.
So, if there's to be a “next big thing”, Microsoft do not anticipate it coming from them.

August 2004 Permalink

Dyson, Freeman J. The Sun, the Genome, and the Internet. Oxford: Oxford University Press, 1999. ISBN 0-19-513922-4.
The text in this book is set in a hideous flavour of the Adobe Caslon font in which little curlicue ligatures connect the letter pairs “ct” and “st” and, in addition, the “ligatures” for “ff”, “fi”, “fl”, and “ft” lop off most of the bar of the “f”, leaving it looking like a droopy “l”. This might have been elegant for chapter titles, but it's way over the top for body copy. Dyson's writing, of course, more than redeems the bad typography, but you gotta wonder why we couldn't have had the former without the latter.

September 2003 Permalink

Eggers, Dave. The Circle. New York: Alfred A. Knopf, 2013. ISBN 978-0-345-80729-8.
There have been a number of novels, many in recent years, which explore the possibility of human society being taken over by intelligent machines. Some depict the struggle between humans and machines, others envision a dystopian future in which the machines have triumphed, and a few explore the possibility that machines might create a “new operating system” for humanity which works better than the dysfunctional social and political systems extant today. This novel goes off in a different direction: what might happen, without artificial intelligence, but in an era of exponentially growing computer power and data storage capacity, if an industry leading company with tendrils extending into every aspect of personal interaction and commerce worldwide, decided, with all the best intentions, “What the heck? Let's be evil!”

Mae Holland had done everything society had told her to do. One of only twelve of the 81 graduates of her central California high school to go on to college, she'd been accepted by a prestigious college and graduated with a degree in psychology and massive student loans she had no prospect of paying off. She'd ended up moving back in with her parents and taking a menial cubicle job at the local utility company, working for a creepy boss. In frustration and desperation, Mae reaches out to her former college roommate, Annie, who has risen to an exalted position at the hottest technology company on the globe: The Circle. The Circle had started by creating the Unified Operating System, which combined all aspects of users' interactions—social media, mail, payments, user names—into a unique and verified identity called TruYou. (Wonder where they got that idea?)

Before long, anonymity on the Internet was a thing of the past as merchants and others recognised the value of knowing their customers and of information collected across their activity on all sites. The Circle and its associated businesses supplanted existing sites such as Google, Facebook, and Twitter, and with the tight integration provided by TruYou, created new kinds of interconnection and interaction not possible when information was Balkanised among separate sites. With the end of anonymity, spam and fraudulent schemes evaporated, and with all posters personally accountable, discussions became civil and trolls slunk back under the bridge.

With an effective monopoly on electronic communication and commercial transactions (if everybody uses TruYou to pay, what option does a merchant have but to accept it and pay The Circle's fees?), The Circle was assured a large, recurring, and growing revenue stream. With the established businesses generating so much cash, The Circle invested heavily in research and development of new technologies: everything from sustainable housing, access to DNA databases, crime prevention, to space applications.

Mae's initial job was far more mundane. In Customer Experience, she was more or less working in a call centre, except her communications with customers were over The Circle's message services. The work was nothing like that at the utility company, however. Her work was monitored in real time, with a satisfaction score computed from follow-up surveys by clients. To advance, a score near 100 was required, and Mae had to follow up any scores less than that to satisfy the customer and obtain a perfect score. On a second screen, internal “zing” messages informed her of activity on the campus, and she was expected to respond and contribute.

As she advances within the organisation, Mae begins to comprehend the scope of The Circle's ambitions. One of the founders unveils a plan to make always-on cameras and microphones available at very low cost, which people can install around the world. All the feeds will be accessible in real time and archived forever. A new slogan is unveiled: “All that happens must be known.”

At a party, Mae meets a mysterious character, Kalden, who appears to have access to parts of The Circle's campus unknown to her associates and yet doesn't show up in the company's exhaustive employee social networks. Her encounters and interactions with him become increasingly mysterious.

Mae moves up, and is chosen to participate to a greater extent in the social networks, and to rate products and ideas. All of this activity contributes to her participation rank, computed and displayed in real time. She swallows a sensor which will track her health and vital signs in real time, display them on a wrist bracelet, and upload them for analysis and early warning diagnosis.

Eventually, she volunteers to “go transparent”: wear a body camera and microphone every waking moment, and act as a window into The Circle for the general public. The company had pushed transparency for politicians, and now was ready to deploy it much more widely.

Secrets Are Lies
Sharing Is Caring
Privacy Is Theft

To Mae's family and few remaining friends outside The Circle, this all seems increasingly bizarre: as if the fastest growing and most prestigious high technology company in the world has become a kind of grotesque cult which consumes the lives of its followers and aspires to become universal. Mae loves her sense of being connected, the interaction with a worldwide public, and thinks it is just wonderful. The Circle internally tests and begins to roll out a system of direct participatory democracy to replace existing political institutions. Mae is there to report it. A plan to put an end to most crime is unveiled: Mae is there.

The Circle is closing. Mae is contacted by her mysterious acquaintance, and presented with a moral dilemma: she has become a central actor on the stage of a world which is on the verge of changing, forever.

This is a superbly written story which I found both realistic and chilling. You don't need artificial intelligence or malevolent machines to create an eternal totalitarian nightmare. All it takes is a few years' growth and wider deployment of technologies which exist today, combined with good intentions, boundless ambition, and fuzzy thinking. And the latter three commodities are abundant among today's technology powerhouses.

Lest you think the technologies which underlie this novel are fantasy or far in the future, they were discussed in detail in David Brin's 1998 The Transparent Society and my 1994 “Unicard” and 2003 “The Digital Imprimatur”. All that has changed is that the massive computing, communication, and data storage infrastructure envisioned in those works now exists or will within a few years.

What should you fear most? Probably the millennials who will read this and think, “Wow! This will be great.” “Democracy is mandatory here!”

May 2016 Permalink

Ferguson, Niels and Bruce Schneier. Practical Cryptography. Indianapolis: Wiley Publishing, 2003. ISBN 0-471-22357-3.
This is one of the best technical books I have read in the last decade. Those who dismiss this volume as “Applied Cryptography Lite” are missing the point. While the latter provides in-depth information on a long list of cryptographic systems (as of its 1996 publication date), Practical Cryptography provides specific recommendations to engineers charged with implementing secure systems based on the state of the art in 2003, backed up with theoretical justification and real-world experience. The book is particularly effective in conveying just how difficult it is to build secure systems, and how “optimisation”, “features”, and failure to adopt a completely paranoid attitude when evaluating potential attacks on the system can lead directly to the bull's eye of disaster. Often-overlooked details such as entropy collection to seed pseudorandom sequence generators, difficulties in erasing sensitive information in systems which cache data, and vulnerabilities of systems to timing-based attacks are well covered here.

November 2003 Permalink

Ferry, Georgina. A Computer Called LEO. London: Fourth Estate, 2003. ISBN 1-84115-185-8.
I'm somewhat of a computer history buff (see my Babbage and UNIVAC pages), but I knew absolutely nothing about the world's first office computer before reading this delightful book. On November 29, 1951 the first commercial computer application went into production on the LEO computer, a vacuum tube machine with mercury delay line memory custom designed and built by—(UNIVAC? IBM?)—nope: J. Lyons & Co. Ltd. of London, a catering company which operated the Lyons Teashops all over Britain. LEO was based on the design of the Cambridge EDSAC, but with additional memory and modifications for commercial work. Many present-day disasters in computerisation projects could be averted from the lessons of Lyons, who not only designed, built, and programmed the first commercial computer from scratch but understood from the outset that the computer must fit the needs and operations of the business, not the other way around, and managed thereby to succeed on the very first try. LEO remained on the job for Lyons until January 1965. (How many present-day computers will still be running 14 years after they're installed?) A total of 72 LEO II and III computers, derived from the original design, were built, and some remained in service as late as 1981. The LEO Computers Society maintains an excellent Web site with many photographs and historical details.

February 2004 Permalink

Feynman, Richard P. Feynman Lectures on Computation. Edited by Anthony J.G. Hey and Robin W. Allen. Reading MA: Addison-Wesley, 1996. ISBN 0-201-48991-0.
This book is derived from Feynman's lectures on the physics of computation in the mid 1980s at CalTech. A companion volume, Feynman and Computation (see September 2002), contains updated versions of presentations by guest lecturers in this course.

May 2003 Permalink

Fulton, Steve and Jeff Fulton. HTML5 Canvas. Sebastopol, CA: O'Reilly, 2013. ISBN 978-1-449-33498-7.
I only review computer books if I've read them in their entirety, as opposed to using them as references while working on projects. For much of 2017 I've been living with this book open, referring to it as I performed a comprehensive overhaul of my Fourmilab site, and I just realised that by now I have actually read every page, albeit not in linear order, so a review is in order; here goes.

The original implementation of the World Wide Web supported only text and, shortly thereafter, embedded images in documents. If you wanted to do something as simple as embed an audio or video clip, you were on your own, wading into a morass of browser- and platform-specific details, plug-ins the user may have to install and then forever keep up to date, and security holes due to all of this non-standard and often dodgy code. Implementing interactive content on the Web, for example scientific simulations for education, required using an embedded language such as Java, whose initial bright promise of “Write once, run anywhere” quickly acquired the rejoinder “—yeah, right”, as bloat in the language, incessant security problems, cross-platform incompatibilities, and the need for the user to forever keep external plug-ins updated lest existing pages cease working caused Java to be regarded as a joke—a cruel joke upon those who developed Web applications based upon it. By the latter half of the 2010s, the major browsers had either discontinued support for Java or announced its removal in future releases.

Fortunately, in 2014 the HTML5 standard was released. For the first time, native, standardised support was added to the Web's fundamental document format to support embedded audio, video, and interactive content, along with Application Programming Interfaces (APIs) in the JavaScript language, interacting with the document via the Document Object Model (DOM), which has now been incorporated into the HTML5 standard. For the first time it became possible, using only standards officially adopted by the World Wide Web Consortium, to create interactive Web pages incorporating multimedia content. The existence of this standard provides a strong incentive for browser vendors to fully implement and support it, and increases the confidence of Web developers that pages they create which are standards-compliant will work on the multitude of browsers, operating systems, and hardware platforms which exist today.

(That encomium apart, I find much to dislike about HTML5. In my opinion its sloppy syntax [not requiring quotes on tag attributes nor the closing of many tags] is a great step backward from XHTML 1.0, which strictly conforms to XML syntax and can be parsed by a simple and generic XML parser, without the Babel-sized tower of kludges and special cases which are required to accommodate the syntactic mumbling of HTML5. A machine-readable language should be easy to read and parse by a machine, especially in an age where only a small minority of Web content creators actually write HTML themselves, as opposed to using a content management system of some kind. Personally, I continue to use XHTML 1.0 for all content on my Web site which does not require the new features in HTML5, and I observe that the home page of the World Wide Web Consortium is, itself, in XHTML 1.0 Strict. And there's no language version number in the header of an HTML5 document. Really—what's up with that? But HTML5 is the standard we've got, so it's the standard we have to use in order to benefit from the capabilities it provides: onward.)

One of the most significant new features in HTML5 is its support for the Canvas element. A canvas is a rectangular area within a page which is treated as an RGBA bitmap (the “A” denotes “alpha”, which implements transparency for overlapping objects). A canvas is just what its name implies: a blank area on which you can draw. The drawing is done in JavaScript code via the Canvas API, which is documented in this book, along with tutorials and abundant examples which can be downloaded from the publisher's Web site. The API provides the usual functions of a two-dimensional drawing model, including lines, arcs, paths, filled objects, transformation matrices, clipping, and colours, including gradients. A text API allows drawing text on the canvas, using a subset of CSS properties to define fonts and their display attributes.
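
As a minimal sketch of what drawing with the Canvas 2D API looks like (the function name, colours, and coordinates here are my own invention; in a browser the ctx object would come from an actual canvas element):

```javascript
// Sketch of Canvas 2D drawing: a filled circle and a stroked line.
// In a browser, ctx would be obtained from a <canvas> element, e.g.:
//   const ctx = document.getElementById("mycanvas").getContext("2d");
function drawScene(ctx) {
    ctx.fillStyle = "#4060c0";               // fill colour for the circle
    ctx.beginPath();
    ctx.arc(100, 75, 50, 0, 2 * Math.PI);    // centre (100, 75), radius 50
    ctx.fill();

    ctx.strokeStyle = "black";
    ctx.beginPath();
    ctx.moveTo(10, 10);
    ctx.lineTo(190, 140);
    ctx.stroke();                            // diagonal line across the canvas
}
```

Every drawing operation goes through such a context object; there is no retained scene, just immediate-mode painting onto the bitmap.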

Bitmap images may be painted on the canvas, scaled and rotated, if you wish, using the transformation matrix. It is also possible to retrieve the pixel data from a canvas or a portion of it, manipulate it at a low level as a JavaScript typed array, and copy it back to that or another canvas. This allows implementation of arbitrary image processing. You might think that pixel-level image manipulation in JavaScript would be intolerably slow but, with modern implementations of JavaScript in current browsers, it often runs within a factor of two of the speed of optimised C code and, unlike the C code, runs on any platform from within a Web page, requiring no twiddling by the user to build and install it on their computer.
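A pixel-level operation of the kind described can be sketched as follows (my own illustration, not code from the book): a function which inverts the RGB channels of RGBA pixel data held in a typed array, leaving the alpha channel untouched.

```javascript
// Invert the red, green, and blue channels of RGBA pixel data.
// In a browser the array would come from ctx.getImageData(...).data;
// here the function accepts any Uint8ClampedArray in RGBA order.
function invertPixels(data) {
  for (let i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];     // red
    data[i + 1] = 255 - data[i + 1]; // green
    data[i + 2] = 255 - data[i + 2]; // blue
    // data[i + 3] (alpha) is left alone
  }
  return data;
}

// Browser usage:
//   const img = ctx.getImageData(0, 0, w, h);
//   invertPixels(img.data);
//   ctx.putImageData(img, 0, 0);
```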

Mouse and keyboard events on the canvas element can be captured through the standard DOM event mechanism, permitting user interaction. Animation is implemented using JavaScript's standard setTimeout method. Unlike some other graphics packages, the canvas API does not maintain a display list or refresh buffer. It is the responsibility of your code to repaint the image on the canvas from scratch whenever it changes. Contemporary browsers buffer the image under construction to prevent this process from being seen by the user.
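The repaint-from-scratch style looks like this in outline (a hypothetical illustration of mine, not from the book): each frame erases the canvas, redraws everything from state your own code maintains, then schedules the next frame with setTimeout.

```javascript
// Animate a square moving across the canvas. There is no display
// list: every frame clears the canvas and redraws from scratch.
function animate(ctx, width, height) {
  let x = 0;  // animation state lives in your code, not the canvas
  function frame() {
    ctx.clearRect(0, 0, width, height);        // erase previous frame
    ctx.fillRect(x, height / 2 - 10, 20, 20);  // redraw at new position
    x = (x + 2) % width;
    setTimeout(frame, 16);                     // ~60 frames per second
  }
  frame();
}
```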

HTML5 audio and video are not strictly part of the canvas facility (although you can display a video on a canvas), but they are discussed in depth here, each in its own chapter. Although the means for embedding this content into Web pages are now standardised, the file formats for audio and video are, more than a quarter century after the creation of the Web, “still evolving”. There is sage advice for developers about how to maximise portability of pages across browsers and platforms.
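In practice, that portability advice amounts to offering the same content in more than one encoding and letting the browser choose. A minimal sketch (file names hypothetical):

```html
<!-- Offer the same video in two encodings; the browser plays the
     first format it supports, falling back to the enclosed text
     for browsers which do not support HTML5 video at all. -->
<video controls width="640" height="360">
  <source src="lecture.webm" type="video/webm">
  <source src="lecture.mp4" type="video/mp4">
  Your browser does not support HTML5 video.
</video>
```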

Two chapters, 150 pages of this 750 page book (don't be intimidated by its length—a substantial fraction is code listings you don't need to read unless you're interested in the details), are devoted to game development using the HTML5 canvas and multimedia APIs. A substantial part of this covers topics such as collision detection, game physics, smooth motion, and detecting mouse hits in objects, which are generic subjects in computer graphics and not specific to its HTML5 implementation. Reading them, however, may give you some tips useful in non-game applications.

Projects at Fourmilab which now use HTML5 canvas are:

Numerous other documents on the site have been updated to HTML5, using the audio and video embedding capabilities described in the book.

All of the information on the APIs described in the book is available on the Web for free. But you won't know what to look for unless you've read an explanation of how they work and looked at sample code which uses them. This book provides that information, and is useful as a desktop reference while you're writing code.

A Kindle edition is available, which you can rent for a limited period of time if you only need to refer to it for a particular project.

July 2017 Permalink

Gershenfeld, Neil. Fab. New York: Basic Books, 2005. ISBN 0-465-02745-8.
Once, every decade or so, you encounter a book which empowers you in ways you never imagined before you opened it, and ultimately changes your life. This is one of those books. I am who I am (not to sound too much like Popeye) largely because in the fall of 1967 I happened to read Daniel McCracken's FORTRAN book and realised that there was nothing complicated at all about programming computers—it was a vocational skill that anybody could learn, much like operating a machine tool. (Of course, as you get deeper into the craft, you discover there is a great body of theory to master, but there's much you can accomplish if you're willing to work hard and learn on the job before you tackle the more abstract aspects of the art.) But this was not only something that I could do but, more importantly, I could learn by doing—and that's how I decided to spend the rest of my professional life and I've never regretted having done so. I've never met a genuinely creative person who wished to spend a nanosecond in a classroom downloading received wisdom at dial-up modem bandwidth. In fact, I suspect the absence of such people in the general population is due to the pernicious effects of the Bismarck worker-bee indoctrination to which the youth of most “developed” societies are subjected today.

We all know that, some day, society will pass through the nanotechnological singularity, after which we'll be eternally free, eternally young, immortal, and incalculably rich: hey—works for me!   But few people realise that if the age of globalised mass production is analogous to that of mainframe computers and if the desktop nano-fabricator is equivalent to today's personal supercomputer, we're already in the equivalent of the minicomputer age of personal fabrication. Remember minicomputers? Not too large, not too small, and hence difficult to classify: too expensive for most people to buy, but within the budget of groups far smaller than the governments and large businesses who could afford mainframes.

The minicomputer age of personal fabrication is as messy as the architecture of minicomputers of four decades before: there are lots of different approaches, standards, interfaces, all mutually incompatible: isn't innovation wonderful? Well, in this sense no!   But it's here, now. For a sum in the tens of thousands of U.S. dollars, it is now possible to equip a “Fab Lab” which can make “almost anything”. Such a lab can fit into a modestly sized room, and, provided with electrical power and an Internet connection, can empower whoever crosses its threshold to create whatever their imagination can conceive. In just a few minutes, their dream can become tangible hardware in the real world.

The personal computer revolution empowered almost anybody (at least in the developed world) to create whatever information processing technology their minds could imagine, on their own, or in collaboration with others. The Internet expanded the scope of this collaboration and connectivity around the globe: people who have never met one another are now working together to create software which will be used by people who have never met the authors to everybody's mutual benefit. Well, software is cool, but imagine if this extended to stuff. That's what Fab is about. SourceForge currently hosts more than 135,500 software development projects—imagine what will happen when StuffForge.net (the name is still available, as I type this sentence!) hosts millions of OpenStuff things you can download to your local Fab Lab, make, and incorporate into inventions of your own imagination. This is the grand roll-back of the industrial revolution, the negation of globalisation: individuals, all around the world, creating for themselves products tailored to their own personal needs and those of their communities, drawing upon the freely shared wisdom and experience of their peers around the globe. What a beautiful world it will be!

Cynics will say, “Sure, it can work at MIT—you have one of the most talented student bodies on the planet, supported by a faculty which excels in almost every discipline, and an industrial plant with bleeding edge fabrication technologies of all kinds.” Well, yes, it works there. But the most inspirational thing about this book is that it seems to work everywhere: not just at MIT but also in South Boston, rural India, Norway far north of the Arctic Circle, Ghana, and Costa Rica—build it and they will make. At times the author seems unduly amazed that folks without formal education and the advantages of a student at MIT can imagine, design, fabricate, and apply a solution to a problem in their own lives. But we're human beings—tool-making primates who've prospered by figuring things out and finding ways to make our lives easier by building tools. Is it so surprising that putting the most modern tools into the hands of people who daily confront the most fundamental problems of existence (access to clean water, food, energy, and information) will yield innovations which surprise even professors at MIT?

This book is so great, and so inspiring, that I will give the author a pass on his clueless attack on AutoCAD's (never attributed) DXF file format on pp. 46–47, noting simply that the answer to why it's called “DXF” is that Lotus had already used “DIF” for their spreadsheet interchange files and we didn't want to create confusion with their file format, and that the reason there's more than one code for an X co-ordinate is that many geometrical objects require more than one X co-ordinate to define them (well, duh).

The author also totally gets what I've been talking about since Unicard and even before that as “Gizmos”: that every single device in the world, and every button on every device, will eventually have its own (IPv6) Internet address and be able to interact with every other such object in every way that makes sense. I envisioned MIDI networks as the cheapest way to implement this bottom-feeder light-switch to light-bulb network; the author, a decade later, opts for a PCM “Internet 0”—works for me. The medium doesn't matter; what ultimately matters is that the message makes it end to end so cheaply that you can ignore the cost of the interconnection.

The author closes the book with the invitation:

Finally, demand for fab labs as a research project, as a collection of capabilities, as a network of facilities, and even as a technological empowerment movement is growing beyond what can be handled by the initial collection of people and institutional partners that were involved in launching them. I/we welcome your thoughts on, and participation in, shaping their future operational, organizational, and technological form.
Well, I am but a humble programmer, but here's how I'd go about it. First of all, I'd create a “Fabrication Trailer” which could visit every community in the United States, Canada, and Mexico; I'd send it out on the road in every MIT vacation season to preach the evangel of “make” to every community it visited. In, say, one out of eighty such communities, one would find a person who dreamed of this happening in his or her lifetime and was empowered by seeing it happen; provide them a template by which, simply by writing a cheque, they can replicate the Fab Lab, and watch it spread. And as it spreads, and creates wealth, it will spawn other Fab Labs.

Then, after it's perfected in a couple of hundred North American copies, design a Fab Lab that fits into an ocean cargo container and can be shipped anywhere. If there isn't electricity and Internet connectivity, also deliver the diesel generator or solar panels and satellite dish. Drop these into places where they're most needed, along with a wonk who can bootstrap the locals into doing things with these tools which astound even those who created them. Humans are clever, tool-making primates; give us the tools to realise what we imagine and then stand back and watch what happens!

The legacy media bombard us with conflict, murder, and mayhem. But the future is about creation and construction. What happens when An Army of Davids turns its creativity and ingenuity toward creating solutions to problems perceived and addressed by individuals? Why, they'll call it a renaissance! And that's exactly what it will be.

For more information, visit the Web site of The Center for Bits and Atoms at MIT, which the author directs. Fab Central provides links to Fab Labs around the world, the machines they use, and the open source software tools you can download and start using today.

December 2006 Permalink

Hall, Eldon C. Journey to the Moon: The History of the Apollo Guidance Computer. Reston, VA: AIAA, 1996. ISBN 1-56347-185-X.

September 2001 Permalink

Hammersley, Ben. Content Syndication with RSS. Sebastopol, CA: O'Reilly, 2003. ISBN 0-596-00383-8.
Sometimes the process of setting standards for the Internet just leaves you wanting to avert your eyes. The RSS standard, used by Web loggers, news sites, and others to provide “feeds” which apprise other sites of updates to their content, is a fine example of what happens when standards go bad. At first, there was the idea that RSS would be fully RDF compliant, but then out came version 0.9, which used RDF incompletely and improperly. Then came 0.91, which stripped out RDF entirely, followed by version 1.0, which re-incorporated full support for RDF along with modules and XML namespaces. Two weeks later, along came version 0.92 (I'm not making this up), which extended 0.91 and remained RDF free. Finally, late in 2002, RSS 2.0 arrived, a further extension of 0.92, and not in any way based on 1.0—got that? Further, the different standards don't even agree on what “RSS” stands for; personally, I'd opt for “Ridiculous Standard Setting”. For the poor guy who simply wants to provide feeds to let folks know what's changed on a Web log or site, this is a huge mess, as it is for those who wish to monitor such feeds. This book recounts the tawdry history of RSS, documents the various dialects, and provides useful examples for generating and consuming RSS feeds, along with an overview of the RSS world, including syndication directories, aggregators, desktop feed reader tools, and Publish and Subscribe architectures.
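For readers who have never looked inside a feed, here is a minimal hand-written RSS 2.0 document of the sort the book teaches you to generate and consume (site name and URLs hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Web Log</title>
    <link>https://example.com/</link>
    <description>What's new at example.com</description>
    <item>
      <title>Site updated</title>
      <link>https://example.com/log/2004-11-01.html</link>
      <pubDate>Mon, 01 Nov 2004 12:00:00 GMT</pubDate>
      <description>Today's changes to the site.</description>
    </item>
  </channel>
</rss>
```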

November 2004 Permalink

Hanson, Robin. The Age of Em. Oxford: Oxford University Press, 2016. ISBN 978-0-19-875462-6.
Many books, both fiction and nonfiction, have been devoted to the prospects for and consequences of the advent of artificial intelligence: machines with a general cognitive capacity which equals or exceeds that of humans. While machines have already surpassed the abilities of the best humans in certain narrow domains (for example, playing games such as chess or go), you can't take a chess playing machine and expect it to be even marginally competent at a task as different as driving a car or writing a short summary of a newspaper story—things most humans can do with a little experience. A machine with “artificial general intelligence” (AGI) would be as adaptable as humans, and able with practice to master a wide variety of skills.

The usual scenario is that continued exponential progress in computing power and storage capacity, combined with better understanding of how the brain solves problems, will eventually reach a cross-over point where artificial intelligence matches human capability. But since electronic circuitry runs so much faster than the chemical signalling of the brain, even the first artificial intelligences will be able to work much faster than people, and, applying their talents to improving their own design at a rate much faster than human engineers can work, will result in an “intelligence explosion”, where the capability of machine intelligence runs away and rapidly approaches the physical limits of computation, far surpassing human cognition. Whether the thinking of these super-minds will be any more comprehensible to humans than quantum field theory is to a goldfish and whether humans will continue to have a place in this new world and, if so, what it may be, has been the point of departure for much speculation.

In the present book, Robin Hanson, a professor of economics at George Mason University, explores a very different scenario. What if the problem of artificial intelligence (figuring out how to design software with capabilities comparable to the human brain) proves to be much more difficult than many researchers assume, but that we continue to experience exponential growth in computing and our ability to map and understand the fine-scale structure of the brain, both in animals and eventually humans? Then some time in the next hundred years (and perhaps as soon as 2050), we may have the ability to emulate the low-level operation of the brain with an electronic computing substrate. Note that we need not have any idea how the brain actually does what it does in order to do this: all we need to do is understand the components (neurons, synapses, neurotransmitters, etc.) and how they're connected together, then build a faithful emulation of them on another substrate. This emulation, presented with the same inputs (for example, the pulse trains which encode visual information from the eyes and sound from the ears), should produce the same outputs (pulse trains which activate muscles, or internal changes within the brain which encode memories).

Building an emulation of a brain is much like reverse-engineering an electronic device. It's often unnecessary to know how the device actually works as long as you can identify all of the components, their values, and how they're interconnected. If you re-create that structure, even though it may not look anything like the original or use identical parts, it will still work the same as the prototype. In the case of brain emulation, we're still not certain at what level the emulation must operate nor how faithful it must be to the original. This is something we can expect to learn as more and more detailed emulations of parts of the brain are built. The Blue Brain Project set out in 2005 to emulate one neocortical column of the rat brain. This goal has now been achieved, and work is progressing both toward more faithful simulation and expanding the emulation to larger portions of the brain. For a sense of scale, the human neocortex consists of about one million cortical columns.

In this work, the author assumes that emulation of the human brain will eventually be achieved, then uses standard theories from the physical sciences, economics, and social sciences to explore the consequences and characteristics of the era in which emulations will become common. He calls an emulation an “em”, and the age in which they are the dominant form of sentient life on Earth the “age of em”. He describes this future as “troublingly strange”. Let's explore it.

As a starting point, assume that when emulation becomes possible, we will not be able to change or enhance the operation of the emulated brains in any way. This means that ems will have the same memory capacity, propensity to forget things, emotions, enthusiasms, psychological quirks and pathologies, and all of the idiosyncrasies of the individual human brains upon which they are based. They will not be the cold, purely logical, and all-knowing minds which science fiction often portrays artificial intelligences to be. Instead, if you know Bob well, and an emulation is made of his brain, immediately after the emulation is started, you won't be able to distinguish Bob from Em-Bob in a conversation. As the em continues to run and has its own unique experiences, it will diverge from Bob based upon them, but, we can expect much of its Bob-ness to remain.

But simply by being emulations, ems will inhabit a very different world than humans, and can be expected to develop their own unique society which differs from that of humans at least as much as the behaviour of humans who inhabit an industrial society differs from hunter-gatherer bands of the Paleolithic. One key aspect of emulations is that they can be checkpointed, backed up, and copied without errors. This is something which does not exist in biology, but with which computer users are familiar. Suppose an em is about to undertake something risky, which might destroy the hardware running the emulation. It can simply make a backup, store it in a safe place, and if disaster ensues, arrange to have the backup restored onto new hardware, picking up right where it left off at the time of the backup (but, of course, knowing from others what happened to its earlier instantiation and acting accordingly). Philosophers will fret over whether the restored em has the same identity as the one which was destroyed and whether it has continuity of consciousness. To this, I say, let them fret; they're always fretting about something. As an engineer, I don't spend time worrying about things I can't define, no less observe, such as “consciousness”, “identity”, or “the soul”. If I did, I'd worry about whether those things were lost when undergoing general anaesthesia. Have the wisdom teeth out, wake up, and get on with your life.

If you have a backup, there's no need to wait until the em from which it was made is destroyed to launch it. It can be instantiated on different hardware at any time, and now you have two ems, whose life experiences were identical up to the time the backup was made, running simultaneously. This process can be repeated as many times as you wish, at a cost of only the processing and storage charges to run the new ems. It will thus be common to capture backups of exceptionally talented ems at the height of their intellectual and creative powers so that as many can be created as the market demands their services. These new instances will require no training, but be able to undertake new projects within their area of knowledge at the moment they're launched. Since ems which start out as copies of a common prototype will be similar, they are likely to understand one another to an extent even human identical twins do not, and form clans of those sharing an ancestor. These clans will be composed of subclans sharing an ancestor which was a member of the clan, but which diverged from the original prototype before the subclan parent backup was created.

Because electronic circuits run so much faster than the chemistry of the brain, ems will have the capability to run over a wide range of speeds and probably will be able to vary their speed at will. The faster an em runs, the more it will have to pay for the processing hardware, electrical power, and cooling resources it requires. The author introduces a terminology for speed where an em is assumed to run around the same speed as a human, a kilo-em a thousand times faster, and a mega-em a million times faster. Ems can also run slower: a milli-em runs 1000 times slower than a human and a micro-em at one millionth the speed. This will produce a variation in subjective time which is entirely novel to the human experience. A kilo-em will experience a century of subjective time in about a month of objective time. A mega-em experiences a century of life about every hour. If the age of em is largely driven by a population which is kilo-em or faster, it will evolve with a speed so breathtaking as to be incomprehensible to those who operate on a human time scale. In objective time, the age of em may only last a couple of years, but to the ems within it, its history will be as long as the Roman Empire. What comes next? That's up to the ems; we cannot imagine what they will accomplish or choose to do in those subjective millennia or millions of years.
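The subjective-time arithmetic is easy to check; here is a quick calculation in the book's terminology (the function and variable names are my own):

```javascript
// Objective time required for a given amount of subjective time,
// for an em running at a given speedup relative to a human.
const HOURS_PER_YEAR = 365.25 * 24;  // 8766 hours

function objectiveHoursForSubjectiveYears(years, speedup) {
  return years * HOURS_PER_YEAR / speedup;
}

// A kilo-em's subjective century passes in about 877 objective
// hours, i.e. roughly 36.5 days -- about a month:
const kiloEmCentury = objectiveHoursForSubjectiveYears(100, 1e3);

// A mega-em's subjective century passes in under an hour:
const megaEmCentury = objectiveHoursForSubjectiveYears(100, 1e6);
```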

What about humans? The economics of the emergence of an em society will be interesting. Initially, humans will own everything, but as the em society takes off and begins to run at least a thousand times faster than humans, with a population in the trillions, it can be expected to create wealth at a rate never before experienced. The economic doubling time of industrial civilisation is about 15 years. In an em society, the doubling time will be just 18 months and potentially much faster. In such a situation, the vast majority of wealth will be within the em world, and humans will be unable to compete. Humans will essentially be retirees, with their needs and wants easily funded from the proceeds of their investments in initially creating the world the ems inhabit. One might worry about the ems turning upon the humans and choosing to dispense with them but, as the author notes, industrial societies have not done this with their own retirees, despite the financial burden of supporting them, which is far greater than will be the case for ems supporting human retirees.

The economics of the age of em will be unusual. The fact that an em, in the prime of life, can be copied at almost no cost will mean that the supply of labour, even the most skilled and specialised, will be essentially unlimited. This will drive the compensation for labour down to near the subsistence level, where subsistence is defined as the resources needed to run the em. Since it costs no more to create a copy of a CEO or computer technology research scientist than a janitor, there will be a great flattening of pay scales, all settling near subsistence. But since most ems will live mostly in virtual reality, subsistence need not mean penury: most of their needs and wants will not be physical, and will cost little or nothing to provide. Wouldn't it be ironic if the much-feared “robot revolution” ended up solving the problem of “income inequality”? Ems may have a limited useful lifetime to the extent they inherit the human characteristic of the brain having greatest plasticity in youth and becoming increasingly fixed in its ways with age, and consequently less able to innovate and be creative. The author explores how ems may view death (which for an em means being archived and never re-instantiated) when there are myriad other copies in existence and new ones being spawned all the time, and how ems may choose to retire at very low speed and resource requirements and watch the future play out a thousand times or faster than a human can.

This is a challenging and often disturbing look at a possible future which, strange as it may seem, violates no known law of science and toward which several areas of research are converging today. The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don't know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems. The author is inordinately fond of enumerations. Consider this one from chapter 27.

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.

September 2016 Permalink

Hawkins, Jeff with Sandra Blakeslee. On Intelligence. New York: Times Books, 2004. ISBN 0-8050-7456-2.
Ever since the early days of research into the sub-topic of computer science which styles itself “artificial intelligence”, such work has been criticised by philosophers, biologists, and neuroscientists who argue that while symbolic manipulation, database retrieval, and logical computation may be able to mimic, to some limited extent, the behaviour of an intelligent being, in no case does the computer understand the problem it is solving in the sense a human does. John R. Searle's “Chinese Room” thought experiment is one of the best known and extensively debated of these criticisms, but there are many others just as cogent and difficult to refute.

These days, criticising artificial intelligence verges on hunting cows with a bazooka—unlike the early days in the 1950s when everybody expected the world chess championship to be held by a computer within five or ten years and mathematicians were fretting over what they'd do with their lives once computers learnt to discover and prove theorems thousands of times faster than they, decades of hype, fads, disappointment, and broken promises have instilled some sense of reality into the expectations most technical people have for “AI”, if not into those working in the field and those they bamboozle with the sixth (or is it the sixteenth) generation of AI bafflegab.

AI researchers sometimes defend their field by saying “If it works, it isn't AI”, by which they mean that as soon as a difficult problem once considered within the domain of artificial intelligence—optical character recognition, playing chess at the grandmaster level, recognising faces in a crowd—is solved, it's no longer considered AI but simply another computer application, leaving AI with the remaining unsolved problems. There is certainly some truth in this, but a closer look gives the lie to the claim that these solutions, achieved with enormous effort on the part of numerous researchers, and with the application, in most cases, of computing power undreamed of in the early days of AI, actually represent “intelligence”, or at least what one regards as intelligent behaviour on the part of a living brain.

First of all, in no case did a computer “learn” how to solve these problems in the way a human or other organism does; in every case experts analysed the specific problem domain in great detail, developed special-purpose solutions tailored to the problem, and then implemented them on computing hardware which in no way resembles the human brain. Further, each of these “successes” of AI is useless outside its narrow scope of application: a chess-playing computer cannot read handwriting, a speech recognition program cannot identify faces, and a natural language query program cannot solve mathematical “word problems” which pose no difficulty to fourth graders. And while many of these programs are said to be “trained” by presenting them with collections of stimuli and desired responses, no amount of such training will permit, say, an optical character recognition program to learn to write limericks. Such programs can certainly be useful, but nothing other than the fact that they solve problems which were once considered difficult in an age when computers were much slower and had limited memory resources justifies calling them “intelligent”, and outside the marketing department, few people would remotely consider them so.

The subject of this ambitious book is not “artificial intelligence” but intelligence: the real thing, as manifested in the higher cognitive processes of the mammalian brain, embodied, by all the evidence, in the neocortex. One of the most fascinating things about the neocortex is how much a creature can do without one, for only mammals have one. Reptiles, birds, amphibians, fish, and even insects (which barely have a brain at all) exhibit complex behaviour, perception of and interaction with their environment, and adaptation to an extent which puts to shame the much-vaunted products of “artificial intelligence”, and yet they all do so without a neocortex at all. In this book, the author hypothesises that the neocortex evolved in mammals as an add-on to the old brain (essentially, what computer architects would call a “bag hanging on the side of the old machine”) which implements a multi-level hierarchical associative memory for patterns and a complementary decoder from patterns to detailed low-level behaviour which, wired through the old brain to the sensory inputs and motor controls, dynamically learns spatial and temporal patterns and uses them to make predictions which are fed back to the lower levels of the hierarchy, which in turn signal whether further inputs confirm or deny them. The ability of the high-level cortex to correctly predict inputs is what we call “understanding” and it is something which no computer program is presently capable of doing in the general case.

Much of the recent and present-day work in neuroscience has been devoted to imaging where the brain processes various kinds of information. While fascinating and useful, these investigations may overlook one of the most striking things about the neocortex: that almost every part of it, whether devoted to vision, hearing, touch, speech, or motion, appears to have more or less the same structure. This observation, made by Vernon B. Mountcastle in 1978, suggests there may be a common cortical algorithm by which all of these seemingly disparate forms of processing are done. Consider: by the time sensory inputs reach the brain, they are all in the form of spikes transmitted by neurons, and all outputs are sent in the same form, regardless of their ultimate effect. Further, evidence of plasticity in the cortex is abundant: in cases of damage, the brain seems to be able to re-wire itself to transfer a function to a different region of the cortex. In a long (70 page) chapter, the author presents a sketchy model of what such a common cortical algorithm might be, and how it may be implemented within the known physiological structure of the cortex.

The author is a founder of Palm Computing and Handspring (which was subsequently acquired by Palm). He went on to found the Redwood Neuroscience Institute, which has now become part of the Helen Wills Neuroscience Institute at the University of California, Berkeley, and in March of 2005 founded Numenta, Inc. with the goal of developing computer memory systems based on the model of the neocortex presented in this book.

Some academic scientists may sniff at the pretensions of a (very successful) entrepreneur diving into their speciality and trying to figure out how the brain works at a high level. But, hey, nobody else seems to be doing it—the computer scientists are hacking away at their monster programs and parallel machines, the brain community seems stuck on functional imaging (like trying to reverse-engineer a microprocessor in the nineteenth century by looking at its gross chemical and electrical properties), and the neuron experts are off dissecting squid: none of these seem likely to lead to an understanding (there's that word again!) of what's actually going on inside their own tenured, taxpayer-funded skulls. There is undoubtedly much that is wrong in the author's speculations, but then he admits that from the outset and, admirably, presents an appendix containing eleven testable predictions, each of which can falsify all or part of his theory. I've long suspected that intelligence has more to do with memory than computation, so I'll confess to being predisposed toward the arguments presented here, but I'd be surprised if any reader didn't find themselves thinking about their own thought processes in a different way after reading this book. You won't find the answers to the mysteries of the brain here, but at least you'll discover many of the questions worth pondering, and perhaps an idea or two worth exploring with the vast computing power at the disposal of individuals today and the boundless resources of data in all forms available on the Internet.

December 2006 Permalink

Hey, Anthony J.G. ed. Feynman and Computation. Boulder, CO: Westview Press, 2002. ISBN 0-8133-4039-X.

September 2002 Permalink

Howard, Michael, David LeBlanc, and John Viega. 19 Deadly Sins of Software Security. Emeryville, CA: Osborne, 2005. ISBN 0-07-226085-8.
During his brief tenure as director of the National Cyber Security Division of the U.S. Department of Homeland Security, Amit Yoran (who wrote the foreword to this book) got a lot of press attention when he claimed, “Ninety-five percent of software bugs are caused by the same 19 programming flaws.” The list of these 19 dastardly defects was assembled by John Viega who, with his two co-authors, both of whom worked on computer security at Microsoft, attempts to exploit its notoriety in this poorly written, jargon-filled, and utterly worthless volume. Of course, I suppose that's what one should expect when a former official of the agency of geniuses who humiliate millions of U.S. citizens every day to protect them from the peril of grandmothers with exploding sneakers teams up with a list of authors that includes a former “security architect for Microsoft's Office division”—why does the phrase “macro virus” immediately come to mind?

Even after reading this entire ramble on the painfully obvious, I cannot remotely guess who the intended audience was supposed to be. Software developers who know enough to decode what the acronym-packed (many never or poorly defined) text is trying to say are already aware of the elementary vulnerabilities being discussed and ways to mitigate them. Those without knowledge of competent programming practice are unlikely to figure out what the authors are saying, since their explanations in most cases assume the reader is already aware of the problem. The book is also short (281 pages), generous with white space, and packed with filler: the essential message of what to look out for in code can be summarised in a half-page table; in fact, it has been, on page 262! Not only does every chapter end with a summary of “do” and “don't” recommendations, all of these lists are duplicated in a ten page appendix at the end, presumably added because the original manuscript was too short. Other obvious padding includes examples of trivial code in a long list of languages (including proprietary trash such as C#, Visual Basic, and the .NET API); around half of the code samples are Microsoft-specific, as are the “Other Resources” at the end of each chapter. My favourite example is on pp. 176–178, which gives sample code showing how to read a password from a file (instead of idiotically embedding it in an application) in four different programming languages: three of them Microsoft-specific.

Like many bad computer books, this one seems to assume that programmers can learn only from long enumerations of specific items, as opposed to a theoretical understanding of the common cause which underlies them all. In fact, a total of eight chapters on supposedly different “deadly sins” can be summed up in the following admonition, “never blindly trust any data that comes from outside your complete control”. I had learned this both from my elders and brutal experience in operating system debugging well before my twentieth birthday. Apart from the lack of content and ill-defined audience, the authors write in a dialect of jargon and abbreviations which is probably how morons who work for Microsoft speak to one another: “app”, “libcall”, “proc”, “big-honking”, “admin”, “id” litter the text, and the authors seem to believe the word for a security violation is spelt “breech”. It's rare that I read a technical book in any field from which I learn not a single thing, but that's the case here. Well, I suppose I did learn that a prominent publisher and forty dollar cover price are no guarantee the content of a book will be of any value. Save your money—if you're curious about which 19 “sins” were chosen, just visit the Amazon link above and display the back cover of the book, which contains the complete list.

September 2006 Permalink

Knuth, Donald E. Literate Programming. Stanford: Center for the Study of Language and Information, 1992. ISBN 0-937073-80-6.

February 2001 Permalink

Knuth, Donald E. and Silvio Levy. The CWEB System of Structured Documentation. Reading, MA: Addison-Wesley, 1994. ISBN 0-201-57569-8.

April 2001 Permalink

Kopparapu, Chandra. Load Balancing Servers, Firewalls, and Caches. New York: John Wiley & Sons, 2002. ISBN 0-471-41550-2.
Don't even think about deploying a server farm or geographically dispersed mirror sites without reading this authoritative book. The Internet has become such a mountain of interconnected kludges that something as conceptually simple as spreading Web and other Internet traffic across a collection of independent servers or sites in the interest of increased performance and fault tolerance becomes a matter of enormous subtlety and hideous complexity. Most of the problems come from the need for “session persistence”: when a new user arrives at your site, you can direct them to any available server based on whatever load balancing algorithm you choose, but if the user's interaction with the server involves dynamically generated content produced by the server (for example, images generated by Earth and Moon Viewer, or items the user places in their shopping cart at a commerce site), subsequent requests by the user must be directed to the same server, as only it contains the state of the user's session.

(Some load balancer vendors will try to persuade you that session persistence is a design flaw in your Web applications which you should eliminate by making them stateless or by using a common storage pool shared by all the servers. Don't believe this. I defy you to figure out how an application as simple as Earth and Moon Viewer, which does nothing more complicated than returning a custom Web page which contains a dynamically generated embedded image, can be made stateless. And shared backing store [for example, Network Attached Storage servers] has its own scalability and fault tolerance challenges.)

Almost any simple scheme you can come up with to get around the session persistence problem will be torpedoed by one or more of the kludges and hacks a user's packets traverse between client and server: NAT, firewalls, proxy servers, content caches, etc. Consider what at first appears to be a foolproof scheme (albeit sub-optimal for load distribution): simply hash the client's IP address into a set of bins, one for each server, and direct the packets accordingly. Certainly that would work, right? Wrong: huge ISPs such as AOL and EarthLink have farms of proxy servers between their customers and the sites they contact, and these proxy servers are themselves load balanced in a non-persistent manner. So even two TCP connections from the same browser retrieving, say, the text and an image from a single Web page, may arrive at your site apparently originating from different IP addresses!
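The failure mode is easy to demonstrate. Here is a toy sketch of the naïve address-hashing scheme (my own illustration, not from the book; the function name and the example addresses are invented):

```python
import ipaddress

def pick_server(client_ip: str, n_servers: int) -> int:
    """Naive persistence: hash the client address into a server bin."""
    return int(ipaddress.ip_address(client_ip)) % n_servers

# A directly connected client is persistent: the same address always
# maps to the same server, so its session state is always found.
assert pick_server("203.0.113.7", 4) == pick_server("203.0.113.7", 4)

# But behind a non-persistently load-balanced proxy farm, two requests
# from the *same* browser can emerge from different proxy addresses...
proxy_ips = ["198.51.100.1", "198.51.100.2"]

# ...and therefore land in different bins, losing the session.
print({pick_server(ip, 4) for ip in proxy_ips})
```

The last line yields more than one server index: each proxy hop, invisible to the site, silently defeats the persistence the hash was supposed to guarantee.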

This and dozens of other gotchas and ways to work around them are described in detail in this valuable book, which is entirely vendor-neutral, except for occasionally mentioning products to illustrate different kinds of architectures. It's a lot better to slap your forehead every few pages as you discover something else you didn't think of which will sabotage your best-laid plans than to pull your hair out later, after putting a clever and costly scheme into production and discovering that it doesn't work. When I started reading this book, I had no idea how I was going to solve the load balancing problem for the Fourmilab site, and now I know precisely how I'm going to proceed. This isn't a book you read for entertainment, but if you need to know this stuff, it's a great place to learn it.

February 2005 Permalink

Kurzweil, Ray. The Singularity Is Near. New York: Viking, 2005. ISBN 0-670-03384-7.
What happens if Moore's Law—the annual doubling of computing power at constant cost—just keeps on going? In this book, inventor, entrepreneur, and futurist Ray Kurzweil extrapolates the long-term faster-than-exponential growth (the exponent is itself growing exponentially) in computing power to the point where the computational capacity of the human brain is available for about US$1000 (around 2020, he estimates), reverse engineering and emulation of human brain structure permits machine intelligence indistinguishable from that of humans as defined by the Turing test (around 2030), and the subsequent (and he believes inevitable) runaway growth in artificial intelligence leads to a technological singularity around 2045, when US$1000 will purchase computing power comparable to that of all presently-existing human brains and the new intelligence created in that single year will be a billion times greater than that of the entire intellectual heritage of human civilisation prior to that date. He argues that the inhabitants of this brave new world, having transcended biological computation in favour of nanotechnological substrates “trillions of trillions of times more capable”, will remain human, having preserved their essential identity and evolutionary heritage across this leap to Godlike intellectual powers. Then what? One might as well have asked an ant to speculate on what newly-evolved hominids would end up accomplishing, as the gap between ourselves and these super cyborgs (some of the precursors of which the author argues are alive today) is probably greater than that between arthropod and anthropoid.

Throughout this tour de force of boundless technological optimism, one is impressed by the author's adamantine intellectual integrity. This is not an advocacy document—in fact, Kurzweil's view is that the events he envisions are essentially inevitable given the technological, economic, and moral (curing disease and alleviating suffering) dynamics driving them. Potential roadblocks are discussed candidly, along with the existential risks posed by the genetics, nanotechnology, and robotics (GNR) revolutions which will set the stage for the singularity. A chapter is devoted to responding to critics of various aspects of the argument, in which opposing views are treated with respect.

I'm not going to expound further in great detail. I suspect a majority of people who read these comments will, in all likelihood, read the book themselves (if they haven't already) and make up their own minds about it. If you are at all interested in the evolution of technology in this century and its consequences for the humans who are creating it, this is certainly a book you should read. The balance of these remarks discuss various matters which came to mind as I read the book; they may not make much sense unless you've read it (You are going to read it, aren't you?), but may highlight things to reflect upon as you do.

  • Switching off the simulation. Page 404 raises a somewhat arcane risk I've pondered at some length. Suppose our entire universe is a simulation run on some super-intelligent being's computer. (What's the purpose of the universe? It's a science fair project!) What should we do to avoid having the simulation turned off, which would be bad? Presumably, the most likely reason to stop the simulation is that it's become boring. Going through a technological singularity, either from the inside or from the outside looking in, certainly doesn't sound boring, so Kurzweil argues that working toward the singularity protects us, if we be simulated, from having our plug pulled. Well, maybe, but suppose the explosion in computing power accessible to the simulated beings (us) at the singularity exceeds that available to run the simulation? (This is plausible, since post-singularity computing rapidly approaches its ultimate physical limits.) Then one imagines some super-kid running top to figure out what's slowing down the First Superbeing Shooter game he's running and killing the CPU hog process. There are also things we can do which might increase the risk of the simulation's being switched off. Consider, as I've proposed, precision fundamental physics experiments aimed at detecting round-off errors in the simulation (manifested, for example, as small violations of conservation laws). Once the beings in the simulation twig to the fact that they're in a simulation and that their reality is no more accurate than double precision floating point, what's the point to letting it run?
  • A hundred bits per atom? In the description of the computational capacity of a rock (p. 131), the calculation assumes that 100 bits of memory can be encoded in each atom of a disordered medium. I don't get it; even reliably storing a single bit per atom is difficult to envision. Using the “precise position, spin, and quantum state” of a large ensemble of atoms as mentioned on p. 134 seems highly dubious.
  • Luddites. The risk from anti-technology backlash is discussed in some detail. (“Ned Ludd” himself joins in some of the trans-temporal dialogues.) One can imagine the next generation of anti-globalist demonstrators taking to the streets to protest the “evil corporations conspiring to make us all rich and immortal”.
  • Fundamentalism. Another risk is posed by fundamentalism, not so much of the religious variety, but rather fundamentalist humanists who perceive the migration of humans to non-biological substrates (at first by augmentation, later by uploading) as repellent to their biological conception of humanity. One is inclined, along with the author, simply to wait until these folks get old enough to need a hip replacement, pacemaker, or cerebral implant to reverse a degenerative disease to motivate them to recalibrate their definition of “purely biological”. Still, I'm far from the first to observe that Singularitarianism (chapter 7) itself has some things in common with religious fundamentalism. In particular, it requires faith in rationality (which, as Karl Popper observed, cannot be rationally justified), and that the intentions of super-intelligent beings, as Godlike in their powers compared to humans as we are to Saccharomyces cerevisiae, will be benign and that they will receive us into eternal life and bliss. Haven't I heard this somewhere before? The main difference is that the Singularitarian doesn't just aspire to Heaven, but to Godhood Itself. One downside of this may be that God gets quite irate.
  • Vanity. I usually try to avoid the “Washington read” (picking up a book and flipping immediately to the index to see if I'm in it), but I happened to notice in passing I made this one, for a minor citation in footnote 47 to chapter 2.
  • Spindle cells. The material about “spindle cells” on pp. 191–194 is absolutely fascinating. These are very large, deeply and widely interconnected neurons which are found only in humans and a few great apes. Humans have about 80,000 spindle cells, while gorillas have 16,000, bonobos 2,100 and chimpanzees 1,800. If you're intrigued by what makes humans human, this looks like a promising place to start.
  • Speculative physics. The author shares my interest in physics verging on the fringe, and, turning the pages of this book, we come across such topics as possible ways to exceed the speed of light, black hole ultimate computers, stable wormholes and closed timelike curves (a.k.a. time machines), baby universes, cold fusion, and more. Now, none of these things is in any way relevant to nor necessary for the advent of the singularity, which requires only well-understood mainstream physics. The speculative topics enter primarily in discussions of the ultimate limits on a post-singularity civilisation and the implications for the destiny of intelligence in the universe. In a way they may distract from the argument, since a reader might be inclined to dismiss the singularity as yet another woolly speculation, which it isn't.
  • Source citations. The end notes contain many citations of articles in Wired, which I consider an entertainment medium rather than a reliable source of technological information. There are also references to articles in Wikipedia, where any idiot can modify anything any time they feel like it. I would not consider any information from these sources reliable unless independently verified from more scholarly publications.
  • “You apes wanna live forever?” Kurzweil doesn't just anticipate the singularity, he hopes to personally experience it, to which end (p. 211) he ingests “250 supplements (pills) a day and … a half-dozen intravenous therapies each week”. Setting aside the shots, just envision two hundred and fifty pills each and every day! That's 1,750 pills a week or, if you're awake sixteen hours a day, an average of more than 15 pills per waking hour, or one pill about every four minutes (one presumes they are swallowed in batches, not spaced out, which would make for a somewhat odd social life). Between the year 2000 and the estimated arrival of human-level artificial intelligence in 2030, he will swallow in excess of two and a half million pills, which makes one wonder what the probability of choking to death on any individual pill might be. He remarks, “Although my program may seem extreme, it is actually conservative—and optimal (based on my current knowledge).” Well, okay, but I'd worry about a “strategy for preventing heart disease [which] is to adopt ten different heart-disease-prevention therapies that attack each of the known risk factors” running into unanticipated interactions, given how everything in biology tends to connect to everything else. There is little discussion of the alternative approach to immortality with which many nanotechnologists of the mambo chicken persuasion are enamoured, which involves severing the heads of recently deceased individuals and freezing them in liquid nitrogen in sure and certain hope of the resurrection unto eternal life.

October 2005 Permalink

Kurzweil, Ray. The Age of Spiritual Machines. New York: Penguin Books, 1999. ISBN 978-0-14-028202-3.
Ray Kurzweil is one of the most vocal advocates of the view that the exponential growth in computing power (and allied technologies such as storage capacity and communication bandwidth) at constant cost which we have experienced for the last half century will continue for the foreseeable future: in all likelihood for the entire twenty-first century. This despite a multitude of well-grounded arguments that fundamental physical limits on the underlying substrates will bring it to an end, all of which have so far proven wrong. Continued exponential growth in a technology over so long a period is unprecedented in the human experience, and although an exponential curve is self-similar, its consequences as perceived by observers whose own criteria for evaluation remain more or less constant will be seen to reach a “knee”, after which they essentially go vertical and defy prediction. In The Singularity Is Near (October 2005), Kurzweil argues that once computers exceed the capability of the human brain and begin to design their own successors, an almost instantaneous (in terms of human perception) blow-off will occur, with computers rapidly converging on the ultimate physical limits on computation, and with capabilities so far beyond those of humans (or even human society as a whole) that attempting to envision their capabilities or intentions is as hopeless as a microorganism's trying to master quantum field theory. You might want to review my notes on 2005's The Singularity Is Near before reading the balance of these comments: they provide context for the extreme events Kurzweil envisions in the coming decades, and there are no “spoilers” for the present book.

When assessing the reliability of predictions, it can be enlightening to examine earlier forecasts from the same source, especially if they cover a period of time which has come and gone in the interim. This book, published in 1999 near the very peak of the dot-com bubble, provides such an opportunity, and it serves as a useful calibration for the plausibility of Kurzweil's more recent speculations on the future of computing and humanity. The author's view of the likely course of the 21st century evolved substantially between this book and Singularity—in particular, this book envisions no singularity beyond which the course of events becomes incomprehensible to present-day human intellects. In the present volume, which employs the curious literary device of “trans-temporal chat” between the author, a MOSH (Mostly Original Substrate Human), and a reader, Molly, who reports from various points in the century her personal experiences living through it, we encounter a future which, however foreign, can at least be understood in terms of our own experience.

This view of the human prospect is very odd indeed, and to this reader more disturbing (verging on creepy) than the approach of a technological singularity. What we encounter here are beings, whether augmented humans or software intelligences with no human ancestry whatsoever, that despite having at hand, by the end of the century, mental capacity per individual on the order of 10²⁴ times that of the human brain (and maybe hundreds of orders of magnitude more if quantum computing pans out), still have identities, motivations, and goals which remain comprehensible to humans today. This seems dubious in the extreme to me, and my impression from Singularity is that the author has rethought this as well.

Starting from the publication date of 1999, the book serves up surveys of the scene in that year, 2009, 2019, 2029, and 2099. The chapter describing the state of computing in 2009 makes many specific predictions. The following are those which the author lists in the “Time Line” on pp. 277–278. Many of the predictions in the main text seem to me to be more ambitious than these, but I shall go with those the author chose as most important for the summary. I have reformatted these as a numbered list to make them easier to cite.

  1. A $1,000 personal computer can perform about a trillion calculations per second.
  2. Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
  3. Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
  4. The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
  5. Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
  6. Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
  7. Pocket-sized reading machines for the blind and visually impaired, “listening machines” (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
  8. Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
  9. Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which has been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvements in the price performance of computing.
  10. Human musicians routinely jam with cybernetic musicians.
  11. Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
  12. The neo-Luddite movement is growing.

I'm not going to score these in detail, as that would be both tedious and an invitation to endless quibbling over particulars, but I think most readers will agree that this picture of computing in 2009 substantially overestimates the actual state of affairs in the decade since 1999. Only item (3) seems to me to be arguably on the way to achievement, and yet I do not have a single wireless peripheral connected to any of my computers and Wi-Fi coverage remains spotty even in 2011. Things get substantially more weird the further out you go, and of course any shortfall in exponential growth lowers the baseline for further extrapolation, shifting subsequent milestones further out.

I find the author's accepting continued exponential growth as dogma rather off-putting. Granted, few people expected the trend we've lived through to continue for so long, but eventually you begin to run into physical constraints which seem to have little wiggle room for cleverness: the finite size of atoms, the electron's charge, and the speed of light. There's nothing wrong with taking unbounded exponential growth as a premise and then exploring what its implications would be, but it seems to me any forecast which is presented as a plausible future needs to spend more time describing how we'll actually get there: arm waving about three-dimensional circuitry, carbon nanotubes, and quantum computing doesn't close the sale for me. The author entirely lost me with note 3 to chapter 12 (p. 342), which concludes:

If engineering at the nanometer scale (nanotechnology) is practical in the year 2032, then engineering at the picometer scale should be practical in about forty years later (because 5.6⁴ ≈ 1,000), or in the year 2072. Engineering at the femtometer (one thousandth of a trillionth of a meter, also referred to as a quadrillionth of a meter) scale should be feasible, therefore, by around the year 2112. Thus I am being a bit conservative to say that femtoengineering is controversial in 2099.

Nanoengineering involves manipulating individual atoms. Picoengineering will involve engineering at the level of subatomic particles (e.g., electrons). Femtoengineering will involve engineering inside a quark. This should not seem particularly startling, as contemporary theories already postulate intricate mechanisms within quarks.

This is just so breathtakingly wrong I am at a loss for where to begin, and it was just as completely wrong when the book was published two decades ago as it is today; nothing relevant to these statements has changed. My guess is that Kurzweil was thinking of “intricate mechanisms” within hadrons and mesons, particles made up of quarks and gluons, and not within quarks themselves, which then and now are believed to be point particles with no internal structure whatsoever and are, in any case, impossible to isolate from the particles they compose. When Richard Feynman envisioned molecular nanotechnology in 1959, he based his argument on the well-understood behaviour of atoms known from chemistry and physics, not a leap of faith based on drawing a straight line on a sheet of semi-log graph paper. I doubt one could find a single current practitioner of subatomic physics equally versed in the subject as was Feynman in atomic physics who would argue that engineering at the level of subatomic particles would be remotely feasible. (For atoms, biology provides an existence proof that complex self-replicating systems of atoms are possible. Despite the multitude of environments in the universe since the big bang, there is precisely zero evidence subatomic particles have ever formed structures more complicated than those we observe today.)

I will not further belabour the arguments in this vintage book. It is an entertaining read and will certainly expand your horizons as to what is possible and introduce you to visions of the future you almost certainly have never contemplated. But for a view of the future which is simultaneously more ambitious and plausible, I recommend The Singularity Is Near.

June 2011 Permalink

Kurzweil, Ray. How to Create a Mind. New York: Penguin Books, 2012. ISBN 978-0-14-312404-7.
We have heard so much about the exponential growth of computing power available at constant cost that we sometimes overlook the fact that this is just one of a number of exponentially compounding technologies which are changing our world at an ever-accelerating pace. Many of these technologies are interrelated: for example, the availability of very fast computers and large storage has contributed to increasingly making biology and medicine information sciences in the era of genomics and proteomics—the cost of sequencing a human genome, since the completion of the Human Genome Project, has fallen faster than the increase of computer power.

Among these seemingly inexorably rising curves have been the spatial and temporal resolution of the tools we use to image and understand the structure of the brain. So rapid has been the progress that most of the detailed understanding of the brain dates from the last decade, and new discoveries are arriving at such a rate that the author had to make substantial revisions to the manuscript of this book upon several occasions after it was already submitted for publication.

The focus here is primarily upon the neocortex, a part of the brain which exists only in mammals and is identified with “higher level thinking”: learning from experience, logic, planning, and, in humans, language and abstract reasoning. The older brain, which mammals share with other species, is discussed in chapter 5, but in mammals it is difficult to separate entirely from the neocortex, because the latter has “infiltrated” the old brain, wiring itself into its sensory and action components, allowing the neocortex to process information and override responses which are automatic in creatures such as reptiles.

Not long ago, it was thought that the brain was a soup of neurons connected in an intricately tangled manner, whose function could not be understood without comprehending the quadrillion connections in the neocortex alone, each with its own weight to promote or inhibit the firing of a neuron. Now, however, it appears, based upon improved technology for observing the structure and operation of the brain, that the fundamental unit in the brain is not the neuron, but a module of around 100 neurons which acts as a pattern recogniser. The internal structure of these modules seems to be wired up according to directions from the genome, but the weights of the interconnections within the module are adjusted as the module is trained based upon the inputs presented to it. The individual pattern recognition modules are wired both to pass information about matches up to higher-level modules and to send predictions back down to lower-level recognisers. For example, if you've seen the letters “appl” and the next and final letter of the word is a smudge, you'll have no trouble figuring out what the word is. (I'm not suggesting the brain works literally like this, just using this as an example to illustrate hierarchical pattern recognition.)
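The “appl” example can be sketched as a toy two-level recogniser, with a word-level module using its vocabulary to resolve an ambiguous letter reported from below. This is purely illustrative (the vocabulary, the scoring, and the `?` smudge marker are my inventions, not anything from the book, and certainly not a model of actual cortical circuitry):

```python
# Toy hierarchical pattern recognition: a word-level recogniser scores
# candidate words against low-level letter evidence, so an ambiguous
# ("smudged") letter is resolved top-down by the best-matching word.

VOCABULARY = ["apple", "angle", "maple", "grape"]

def letter_match(observed, candidate):
    """Low-level recogniser: score how well observed letters fit a word.
    '?' marks a smudge, which matches any letter with partial credit."""
    if len(observed) != len(candidate):
        return 0.0
    score = 0.0
    for o, c in zip(observed, candidate):
        if o == c:
            score += 1.0   # clean match
        elif o == "?":
            score += 0.5   # smudge: resolved by top-down prediction
        else:
            return 0.0     # contradiction: reject this candidate
    return score / len(candidate)

def recognise(observed):
    """High-level recogniser: pick the vocabulary word best supported
    by the low-level evidence, or None if nothing matches."""
    scores = {w: letter_match(observed, w) for w in VOCABULARY}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(recognise("appl?"))   # prints: apple
```

The essential point the review makes survives even in this cartoon: the smudged final letter is never identified in isolation; it is the higher-level context that supplies the answer.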

Another important discovery is that the architecture of these pattern recogniser modules is pretty much the same regardless of where they appear in the neocortex, or what function they perform. In a normal brain, there are distinct portions of the neocortex associated with functions such as speech, vision, complex motion sequencing, etc., and yet the physical structure of these regions is nearly identical: only the weights of the connections within the modules and the dynamically-adapted wiring among them differ. This explains how patients recovering from brain damage can re-purpose one part of the neocortex to take over (within limits) for the portion lost.

Further, the neocortex is not the rat's nest of random connections we recently thought it to be, but is instead hierarchically structured with a topologically three dimensional “bus” of pre-wired interconnections which can be used to make long-distance links between regions.

Now, where this begins to get very interesting is when we contemplate building machines with the capabilities of the human brain. While emulating something at the level of neurons might seem impossibly daunting, if you instead assume the building block of the neocortex is on the order of 300 million more or less identical pattern recognisers wired together at a high level in a regular hierarchical manner, this is something we might be able to think about doing, especially since the brain works almost entirely in parallel, and one thing we've gotten really good at in the last half century is making lots and lots of tiny identical things. The implication of this is that as we continue to delve deeper into the structure of the brain and computing power continues to grow exponentially, there will come a point in the foreseeable future where emulating an entire human neocortex becomes feasible. This will permit building a machine with human-level intelligence without translating the mechanisms of the brain into those comparable to conventional computer programming. The author predicts “this will first take place in 2029 and become routine in the 2030s.”
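As a sanity check, the figures quoted in this review (roughly 300 million modules of about 100 neurons each, and a quadrillion connections in the neocortex) can be cross-checked with a little arithmetic. The per-neuron figure that falls out is not from the book; it is derived here, and it lands within an order of magnitude of the commonly cited estimate of about 10,000 synapses per neuron:

```python
# Back-of-envelope consistency check of the figures quoted above.
modules = 300e6            # ~300 million pattern-recogniser modules
neurons_per_module = 100   # ~100 neurons per module
connections = 1e15         # "quadrillion connections in the neocortex alone"

neurons = modules * neurons_per_module
connections_per_neuron = connections / neurons

print(f"neurons in the neocortex: {neurons:.1e}")               # 3.0e+10
print(f"connections per neuron: {connections_per_neuron:.0f}")  # 33333
```

That the three numbers hang together this way is mildly reassuring, though all of them are round-figure estimates.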

Assuming the present exponential growth curves continue (and I see no technological reason to believe they will not), the 2020s are going to be a very interesting decade. Just as few people imagined five years ago that self-driving cars were possible, while today most major auto manufacturers have projects underway to bring them to market in the near future, in the 2020s we will see the emergence of computational power which is sufficient to “brute force” many problems which were previously considered intractable. Just as search engines and free encyclopedias have augmented our biological minds, allowing us to answer questions which, a decade ago, would have taken days in the library if we even bothered at all, the 300 million pattern recognisers in our biological brains are on the threshold of having access to billions more in the cloud, trained by interactions with billions of humans and, perhaps eventually, many more artificial intelligences. I am not talking here about implanting direct data links into the brain or uploading human brains to other computational substrates although both of these may happen in time. Instead, imagine just being able to ask a question in natural language and get an answer to it based upon a deep understanding of all of human knowledge. If you think this is crazy, reflect upon how exponential growth works or imagine travelling back in time and giving a demo of Google or Wolfram Alpha to yourself in 1990.

Ray Kurzweil, after pioneering inventions in music synthesis, optical character recognition, text to speech conversion, and speech recognition, is now a director of engineering at Google.

In the Kindle edition, the index cites page numbers in the print edition to which the reader can turn since the electronic edition includes real page numbers. Index items are not, however, directly linked to the text cited.

February 2014 Permalink

Lanier, Jaron. You Are Not a Gadget. New York: Alfred A. Knopf, 2010. ISBN 978-0-307-26964-5.
In The Fatal Conceit (March 2005) Friedrich A. Hayek observed that almost any noun in the English language is devalued by preceding it with “social”. In this book, virtual reality pioneer, musician, and visionary Jaron Lanier argues that the digital revolution, which began in the 1970s with the advent of the personal computer and became a new foundation for human communication and interaction with widespread access to the Internet and the Web in the 1990s, took a disastrous wrong turn in the early years of the 21st century with the advent of the so-called “Web 2.0” technologies and “social networking”—hey, Hayek could've told you!

Like many technologists, the author was optimistic that with the efflorescence of the ubiquitous Internet in the 1990s combined with readily-affordable computer power which permitted photorealistic graphics and high fidelity sound synthesis, a new burst of bottom-up creativity would be unleashed; creative individuals would be empowered to realise not just new art, but new forms of art, along with new ways to collaborate and distribute their work to a global audience. This Army of Davids (March 2006) world, however, seems to have been derailed or at least delayed, and instead we've come to inhabit an Internet and network culture which is darker and less innovative. Lanier argues that the phenomenon of technological “lock in” makes this particularly ominous, since regrettable design decisions whose drawbacks were not even perceived when they were made tend to become entrenched and almost impossible to remedy once they are widely adopted. (For example, just look at the difficulties in migrating the Internet to IPv6.) Application layer protocols become almost impossible to change fundamentally once a multitude of independently maintained applications rely upon them to intercommunicate.

Consider MIDI, which the author uses as an example of lock-in. Originally designed to allow music synthesisers and keyboards to interoperate, it embodies a keyboardist's view of the concept of a note, which is quite different from that, say, of a violinist or trombone player. Even with facilities such as pitch bend, there are musical articulations played on physical instruments which cannot be represented in MIDI sequences. But since MIDI has become locked in as the lingua franca of electronic music production, in effect the musical vocabulary has been limited to those concepts which can be represented in MIDI, resulting in a digital world which is impoverished in potential compared to the analogue instruments it aimed to replace.
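To make the MIDI limitation concrete: in the 1.0 wire format a note is nothing more than a 7-bit pitch number with on/off events, and continuous inflections must be approximated with the channel-wide 14-bit pitch-bend message, which affects every note sounding on the channel at once. A minimal sketch (the status bytes are standard MIDI 1.0; the helper functions and example values are my own):

```python
# Build raw MIDI 1.0 channel voice messages to show what the protocol
# can express. A "note" is a pitch 0-127 and velocity 0-127; anything
# continuous (a violinist's glide, a trombonist's slur) must be faked
# with pitch bend, which applies to the whole channel, not one note.

def note_on(channel, pitch, velocity):
    """Note On: status byte 0x90 | channel, then 7-bit pitch and velocity."""
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

def pitch_bend(channel, bend):
    """Pitch bend: one 14-bit value (0-16383, centre 8192) per channel,
    sent least-significant 7 bits first."""
    assert 0 <= bend < 16384
    return bytes([0xE0 | channel, bend & 0x7F, (bend >> 7) & 0x7F])

middle_c = note_on(0, 60, 100)        # MIDI note 60 = middle C
bent_up = pitch_bend(0, 8192 + 4096)  # bend halfway toward the upper limit
print(middle_c.hex(), bent_up.hex())  # prints: 903c64 e00060
```

Two simultaneous notes on the same channel cannot be bent independently, which is exactly the keyboardist's-eye view of music the review describes: discrete keys plus a single shared bend wheel.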

With the advent of “social networking”, we appear to be locking in a representation of human beings as database entries with fields chosen from a limited menu of choices, and hence, as with MIDI, flattening down the unbounded diversity and potential of human individuals to categories which, not coincidentally, resemble the demographic bins used by marketers to target groups of customers. Further, the Internet, through its embrace of anonymity and throwaway identities and consequent devaluing of reputation, encourages mob behaviour and “drive by” attacks on individuals which make many venues open to the public more like a slum than an affinity group of like-minded people. Lanier argues that many of the pathologies we observe in behaviour on the Internet are neither inherent nor inevitable, but rather the consequences of bad user interface design. But with applications built on social networking platforms proliferating as rapidly as me-too venture capital hoses money in their direction, we may be stuck with these regrettable decisions and their pernicious consequences for a long time to come.

Next, the focus turns to the cult of free and open source software, “cloud computing”, “crowd sourcing”, and the assumption that a “hive mind” assembled from a multitude of individuals collaborating by means of the Internet can create novel and valuable work and even assume some of the attributes of personhood. Now, this may seem absurd, but there are many people in the Silicon Valley culture to whom these are articles of faith, and since these people are engaged in designing the tools many of us will end up using, it's worth looking at the assumptions which inform their designs. Compared to what seemed the unbounded potential of the personal computer and Internet revolutions in their early days, what the open model of development has achieved to date seems depressingly modest: re-implementations of an operating system, text editor, and programming language all rooted in the 1970s, and creation of a new encyclopedia which is structured in the same manner as paper encyclopedias dating from a century ago—oh wow. Where are the immersive massively multi-user virtual reality worlds, or the innovative presentation of science and mathematics in an interactive exploratory learning environment, or new ways to build computer tools without writing code, or any one of the hundreds of breakthroughs we assumed would come along when individual creativity was unleashed by their hardware prerequisites becoming available to a mass market at an affordable price?

Not only have the achievements of the free and open movement been, shall we say, modest, but the other side of the “information wants to be free” creed has devastated traditional content providers such as the music publishing, newspaper, and magazine businesses. Now among many people there's no love lost for the legacy players in these sectors, and a sentiment of “good riddance” is common, if not outright gloating over their demise. But what hasn't happened, at least so far, is the expected replacement of these physical delivery channels with electronic equivalents which generate sufficient revenue to allow artists, journalists, and other primary content creators to make a living as they did before. Now, certainly, these occupations are a meritocracy where only a few manage to support themselves, let alone become wealthy, while far more never make it. But with the mass Internet now approaching its twentieth birthday, wouldn't you expect at least a few people to have figured out how to make it work for them and prospered as creators in this new environment? If so, where are they?

For that matter, what new musical styles, forms of artistic expression, or literary genres have emerged in the age of the Internet? The lack of a viable business model for such creations has led to a situation the author describes thus: “It's as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump.” One need only visit YouTube to see what he's talking about. Don't read the comments there—that path leads to despair, which is a low state.

Lanier's interests are eclectic, and a great many matters are discussed here including artificial intelligence, machine language translation, the financial crisis, zombies, neoteny in humans and human cultures, and cephalopod envy. Much of this is fascinating, and some is irritating, such as the discussion of the recent financial meltdown where it becomes clear the author simply doesn't know what he's talking about and misdiagnoses the causes of the catastrophe, which are explained so clearly in Thomas Sowell's The Housing Boom and Bust (March 2010).

I believe this is the octopus video cited in chapter 14. The author was dubious, upon viewing this, that it wasn't a computer graphics trick. I have not, as he has, dived the briny deep to meet cephalopods on their own turf, and I remain sceptical that the video represents what it purports to. This is one of the problems of the digital media age: when anything you can imagine can be persuasively computer synthesised, how can you trust any reportage of a remarkable phenomenon to be genuine if you haven't observed it for yourself?

Occasional aggravations aside, this is a thoughtful exploration of the state of the technologies which are redefining how people work, play, create, and communicate. Readers frustrated by the limitations and lack of imagination which characterise present-day software and network resources will discover, in reading this book, that tremendously empowering phrase, “it doesn't have to be that way”, and perhaps demand better of those bringing products to market, or embark upon building better tools themselves.

June 2010 Permalink

Lundstrom, David E. A Few Good Men from Univac. Cambridge, MA: MIT Press, 1987. ISBN 0-262-12120-4.
The author joined UNIVAC in 1955 and led the testing of the UNIVAC II which, unlike the UNIVAC I, was manufactured in the St. Paul area. (This book uses “Univac” as the name of the company and its computers; in my experience and in all the documents in my collection, the name, originally an acronym for “UNIVersal Automatic Computer”, was always written in all capitals: “UNIVAC”; that is the convention I shall use here.) He then worked on the development of the Navy Tactical Data System (NTDS) shipboard computer, which was later commercialised as the UNIVAC 490 real-time computer. The UNIVAC 1107 also used the NTDS circuit design and I/O architecture. In 1963, like many UNIVAC alumni, Lundstrom crossed the river to join Control Data, where he worked until retiring in 1985. At Control Data he was responsible for peripherals, terminals, and airline reservation system development. It was predictable but sad to observe how Control Data, founded by a group of talented innovators to escape the stifling self-destructive incompetence of UNIVAC management, rapidly built up its own political hierarchy which chased away its own best people, including Seymour Cray. It's as if at a board meeting somebody said, “Hey, we're successful now! Let's build a big office tower and fill it up with idiots and politicians to keep the technical geniuses from getting anything done.” Lundstrom provides an authentic view from the inside of the mainframe computer business over a large part of its history. His observations about why technology transfer usually fails and the destruction wreaked on morale by incessant reorganisations and management shifts in direction are worth pondering. Lundstrom's background is in hardware. In chapter 13, before describing software, he cautions that “Professional programmers are going to disagree violently with what I say.” Well, this professional programmer certainly did, but it's because most of what he goes on to say is simply wrong. 
But that's a small wart on an excellent, insightful, and thoroughly enjoyable book. This book is out of print; used copies are generally available but tend to be expensive—you might want to keep checking over a period of months as occasionally a bargain will come around.

December 2004 Permalink

Marasco, Joe. The Software Development Edge. Upper Saddle River, NJ: Addison-Wesley, 2005. ISBN 0-321-32131-6.
I read this book in manuscript form when it was provisionally titled The Psychology of Software Development.

December 2004 Permalink

McConnell, Brian. Beyond Contact: A Guide to SETI and Communicating with Alien Civilizations. Sebastopol, CA: O'Reilly, 2001. ISBN 0-596-00037-5.

April 2002 Permalink

Miranda, Eduardo Reck. Composing Music with Computers. Oxford: Focal Press, 2001. ISBN 0-240-51567-6.

May 2004 Permalink

Post, David G. In Search of Jefferson's Moose. New York: Oxford University Press, 2009. ISBN 978-0-19-534289-5.
In 1787, while serving as Minister to France, Thomas Jefferson took time out from his diplomatic duties to arrange to have shipped from New Hampshire across the Atlantic Ocean the complete skeleton, skin, and antlers of a bull moose, which was displayed in his residence in Paris. Jefferson was involved in a dispute with the Comte de Buffon, who argued that the fauna of the New World were degenerate compared to those of Europe and Asia. Jefferson concluded that no verbal argument or scientific evidence would be as convincing of the “structure and majesty of American quadrupeds” as seeing a moose in the flesh (or at least the bone), so he ordered one up for display.

Jefferson was a passionate believer in the exceptionality of the New World and the prospects for building a self-governing republic in its expansive territory. If it took hauling a moose all the way to Paris to convince Europeans disdainful of the promise of his nascent nation, then so be it—bring on the moose! Among Jefferson's voluminous writings, perhaps none expressed these beliefs as strongly as his magisterial Notes on the State of Virginia. The present book, subtitled “Notes on the State of Cyberspace”, takes Jefferson's work as a model and explores this new virtual place which has been built based upon a technology which simply sends packets of data from place to place around the world. The parallels between the largely unexplored North American continent of Jefferson's time and today's Internet are strong and striking, as the author illustrates with extensive quotations from Jefferson interleaved in the text (set in italics to distinguish them from the author's own words) which are as applicable to the Internet today as to the land west of the Alleghenies in the late 18th century.

Jefferson believed in building systems which could scale to arbitrary size without either losing their essential nature or becoming vulnerable to centralisation and the attendant loss of liberty and autonomy. And he believed that free individuals, living within such a system and with access to as much information as possible and the freedom to communicate without restrictions would self-organise to perpetuate, defend, and extend such a polity. While Europeans, notably Montesquieu, believed that self-governance was impossible in a society any larger than a city-state, and organised their national and imperial governments accordingly, Jefferson's 1784 plan for the government of new Western territory set forth an explicitly power law fractal architecture which, he believed, could scale arbitrarily large without depriving citizens of local control of matters which directly concerned them. This architecture is stunningly similar to that of the global Internet, and the bottom-up governance of the Internet to date (which Post explores in some detail) is about as Jeffersonian as one can imagine.

As the Internet has become a central part of global commerce and the flow of information in all forms, the eternal conflict between the decentralisers and champions of individual liberty (with confidence that free people will sort things out for themselves)—the Jeffersonians—and those who believe that only strong central authority and the vigorous enforcement of rules can prevent chaos—Hamiltonians—has emerged once again in the contemporary debate about “Internet governance”.

This is a work of analysis, not advocacy. The author, a law professor and regular contributor to The Volokh Conspiracy Web log, observes that, despite being initially funded by the U.S. Department of Defense, the development of the Internet to date has been one of the most Jeffersonian processes in history, and has scaled from a handful of computers in 1969 to a global network with billions of users and a multitude of applications never imagined by its creators, and all through consensual decision making and contractual governance with nary a sovereign gun-wielder in sight. So perhaps before we look to “fix” the unquestioned problems and challenges of the Internet by turning the Hamiltonians loose upon it, we should listen well to the wisdom of Jefferson, who has much to say which is directly applicable to exploring, settling, and governing this new territory which technology has opened up. This book is a superb way to imbibe the wisdom of Jefferson, while learning the basics of the Internet architecture and how it, in many ways, parallels that of Jefferson's time. Jefferson even spoke to intellectual property issues which read like today's news, railing against a “rascal” using an abusive patent of a long-existing device to extort money from mill owners (p. 197), and creating and distributing “freeware” including a design for a uniquely efficient plough blade based upon Newton's Principia which he placed in the public domain, having “never thought of monopolizing by patent any useful idea which happened to offer itself to me” (p. 196).

So astonishing was Jefferson's intellect that as you read this book you'll discover that he has a great deal to say about this new frontier we're opening up today. Good grief—did you know that the Oxford English Dictionary even credits Jefferson with being the first person to use the words “authentication” and “indecipherable” (p. 124)? The author's lucid explanations, deft turns of phrase, and agile leaps between the eighteenth and twenty-first centuries are worthy of the forbidding standard set by the man so extensively quoted here. Law professors do love their footnotes, and this is almost two books in one: the focused main text and the more rambling but fascinating footnotes, some of which span several pages. There is also an extensive list of references and sources for all of the Jefferson quotations in the end notes.

March 2009 Permalink

Purdy, Gregor N. Linux iptables Pocket Reference. Sebastopol, CA: O'Reilly, 2004. ISBN 0-596-00569-5.
Sure, you could just read the manual pages, but when your site is under attack and you're the “first responder”, this little book is just what you want in your sweaty fingers. It's also a handy reference to the fields in IP, TCP, UDP, and ICMP packets, which can be useful in interpreting packet dumps. Although intended as a reference, it's well worth taking the time (less than an hour) to read cover to cover. There are a number of very nice facilities in iptables/Netfilter which permit responding to common attacks. For example, the iplimit match allows blocking traffic from the bozone layer (yes, you—I know who you are and I know where you live) which ties up all of your HTTP server processes by connecting to them and then letting them time out or, slightly more sophisticated, feeding characters of a request every 20 seconds or so to keep it alive. The solution is:
    /sbin/iptables -A INPUT -p tcp --syn --dport 80 -m iplimit \
        --iplimit-above 20 --iplimit-mask 32 -j REJECT
Anybody who tries to open more than 20 connections will get whacked on each additional SYN packet. You can see whether this rule is affecting too many legitimate connections with the status query:
    /sbin/iptables -L -v
Geekly reading, to be sure, but just the thing if you're responsible for defending an Internet server or site from malefactors in the Internet Slum.

February 2005 Permalink

Ray, Erik T. and Jason McIntosh. Perl and XML. Sebastopol, CA: O'Reilly, 2002. ISBN 0-596-00205-X.

May 2003 Permalink

Reynolds, Glenn. An Army of Davids. Nashville: Nelson Current, 2006. ISBN 1-5955-5054-2.
In this book, law professor and über-blogger (InstaPundit.com) Glenn Reynolds explores how present and near-future technology is empowering individuals at the comparative expense of large organisations in fields as diverse as retailing, music and motion picture production, national security, news gathering, opinion journalism, and, looking further out, nanotechnology and desktop manufacturing, human longevity and augmentation, and space exploration and development (including Project Orion [pp. 228–233]—now there's a garage start-up I'd love to work on!). Individual empowerment is, like the technology which creates it, morally neutral: good people can do more good, and bad people can wreak more havoc. Reynolds is relentlessly optimistic, and I believe justifiably so; good people outnumber bad people by a large majority, and in a society which encourages them to be “a pack, not a herd” (the title of chapter 5), they will have the means in their hands to act as a societal immune system against hyper-empowered malefactors far more effective than heavy-handed top-down repression and fear-motivated technological relinquishment.

Anybody who's seeking “the next big thing” couldn't find a better place to start than this book. Chapters 2, 3 and 7, taken together, provide a roadmap for the devolution of work from downtown office towers to individual entrepreneurs working at home and in whatever environments attract them, and the emergence of “horizontal knowledge”, supplanting the top-down one-to-many model of the legacy media. There are probably a dozen ideas for start-ups with the potential of eBay and Amazon lurking in these chapters if you read them with the right kind of eyes. If the business and social model of the twenty-first century indeed comes to resemble that of the eighteenth, all of those self-reliant independent people are going to need lots of products and services they will find indispensable just as soon as somebody manages to think of them. Discovering and meeting these needs will pay well.

The “every person an entrepreneur” world sketched here raises the same concerns I expressed in regard to David Bolchover's The Living Dead (January 2006): this will be a wonderful world, indeed, for the intelligent and self-motivated people who will prosper once liberated from corporate cubicle indenture. But not everybody is like that: in particular, those people tend to be found on the right side of the bell curve, and for every one on the right, there's one equally far to the left. We have already made entire categories of employment for individuals with average or below-average intelligence redundant. In the eighteenth century, there were many ways in which such people could lead productive and fulfilling lives; what will they do in the twenty-first? Further, ever since Bismarck, government schools have been manufacturing worker-bees with little initiative, and essentially no concept of personal autonomy. As I write this, the élite of French youth is rioting over a proposal to remove what amounts to a guarantee of lifetime employment in a first job. How will people so thoroughly indoctrinated in collectivism fare in an individualist renaissance? As a law professor, the author spends much of his professional life in the company of high-intelligence, strongly-motivated students, many of whom contemplate an entrepreneurial career and in any case expect to be judged on their merits in a fiercely competitive environment. One wonders if his optimism might be tempered were he to spend comparable time with denizens of, say, the school of education. But the fact that there will be problems in the future shouldn't make us fear it—heaven knows there are problems enough in the present, and the last century was kind of a colossal monument to disaster and tragedy; whatever the future holds, the prescription of more freedom, more information, greater wealth and health, and less coercion presented here is certain to make it a better place to live.

The individualist future envisioned here has much in common with that foreseen in the 1970s by Timothy Leary, who coined the acronym “SMIILE” for “Space Migration, Intelligence Increase, Life Extension”. The “II” is alluded to in chapter 12 as part of the merging of human and machine intelligence in the singularity, but mightn't it make sense, as Leary advocated, to supplement longevity research with investigation of the nature of human intelligence and near-term means to increase it? Realising the promise and avoiding the risks of the demanding technologies of the future are going to require both intelligence and wisdom; shifting the entire bell curve to the right, combined with the wisdom of longer lives may be key in achieving the much to be desired future foreseen here.

InstaPundit visitors will be familiar with the writing style, which consists of relatively brief discussion of a multitude of topics, each with one or more references for those who wish to “read the whole thing” in more depth. One drawback of the print medium is that although many of these citations are Web pages, to get there you have to type in lengthy URLs for each one. An on-line edition of the end notes with all the on-line references as clickable links would be a great service to readers.

March 2006 Permalink

Rucker, Rudy. The Lifebox, the Seashell, and the Soul. New York: Thunder's Mouth Press, 2005. ISBN 1-56025-722-9.
I read this book in manuscript form. An online excerpt is available.

September 2004 Permalink

Schildt, Herbert. STL Programming from the Ground Up. Berkeley: Osborne, 1999. ISBN 0-07-882507-5.

May 2001 Permalink

Schmitt, Christopher. CSS Cookbook. Sebastopol, CA: O'Reilly, 2004. ISBN 0-596-00576-8.
It's taken a while, but Cascading Style Sheets have finally begun to live up to their promise of separating content from presentation on the Web, allowing a consistent design, specified in a single place and easily modified, to be applied to large collections of documents, and permitting content to be rendered in different ways depending on the media and audience: one style for online reading, another for printed output, an austere presentation for handheld devices, large type for readers with impaired vision, and a text-only format tailored for screen reader programs used by the blind. This book provides an overview of CSS solutions for common Web design problems, with sample code and screen shots illustrating what can be accomplished. It doesn't purport to be a comprehensive reference—you'll want to have Eric Meyer's Cascading Style Sheets: The Definitive Guide at hand as you develop your own CSS solutions, but Schmitt's book is valuable in showing how common problems can be solved in ways which aren't obvious from reading the specification or a reference book. Particularly useful for the real-world Web designer are Schmitt's discussion of which CSS features work and don't work in various popular browsers and suggestions of work-arounds to maximise the cross-platform portability of pages.

Many of the examples in this book are more or less obvious, and embody techniques with which folks who've rolled their own Movable Type style sheets will be familiar, but every chapter has one or more gems which caused this designer of minimalist Web pages to slap his forehead and exclaim, “I didn't know you could do that!” Chapter 9, which presents a collection of brutal hacks, many involving exploiting parsing bugs, for working around browser incompatibilities may induce nausea in those who cherish standards compliance or worry about the consequences of millions of pages on the Web containing ticking time bombs which will cause them to fall flat on their faces when various browser bugs are fixed. One glimpses here the business model of the Web site designer who gets paid when the customer is happy with how the site looks in Exploder and views remediation of incompatibilities down the road as a source of recurring revenue. Still, if you develop and maintain Web sites at the HTML level, there are many ideas here which can lead to more effective Web pages, and encourage you to dig deeper into the details of CSS.

January 2005 Permalink

Schneider, Ben Ross, Jr. Travels in Computerland. Reading, MA: Addison-Wesley, 1974. ISBN 0-201-06737-4.
It's been almost thirty years since I first read this delightful little book, which is now sadly out of print. It's well worth the effort of tracking down a used copy. You can generally find one in readable condition for a reasonable price through the link above or through abebooks.com. If you're too young to have experienced the mainframe computer era, here's an illuminating and entertaining view of just how difficult it was to accomplish anything back then; for those of us who endured the iron age of computing, it is a superb antidote to nostalgia. The insights into organising and managing a decentralised, multidisciplinary project under budget and deadline constraints in an era of technological change are as valid today as they were in the 1970s. The glimpse of the embryonic Internet on pages 241–242 is a gem.

April 2003 Permalink

Spufford, Francis. Backroom Boys: The Secret Return of the British Boffin. London: Faber and Faber, 2003. ISBN 0-571-21496-7.
It is rare to encounter a book about technology and technologists which even attempts to delve into the messy real-world arena where science, engineering, entrepreneurship, finance, marketing, and government policy intersect, yet it is there, not solely in the technological domain, that the roots of both great successes and calamitous failures lie. Backroom Boys does just this and pulls it off splendidly, covering projects as disparate as the Black Arrow rocket, Concorde, mid-1980s computer games, mobile telephony, and sequencing the human genome. The discussion on pages 99 and 100 of the dynamics of new product development in the software business is as clear and concise a statement as I've seen of the philosophy that's guided my own activities for the past 25 years. While celebrating the technological renaissance of post-industrial Britain, the author retains the characteristic British intellectual's disdain for private enterprise and economic liberty. In chapter 4, he describes Vodafone's development of the mobile phone market: “It produced a blind, unplanned, self-interested search strategy, capitalism's classic method for exploring a new space in the market where profit may be found.” Well…yes…indeed, but that isn't just “capitalism's” classic method, but the very one employed with great success by life on Earth lo these four and a half billion years (see The Genius Within, April 2003). The wheels fall off in chapter 5. Whatever your position may have been in the battle between Celera and the public Human Genome Project, Spufford's collectivist bias and ignorance of economics (simply correcting the noncontroversial errors in basic economics in this chapter would require more pages than it fills) get in the way of telling the story of how the human genome came to be sequenced five years before the original estimated date. A truly repugnant passage on page 173 describes “how science should be done”.
Taxpayer-funded researchers, a fine summer evening, “floated back downstream carousing, with stubs of candle stuck to the prows, … and the voices calling to and fro across the water as the punts drifted home under the overhanging trees in the green, green, night.” Back to the taxpayer-funded lab early next morning, to be sure, collecting their taxpayer-funded salaries doing the work they love to advance their careers. Nary a word here of the cab drivers, sales clerks, construction workers and, yes, managers of biotech start-ups, all taxed to fund this scientific utopia, who lack the money and free time to pass their own summer evenings so sublimely. And on the previous page, the number of cells in the adult body of C. elegans is twice given as 550. Gimme a break—everybody knows there are 959 somatic cells in the adult hermaphrodite, 1031 in the male; he's confusing adults with 558-cell newly-hatched L1 larvæ.

May 2004 Permalink

Standage, Tom. The Victorian Internet. New York: Berkley, 1998. ISBN 0-425-17169-8.

September 2003 Permalink

Stephenson, Neal. Cryptonomicon. New York: Perennial, 1999. ISBN 0-380-78862-4.
I've found that I rarely enjoy, and consequently am disinclined to pick up, these huge, fat, square works of fiction cranked out by contemporary super scribblers such as Tom Clancy, Stephen King, and J.K. Rowling. In each case, the author started out and made their name crafting intricately constructed, tightly plotted page-turners, but later on succumbed to a kind of mid-career spread which yields flabby doorstop novels that give you hand cramps if you read them in bed and contain more filler than thriller. My hypothesis is that when a talented author is getting started, their initial books receive the close attention of a professional editor and benefit from the discipline imposed by an individual whose job is to flense the flab from a manuscript. But when an author becomes highly successful—a “property” who can be relied upon to crank out best-seller after best-seller—it becomes harder for an editor to restrain an author's proclivity to bloat and bloviation. (This is not to say that all authors are so prone, but some certainly are.) I mean, how would you feel giving Tom Clancy advice on the art of crafting thrillers, even though Executive Orders could easily have been cut by a third and would probably have been a better novel at half the size?

This is why, despite my having tremendously enjoyed his earlier Snow Crash and The Diamond Age, Neal Stephenson's Cryptonomicon sat on my shelf for almost four years before I decided to take it with me on a trip and give it a try. Hey, even later Tom Clancy can be enjoyed as “airplane” books as long as they fit in your carry-on bag! While ageing on the shelf, this book was one of the most frequently recommended by visitors to this page, and friends to whom I mentioned my hesitation to dive into the book unanimously said, “You really ought to read it.” Well, I've finished it, so now I'm in a position to tell you, “You really ought to read it.” This is simply one of the best modern novels I have read in years.

The book is thick, but that's because the story is deep and sprawling and requires a large canvas. Stretching over six decades and three generations, and melding genres as disparate as military history, cryptography, mathematics and computing, business and economics, international finance, privacy and individualism versus the snooper state and intrusive taxation, personal eccentricity and humour, telecommunications policy and technology, civil and military engineering, computers and programming, the hacker and cypherpunk culture, and personal empowerment as a way of avoiding repetition of the tragedies of the twentieth century, the story defies classification into any neat category. It is not science fiction, because all of the technologies exist (or plausibly could have existed—well, maybe not the Galvanick Lucipher [p. 234; all page citations are to the trade paperback edition linked above. I'd usually cite by chapter, but they aren't numbered and there is no table of contents]—in the epoch in which they appear). Some call it a “techno thriller”, but it isn't really a compelling page-turner in that sense; this is a book you want to savour over a period of time, watching the story lines evolve and weave together over the decades, and thinking about the ideas which underlie the plot line.

The breadth of the topics which figure in this story requires encyclopedic knowledge, which the author demonstrates while making it look effortless, never like he's showing off. Stephenson writes with the kind of universal expertise for which Isaac Asimov was famed, but he's a better writer than the Good Doctor, and that's saying something. Every few pages you come across a gem such as the following (p. 207), which is the funniest paragraph I've read in many a year.

He was born Graf Heinrich Karl Wilhelm Otto Friedrich von Übersetzenseehafenstadt, but changed his name to Nigel St. John Gloamthorpby, a.k.a. Lord Woadmire, in 1914. In his photograph, he looks every inch a von Übersetzenseehafenstadt, and he is free of the cranial geometry problem so evident in the older portraits. Lord Woadmire is not related to the original ducal line of Qwghlm, the Moore family (Anglicized from the Qwghlmian clan name Mnyhrrgh) which had been terminated in 1888 by a spectacularly improbable combination of schistosomiasis, suicide, long-festering Crimean war wounds, ball lightning, flawed cannon, falls from horses, improperly canned oysters, and rogue waves.
On p. 352 we find one of the most lucid and concise explanations I've ever read of why it is far more difficult to escape the grasp of now-obsolete technologies than most technologists may wish.
(This is simply because the old technology is universally understood by those who need to understand it, and it works well, and all kinds of electronic and software technology has been built and tested to work within that framework, and why mess with success, especially when your profit margins are so small that they can only be detected by using techniques from quantum mechanics, and any glitches vis-à-vis compatibility with old stuff will send your company straight into the toilet.)
In two sentences on p. 564, he lays out the essentials of the original concept for Autodesk, which I failed to convey (providentially, in retrospect) to almost every venture capitalist in Silicon Valley in thousands more words and endless, tedious meetings.
“ … But whenever a business plan first makes contact with the actual market—the real world—suddenly all kinds of stuff becomes clear. You may have envisioned half a dozen potential markets for your product, but as soon as you open your doors, one just explodes from the pack and becomes so instantly important that good business sense dictates that you abandon the others and concentrate all your efforts.”
And how many New York Times Best-Sellers contain working source code (p. 480) for a Perl program?
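That program implements Bruce Schneier's Solitaire cipher (“Pontifex” in the novel), which generates a keystream by repeatedly shuffling an ordered deck of 54 cards according to fixed rules. A minimal sketch of the keystream generator in Python, assuming the standard unkeyed deck ordering (it is not the novel's obfuscated Perl, just an illustration of the algorithm), might look like this:

```python
# Sketch of the Solitaire ("Pontifex") keystream. Cards 1-52 are their own
# values; 53 and 54 are the two jokers, both of which count as 53.

JOKER_A, JOKER_B = 53, 54

def value(card):
    return 53 if card in (JOKER_A, JOKER_B) else card

def move_joker(deck, joker, n):
    """Move a joker down n places, wrapping past the top card if needed."""
    i = deck.index(joker)
    deck.pop(i)
    deck.insert((i + n - 1) % len(deck) + 1, joker)

def next_output(deck):
    """Advance the deck one full step and return a keystream value (1-52)."""
    while True:
        move_joker(deck, JOKER_A, 1)
        move_joker(deck, JOKER_B, 2)
        # Triple cut: swap the run above the first joker with the run
        # below the second joker.
        i, j = sorted((deck.index(JOKER_A), deck.index(JOKER_B)))
        deck[:] = deck[j + 1:] + deck[i:j + 1] + deck[:i]
        # Count cut: the bottom card's value decides the cut; bottom stays.
        v = value(deck[-1])
        deck[:] = deck[v:-1] + deck[:v] + [deck[-1]]
        # Output: count down by the top card's value; skip jokers entirely.
        out = deck[value(deck[0])]
        if out not in (JOKER_A, JOKER_B):
            return value(out)

deck = list(range(1, 55))              # unkeyed deck: 1..52, joker A, joker B
stream = [next_output(deck) for _ in range(5)]
```

With the unkeyed deck, the first output works out to 4; in practice the deck would first be keyed by a passphrase, which this sketch omits.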

A 1168 page mass market paperback edition is now available, but given the unwieldiness of such an edition, how much you're likely to thumb through it to refresh your memory on little details as you read it, the likelihood you'll end up reading it more than once, and the relatively small difference in price, the trade paperback cited at the top may be the better buy. Readers interested in the cryptographic technology and culture which figure in the book will find additional information in the author's Cryptonomicon cypher-FAQ.

May 2006 Permalink

Vallee, Jacques. The Heart of the Internet. Charlottesville, VA: Hampton Roads Publishing, 2003. ISBN 1-57174-369-3.
The author (yes, that Jacques Vallee) recounts the history of the Internet from an insider's perspective: first as a member of Doug Engelbart's Augmentation group at SRI from 1971, and later as a developer of the pioneering Planet conferencing system at the Institute for the Future and co-founder of the 1976 spin-off InfoMedia. He does an excellent job both of sketching Engelbart's still unrealised vision of computer networks as a means of connecting human minds in new ways, and in describing how it, like any top-down system design, was doomed to fail in the real world populated by idiosyncratic and innovative human beings. He celebrates the organic, unplanned growth of the Internet so far and urges that it be allowed to continue, free of government and commercial constraints. The present-day state of the Internet worries him as it worries me; he eloquently expresses the risk as follows (p. 162): “As a venture capitalist who invests in high tech, I have to worry that the web will be perceived as an increasingly corrupt police state overlying a maze of dark alleys and unsafe practices outside the rule of law. The public and many corporations will be reluctant to embrace a technology fraught with such problems. The Internet economy will continue to grow, but it will do so at a much slower pace than forecast by industry analysts.” This is precisely the scenario I have come to call “the Internet slum”. The description of the present-day Internet and what individuals can do to protect their privacy and defend their freedom in the future is sketchy and not entirely reliable. 
For example, on page 178, “And who has time to keep complete backup files anyway?”, which rhetorical question I would answer, “Well, anybody who isn't a complete idiot.” His description of the “Mesh” in chapter 8 is precisely what I've been describing to gales of laughter since 1992 as “Gizmos”—a world in which everything has its own IPv6 address—each button on your VCR, for example—and all connections are networked and may be redefined at will. This is laid out in more detail in the Unicard Ubiquitous section of my 1994 Unicard paper.
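The scale that makes the “Gizmos” vision plausible is easy to check with back-of-envelope arithmetic: IPv6's 128-bit address space, spread over the Earth's entire surface, leaves an astronomical number of addresses per square metre. A quick sketch in Python (the surface-area figure is an approximation):

```python
# Rough arithmetic for the "Gizmos" world: how many IPv6 addresses are
# available per square metre of the Earth's surface? (~510 million km^2,
# an approximate figure.)

total_addresses = 2**128                      # size of the IPv6 address space
earth_surface_m2 = 510_072_000 * 10**6        # ~5.1e14 square metres
addresses_per_m2 = total_addresses // earth_surface_m2

print(f"{addresses_per_m2:.3e} addresses per square metre")
```

That is on the order of 10²³ addresses per square metre, which is why giving every button on every VCR its own address is not even a rounding error.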

May 2004 Permalink

Vallee, Jacques. Forbidden Science. Vol. 2. San Francisco: Documatica Research, 2008. ISBN 978-0-615-24974-2.
This, the second volume of Jacques Vallee's journals, chronicles the years from 1970 through 1979. (I read the first volume, covering 1957–1969, before I began this list.) Early in the narrative (p. 153), Vallee becomes a U.S. citizen, but although surrendering his French passport, he never gives up his Gallic rationalism and scepticism, both of which serve him well in the increasingly weird Northern California scene in the Seventies. It was in those locust years that the seeds for the personal computing and Internet revolutions matured, and Vallee was at the nexus of this technological ferment, working on databases, Doug Engelbart's Augmentation project, and later systems for conferencing and collaborative work across networks. By the end of the decade he, like many in Silicon Valley of the epoch, has become an entrepreneur, running a company based upon the conferencing technology he developed. (One amusing anecdote which indicates how far we've come since the 70s in mindset is when he pitches his conferencing system to General Electric who, at the time, had the largest commercial data network to support their timesharing service. They said they were afraid to implement anything which looked too much like a messaging system for fear of running afoul of the Post Office.)

If this were purely a personal narrative of the formative years of the Internet and personal computing, it would be a valuable book—I was there, then, and Vallee gets it absolutely right. A journal is, in many ways, better than a history because you experience the groping for solutions amidst confusion and ignorance which is the stuff of real life, not the narrative of an historian who knows how it all came out. But in addition to being a computer scientist, entrepreneur, and (later) venture capitalist, Vallee is also one of the preeminent researchers into the UFO and related paranormal phenomena (the character Claude Lacombe, played by François Truffaut in Steven Spielberg's 1977 movie Close Encounters of the Third Kind was based upon Vallee). As the 1970s progress, the author becomes increasingly convinced that the UFO phenomenon cannot be explained by extraterrestrials and spaceships, and that it is rooted in the same stratum of the human mind and the universe we inhabit which has given rise to folklore about little people and various occult and esoteric traditions. Later in the decade, he begins to suspect that at least some UFO activity is the work of deliberate manipulators bent on creating an irrational, anti-science worldview in the general populace, a hypothesis expounded in his 1979 book, Messengers of Deception, which remains controversial three decades after its publication.

The Bay Area in the Seventies was a kind of cosmic vortex of the weird, and along with Vallee we encounter many of the prominent figures of the time, including Uri Geller (who Vallee immediately dismisses as a charlatan), Doug Engelbart, J. Allen Hynek, Anton LaVey, Russell Targ, Hal Puthoff, Ingo Swann, Ira Einhorn, Tim Leary, Tom Bearden, Jack Sarfatti, Melvin Belli, and many more. Always on a relentlessly rational even keel, he observes with dismay as many of his colleagues disappear into drugs, cults, gullibility, pseudoscience, and fads as that dark decade takes its toll. In May 1979 he feels himself to be at “the end of an age that defied all conventions but failed miserably to set new standards” (p. 463). While this is certainly spot on in the social and cultural context in which he meant it, it is ironic that so many of the standards upon which the subsequent explosion of computer and networking technology are based were created in those years by engineers patiently toiling away in Silicon Valley amidst all the madness.

An introduction and retrospective at the end puts the work into perspective from the present day, and 25 pages of end notes expand upon items in the journals which may be obscure at this remove and provide source citations for events and works mentioned. You might wonder what possesses somebody to read more than five hundred pages of journal entries by somebody else which date from thirty to forty years ago. Well, I took the time, and I'm glad I did: it perfectly recreated the sense of the times and of the intellectual and technological challenges of the age. Trust me: if you're too young to remember the Seventies, it's far better to experience those years here than to have actually lived through them.

October 2009 Permalink

Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. ISBN 1-57955-008-8.
The full text of this book may now be read online.

August 2002 Permalink