
Through the Looking Glass

I had been interested in three-dimensional user interfaces ever since I first heard of Ivan Sutherland's pioneering work in the late 1960s. NASA Ames had demonstrated a modern system with head and hand tracking, and it was clearly only a matter of time until the computing power of inexpensive personal computers made virtual reality a market reality. It also seemed obvious that a company experienced in three-dimensional geometric modeling and high-performance graphics on cheap hardware was uniquely qualified to become the leader in this emerging market. In this paper I urged Autodesk to enter the nascent field of virtual reality and set the standards. The paper was well received, and a project was launched shortly thereafter. Unfortunately, the disconnection between R&D and marketing, and the general paralysis that gripped Autodesk during the late 1980s (see Information Letter 14 on page [Ref]), caused Autodesk to squander a once-in-a-decade opportunity. Autodesk did finally manage to field a virtual reality developer kit in early 1993, but the chance to “own the market” had been lost long beforehand. An abridged version of this paper appeared in the book The Art of Human-Computer Interface Design, Brenda Laurel, ed., Addison-Wesley, 1990 (ISBN 978-0-201-51797-2).

Through the Looking Glass

Beyond “User Interfaces”

Today's fascination with “user interfaces” is an artifact of how we currently operate computers—with screens, keyboards, and pointing devices—just as job control languages grew from punched card batch systems. Near-term technological developments promise to replace user interfaces with something very different.

by John Walker
September 1, 1988

A generation in computing is often identified by the technology used to fabricate computers of that era. By this reckoning, a list like the following is usually presented:

Generation   Fabrication technology
First        Vacuum tubes
Second       Transistors
Third        SSI and MSI integrated circuits
Fourth       LSI—CPU on a chip
While fabrication technology has been the dominant influence on the cost and performance of computers, and hence has been the economic determinant of how widely they are available and to what purposes they can be applied, from the user's perspective it is virtually irrelevant. A user of a Unix system, for example, may hardly be aware of whether that system is running on a PDP-7 (second generation), VAX (third generation), or 68020 (fourth generation), and other than issues of performance and cost, couldn't care less.

User interaction generations

From the user's standpoint, how he interacts with the computer is an issue surpassingly more important than what the computer is built from. The very way in which the user perceives the computer (his mental model of it), the degree to which specialised knowledge and extensive training are required to use it, and therefore the extent to which the computer becomes accessible to a very broad segment of the populace are largely controlled by the means through which the user operates the computer. Let's try to redefine computer generations in terms of modalities of operation.

Generation   Means of operation
First        Plugboards, dedicated setup
Second       Punched card batch, RJE
Third        Teletype timesharing
Fourth       Menu systems
Fifth        Graphical controls, windows

First generation: Knobs and dials

By this reckoning ENIAC and the tabulating equipment which preceded it were first generation systems—set up to solve specific problems by specialists with detailed and precise knowledge of the operation of the hardware. Many of the popular images of computers in the 1950s, seen in cartoons from the period, of the computer stretching from floor to ceiling, covered with knobs, dials, and oscilloscope screens, and attended by mad scientists, derive from the reality of first generation operation.

In the first generation, the user went one-on-one with the computer, in the computer room, operating the computer at the switch and knob level. Since the user was the operator of the machine and controlled it with little or no abstraction, there was essentially no mediation between the computer and its expert user.

Second generation: Batch

After ENIAC, virtually all general purpose digital computers followed the von Neumann architectural model and were therefore programmable without hardware reconfiguration. Even though until late in the 1950s most programming was done in machine language, requiring detailed knowledge of the hardware, the machine could be turned from task to task as rapidly as new programs could be loaded into memory. With computers built from vacuum tubes or discrete transistors being so expensive, extensive efforts were devoted to maximising the productivity of a computer, and as the 1950s waned the original model of computer usage, an individual user signing up for dedicated time on the machine, was supplanted by the batch shop, with a specialist computer operator running a stack of jobs. In time, computer operating systems automated much of the operator's work, assuming responsibility for scheduling, priorities, allocation of resources, and efficient management of the scarcest and most precious resources—CPU cycles and memory space. In time, the advent of remote job entry provided the same batch computing service available in the computer room to remote terminals located anywhere in the world.

The user's image of the computer during the second generation often revolved around a countertop. It was across the counter that the user handed the card deck containing his program and data, and across the same counter that, some time later, his cards would return, accompanied by a printout which he hoped would contain the desired result (but more often consisted of a cryptic error message or the Dreaded Core Dump).

Second generation operation introduced many important levels of mediation and abstractions between the user and the computer hardware. First and probably most important was the time shifting performed by a batch system and the autonomy this gave to the computer (or its operator) at the expense of the user's direct control. Since the computer did the user's bidding without an opportunity for user intervention, time limits, resource scheduling, recovery from unanticipated errors, and the like became a shared responsibility of the user and the autonomous computer operating system. This led to the development of job control languages, which provided a powerful (though often arcane) means of controlling the destiny of a task being performed by the computer without the user's involvement. The card deck, printout, countertop, and job control language form the heart of the user's view of a second generation system. The concurrent development of high-level programming languages reduced the degree of specialised knowledge needed to use such systems and made them accessible to more people.

Third generation: Timesharing

Throughout the second generation period operating system technology progressed toward the goal of squeezing more and more performance from computers. Early developments included spooling (from Simultaneous Peripheral Operation On-Line), which allowed a computer to process other work at full speed while devoting a small portion of its attention to running slow devices such as card readers and printers. Since many programs did not use the full capacity of the computer, but rather spent much of their time reading and writing much slower peripheral devices such as tape drives and magnetic drum and disc memories, operating systems were eventually generalised to allow concurrent execution of multiple jobs, initially in the hope of maximising the usage of scarce CPU and memory resources, and later with subsidiary goals such as providing more responsive service for small tasks while larger jobs were underway.

If a computer's time could be sliced or shared among a small number of batch jobs, why couldn't it be chopped into much smaller slices and spread among a much larger community of interactive users? This observation, and a long-standing belief that the productivity of computer users (as opposed to the productivity of the computer itself) would be optimised by conversational interaction with the computer, led to the development of timesharing systems in the 1960s. Timesharing promised all things to all people. To the computer owner it promised efficient utilisation of computing power by making available a statistical universe of demands on the computing resource which would mop up every last CPU cycle and core-second. It promised batch users the same service they had before, plus the ability to interactively compose their jobs and monitor their progress on-line. And it offered interactive, conversational interaction with the computer to a new class of users.

Computer facilities were imagined, in the late 1960s, as agglomerating into regional or national “computer utilities”, paralleling the electric power industry, which would sell computing capability to all who needed it, providing access wherever telephone lines reached, and offering all users a common database which could grow without bounds.

The archetypal device for computer interaction in the third generation was the Teletype model 33 KSR. It is hard to explain to people who didn't first enter computing in the batch era just how magical this humming, clunking, oil fume emitting, ten character per second device with the weird keyboard seemed. You could type in “PRINT 2+2”, and almost instantly the computer would print “4”. And most of all, you could imagine that device in your own home, linked by telephone to the computer whenever you needed it (the price of the hardware and the cost of computing in that age kept this a dream for virtually everybody, but it was a dream whose power undoubtedly contributed to its fulfillment, albeit through very different means).

The interactive character device, whether a slow printing terminal such as a teletype, or an ASCII “glass teletype” running at speeds of up to 960 characters per second, led to the development of conversational computing. The user types a line of input to the computer, which immediately processes it and responds with a reply (perhaps as simple as a prompt indicating it's ready for the next line). Many different flavours of this form of interaction were explored and coexist today, including the BASIC environment originally developed by Kemeny and his group at Dartmouth, editors such as TECO and its many derivatives such as vi, the Project MAC timesharing environment whose influence is everywhere in the industry (including in Autodesk's own text editor), and eventually TOPS-10 and Multics (with their many derivatives including Unix and MS-DOS).

The conversational mode of interaction was the Turing test made real—the user “conversed” with the computer, just as he might with another human on a teletype-to-teletype connection (or CB on CompuServe today), and if the computer's responses tended toward comments such as “WAIT”, “FAC ERR 400000044000”, or “segmentation violation-core dumped” rather than observations about relevant passages in Wittgenstein, well, with enough funding for the AI Lab and enough lines of Lisp, who knew?

Today it is fashionable to hold conversational computing in disdain, yet it achieved most of the goals that realistic observers expected of it. That it disappointed those whose expectations weren't grounded in the reality of what it was shouldn't be held against it—visionaries are always disappointed by reality (and therefore often lead the way to the next generation). In the guise of GE Timesharing, conversational computing introduced hundreds of thousands of students in high schools to computing, and many of these people now fill the ranks of the computing industry. The conversational model is almost universally the approach of choice by software developers, to the extent that Apple's own Macintosh Programmer's Workshop implements that model on a computer that is identified with another model entirely. The dominance of MS-DOS in the personal computer market and Unix in the technical workstation world (as well as the BASIC environment on most home computers, such as the Commodore and Atari 8 bit families) is testimony to the effectiveness and staying power of this mode of interaction. (It should be noted, however, that Unix in particular has in its shell programming facilities co-opted much of the second generation's job control languages, and a significant fraction of the power of Unix comes from integrating that approach with conversational interaction).

Fourth generation: Menus

Although conversational systems broadened the accessibility of computers, they still fell far short of the goal of making computers accessible to a large segment of the populace. Conversational interaction had grown up on slow (10 or 30 character per second) terminals; the appearance of fast alphanumeric terminals (1000 characters per second and up) made it possible to present large amounts of information to the user almost instantaneously. This allowed the computer to present the user with a “menu” of choices, from which selections could be made simply by pressing one or two keys.

Menu command selection, coupled with data entry modeled on filling in a form, rapidly became the standard for application systems intended to be operated by non-computer-specialists. Hundreds of thousands of people spend their entire working day operating systems of this design, although people who have studied how users actually learn and use these systems, in applications ranging from credit card transaction entry to targeting tactical nuclear weapons, often find that users see them in a very different way than the designers intended—frequently moving from menu to menu by rote learning of keystroke sequences, leaving the carefully-crafted menus unread.

Many attempts have been made to expand menu-driven systems into a general method of operation. Much of the Macintosh interaction model is actually fourth generation operation. Selecting commands from menus (whether presented directly to a user or pulled down from the top of the screen) and selecting options and setting program parameters by entering them in a form called a “dialogue box” is pure fourth generation design. The major point of departure from classic fourth generation structure in the Macintosh menu system is its attempt to place the user in direct command rather than treat the user as a peripheral who directs the computer as so many menu driven application systems do.

Fifth generation: Graphics

As monolithic integrated circuits began to relentlessly drive down the cost of computer memory, full screen raster graphics moved from a laboratory curiosity or specialised component of high-end systems to something which could be imagined as an integral part of every computer. Alan Kay and others at the Learning Research Group at the Xerox Palo Alto Research Center saw that this development, along with development of fast inexpensive processors, data networks, and object-oriented programming techniques, could lead to the development of totally new ways of interaction with computers. In the mid 1970s they explored the potential of these technologies on the Alto computer with the Smalltalk language. They involved children in their research program in part to take advantage of the unconditioned viewpoint a child brings to what he encounters.

Being able to express interaction with a computer on a two dimensional graphics screen allows many metaphors which can be only vaguely approximated with earlier technologies. The screen can be turned into a desktop, complete with pieces of paper which can be shuffled (windows), accessories (tools), and resources (applications). The provision of a pointing device such as a mouse allows direct designation of objects on a screen without the need to type in names or choose from menus as in earlier systems. This property has caused such systems to be referred to as direct manipulation systems. For example, file directories can be displayed as file folders on a screen, each folder containing a number of documents. If the user wishes to move a document from one directory to another, he need only grasp it with the pointing device and drag it from one folder to another. Compare this with the levels of abstraction inherent in the Unix command “mv doc/speech1.tex archives”. This command, which quite explicitly specifies the same operation, requires the user to remember that the name of the “move file” command is mv, the name of the sending directory, the name assigned to the file to be moved, and the name of the receiving directory.
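The gulf in abstraction can be made concrete in code. The sketch below is a modern illustration (in Python, with invented names; nothing here is from 1988 software): the typed command and the drag gesture both invoke one and the same file-move operation, but in the direct manipulation case the system, rather than the user's memory, supplies all four names.

```python
# Illustrative sketch: the command "mv doc/speech1.tex archives" and a
# drag-and-drop gesture both reduce to the same underlying operation.
# All paths and names here are made up for the demonstration.
import pathlib
import shutil
import tempfile

def move_document(name: str, src: pathlib.Path, dst: pathlib.Path) -> pathlib.Path:
    """The single operation both styles of interface invoke."""
    return pathlib.Path(shutil.move(str(src / name), str(dst)))

# Set up a scratch directory mirroring the example in the text.
root = pathlib.Path(tempfile.mkdtemp())
(root / "doc").mkdir()
(root / "archives").mkdir()
(root / "doc" / "speech1.tex").write_text("\\documentclass{article}")

# Third generation: the user recalls and types four names.
# Fifth generation: the user points at an icon and a folder, and the
# system fills in the same four names on his behalf.
moved = move_document("speech1.tex", root / "doc", root / "archives")
```

Either way, the operation performed is identical; what differs is how much the user must remember versus merely point at.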

In addition, the availability of a graphics screen allows much more expressive means of controlling programs and better visual fidelity to the ultimate application of the computer. When editing documents, font changes can actually be shown on the screen. Controls which would otherwise have to be expressed as command names or numbers can be shown as slider bars, meter faces, bar or line charts, or any other form suited to the information being presented. Lee Felsenstein refers to the distinction between a conversational system and one like the Macintosh as the difference between a one- and a two-dimensional mode of interaction.

The extent to which five generations of user interaction with computers have brought us back to the starting point is ironic. Users of the first computers had dedicated access to the computer and direct control over its operation. The development of personal computers has placed the computer back in the user's hands as a dedicated machine, and event-driven interaction which places the user in immediate command of the computer's operation restores the direct control over the computer which disappeared when the user was banished from the computer room in the second generation. Use of graphics to express operating parameters is even restoring to computer applications the appearance of the computer control panels of the first generation, replete with meters, squiggly lines moving across charts, and illuminated buttons. This isn't to say we haven't come a long way—the meters on a Univac I console read out things like the temperature of the mercury delay line memories and the B+ voltage, and the switches allowed the user to preset bits in the accumulator. Today's displays and controls generally affect high-level parameters inside applications and allow the user, for example, to vary the degree of smoothing of a surface patch by moving a slider bar while watching a three dimensional shaded image change on the screen.

What next?

So, in the last forty years we've taken the computer user, who was initially in direct control of a dedicated computer, operating it by switches and gazing at huge arrays of blinking lights, to greater and greater distances from the computer and direct interaction with it, then back again to contemplating a virtual control panel on a glowing screen filled with slide pots, radio buttons, meters, all providing direct and expressive control over what's going on inside the computer. It appears that we've finally reached the end of the road—an individual has at his fingertips, for no more than the price of an automobile, dedicated computing power in excess of what existed in the world in 1960, with applications carefully tailored to provide intuitive control of the powerful tasks they perform, and a growing ability to move between applications at will, combining them as needed to address whatever work the user needs done.

It's interesting to observe the extent to which the term “user interface” has emerged as a marketing and, more recently, legal battleground following the introduction and growing acceptance of fifth generation user interaction. Many people would probably fail to identify anything before a fourth generation menu system as a “user interface” at all, though each generation has been how a user interacted, or in Eighties-speak “interfaced” with a computer system of that era.

Perhaps there's a semantic truth beneath the surface here. While one tends to speak of a “dialogue” or “conversation” when working with a line-oriented (third generation) timesharing system, only with fifth generation systems (and to a much lesser extent fourth generation menu systems) is one “face-to-face” with the computer. Maybe we keep referring to interfaces because we see our interaction as inter-face: our face dimly reflected in the screen that is the face of the computer.

I believe that conversation is the wrong model for dealing with a computer—a model which misleads inexperienced users and invites even experienced software designers to build hard-to-use systems. Because the computer has a degree of autonomy and can rapidly perform certain intellectual tasks we find difficult, since their inception we've seen computers as possessing attributes of human intelligence (“electronic brains”), and this has led us to impute to them characteristics they don't have, then expend large amounts of effort trying to program them to behave as we imagine they should.

When you're interacting with a computer, you are not conversing with another person. You are exploring another world.

In Computer Power and Human Reason Joseph Weizenbaum spoke of this world, as seen by the programmer who creates it, as follows:

The computer programmer is a creator of universes for which he alone is the lawgiver. Universes of virtually unlimited complexity can be created in the form of computer programs. Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behaviour. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or field of battle and to command such unswervingly dutiful actors or troops.

This, Weizenbaum believes, explains the fascination programming holds for those who master it, and is the central reason why programming can become as compulsive a behaviour as any other activity that confers feelings of power, mastery, and pleasure.

The problem is that once a programmer has created a world intended for use by others, some poor user has to wander into it, armed only with the sword of his wits, the shield of The Manual, and whatever experience in other similar worlds he may have painfully gleaned, then try to figure out the rules. The timeless popularity of adventure games seems to indicate that at least some people enjoy such challenges, but it's much easier to exult in the discovery that the shiny stones cause the trolls to disappear when exploring the Cave of Befuddlement for fun than to finally learn that only if you do a preview will the page breaks be recalculated correctly for the printer when the boss is waiting for the new forecast spreadsheet.

If what's inside the computer is a world instead of another person, then we should be looking at lowering the barriers that separate the user from the world he's trying to explore rather than how to carry on a better conversation. Let's look at the barriers which characterise each generation of user interaction:

Generation   Barrier
First        Front panel
Second       Countertop
Third        Terminal
Fourth       Menu hierarchy
Fifth        Screen

Now there's little doubt that the greatest barrier between the user and the world inside the computer is the system designer's failure to comprehend that he's designing a world and to adequately communicate his vision of that world to the user. But also, as we've enriched the means of interaction between the user and the computer, both by raising the communication bandwidth and increasing the expressiveness of what is communicated by using graphics, pointing, and the like, we've placed more powerful tools in the hands of the system designer to bring the worlds he creates to life and to involve the user in them. The fact that in the adventure game world the pure text line-by-line adventures remain the classics of the genre is an indication of how slow designers are to effectively exploit new capabilities.

Now we're at the threshold of the next revolution in user-computer interaction: a technology which will take the user through the screen into the world inside the computer—a world in which the user can interact with three-dimensional objects whose fidelity will grow as computing power increases and display technology progresses. The world inside the computer can be whatever the designer makes it; entirely new experiences and modes of interaction can be explored and as designers and users explore this strange new world, they will be jointly defining the next generation of user interaction with computers.

Through the screen to cyberspace

To move beyond the current generation of graphics screen and mouse, to transport the user through the screen into the computer, we need hardware and software that provide the user a three dimensional simulacrum of a world and allows interaction in ways that mimic interaction with real world objects. Several terms have been used to refer to a computer-simulated world, none particularly attractive. “Artificial reality” and “virtual reality” are oxymorons, Ted Nelson's term “virtuality” refers to a much more general class of computer worlds, “world simulator” is too grandiose for what we're talking about, and “cyberspace” misuses the root “cyber” (from κυβερνήτης—“steersman”) to denote computer rather than control. Nonetheless, I will use “cyberspace” here to avoid burdening the discourse with still another term.[Footnote] Since I'm talking about means of man/machine interaction, I can make the case that “cyberspace” means a three dimensional domain in which cybernetic feedback and control occur.

I define a cyberspace system as one which provides the user a three-dimensional interaction experience, creating the illusion that he is inside a world rather than observing an image. At the minimum, a cyberspace system provides stereoscopic imagery of three dimensional objects, sensing the user's head position and rapidly updating the perceived scene. In addition, a cyberspace system provides a means of interacting with simulated objects. The richness and fidelity of a cyberspace system can be extended by providing better three dimensional imagery, sensing the user's pupil direction, providing motion cues and force feedback, generating sound from simulated sources, and further approximating reality almost without bounds (wind in the face, odor, temperature; direct neural interface, anyone?).
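The core geometry such a system must compute every frame can be sketched simply. The fragment below is a hedged modern illustration in Python; the interpupillary distance and axis conventions are my assumptions, not details of any system described here. It offsets each eye from the tracked head position and yaw, producing the two viewpoints from which the stereo pair is rendered.

```python
import math

def eye_positions(head_pos, yaw_degrees, ipd=0.064):
    """Given the tracked head position (x, y, z) and yaw about the
    vertical axis, return (left_eye, right_eye) world positions, each
    offset half the interpupillary distance along the head's 'right'
    direction. A renderer draws the scene once from each viewpoint
    to form the stereo pair."""
    yaw = math.radians(yaw_degrees)
    rx, rz = math.cos(yaw), -math.sin(yaw)   # head's 'right' unit vector
    x, y, z = head_pos
    half = ipd / 2.0
    left = (x - half * rx, y, z - half * rz)
    right = (x + half * rx, y, z + half * rz)
    return left, right
```

Recomputing these two viewpoints and redrawing each time the tracker reports a new head pose is what sustains the illusion of being inside the scene rather than looking at a picture of it.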

The idea of transporting the user in some fashion into a computer and allowing him to interact directly with a virtual world has been extensively explored in science fiction. Frederik Pohl's later Heechee books (the second of which, Beyond the Blue Event Horizon, inspired the product Autodesk, an idea which played a significant part in the formation of this company), writers of the “cyberpunk” genre such as William Gibson and Rudy Rucker, and movies including even Tron have explored what we will find and what we will become when we enter these worlds of our own creation. It's no wonder the idea of entering a computer world is so fascinating—it can be thought of as the ultimate realisation of what fiction has been striving for since sagas of the hunt were told around Paleolithic campfires. The images that prose and poetry create in the mind, that the theatre enacts on stage, that motion pictures and television (aided by special effects) bring to millions can, inside a computer, not only be given three-dimensional substance, but can interact directly with the viewer, now a participant rather than passive spectator.

Science fiction's attention to the idea of entering a computer world should not be taken as an indication that the idea is itself infeasible, part of the distant future, or a fictional device devoid of practical applications. Ivan Sutherland, who invented so much of what we now consider commonplace in the computer graphics industry, realised in the 1960s that using two small CRTs to provide stereoscopic images to the eyes and sensing head position to compute the viewpoint was the way to three dimensional realism. In 1968 Sutherland built a helmet with two CRTs, attached to the ceiling with a set of linkages and shaft encoders to determine head position.[Footnote] This contraption, called the “Sword of Damocles” because of all the hardware dangling above the user's cranial vault, really had only one serious flaw—it was twenty years ahead of its time in the computer power required to make it practical.

Now that fast CPUs and special-purpose graphics hardware have made real-time generation of realistic 3D images widely available at reasonable cost, and every expectation is that the ongoing trend of increasing performance at decreasing cost will soon bring that power to personal computers, the technological groundwork is in place to bring Sutherland's prototype into the mainstream of computer graphics. Interest in head mounted displays and other cyberspace technologies is growing. Attached to this paper are an article from the October 1987 Scientific American titled “Interfaces for Advanced Computing” which surveys the field and describes current technology, a paper titled “Virtual Environment Display System” by the group at NASA Ames who built the first modern cyberspace system, an article from the August 15th, 1988 Aviation Week and Space Technology describing a helmet-mounted display with head tracking being tested by Navy aviators, and an article from the August 22nd issue of the Independent Journal which indicates that the potential of cyberspace is beginning to filter down to even minor suburban dailies.

Building cyberspace

Exploring cyberspace requires specialised hardware and software. The attached articles describe the hardware used in the current laboratory systems; please refer to them for details. Here's an overview of what constitutes a minimum contemporary cyberspace system which can be assembled from off-the-shelf components and a sketch of the outlines of a complementary software project to initially explore the applications of cyberspace to our products.

Cyberspace Hardware

To provide the illusion of being within cyberspace, the system should provide a stereoscopic image that tracks head position. A system that does this can be built by using two small video monitors mounted on a helmet the user wears. Affixed to the helmet is a head-tracking device, such as the Polhemus Navigator (made by a subsidiary of McDonnell Douglas), which provides eighth-inch positional and quarter-degree angular accuracy without attached wires.

The video displays can be fabricated from components salvaged from LCD pocket televisions, or camcorder viewfinders can be used as-is. (The current NASA design uses custom displays and optics to achieve a wide field of view, but their first prototype used commercial LCD displays). Each monitor is attached to a separate graphics controller which renders the view of the three dimensional model of the world from that eye's viewpoint, updating the display as the head translates and rotates. Cyberspace experimenters stress that quickly updating the display as the viewpoint changes is essential both to maintain the illusion of being in a simulated world and to avoid vertigo. NASA has found that a fast wireframe display is preferable to shaded imagery that lags head movement.

One could configure an initial experimental cyberspace system using, for example, two Amiga 500 home computers as rendering engines, fed a three dimensional model created with AutoCAD or AutoSolid by a control computer (either a Compaq 386 or a Sun would work well) which monitors head position and sends viewpoint updates to the rendering engines.
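The division of labour just described can be sketched as a simple control loop. The classes below are stand-ins, not real Polhemus or Amiga interfaces: the control computer polls the tracker and fans each head pose out to the two rendering engines, one per eye.

```python
class HeadTracker:
    """Stand-in for a head-tracking sensor; replays canned poses."""
    def __init__(self, samples):
        self._samples = iter(samples)
    def read(self):
        return next(self._samples)        # (x, y, z, yaw) head pose

class RenderingEngine:
    """Stand-in for one per-eye renderer (one of the two machines)."""
    def __init__(self, eye):
        self.eye = eye
        self.frames = []
    def update_viewpoint(self, pose):
        self.frames.append(pose)          # a real engine would redraw here

def run_control_loop(tracker, engines, n_frames):
    """The control computer's whole job: poll, then fan out the pose."""
    for _ in range(n_frames):
        pose = tracker.read()
        for engine in engines:
            engine.update_viewpoint(pose)

left_engine = RenderingEngine("left")
right_engine = RenderingEngine("right")
poses = [(0.0, 1.6, 0.0, 10.0 * i) for i in range(3)]
run_control_loop(HeadTracker(poses), [left_engine, right_engine], 3)
```

Keeping this loop fast is the whole game: the experimenters' observation that a quick wireframe beats a laggy shaded image is a constraint on how long one trip around this loop may take.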

For user interaction with the cyberspace environment, one could use the VPL glove which, with available software, allows recognition of commands from hand gestures and, with a Polhemus navigator attached to the glove, pointing and grasping of objects in cyberspace. Other input devices such as joysticks and foot pedals could also be explored. The entire hardware complement needed for this initial cyberspace exploration and demonstration system would cost less than $15,000 (not counting the control computer, which would not be dedicated to the system in any case) and could easily be transported and set up wherever required.
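One plausible shape for recognising commands from hand gestures is nearest-template matching over the glove's finger-flex readings, sketched below. The gestures and numbers are invented for illustration; nothing here describes VPL's actual software.

```python
# Hypothetical flex templates: one value per finger, 0.0 = extended,
# 1.0 = fully curled (index, middle, ring, little).
GESTURES = {
    "point": (0.0, 1.0, 1.0, 1.0),    # index extended, rest curled
    "grasp": (1.0, 1.0, 1.0, 1.0),    # all fingers curled
    "open":  (0.0, 0.0, 0.0, 0.0),    # all fingers extended
}

def recognise(flex):
    """Return the gesture whose template is nearest (in squared
    distance) to the glove's current flex readings."""
    def distance(name):
        return sum((a - b) ** 2 for a, b in zip(flex, GESTURES[name]))
    return min(GESTURES, key=distance)
```

A "point" gesture combined with the glove-mounted tracker's position and orientation would then designate an object in cyberspace, and "grasp" would pick it up.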

This system is so simple and transportable that I call it “cyberspace in a briefcase”. It would serve as an initial prototype to demonstrate the value of cyberspace environments, and introduce our user, developer, hardware manufacturer, and analyst communities to the potential of our work in the area. Our goal in assembling an initial cyberspace system and demonstrating its potential would be to spur the graphics hardware manufacturers who work closely with Autodesk into cooperating with us to specify, develop, and market commercial cyberspace hardware to work in conjunction with cyberspace software developed by Autodesk. Manufacturers of high-performance graphics peripherals are often disappointed that Autodesk does not push their products to the limit. Improving the realism of cyberspace systems can use all of the capacity of the next several generations of graphics hardware (while being useful even with currently affordable products).

Cyberspace Software

For initial explorations of cyberspace, software should consist of a toolkit which allows rapid prototyping of cyberspace environments. Because cyberspace is so new and the fundamentals of how one should interact with it remain to be discovered, we should attempt to prescribe as little of the interaction as possible in the toolkit itself, but make it easy for those who use the kit to define their own environments.

Autodesk has a large advantage in undertaking cyberspace software development. A cyberspace environment is a three-dimensional computer model, and Autodesk has a rich set of off the shelf tools for building and manipulating 3D geometry. As a result, the amount of software development specific to cyberspace will be limited—AutoCAD and AutoSolid can be used for modeling without modification, and appropriate AutoLisp routines and “glue” can be developed to create an effective utility belt for the cyberspace explorer.
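One illustrative shape such a toolkit might take: environment authors register behaviours against named objects in the model, and the runtime dispatches glove events to them. Every name here is hypothetical; the point is that the toolkit prescribes only the dispatch mechanism, leaving the interaction itself to the environment's author.

```python
class CyberspaceWorld:
    """A minimal event-dispatch core for prototyping cyberspace environments."""

    def __init__(self):
        self._handlers = {}  # (object name, event name) -> callback

    def on(self, obj, event, callback):
        """Attach a behaviour, e.g. what happens when 'door' is grasped."""
        self._handlers[(obj, event)] = callback

    def dispatch(self, obj, event):
        """Called by the runtime when the glove interacts with an object.
        Returns the handler's result, or None if no behaviour is defined."""
        handler = self._handlers.get((obj, event))
        return handler() if handler else None

world = CyberspaceWorld()
world.on("door", "grasp", lambda: "entering the drawing-room world")
```

The geometry of "door" itself would come straight from an AutoCAD or AutoSolid model; only the behaviour is defined in the toolkit.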

Cyberspace Environments

So, what does the world look like to the intrepid cybernaut? Whatever he wants! Cyberspace is unlimitedly rich because it can be anything at all. In time we may expect that conventions for cyberspace will evolve, just as they have for command line, menu, and graphical user interfaces, but cyberspace will always provide an arena where anything that can be imagined can be made to seem real.

Initial cyberspace environments will literally represent three dimensional models. Since cyberspace is the most natural way to work in three dimensions, we expect that three dimensional design will be the first major application area for cyberspace systems. But as William Gibson says, “The street finds its own uses for things”. Just as AutoCAD has been applied to many tasks well outside the traditional bounds of the “CAD market”, cyberspace can be expected to rapidly grow in unanticipated directions. If video games are movies that involve the player, cyberspace is an amusement park where anything that can be imagined and programmed can be experienced. The richness of the experiences that will be available in cyberspace can barely be imagined today.

Menus might be replaced by doors you walk through to enter new worlds (certain doors would be unlocked by the key of imagination). A ZOOM command could be implemented by grabbing the appropriate mushroom—one makes you larger, the other makes you small. Need HELP? Go ask Alice.

As conventions develop for defining cyberspace environments, cyberspace will be applied in increasingly abstract ways. A cyberspace system may turn out to be the best way to implement a hypertext browsing system, or for visualising scientific data in multidimensional space (one could imagine a “transdimensional cyberspace Harley” that lets you ride along any vector in the state space).

In designing interactive systems we must distinguish abstractions introduced because of the limitations of the medium (for example, abbreviations to compensate for a slow teletype) and abstractions that add power or intuitiveness to the interface (such as the ability to create macros to perform repetitive tasks). By creating a very rich environment, cyberspace allows us to dispense with the abstractions of compromise and explore the abstractions that empower the user in new ways.

Should Autodesk lead?

If cyberspace is such an obvious next step in user/computer interaction, then it's reasonable to ask why Autodesk should expend any effort to develop the technology in-house. Can't we just let others do the pioneering and adopt their discoveries as they reach the market?

I think that Autodesk should be a leader in making cyberspace a mainstream technology because Autodesk has several attributes which uniquely qualify us to develop cyberspace. I believe that Autodesk stands to benefit enormously if we are successful in developing the technology and bringing it to market in conjunction with our product line.

Autodesk's business

Cyberspace is a general purpose technology of interaction with computers—nothing about it is specific to 3D graphical design any more than fifth generation interfaces based on raster graphics screens are useful only for two dimensional drawing. New technologies, however, tend to be initially applied in the most obvious and literal ways. When graphics displays were first developed, they were used for obvious graphics applications such as drawing and image processing. Only later, as graphics display technology became less expensive and graphics displays were widely available, did people come to see that appropriate use of two dimensional graphics could help clarify even exclusively text or number oriented tasks.

So it will be with cyberspace. Cyberspace represents the first three dimensional computer interface worthy of the name. Users struggling to comprehend three dimensional designs from multiple views, shaded pictures, or animation will have no difficulty comprehending, and no hesitation in adopting, a technology that lets them pick up a part and rotate it to understand its shape, fly through a complex design like Superman, or form parts by using tools and see the results immediately. Those who had to see shaded pictures to appreciate the value of rendering software, and who experienced their first fly-through before considering animation anything more than a gimmick, are sure to appreciate cyberspace only after they have stepped into it for the first time.[Footnote]

Since Autodesk's business is three dimensional design, we not only have the tools needed to build cyberspace environments as the heart of our product line, we have as customers the most likely early adopters of cyberspace systems—the pioneers in applying cyberspace to their application areas.

Autodesk's technological leadership

I believe that cyberspace is the only technology which is a serious contender to define the next generation of user interaction. Today, Autodesk is faced with the challenging task of finding effective ways for users to manipulate three dimensional models while conforming to the conflicting standards of numerous competing fifth generation graphical interfaces. This job is made difficult because of the lack of standards and by the inherent difficulty of trying to work on a 3D problem through a 2D window.

If Autodesk establishes itself as the leader in exploring cyberspace, and is forthright in identifying its effort as explicitly attempting to invent the next generation of user interaction, we will to a large extent transcend the quibbles over conformance with the last generation of user interface standards (which is not to say that we can ignore them, nor that Autodesk need not continue to upgrade our products' interfaces in conventional ways). The cyberspace project will be a technology flagship, demonstrable in all forums in the near term, which will clearly position Autodesk as a leader in technology and innovation in our core business, 3D design, just at the time when our marketing effort will be aimed at making Autodesk a peer of the big CAD companies. For years those companies have been defending their territory by enumerating things that we couldn't do. Isn't it time we simply superseded them by developing the next generation of interaction and causing it to be identified with our products just as strongly as the fifth generation interface has become identified with Apple?

Autodesk's future

As cyberspace systems mature they will redefine the way products are designed and operated just as thoroughly as have graphics screens and mice. If we take advantage of our opportunity to lead in the development of cyberspace, then our products will be the first to incorporate it, our users the first to explore it, and our applications developers the first to apply it in the many industries they serve. This will give us a huge head start in building the first products of the age of cyberspace—the products which will define the common ground of interaction in that space and from which all latecomers will be forced to somehow distinguish themselves.

Our experience in the PC CAD market should have taught us the value of arriving first in an empty market. Our experience in playing catch-up in the 3D world and in upgrading AutoCAD's user interface is a testament to how difficult it is when you don't get there first. If cyberspace truly represents the next generation of human interaction with computers, it will represent the most profound change in the industry since the development of the personal computer. By helping to bring about that change, Autodesk can emerge as one of the few key players in the next phase of the industry's expansion.

Autodesk's investment

The investment required by Autodesk to explore cyberspace interaction is modest, comparable in size and expense to most other in-house product development efforts. A group of three or four programmers should be able to demonstrate a cyberspace environment within two to three months after project inception, with new capabilities, interfaces to existing and new products, and adaptation to new hardware progressing as work continues after the initial system is demonstrated.

The initial experimental system would be built by cobbling together off the shelf hardware, probably engaging the services of a hardware consultant to help us assemble the gizmo. After this initial system was built (I believe that $25,000 is plenty of money to fund its construction), development would focus on software designed for easy portability to new hardware as it became available. Autodesk would use the initial experimental system to interest hardware vendors in working with us on cyberspace technology. After we demonstrate what we can do with the crude original system, I suspect that we will have no problem finding vendors eager to develop more powerful, professional, and inexpensive solutions to the problems of cyberspace interaction. Once again, Autodesk's preexisting close relationships with hardware manufacturers give us a large advantage over others in promoting this technology.

A project of this scale should begin to yield deliverable results, both new products and cyberspace additions to our existing products, in about a calendar year after inception.[Footnote] Because of the necessity of involving hardware companies in the project and the time it will take to explore the potential of cyberspace before beginning to design products, I don't believe a larger project would yield results any faster.

Autodesk's opportunity

The history of the computer industry consists of the realisation of dream after dream initially dismissed as “only science fiction”. The ability to place users in computer-generated three dimensional environments and allow them to interact with simulated objects will begin to break down the barrier between the user and the world inside the computer. This may usher in totally new ways to interact with computers, new applications for computers, and even new ways of thinking about computers.

Twenty years after Sutherland's prototype demonstrated the feasibility of computer simulated realities (cyberspace), the technology required to build cyberspace systems is almost in hand and can be expected to become widely available at low cost within the next several years. Development of cyberspace systems, driven by their near-term practical applications in control of complex systems, teleoperation, rapid presentation of information in aerospace environments, and more effective manipulation of three dimensional models, is underway in several laboratories and will soon move into the marketplace.

Autodesk is well positioned to be a leader in this new industry. Our fundamental core business is three dimensional modeling. Our products stand to benefit from having the most effective three dimensional interaction of any vendor's, and from being correctly seen as leaders in making the complex tasks they do widely accessible. Our relationships with application developers and hardware manufacturers permit us to cooperatively develop this technology without having to bear all the costs and risk.

Autodesk's decision

Autodesk can identify itself with one of the most exciting developments ever to happen in the computer industry, one which may define the entire shape of the industry for the next decade. I believe that the effort required to do this is on the order of what we expend to bring a new product such as AutoShade or AEC Mechanical to market. We should have no difficulty in finding people interested in developing this technology and no problem promoting it once we can demonstrate its potential.

If Autodesk believes, as I do, that this technology not only holds the key to the next generation of user interaction but will first find applications in our central market, three-dimensional design, then Autodesk should apply resources to developing this technology commensurate with its potential. If we undertake this project, we should commit to it explicitly and allocate adequate manpower to get it done. If the project merits only the efforts of one burned-out programmer in his spare time, then it isn't worth undertaking at all. An “Autodesk Cyberpunk Initiative” which will yield results within four months and products within twelve is affordable, achievable, and appropriate.

Autodesk can pioneer cyberspace. We need only the vision to see the opportunity, the courage to break new ground, the decision to do it, and the will to see it through.

Cyberspace
Reality isn't enough any more.
