So, in the last forty years we've taken the computer user--who was initially in direct control of a dedicated computer, operating it by switches and gazing at huge arrays of blinking lights--to greater and greater distances from the machine and direct interaction with it, then back again, to contemplating a virtual control panel on a glowing screen filled with slide pots, radio buttons, and meters, all providing direct and expressive control over what's going on inside the computer. It appears that we've finally reached the end of the road. An individual has at his fingertips, for no more than the price of an automobile, dedicated computing power in excess of what existed in the world in 1960, with applications carefully tailored to provide intuitive control of the powerful tasks they perform, and a growing ability to move between applications at will, combining them as needed to address whatever work the user needs done.
It's interesting to observe the extent to which the term ``user interface'' has emerged as a marketing and, more recently, legal battleground following the introduction and growing acceptance of fifth generation user interaction. Many people would probably fail to identify anything before a fourth generation menu system as a ``user interface'' at all, though each generation has been how a user interacted, or, in Eighties-speak, ``interfaced'' with a computer system of that era.
Perhaps there's a semantic truth beneath the surface here. While one tends to speak of a ``dialogue'' or ``conversation'' when working with a line-oriented (third generation) timesharing system, only with fifth generation systems (and to a much lesser extent fourth generation menu systems) is one ``face-to-face'' with the computer. Maybe we keep referring to interfaces because we see our interaction as inter-face: our face dimly reflected in the screen that is the face of the computer.
I believe that conversation is the wrong model for dealing with a computer--a model which misleads inexperienced users and invites even experienced software designers to build hard-to-use systems. Because the computer has a degree of autonomy and can rapidly perform certain intellectual tasks we find difficult, we have, since its inception, seen the computer as possessing attributes of human intelligence (``electronic brains''). This has led us to impute to computers characteristics they don't have, then expend large amounts of effort trying to program them to behave as we imagine they should.
When you're interacting with a computer, you are not conversing with another person. You are exploring another world.
In Computer Power and Human Reason, Joseph Weizenbaum spoke of this world, as seen by the programmer who creates it, as follows:
The computer programmer is a creator of universes for which he alone is the lawgiver. Universes of virtually unlimited complexity can be created in the form of computer programs. Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behaviour. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or field of battle and to command such unswervingly dutiful actors or troops.

This, Weizenbaum believes, explains the fascination programming holds for those who master it, and is the central reason why programming can become as compulsive a behaviour as any other activity that confers feelings of power, mastery, and pleasure.
The problem is that once a programmer has created a world intended for use by others, some poor user has to wander into it, armed only with the sword of his wits, the shield of The Manual, and whatever experience in other similar worlds he may have painfully gleaned, then try to figure out the rules. The timeless popularity of adventure games seems to indicate that at least some people enjoy such challenges, but it's much easier to exult in the discovery that the shiny stones cause the trolls to disappear when exploring the Cave of Befuddlement for fun than to finally learn that only if you do a preview will the page breaks be recalculated correctly for the printer when the boss is waiting for the new forecast spreadsheet.
If what's inside the computer is a world rather than another person, then we should be looking at how to lower the barriers that separate the user from the world he's trying to explore, not at how to carry on a better conversation. Consider the barriers that characterise each generation of user interaction.
Now there's little doubt that the greatest barrier between the user and the world inside the computer is the system designer's failure to comprehend that he's designing a world, and to adequately communicate his vision of that world to the user. But also, as we've enriched the means of interaction between the user and the computer--both by raising the communication bandwidth and by increasing the expressiveness of what is communicated through graphics, pointing, and the like--we've placed more powerful tools in the hands of the system designer to bring the worlds he creates to life and to involve the user in them. The fact that, in the adventure game world, the pure-text line-by-line adventures remain the classics of the genre indicates how slow designers are to exploit new capabilities effectively.
Now we're at the threshold of the next revolution in user-computer interaction: a technology which will take the user through the screen into the world inside the computer--a world in which the user can interact with three-dimensional objects whose fidelity will grow as computing power increases and display technology progresses. The world inside the computer can be whatever the designer makes it; entirely new experiences and modes of interaction can be explored and as designers and users explore this strange new world, they will be jointly defining the next generation of user interaction with computers.
Editor: John Walker