As monolithic integrated circuits relentlessly drove down the cost of computer memory, full screen raster graphics moved from a laboratory curiosity or specialised component of high-end systems to something which could be imagined as an integral part of every computer. Alan Kay and others in the Learning Research Group at the Xerox Palo Alto Research Center saw that this development, along with fast inexpensive processors, data networks, and object-oriented programming techniques, could lead to totally new ways of interacting with computers. In the mid 1970s they explored the potential of these technologies on the Alto computer with the Smalltalk language. They involved children in their research program in part to take advantage of the unconditioned viewpoint a child brings to what he encounters.
Being able to express interaction with a computer on a two dimensional graphics screen allows many metaphors which can be only vaguely approximated with earlier technologies. The screen can be turned into a desktop, complete with pieces of paper which can be shuffled (windows), accessories (tools), and resources (applications). The provision of a pointing device such as a mouse allows direct designation of objects on a screen without the need to type in names or choose from menus as in earlier systems. This property has caused such systems to be referred to as direct manipulation systems. For example, file directories can be displayed as file folders on a screen, each folder containing a number of documents. If the user wishes to move a document from one directory to another, he need only grasp it with the pointing device and drag it from one folder to the other. Compare this with the levels of abstraction inherent in the Unix command ``mv doc/speech1.tex archives''. This command, which quite explicitly specifies the same operation, requires the user to remember that the name of the ``move file'' command is mv, the name of the sending directory, the name assigned to the file to be moved, and the name of the receiving directory.
In addition, the availability of a graphics screen allows much more expressive means of controlling programs and better visual fidelity to the ultimate application of the computer. When editing documents, font changes can actually be shown on the screen. Controls which would otherwise have to be expressed as command names or numbers can be shown as slider bars, meter faces, bar or line charts, or any other form suited to the information being presented. Lee Felsenstein refers to the distinction between a conversational system and one like the Macintosh as the difference between a one- and a two-dimensional mode of interaction.
There is irony in the extent to which five generations of user interaction with computers have brought us back to the starting point. Users of the first computers had dedicated access to the computer and direct control over its operation. The development of personal computers has placed the computer back in the user's hands as a dedicated machine, and event-driven interaction which places the user in immediate command of the computer's operation restores the direct control over the computer which disappeared when the user was banished from the computer room in the second generation. Use of graphics to express operating parameters is even restoring to computer applications the appearance of the computer control panels of the first generation, replete with meters, squiggly lines moving across charts, and illuminated buttons. This isn't to say we haven't come a long way--the meters on a Univac I console read out things like the temperature of the mercury delay line memories and the B+ voltage, and the switches allowed the user to preset bits in the accumulator. Today's displays and controls generally affect high-level parameters inside applications and allow the user, for example, to vary the degree of smoothing of a surface patch by moving a slider bar while watching a three dimensional shaded image change on the screen.
Editor: John Walker