Larger Issues in User Interface Management
Dan R. Olsen Jr.
Computer Science Department
Brigham Young University
Provo, UT 84602
Introduction
Since the Seattle workshop where the term UIMS was coined, a number of User Interface Management Systems have been built and a number of papers published about them [13]. Most such papers have focused primarily on the issues of dialogue management and on models for describing human-computer dialogues. Such research has been somewhat narrow in its focus and has not addressed a number of larger issues that surround the development of user interfaces to application programs. This paper presents several of these larger issues and outlines some of the areas that the author believes need attention.
In an earlier paper [9] examples were given of three levels at which user interface issues can be discussed. The first is the transaction level. This is where individual primitive tasks are studied so as to understand which interactive devices and interactive techniques are best suited to accomplish the task. Tasks of this size, menu handling in particular, have been studied extensively by human factors experts. The syntactic issues and examples presented in many UIMS papers are at this level. The reason this level has received so much attention is that it can be isolated and studied in detail. This view has also grown from the idea that interactive dialogues can be built by combining simple devices to form complex virtual devices and so on in a growing hierarchy until the entire dialogue can be viewed as a huge virtual device.
A second level of interest, however, is the functional level. This views the interface as a set of tasks and capabilities irrespective of the specific interactive techniques that invoke them. The issues that arise at this level are ones of communication among modules as well as definition of what the modules are and what the capabilities of the user interface really should be. The example was given of a text formatter and a spell checker. Each is an independent module, but each must share the same data model and manipulate the same image of the formatted text. How can this be done effectively? Most responses to these problems have proposed object-oriented programming or message-passing systems as the solution. Such proposals, however, are similar to proposing FORTRAN as a finite element modeling tool. Yes, it can be used for such, but it sheds no new light on the specific problem at hand. Just as research at the transaction level suffers from excessive specificity, research at the functional level suffers from excessive generality.
The third level of interest is the global level. This must account for the issues that transcend a particular application program to include families of programs which all must be applied to solve a particular problem. A major stumbling block in CAD/CAM is the inability of the various programs to talk to each other through a common data model. Such issues are outside the scope of this workshop, but similar issues arise when end users must work with multiple user interfaces in accomplishing a specific application task. An individual user has difficulty talking to multiple programs because they do not all share a common interface model. The issues of consistency and training raise their heads at this level. This need for consistency is somewhat at odds with the notion that a UIMS should support all possible kinds of dialogues.
The three levels discussed have been reiterated because they point out where UIMS research must turn its attention. The remainder of this paper will address the following issues, primarily from the functional and global levels of interest:
1. Tools for task-oriented design of user interfaces
2. Expressive, integrated user interface specifications
3. Underlying data models with browse/edit interfaces
4. Managing user interface specifications as real software
5. Extended functionality at the user interface
6. Automated evaluation of user interfaces
Task-oriented interface design
When students are asked which of two text editors they would prefer, a line-oriented editor whose individual commands have received years of human factors attention or a full-screen editor with a mediocre to poor interface, the answer is invariably the full-screen editor. This illustrates the dominance of functionality over low-level interactive techniques. Care must be taken in pursuing this point of view, however, because when students are asked to choose between two full-screen text editors, one with a finely crafted user interface but no spelling checker and one with a mediocre user interface but a spelling checker, the answers are mixed. The reason for the mixed reviews is the relative effort spent entering and editing text versus proofreading for spelling errors.
Each of the above examples, however, advocates strongly for an understanding of what the user interface must accomplish before any of the low-level hows are designed. Even in the user interface/spell checker case, the choice is based on the relative emphasis to be placed on editing tasks versus spell checking. Many models have been created for specifying interactive tasks. Most UIMS developments, however, have been driven not by a task model but rather by syntactic and device-oriented models.
The most frequently used model for specifying the functionality of an interactive program is the object/action model, where an interface is defined in terms of a set of object classes and a set of actions which can be performed on each such class. In essence this is a set of abstract data types which are defined in terms of the operations that can be performed on them. This is also the model underlying SMALLTALK, as well as the approach taken in the MIKE UIMS [10], which is driven by an object/action definition of the user interface.
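To make the object/action model concrete, the following is a minimal sketch in Python of how such a definition might look. The ObjectClass type, the Document example, and its insert and delete actions are hypothetical illustrations, not the MIKE notation or the interface of any existing UIMS.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ObjectClass:
    """An object class defined solely by the actions that may be applied to it."""
    name: str
    actions: Dict[str, Callable] = field(default_factory=dict)

    def define_action(self, action_name: str, handler: Callable) -> None:
        # Register one operation of the abstract data type.
        self.actions[action_name] = handler

    def perform(self, action_name: str, instance, *args):
        # A UIMS could build menus and dispatch commands from this table alone.
        return self.actions[action_name](instance, *args)

# Hypothetical application object: a text document with two actions.
document = ObjectClass("Document")
document.define_action("insert", lambda doc, pos, text: doc[:pos] + text + doc[pos:])
document.define_action("delete", lambda doc, pos, count: doc[:pos] + doc[pos + count:])

print(document.perform("insert", "helo world", 3, "l"))   # "hello world"

Note that nothing in such a definition says how an object looks on the screen or how it responds visually to each action, which is precisely the shortcoming discussed next.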
There are a number of problems, however, which arise from a simple object/action model for the semantics of an interface. One is the specification of what each action is to do. This issue opens up all of the problems of formal semantics. Unfortunately, the formal notations involved are more likely to confuse those trying to develop user interfaces than to enlighten them. A more pertinent problem is that the object/action model does not capture information about the visual behavior of objects in response to each of the actions. At present this information is handled in most systems as a textual comment or a static picture of the object. A notable exception is found in forms-based dialogues such as Cousin [5], in which a visual presentation of the object can be drawn and its behavior specified. The forms approach, however, is not universally applicable. In addition to this lack of visual behavior information, most UIMSs do not provide tools for collecting and defining the terminology and symbology of the application domain in a form independent of a specific menu or interaction technique.
A final issue is the question of how ``good'' the task specification for a particular user interface is. How well does it capture the intent of the designers? There are very few guidelines or principles to direct task-level design of interfaces. Questions can arise as to whether or not a particular task specification places unnecessary restrictions on the dialogue designers. Even if such information is defined and collected, the question remains of how task-level designs feed directly into the implementation of the functionality and the transaction-level design of the interface syntax. Unless such questions are answered and then embodied in software tools that foster the application of the answers, this will remain a ``seat of the pants'' endeavor in user interface design.
Expressive, integrated interface specifications
The experience that our lab has gained in developing four different UIMSs has led us to believe that the success of a UIMS is directly related to the ease with which interface designs can be expressed in the form that the UIMS requires. The difficulty of expressing interfaces in grammars or in transition diagrams kept the SYNGRAPH system [8] from being widely used, in spite of the fact that those who used it realized enormous productivity gains. The quality of the user interface to the UIMS itself is an important design consideration.
One of the main difficulties arises when visual information, which is best expressed by drawing it on the screen, must be integrated with logical behavior specifications which are not visually expressed. Peridot [6] provides a partial solution to this problem at the transaction level of interactive techniques. Juno [7] and Snap-Dragging [1] also chart possible directions for drawing and specifying the dynamics of user data objects. Papers have been written on transition diagrams and grammars which ignore the visual aspects of the user interface. The Seeheim model [4] for a UIMS simply says that output tokens exist, which is not a very satisfying solution. On the other hand, layout and icon drawing programs have been written which totally ignore the issues of dialogue syntax. An effective UIMS must integrate all of these issues if it is to be of any value at all.
One area which has been sorely neglected in UIMS research is the presentation of application data. Using our current UIMS, the dialogue for an editing application with 150 commands was designed and implemented in three days. The management of the underlying data took about four more days. The implementation of the display update facilities took more than four months. The process of incrementally updating a display on the screen in response to some semantic modification of the underlying data structures is a difficult problem that is not yet well understood.
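The following is a minimal sketch, in Python, of one common approach to the incremental update problem: the data model reports which objects changed, and the display redraws only the damaged regions. The DataModel and Display classes and their methods are illustrative assumptions, not the facilities of our UIMS.

class DataModel:
    """A hypothetical underlying data model that reports which objects changed."""
    def __init__(self):
        self.objects = {}        # name -> value
        self.listeners = []      # callbacks notified with the changed name

    def set(self, name, value):
        self.objects[name] = value
        for notify in self.listeners:
            notify(name)

class Display:
    """Redraws only the screen regions of objects that have changed."""
    def __init__(self, model):
        self.model = model
        self.regions = {}        # object name -> screen rectangle
        self.damaged = set()
        model.listeners.append(self.damage)

    def place(self, name, rectangle):
        self.regions[name] = rectangle

    def damage(self, name):
        # Record that the region showing this object is out of date.
        if name in self.regions:
            self.damaged.add(name)

    def update(self):
        # Redraw damaged regions only, not the whole screen.
        for name in sorted(self.damaged):
            print("redraw", self.regions[name], "with", self.model.objects[name])
        self.damaged.clear()

model = DataModel()
view = Display(model)
view.place("title", (0, 0, 200, 20))
view.place("body", (0, 20, 200, 400))
model.set("title", "Chapter 1")   # damages only the title region
view.update()

Even this toy version hints at the hard part: deciding which semantic changes damage which images, and doing so without redrawing everything.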
Data models and browsing/editing
There is a strong similarity between UIMS research and fourth generation languages (4GLs) for data processing. Both disciplines seek to build user interfaces with as little effort as possible. One questions what the differences are between these two research efforts. The most notable difference is the underlying data model. In 4GL research the underlying data model is usually a relational database. This data model is the key to the success of 4GLs and also the key to their inapplicability to the problems that most UIMSs are applied to. The relational data model is geared towards large amounts of data with relatively low structural complexity and associative relationships between the data items. Even though a UIMS application may operate on data which has been retrieved from a database, it typically has a much more complex structure and cannot tolerate the slow performance of associative rather than linked data structures.
There is, however, a good deal to learn from database and 4GL technology in building UIMSs. The first lesson is the focus on the data model. Most of what a 4GL does is display, edit, or generate reports from the underlying database. This, in conjunction with some statistical analysis functions, covers 90% of the user interface needs in data processing. By focusing on the data processing task as a whole, 4GLs have achieved levels of programmer productivity which far exceed the performance of most UIMSs. We have spent far too much time on syntax models and not enough time on the application tasks that UIMSs are intended to support.
A second lesson to be learned is the provision of undo facilities. Databases have long provided transaction recovery and other error handling facilities. One of the criteria posed for direct manipulation interfaces is that all operations be reversible [12]. This requires that in some fashion every semantic action be undoable. Database research has long shown that such facilities can only be provided by controlling the underlying data.
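The following is a minimal sketch, in Python, of why controlling the underlying data makes undo possible: every semantic change passes through one store that logs how to reverse it. The UndoableStore class is a hypothetical illustration, not a facility of any particular UIMS or database.

class UndoableStore:
    """All semantic changes pass through here, so every change can be reversed."""
    def __init__(self):
        self.data = {}
        self.undo_log = []   # stack of closures, each reversing one change

    def put(self, key, value):
        if key in self.data:
            old = self.data[key]
            self.undo_log.append(lambda: self.data.__setitem__(key, old))
        else:
            self.undo_log.append(lambda: self.data.pop(key, None))
        self.data[key] = value

    def undo(self):
        if self.undo_log:
            self.undo_log.pop()()

store = UndoableStore()
store.put("color", "red")
store.put("color", "blue")
store.undo()
print(store.data["color"])   # back to "red"

Any semantic action that bypasses such a layer cannot be reversed, which is why undo must be designed into the data model rather than bolted onto the dialogue.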
These lessons can be applied to UIMS research in the following way. A data structuring and manipulation facility must be designed which is suitable for the interactive graphical environment and offers the requisite performance in such an environment. It must be realized that 85% of all interaction is a process of editing or browsing through some underlying data model. All of the application specific images that appear on the screen are representations of objects from this underlying model. Based on these assumptions UIMS tools can be developed which address the entire interactive application problem rather than narrowly focusing on syntax and device selection as the only keys to interface productivity.
User interface specifications are software too
As UIMS tools have developed, new forms of expression have been defined. Window layouts and icons are drawn and stored in files with a particular format. Help messages, glossaries, and synonym tables are stored in another format. Transition tables or other kinds of syntactic information are stored in yet another format. When one uses a UIMS, all of these kinds of information are brought together with the application code to form an interactive application. The problem arises when we try to apply software management principles and tools to this melange of information.
What do modularity and abstraction mean when applied to icons and window layouts? How does one apply UNIX's SCCS (source code control system) to transition diagrams or tables stored in a binary format? As UIMSs become more successful, programs are constructed which are no longer simple demonstrations consisting of ten or fewer commands. Such programs also need maintenance. More than one commercial UIMS has advertised that the generated transition tables can be edited directly. Such activity is tantamount to editing the object code of a production program and leads to maintenance nightmares. The maintenance of large programs built with a UIMS becomes a problem when one considers that most of the program's control structure is buried in the UIMS, which to an applications programmer is a black box. Serious issues arise when the structure of the dialogue is interdependent with the side-effect behavior of the semantic actions. Such interdependencies can seriously degrade the reliability of any modifications to the code.
The construction of reusable components becomes even more complex in a user interface setting. A component such as a spelling checker is much more complicated than a computational package. The spelling checker to be reused must be able to retarget to a new application data model as well as integrate with the application's display of the text to be checked. A simple procedure call mechanism or separately compiled module is insufficient for controlling such relationships. This becomes even more complicated when the spelling checker has its own dialogue and layout information which must also be integrated.
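As an illustration of why a simple procedure call is insufficient, the following Python sketch shows the kind of protocol a reusable spelling checker might require of its host application. The method names (words, highlight) and the ListHost example are assumptions for illustration only, not an interface from any existing UIMS.

from typing import Iterable, Protocol, Set, Tuple

class SpellCheckHost(Protocol):
    """What a reusable checker must be given by its host application."""
    def words(self) -> Iterable[Tuple[str, int]]:
        """Yield (word, position) pairs drawn from the host's own data model."""
        ...
    def highlight(self, position: int) -> None:
        """Mark the word at this position in the host's own display."""
        ...

def check_spelling(host: SpellCheckHost, dictionary: Set[str]) -> int:
    # The checker never draws or edits directly; it works through the host.
    misspelled = 0
    for word, position in host.words():
        if word.lower() not in dictionary:
            host.highlight(position)
            misspelled += 1
    return misspelled

class ListHost:
    """A toy host whose data model is just a list of words."""
    def __init__(self, text):
        self.text = text
    def words(self):
        return [(w, i) for i, w in enumerate(self.text)]
    def highlight(self, position):
        print("highlight word", position, repr(self.text[position]))

host = ListHost(["teh", "spelling", "checker"])
print(check_spelling(host, {"the", "spelling", "checker"}))   # flags "teh"

Retargeting the checker to a new application then means binding this protocol to the new data model and display, and integrating the checker's own dialogue and layout as well.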
As UIMSs develop, entirely new forms of program expression will be created which do not necessarily fit into software engineering models that were constructed for text-oriented source files. The problems of software management must be readdressed within the context of a UIMS.
Extended functionality at the user interface
A user interface to an application should provide more than simply the interactive commands for the application. There are a number of facilities that can be supported by the UIMS which are independent of a particular application. The first such capability which comes to mind is a macro facility. It should be possible for end users to add new commands to their interfaces which are built out of the existing ones supplied by the application. This provides not only a flexibility for the user which would not otherwise exist but also a powerful prototyping mechanism for future enhancements to the application. A related capability would be rule-based reasoning facilities on top of the user interface. The creation of rule-based paradigms for adding new commands to user interfaces is not significantly more complicated than adding macros to user interfaces.
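The following is a minimal sketch, in Python, of how such a macro facility could work if the UIMS routes every command invocation through a single dispatch point; the CommandDispatcher class and the example commands are hypothetical.

class CommandDispatcher:
    """All commands pass through invoke(), so sequences can be recorded."""
    def __init__(self):
        self.commands = {}             # command name -> handler
        self.recording = None

    def define(self, name, handler):
        self.commands[name] = handler

    def invoke(self, name, *args):
        if self.recording is not None:
            self.recording.append((name, args))
        return self.commands[name](*args)

    def start_macro(self):
        self.recording = []

    def end_macro(self, macro_name):
        steps, self.recording = self.recording, None
        # The macro itself becomes an ordinary, user-defined command.
        self.define(macro_name, lambda: [self.commands[n](*a) for n, a in steps])

ui = CommandDispatcher()
ui.define("select_all", lambda: print("select all"))
ui.define("set_font", lambda f: print("font ->", f))

ui.start_macro()
ui.invoke("select_all")
ui.invoke("set_font", "Helvetica")
ui.end_macro("make_helvetica")

ui.invoke("make_helvetica")            # replays both recorded steps

Because the application never sees anything but ordinary command invocations, the macro facility can be supplied entirely by the UIMS, independent of any particular application.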
A second capability would be a notation facility: the ability for an end user to tag a particular data object displayed on the screen with a note for further reference. Such a notation ability is possible if the UIMS has a mechanism for naming the data objects that are being displayed and manipulated. Such a naming mechanism for communication between program parts is essential for new capabilities of UIMSs and relies heavily on the need for an underlying data model.
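A minimal sketch of such a facility, assuming the UIMS already names every displayed object; the NotationFacility class and the object names are illustrative assumptions. Because notes attach to object names rather than to screen positions, they survive redisplay and editing.

class NotationFacility:
    """End-user notes keyed by the UIMS's names for displayed data objects."""
    def __init__(self):
        self.notes = {}                      # object name -> list of notes

    def annotate(self, object_name, note):
        self.notes.setdefault(object_name, []).append(note)

    def notes_for(self, object_name):
        return self.notes.get(object_name, [])

pad = NotationFacility()
pad.annotate("paragraph.3", "Check this figure reference.")
print(pad.notes_for("paragraph.3"))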
A third capability would be a package for designing Computer Assisted Instruction (CAI) modules for training end users in the applications for which interfaces are being developed. It turns out that many of the features found in CAI packages are very close to what is already found in a good UIMS. If one takes these same capabilities and couples them with the knowledge and control of the user interface which a UIMS should already possess, one can create user interfaces which contain their own training facilities to guide new and infrequent users.
As one develops large user interfaces and then looks at the knowledge about the user interface that is possessed by a UIMS, one sees that the interactive program becomes a working domain in its own right. Spreadsheet programs began as simple interactive programs which provided a nice metaphor for manipulating numbers. As they became more popular and the industry settled down to a few major contenders, a number of products were developed which are themselves written in spreadsheet macros and templates. The spreadsheet program itself became a new programming paradigm. Devotees of UNIX have long been using UNIX's user interface as a programming language. If we recognize and support such activities, a UIMS becomes even more powerful than simply a tool to build user interfaces; it becomes a tool for creating specialized working environments. Within the global context of user interface issues this means that, regardless of the particular application, an end user can expect a powerful, uniform set of support tools at the interface to each application. This assumes, of course, that the UIMS provides such tools and is used to implement all of the applications.
Automated evaluation of user interfaces
The issue of automated evaluation of user interfaces has received some previous attention in the literature. Buxton [3] proposed that a UIMS should have a ``dribble file'' for tracking input events for later evaluation. Bleser and Foley [2] have proposed a set of metrics for evaluating user interfaces, as has Reisner [11]. As yet, however, such ideas have not been realized in software. One of the reasons for this is that without a working UIMS one does not have the specification information or the control necessary to take such measurements and subsequently analyze them. This is work whose promise has not yet been tested. Much of user interface design is still an art form and will remain so. The choice and integration of an appropriate metaphor for a particular application cannot be automated; it is a highly subjective creative act. The detection of incompatible colors or the existence of excessive device swapping, however, are quantifiable tasks and should have tools for automatically making such evaluations. It is not reasonable to expect extensive human factors analyses to be performed on every user interface; the demand far outstrips the resources. Tools for performing such analyses may assist in bridging this gap. Such tools become possible within a UIMS because of the extensive amount of information about layouts and command syntax that a UIMS already has in order to perform its function.
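The following is a minimal sketch, in Python, of a dribble file in the spirit of Buxton [3]: input events are logged with their device, and a simple quantifiable measure (here, the number of device swaps) is computed afterwards. The event format and the metric are assumptions chosen for illustration.

import json, time

def log_event(dribble, device, event):
    # Append one input event to the dribble log.
    dribble.append({"time": time.time(), "device": device, "event": event})

def count_device_swaps(dribble):
    # A simple quantifiable measure: how often the user changed devices.
    return sum(1 for prev, cur in zip(dribble, dribble[1:])
               if prev["device"] != cur["device"])

dribble = []
log_event(dribble, "mouse", "pick menu item")
log_event(dribble, "keyboard", "type filename")
log_event(dribble, "mouse", "confirm")

print(count_device_swaps(dribble))            # two swaps between devices
with open("dribble.json", "w") as f:          # persist for later analysis
    json.dump(dribble, f)

A UIMS is well placed to collect such logs automatically, since every input event already passes through it.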
Summary
The issues presented here are larger than the questions usually discussed relative to UIMSs. They are also ones where the existence of a UIMS makes possible solutions and approaches that may not have been feasible before. It is hoped that by addressing these issues a UIMS can offer significant intellectual leverage to user interface development instead of mere increases in programmer productivity.
References
[1] Bier, E. and Stone, M. Snap-Dragging. Proceedings of SIGGRAPH'86 (Dallas, Texas, August 18-22, 1986). In Computer Graphics 20, 4 (August 1986), 233-240.
[2] Bleser, T. and Foley, J. Towards Specifying and Evaluating the Human Factors of User-Computer Interfaces. Proc. Human Factors in Computer Systems (Mar 1982).
[3] Buxton, W., Lamb, M.R., Sherman, D. and Smith, K.C. Towards a Comprehensive User Interface Management System. Proceedings of SIGGRAPH'83 (Detroit, Mich., July 25-29, 1983). In Computer Graphics 17, 3 (July 1983), 35-42.
[4] Green, M. Report on Dialogue Specifications. In User Interface Management Systems, Pfaff, G.E. (ed.), Springer-Verlag, New York, 1985, 9-20.
[5] Hayes, P. Executable Interface Definitions Using Form-Based Interface Abstractions. In Advances in Computer-Human Interaction, H.R. Hartson, ed., Ablex, Norwood, NJ, 1984.
[6] Myers, B.A. and Buxton, W. Creating Highly Interactive and Graphical User Interfaces by Demonstration. Proceedings of SIGGRAPH'86 (Dallas, Texas, August 18-22, 1986). In Computer Graphics 20, 4 (August 1986), 249-258.
[7] Nelson, G. Juno, a Constraint-Based Graphics System. Proceedings of SIGGRAPH'85 (San Francisco, Calif., July 22-26, 1985). In Computer Graphics 19, 3 (July 1985), 235-243.
[8] Olsen, D.R. Jr. and Dempsey, E. SYNGRAPH: A Graphical User Interface Generator. Proceedings of SIGGRAPH'83 (Detroit, Mich., July 25-29, 1983). In Computer Graphics 17, 3 (July 1983), 43-50.
[9] Olsen, D., Buxton, W., Ehrich, R., Kasik, D., Rhyne, J. and Sibert, J. A Context for User Interface Management. IEEE Computer Graphics and Applications 4 (Dec 1984), 33-42.
[10] Olsen, D. MIKE: The Menu Interaction Kontrol Environment. To appear in ACM Transactions on Graphics.
[11] Reisner, P. Formal Grammar and Human Factors Design of an Interactive Graphics System. IEEE Transactions on Software Engineering SE-7, 2 (Mar 1981).
[12] Shneiderman, B. Direct Manipulation: A Step Beyond Programming Languages. IEEE Computer 16, 8 (1983), 57-69.
[13] Thomas, J.J. and Hamlin, G. Graphical Input Interaction Technique (GIIT) Workshop Summary. Computer Graphics 17, 1 (January 1983), 5-30.