Collaboration of UIMS Designers and Human Factors Specialists
John L. Bennett
IBM Almaden Research Center
650 Harry Road, K52/803
San Jose, CA 95120-6099
December 26, 1986
``How can the gap between human factors experimenters and interface software designers be bridged?'' — Call for Participants
Introduction
The amount and kind of learning that people must do in order to use a computer is strongly influenced by the characteristics of the user/computer interface. Other Workshop papers review current conceptual and practical developments in User Interface Management Systems (UIMSs). During the Workshop we saw that many designers know that ease of learning and ease of use are important at the interface. They are also aware of the difficulty of achieving these results in practice. In particular, people in the academic and industrial communities know that the availability of sharp tools is a necessary, but not sufficient, step toward building interface designs that prove effective for users.
In this note I explore some prerequisites for an increased effective contribution to system design by human factors specialists. I see an opportunity for these specialists to become a vital part of the system development process rather than playing an after-the-fact testing and evaluation role. Demonstrated effective collaboration between people expert in user requirements and people expert in computer requirements could yield tangible benefits by shaping function to meet the needs of the intended end users during the UIMS design process. The end users include both the application builders who will use the UIMS to construct the user interface and the people who access applications through that interface. The resulting UIMS must meet their respective needs for ease of learning, ease of use, and flexibility in growth of use.
The presence of measurable, testable objectives for all aspects of system performance, including usability, can be an incentive for collaboration. I suggest that the intensity, frequency, and subtlety of UIMS design decisions will justify the practical, day-to-day involvement of specialists representing user requirements.
Given the need for an intense collaboration, we must then address the cultural differences that may have to be overcome to establish the trust and teamwork needed to meet overall system goals for computer and user performance.
Establishing the need for collaboration
User Interface Management System (UIMS) function bridges between users (with human purposes, tasks, and results needed) and computer function (application capability requiring control and data from users). Previous workshops and technical publications have outlined conceptual and practical issues in building UIMSs which are logically complete and technically viable. The announcement for this workshop specifically identified a gap between people who focus on system usability (human factors experimenters) and people who focus on design of the computer software that defines the interface.
Defining the user interface
I have found it helpful to think of the user interface as a surface through which data and control are passed back and forth between computer and user [1]. Physical aspects of the user interface include the display devices, audio devices that may be used, and input devices such as a tablet, joystick, mouse, or even a microphone, in addition to the usual keyboards. Treating the interface as a surface can help us focus on the qualities seen by the user, and these qualities are influenced by both hardware and software design. Data displayed on the workstation provides a context for interaction, giving cues for user action — presuming the user knows how to turn it into information by being able to interpret what is displayed. The user formulates a response and takes an action. The data passes back to the computer through the interface. From this perspective, all aspects of the system that are known to the user are defined at the interface. The quality of the interface depends on what the user sees (or senses), what the user must know in order to understand what is sensed, and what sequence of actions the user can (or must) take to get needed results.
Establishing measurable, testable objectives
I make a distinction between the user REQUIREMENTS for successful interaction and a particular IMPLEMENTATION intended to meet the requirements [1]. This leaves us open to the possibility that a particular set of design decisions, no matter how well-intentioned, may fail to meet requirements. That is, ease-of-learning and ease-of-use requirements persist even if we fail to meet them. I assume it is our purpose to engineer the interface to fit the user rather than ``engineering'' the user (through training) to fit the interface. However, I know that any implementation requires trade-offs and compromises, and these will inevitably lead to a need for some user training as an accommodation to the equipment. Recognizing the difference between the end results we want and the mechanisms that we use to obtain those end results encourages us to explore a variety of design approaches.
Typical high-level goals for UIMS builders
With respect to improving the quality of the user interface, typical goals adapted from Olsen et al. [2] and from Coutaz [3] are:
· Place the user interface processing that is common across applications in a separate module so that many applications can use the same code.
· Use the common module to present a more consistent interface both within and across applications.
· Use the presence of a common module to encourage specialists to separate the design of the presentation language and the action language seen by the user from the design of specific application content.
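To make this separation concrete, here is a minimal sketch (in modern Python, used purely for illustration; the class, commands, and messages are invented rather than drawn from any actual UIMS). A shared dialogue module owns the action language and the error style, while each application registers only its semantics:

    # A shared dialogue module owns the action language and the style of
    # prompts and error messages; each application registers only its
    # semantics. Many applications then present one consistent interface.

    class DialogueManager:
        """User-interface code common across applications."""
        def __init__(self):
            self.commands = {}   # action-language verb -> (callback, help)

        def register(self, verb, callback, help_text):
            self.commands[verb] = (callback, help_text)

        def run_once(self, line):
            verb, _, argument = line.strip().partition(" ")
            if verb not in self.commands:
                # One error style, shared by every application.
                return "Unknown command '%s'; type 'help' for a list." % verb
            callback, _ = self.commands[verb]
            return callback(argument)

    # Two different applications reuse the same module and action language.
    ui = DialogueManager()
    ui.register("open", lambda name: "editor: opened %s" % name, "open a document")
    ui.register("plot", lambda expr: "grapher: plotted %s" % expr, "plot an expression")

    print(ui.run_once("open report.txt"))   # editor: opened report.txt
    print(ui.run_once("plto x**2"))         # the shared, consistent error message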
Typical high-level goals for end users
When considering the interface as a surface, we can outline qualities that users evaluate in an ``acceptance test'' of the design. Sample qualities [1] are:
· Learnability — a specified level of user performance is obtained by some required percentage of a sample of intended users, within some specified time from beginning of user training.
· Throughput — the tasks required of users can be accomplished by some required percentage of a sample of intended users, with required speed and with fewer than a specified number of errors.
· Flexibility — for a range of environments, users can adapt the system to a new style of interaction as they change in skill or as the environment changes.
· Attitude — once they have used the system, people want to continue to use it, and they find ways to expand their personal productivity through system use.
Establishing how success will be measured
Both UIMS builders and evaluators representing the user must establish specific measurable and testable goals for their work. Ultimate success at the user interface can be evaluated only after the system is built and put into use. However, because building a system is expensive and because it may be difficult to make design changes at later stages of implementation, developers are looking for cost-effective ways (such as division of function between UIMS and application) to make the changes needed at the user interface without affecting the relative stability of applications. Developers are also receptive to the potential success of early-warning methods to identify, before system completion, those design decisions that may lead to user inefficiencies. One method is modeling the series of interactive steps a user must follow to gain results.
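One rough sketch of such a step model appears below (in Python; the step names and per-step times are invented placeholders rather than measured operator times, and this is offered to convey the flavor of the approach, not as any specific published method):

    # Estimate task cost by summing nominal times over the interactive
    # steps a candidate design forces on the user. All numbers below are
    # invented placeholders, not measured operator times.

    STEP_COST = {
        "keystroke": 0.2,   # press one key
        "point":     1.1,   # move the pointer to a target
        "home":      0.4,   # move the hand between keyboard and pointing device
        "mental":    1.35,  # pause to decide or recall
    }

    def estimated_seconds(steps):
        """Sum nominal costs over an ordered list of interactive steps."""
        return sum(STEP_COST[s] for s in steps)

    # Compare two candidate designs for the same small task.
    menu_design = ["mental", "home", "point", "keystroke", "home"]
    command_design = ["mental"] + ["keystroke"] * 6
    print("menu:    %.2f s" % estimated_seconds(menu_design))
    print("command: %.2f s" % estimated_seconds(command_design))

The point is not the particular numbers but that step-by-step models of this kind can flag, before implementation, a design that forces extra pointing or hand movement on a frequent task.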
A possible format for a table of objectives is suggested in [1]. Each attribute (typically relating to function, portability, computer performance, usability, ...) is represented in a row of the table. The attributes, or qualities, for each goal have meaning to developers only if they can be measured. The table puts on record for all team members the attributes that are important, and one column indicates how each will be measured. The way one addresses usability, for example, can range from user tests to adherence to guidelines. But each measure of success must be related to the goals for the system, and measures for usability attributes may not be valid unless they are checked against the actual behavior of users.
Another column of such a table can be used to give a Planned Value. A Worst Case Value can be set as a lower bound; if any attribute in the table falls below its Worst Case Value, then the whole effort may be considered in some sense a ``failure.'' A State of the Art value can be used to give a best-known achieved value for that attribute. The ideal system would be at the State of the Art for all important attributes, but attaining the Planned Value may be all that is economically feasible.
A filled-in table at this top level shows at a glance the overall objectives for the system. Each attribute can then be expanded into a sub-table showing the details for that attribute — in the same format.
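A sketch of how such a table might be represented and checked follows (in Python; the attributes, measures, and numeric values are invented for illustration):

    # Each row of the table becomes a record; each record can be checked
    # against an observed value once tests are run.

    from dataclasses import dataclass

    @dataclass
    class Objective:
        attribute: str        # row label, e.g. a usability quality
        measure: str          # how the attribute will be measured
        worst_case: float     # lower bound; below this the effort "fails"
        planned: float        # the economically feasible target
        state_of_art: float   # best value known to have been achieved

        def assess(self, observed):
            if observed < self.worst_case:
                return "failure"
            return "meets plan" if observed >= self.planned else "below plan"

    table = [
        Objective("learnability", "% of sampled users at criterion after training", 60, 80, 95),
        Objective("throughput", "% of required tasks done at required speed", 70, 90, 98),
    ]
    for row in table:
        print(row.attribute, "->", row.assess(observed=85))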
As an example of the kind of analysis needed to establish meaningful values in a table of objectives, it is helpful to consider ``consistency.'' Often we see the terms ``consistent'' and ``consistency'' used as if they were self-defining. Concrete examples reveal that consistency is ALWAYS defined with respect to some ``dimension.'' In addition, consistency is always judged with respect to the qualities (properties) of some object when compared to the qualities of some other object.
As a simple illustration, one could say that two keyboards are consistent with respect to the relative location of the keys, or with respect to the color of the keys, or with respect to the distance of travel when the keys are pushed, or with respect to the presence of a key click when a key is depressed. Clearly some of these ``dimensions,'' and the values observed along each dimension, are more important than others to me as a user when I move from one keyboard to another keyboard. With respect to touch typing, the location of the keys is very important to me, and the color is likely to be less important or may even be unimportant. If I am a hunt-and-peck typist, the color pattern that I have memorized on one keyboard may be important to me, and I may require consistency on that dimension if I am to be supported in transferring my skill gained on one system to my work done on another.
The concept of ``consistent'' is a useful distinction — we do not require that two interfaces be IDENTICAL. In establishing measurable, testable objectives for consistency, we would need to elicit which of the many possible ``dimensions'' we suspect are important for carrying out a particular task given a particular set of users. Once the important distinctions are recognized, we can make design decisions that are then examined in the light of this working hypothesis. We might plan tests during design to determine more closely which dimensions, and over what range of values, are critical for the users and tasks in question.
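A minimal sketch of this dimension-by-dimension view (in Python; the keyboards and dimension names are invented):

    # Consistency judged dimension by dimension, for a given user and task.

    kbd_a = {"layout": "qwerty", "key_color": "beige", "travel_mm": 3.5, "click": True}
    kbd_b = {"layout": "qwerty", "key_color": "black", "travel_mm": 2.0, "click": True}

    def consistent_on(obj1, obj2, dimensions):
        """Report, per dimension, whether the two objects agree."""
        return {d: obj1[d] == obj2[d] for d in dimensions}

    # Which dimensions matter depends on the users and the task at hand.
    touch_typist_dims = ["layout", "travel_mm"]
    hunt_and_peck_dims = ["layout", "key_color"]

    print(consistent_on(kbd_a, kbd_b, touch_typist_dims))
    print(consistent_on(kbd_a, kbd_b, hunt_and_peck_dims))

Note that the two keyboards can be consistent for one population of users and inconsistent for another; the test is always relative to the chosen dimensions.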
The central point is that success in meeting objectives must be evaluated behaviorally. The dimensions of consistency are ``under control'' when the interface is formed from technology appropriate to the overall system goals AND the characteristics of the interface lead to the needed user behavior. To meet usability objectives, we must control those aspects of the interface that affect user performance.
Establishing and maintaining a set of working objectives during system design can be a particularly effective way of keeping a focus on the question, ``To what are we committed in this enterprise?'' Changes in understanding and in implementation direction are inevitable, especially in an evolving area such as UIMSs. By continually capturing the current best estimate with respect to performance, both on the computer side and the user side of the interface, we can help people on the team, from different backgrounds and with different focal interests, to maintain the commitment needed to see the project through to conclusion.
Shaping the technology to meet user needs
Although developers require tools to construct user interfaces with advanced features, the presence of a tool, even one that demonstrates internal elegance and creativity, does not itself automatically lead to user interfaces that meet user requirements. To identify some operational characteristics of the UIMS that result in user satisfaction, we can build performance-monitoring tools into the UIMS run-time support. The issue, however, is knowing what needs to be measured to diagnose user problems.
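As one illustration of what run-time monitoring might look like (a sketch in Python; the event names and the undo-rate diagnostic are invented examples, not validated measures):

    import time

    class MonitoredDispatcher:
        """Event dispatch with a log kept for later usability diagnosis."""
        def __init__(self):
            self.log = []   # (timestamp, event-name) pairs

        def dispatch(self, event):
            self.log.append((time.time(), event))
            # ... forward the event to the application as usual ...

        def undo_rate(self):
            # Fraction of events that are undo; one crude signal that
            # users are making and retracting errors at the interface.
            if not self.log:
                return 0.0
            undos = sum(1 for _, e in self.log if e == "undo")
            return undos / len(self.log)

    d = MonitoredDispatcher()
    for e in ["open", "type", "undo", "type", "undo"]:
        d.dispatch(e)
    print("undo rate: %.0f%%" % (100 * d.undo_rate()))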
Olsen et al. [2] suggest the integration of a spelling checker with a word processor as an example of a required technical capability for accessing functions from several applications. Such a capability can aid the user, but the timing of its invocation is critical. If, in a work situation, a person is using an editor and is barely able to type fast enough to capture transient creative thoughts in text while writing, then the automatic invocation of a spelling checker could interrupt the flow of thought and therefore the user's progress in the task. A task analysis might suggest that the system should support text entry at typing speed, followed by a user-controlled pass through the text to correct spelling. The need for integration and rapid performance made possible through invocation of UIMS services still exists, but the invocation would be timed to avoid interference with the creative process of the user.
While this is a simple example, it illustrates the kind of concerns the development team must address in order to meet user objectives.
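A sketch of that timing decision (in Python; the editor model and word list are invented for illustration):

    KNOWN_WORDS = {"creative", "thought", "flows", "at", "typing", "speed"}

    class Editor:
        def __init__(self):
            self.text = []

        def type_text(self, words):
            # The entry path does no checking, so it never interrupts
            # a user who is typing to capture a transient thought.
            self.text.extend(words.split())

        def spelling_pass(self):
            # Run only when the user asks, after the thought is captured.
            return [w for w in self.text if w.lower() not in KNOWN_WORDS]

    ed = Editor()
    ed.type_text("creative thoguht flows at typing speed")
    print("flagged on request:", ed.spelling_pass())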
As we review recent literature on the developing concept of the UIMS (e.g., [2, 3, 4]) and the tools associated with it, we see an identified need to provide a more ``consistent,'' ``uniform,'' ``supportive'' interface for the user. However, designers, both in the academic community and in industry, tacitly acknowledge that they do not have operational definitions for these terms.
Coutaz [3] calls for ``advice from psychologists, human factors specialists, and graphic artists who have acquired a better understanding of human behavior than have computer scientists'' when making specific design decisions affecting the user interface: ``Dialog design requires the cooperative efforts of technicians from a variety of disciplines.'' By cooperating in this way, developers increase the chances of building in effective guidance mechanisms that help the user adapt to the design. She identifies a need to present designers with a standard view of the potential users, so that designers can provide access to applications through the workstation in a consistent way.
What are the barriers standing in the way of usability specialists becoming effective members of design and development teams? One barrier is a general lack of tradition for having a specialist who represents user requirements as a member of the design team. Designers and developers, necessarily well versed in the hardware and software technology they use, may also consider themselves to be ``end users.'' Thus, they naturally assume that they are able to be user surrogates as a routine part of their work. Given this assumption, there may be a resistance to taking time to establish measurable, testable objectives: ``Of course systems I design are easy to learn and use — how could they be otherwise?'' Under these circumstances, adding a specialist in user requirements as a team member may be seen as an unnecessary lost opportunity to add another builder.
Another objection may be raised. Making the concepts of UIMS real in a concrete implementation is itself so challenging on the computer side of the interface that all resources must be concentrated on this focus. We start by constructing a workbench prototype in the academic laboratory just to show that one can be built at all. The next step is to build another version in an industrial advanced technology laboratory to show that the approach is technically feasible with given product constraints. Finally a development project establishes commercial viability. Some argue that usability considerations can be handled at the final stage. This has indeed been an effective way to develop many technologies.
But ``painting on'' usability at a final stage probably leads to loss of an opportunity to build in fundamental structures in support of user processes. If the success of a UIMS is dependent on simultaneously meeting requirements on BOTH sides of the interface, then the user requirements specialist role may need to be filled at the first stage, even in an academic environment.
Establishing measurable testable objectives can be a powerful incentive for collaboration. If the objectives include usability, then the chances are that computer specialists on the design team will not know how to meet them. If people on the team are committed to results, then they will look for ways to achieve them. This can be a first step toward collaboration.
A framework for collaboration
In an article [5] reporting on an evolution in the role of human factors practitioners, I suggested a basis for building the trust and mutual commitment needed for collaboration. Winograd and Flores [6], especially in Chapter 5, provide a more fundamental discussion. As part of maintaining effective collaboration, I have observed that one person in such a conversation can often play a pivotal role in asking participants (including oneself) about intention. If, given a difference of opinion, people focus on the requests and promises made and how requests will be judged as completed, this can be a powerful way of shifting from a context of ``us'' versus ``them'' to a focus on the common ground — the results to which those in the group are committed. This does depend on establishing trust and goodwill in advance.
The challenges of building UIMSs that are responsive to user needs may require increased collaboration between user representatives and system builders. I suspect it will be helpful to:
· Explicitly state and explore the goals of individuals in order to establish mutual understanding and potential mutual benefit.
· Establish, through conscious intention and the resulting use of language, the trust and openness that give team members permission to make mistakes and allow all to learn from such mistakes. This lets the inevitable breakdowns, errors, and problems become an opportunity for advancing the collaborative work rather than an excuse to stop it.
· Be open to the possibility that you, as a member of the team, can act for the moment as a disinterested third party to observe cultural misunderstanding and feed back to speakers what was heard. This gives the team an opportunity to be sure that what was heard was what was intended by both speakers and listeners.
Specific objectives must be given in concrete detail for a particular system, but we made some progress at the Workshop in setting up the generic framework for doing the specific work required. The real test will come in the evolution of UIMS work during the next few years.
Perspective and background
I have worked during the last decade as a direct participant in several cross-disciplinary projects. Just as UIMS concepts are abstractions drawn from fragments of concrete projects and formed into a unified whole, work on a User Interface Architecture [7] was a step toward establishing a framework for the evolution of concrete user interfaces in ways that support users in transferring their skills to new interfaces.
In an ongoing collaboration with Professors D. Kieras and P. Polson, we have been working to extend, adapt, and apply the ``cognitive complexity'' theory [8] that Kieras and Polson have been developing. In this approach we construct tools for analyzing the user's side of the interface to complement the formal tools already available for describing the computer side of the interface. The purpose of the work is to give design and development teams a frame of reference for analyzing the implications of design decisions before they are committed to code. While ``cognitive complexity'' is an analytic and evaluation tool, the intention is to inspire an inventive design cycle by giving early indications of where users may have ease of learning and ease of use problems.
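To suggest the flavor of such an analysis, here is a much-simplified sketch (in Python; the production rules are invented, and real cognitive complexity analyses are considerably more detailed):

    # Represent two editing methods as (invented) production rules; the
    # rules shared between them transfer, and the count of new rules is
    # a rough early indicator of what remains to be learned.

    delete_word_rules = [
        "IF goal is delete AND cursor not at target THEN move cursor to target",
        "IF cursor at target THEN press ESC",
        "IF ESC pressed AND goal is delete-word THEN type d w",
    ]
    delete_line_rules = [
        "IF goal is delete AND cursor not at target THEN move cursor to target",
        "IF cursor at target THEN press ESC",
        "IF ESC pressed AND goal is delete-line THEN type d d",
    ]

    shared = set(delete_word_rules) & set(delete_line_rules)
    new = set(delete_line_rules) - set(delete_word_rules)
    print("rules that transfer:", len(shared))   # 2
    print("new rules to learn:", len(new))       # 1

In this style of analysis, learning burden grows with the number of new rules, so designs that share rules across tasks should be easier to learn.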
Though this technology is not yet ready to be moved from the research laboratory into the development laboratory, we believe it does represent a step toward modeling the user interface in a way understandable to engineers and compatible with a system development process.
Summary
Explicit measurable, testable usability objectives can create a recognized need for collaboration among human factors specialists and UIMS designers. The fact that the technical training of these specialists is often quite different may require special attention to building a basis for teamwork. A commitment by team members to meet the usability objectives can make the needed collaboration work.
References
[1] Bennett, J. Managing to Meet Usability Requirements. In Visual Display Terminals: Usability Issues and Health Concerns, Bennett J., Case D., Sandelin J. and Smith M. (eds.), Prentice-Hall, Englewood Cliffs, NJ, 1984.
[2] Olsen, D., Buxton, W., Ehrich, R., Kasik, D., Rhyne, J. and Sibert, J. A Context for User Interface Management. IEEE Computer Graphics and Applications 4, 12 (Dec. 1984), 33-42.
[3] Coutaz, J. Abstractions for User Interface Design. IEEE Computer 18, 9 (1985), 21-34.
[4] Pfaff, G.E., ed. User Interface Management Systems, Springer-Verlag, 1985.
[5] Bennett, J. Observations on Meeting Usability Goals for Software Products. Behaviour and Information Technology 5, 2 (1986), 183-193.
[6] Winograd, T. and Flores, F. Understanding Computers and Cognition: A New Foundation for Design, Ablex, 1986.
[7] Bennett, J. The Concept of Architecture Applied to User Interfaces in Interactive Computer Systems. Proceedings of INTERACT '84 (London), Shackel, B. (ed.), North-Holland, 1985, 865-870.
[8] Kieras, D. and Polson, P. An Approach to the Formal Analysis of User Complexity. International Journal of Man-Machine Studies 22, 4 (1985), 365-394.