Keith A. Lantz, Peter P. Tanner,
Carl Binding, Kuan-Tsae Huang, Andrew Dwelly
3. The workstation agent
The remainder of this report focuses on the workstation agent (WA). This section discusses the interfaces a WA should present to its clients, including dialogue managers, without proposing a specific strategy for implementing those interfaces. Subsequent sections suggest a specific implementation strategy, as well as discuss the relationship of the WA, as proposed, to existing or proposed standards.
The workstation agent provides the basic interface between the hardware and the rest of the system. One of its principal functions is to hide any idiosyncrasies of that hardware — through the use of
virtual devices. Rather than dealing directly with the hardware, most applications request input from a virtual keyboard or mouse, for example, and write output to a virtual store.
Depending on the kind of real devices being emulated, the characteristics of the virtual devices will vary widely. In the simplest implementation, each workstation agent emulates exactly one input device and one output device. More sophisticated workstation agents might emulate multiple classes of devices simultaneously.
2 This is not to say that idiosyncrasies must be hidden. Indeed, in many situations the details of specific devices are exactly those that determine the quality of interaction [8,12].
Historically, the most common pair of devices emulated has been the keyboard and display of a page-mode (character) terminal, exemplified by the DEC VT-100. Even in this case, the workstation agent can be thought of as emulating different types of devices, corresponding to the various input and output modes provided by a VT-100 — character-at-a-time versus block transmission, local editing facilities, and the like. In general, the workstation agent, through its virtual devices, provides a set of facilities that might be referred to as ``cooked I/O'' — ranging from line-editing to page-editing to graphics-editing and so forth. In support of this, the virtual input devices are usually capable of ``echoing'' the input and local editing operations on an appropriate virtual output device. These facilities are enabled and disabled individually for each virtual device.
Largely due to this linking of virtual input to virtual output devices for purposes of echoing, it has been common to regard (input device, output device) pairs, or virtual terminals, as the entities of interest. Unfortunately, such one-to-one coupling causes problems in situations where an application requires a many-to-many mapping of input to output devices, as is the case in the context of real-time multimedia conferencing (cf. [2,18,24]). Therefore, we advocate a separation of input from output and eschew the expression ``virtual terminal.''
The number of virtual devices may be different from the number of real devices. There may be more — as in the case of multiple virtual displays (windows) — or fewer — as in the case of locator and button devices being combined as a mouse device. The latter case is more typical of input devices, where events from multiple real devices are merged into a single input stream.
3 To be more precise, there may be multiple classes of virtual device for each real device, and there may be multiple instances of each class. We use the expression ``virtual device'' to refer to an instance.
The expression ``workstation agent,'' then, derives from the fact that it ``stands in'' or ``acts as an agent for'' the physical hardware. Many contemporary systems have used the expression ``window system'' to refer to this functionality, but that expression does not suggest a system that can accommodate communications media other than traditional display devices. While for purposes of exposition the bulk of the following discussion does, in fact, focus on the case of I/O being graphical (or textual), we believe most of the concepts are applicable to other media. Our use of the term ``window'' will, in general, be restricted to situations where we are referring to portions of the display.
3.1 An input model
We propose that the workstation agent should, as a minimum, offer one class of (composite) virtual input device, namely, one that interleaves events from all real hardware devices into a single stream. Thus, the interface to clients is a stream interface. A client must ``read'' the stream to receive input.
Naturally, a client may choose to open multiple such streams, perhaps one for each window. Other features pertaining to input handling are best described working from the hardware up. To illustrate these features, we will describe how the WA handles a mouse event (refer to Figure 2).
    device
      |
      v
    window determination
      |
      v
    determination of clients
      |
      v
    replicating (if >1 client interested in window)
      |
      v
    filtering (possible discarding)
      |
      v
    multiplexed with other device input
      |
      v
    input queue to client

Figure 2. Processing a mouse event.
4 This does not mean that the client must issue a new read request every time it wants a new input token. In an asynchronous message-passing system, for example, the client might open a stream for reading, after which the WA would send events to the client as they become available. The client would simply receive those events from its input message queue.
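The stream interface just described can be sketched as follows. This is an illustrative model only, not any particular system's API; the class and method names are hypothetical.

```python
from collections import deque

class InputStream:
    """Hypothetical client-side view of one composite virtual input stream."""
    def __init__(self):
        self._queue = deque()

    def deliver(self, event):
        # In an asynchronous message-passing system, the WA would send events
        # to the client as they become available; delivery is modeled here
        # as an append to the client's input queue.
        self._queue.append(event)

    def read(self):
        # The client ``reads'' the stream to receive the next input token.
        return self._queue.popleft() if self._queue else None
```

A client simply calls `read` in a loop; it need not issue a new request to the WA for every token.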
A mouse event, on arrival at the WA, is analyzed to determine which client (or clients) are interested in the event. Multiple clients may be associated with a single window, for example, if one client is responsible for lexical feedback (echo), while a different client is responsible for semantic feedback. Alternatively, there could be a client responsible for the mouse echo for all windows; it would receive a copy of all mouse events, and so be able to update the cursor position on the screen in parallel with another copy of the event being sent to other clients.
The determination of which clients are interested in the event is a three-step process. First, the WA determines the window (or windows) in which the mouse was positioned when the event occurred. Second, the WA determines which of the clients associated with those windows are interested in input from the device that generated the event. If there is more than one such client, the event is replicated, with a copy being targeted for each client. This, then, is the demultiplexing stage — the single stream of mouse input branches into several streams, one for each client. Of course, if there are no clients interested in an event, it is discarded.
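The first two steps, together with replication, can be sketched as a single function. All names here (`windows`, `interests`, the event dictionary layout) are hypothetical illustrations, not part of any actual WA interface.

```python
def demultiplex(event, windows, interests):
    """Hypothetical client determination for one mouse event.

    windows:   window id -> bounding rectangle (x0, y0, x1, y1)
    interests: window id -> {client name: set of device names}
    Returns one (client, event-copy) pair per interested client;
    an empty result means the event is discarded.
    """
    x, y = event["pos"]
    copies = []
    # Step 1: which window(s) contained the mouse when the event occurred?
    for win, (x0, y0, x1, y1) in windows.items():
        if x0 <= x < x1 and y0 <= y < y1:
            # Step 2: which clients of that window want this device's input?
            for client, devices in interests.get(win, {}).items():
                if event["device"] in devices:
                    # Replication: one copy of the event per interested client.
                    copies.append((client, dict(event)))
    return copies
```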
5 Events should not be discarded until this determination can be made with certainty. There may be situations where a device is generating events faster than the WA can demultiplex them, in which case those events should be queued elsewhere until they can be demultiplexed. In the most pathological situations even these queuing limits may be exceeded, in which case events must be discarded.
At this point, a copy of the input event has been produced for each interested client. The final step in client determination is to determine, for each client selected thus far, whether the client is interested in this particular event. That is, in general, a client may specify a
filter that performs an action that is dependent on the actual input values as well as on the originating input device and the client's specifications for that device.
For example, it would be quite reasonable for an input event to be discarded if it is not ``significantly different'' from the most recent previous reading in the client's queue. Alternatively, the ``new'' event (the one being processed) might cause the ``old'' event (the one in the queue) to be discarded. The measure of ``significant difference'' may well vary from one client to another; a typical measure for the mouse would be distance moved since last event, or whether a button transition had occurred.
6 The filter need not be procedural, but could consist instead of a mask or template.
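A procedural filter of the sort just described might look like the following sketch; the event layout and the threshold are hypothetical.

```python
def significant(new, last, min_distance=4):
    """Hypothetical per-client filter: pass a mouse event only if it is
    ``significantly different'' from the last event queued for this client."""
    if last is None:
        return True                          # nothing queued yet
    if new["buttons"] != last["buttons"]:
        return True                          # any button transition passes
    # Distance moved since the last event, compared against a threshold.
    dx = new["pos"][0] - last["pos"][0]
    dy = new["pos"][1] - last["pos"][1]
    return dx * dx + dy * dy >= min_distance * min_distance
```

A non-procedural filter, as the footnote notes, could instead be a mask or template matched against the event fields.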
Once it has been decided that the event should be queued for the client, it is entered in the queue in time-sequence order with input from all other devices. This is the multiplexing stage, and is the final step in the input flow through the WA. Subsequently, the event will be returned to the client in response to a ``read'' request.
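The multiplexing stage amounts to a time-ordered merge of per-device streams. Assuming each stream is already in time order and each event carries a timestamp (an illustrative representation, not any system's actual format), the standard-library merge suffices:

```python
import heapq

# Per-device streams of (timestamp, device, data) tuples, each already
# in time order -- e.g. the surviving mouse events and the keyboard events.
mouse = [(1, "mouse", (5, 5)), (4, "mouse", (6, 5))]
keys = [(2, "keyboard", "a"), (3, "keyboard", "b")]

# The multiplexing stage: interleave the streams into a single
# time-sequence-ordered queue for the client.
client_queue = list(heapq.merge(mouse, keys))
```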
A final desirable feature is support for out-of-band data. For example, a CTRL-C typed on a keyboard, indicating an abort request, should be processed before any pending events. The necessary input might be inserted at the head of the client's input queue, or it might be sent using an asynchronous event notification mechanism distinct from the stream interface we have discussed thus far. For reasons of simplicity, we advocate the former approach, relying on the client-side interface routines to accommodate out-of-band data. On the other hand, since out-of-band data often means that pending events should be flushed or modified in some other way, the client interface should provide the corresponding functionality.
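The head-of-queue approach to out-of-band data can be sketched directly; the routine name and the flush option are hypothetical.

```python
from collections import deque

def post_out_of_band(queue, event, flush=False):
    """Hypothetical client-interface routine: an out-of-band event (a CTRL-C
    abort, say) jumps ahead of everything pending; optionally the pending
    events are flushed first."""
    if flush:
        queue.clear()            # discard pending events
    queue.appendleft(event)      # processed before anything still queued
```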
Although we know of no current system that adheres precisely to the above description, the input models adopted by the Adagio project [41,42] and by the ANSI X3H3.6 Committee on Display Management for Graphical Devices differ only slightly. TheWA [30], X [38], and NeWS [21] differ in more details but are similar in spirit — including having adopted the notion of a single composite virtual input device.
3.2 Output: retention issues
The workstation agent is responsible for posting images on a display, sending controls to a sound generator and managing all other devices capable of providing feedback to the user. It is not our intention in this section to describe the media-dependent output primitives available to clients of the WA, but rather to discuss one of the most controversial issues in ``window system'' design: retention, the degree to which information needed to generate the output should be retained in the WA. As in the previous discussion, what follows looks at the case of the output being graphical or textual, and being displayed on a window of a graphics display, although many of the ideas presented are applicable to other media.
There are four principal levels of retention:
1. None
2. Device-dependent (e.g. a bitmap)
3. Partially structured (e.g. a segmented display file)
4. Complete (e.g. a structured display file)
If the WA does not retain any portion of the image that it has placed on the screen, the application must then be responsible for all window regeneration whenever that becomes necessary. The WA's role becomes one of an informer — it informs the application that its window (or a specific portion of its window) has been destroyed, and must be replaced. It is then up to the application to enter the necessary information into its output stream that will cause the WA to repair or replace the contents of the window. The application module must keep, at all times, enough information to regenerate the image on the screen. This is the approach taken in SunWindows [40], X [38], and Andrew [22], for example.
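The informer role can be sketched as a damage/repair protocol. The `on_damage` callback and the output-record format below are illustrative inventions, not the interface of SunWindows, X, or Andrew.

```python
class Application:
    """With no retention in the WA, the application itself keeps enough
    information to regenerate its image at any time."""
    def __init__(self, lines):
        self.lines = lines                  # the application's own model

    def on_damage(self, first, count):
        # The WA merely informs us which portion of the window was
        # destroyed; we answer with the output that repairs it.
        return [("draw_text", i, self.lines[i])
                for i in range(first, min(first + count, len(self.lines)))]
```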
Alternatively, the WA may keep a bitmap representation of the full window, including that part of the window covered by other windows. In this case, window movements do not require any work from the application, but any pan, scroll, zoom or resize of the window will require the application to take on the responsibility of redrawing the window. This approach was taken, for example, in the Teletype 5620 (formerly the Blit) [35].
The WA may keep a partially structured representation of the contents of the screen. For example, a WA handling a text window may keep the textual representation of the contents of the window. In this situation, the WA can handle any regeneration of the output unless a ``fault'' occurs — analogous to a page fault — where the WA requires information from the application which precedes or follows that portion of the output structure retained by the WA.
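The fault mechanism can be sketched as follows. The class, the `fetch` callback, and the fault counter are hypothetical, used only to make the page-fault analogy concrete.

```python
class PartialRetentionWindow:
    """Hypothetical WA-side text window retaining only a span of lines.
    Requests inside the span are served locally; outside it, a ``fault''
    (analogous to a page fault) fetches the missing lines from the
    application."""
    def __init__(self, fetch, first=0, lines=()):
        self.fetch = fetch                  # application-supplied callback
        self.first = first                  # index of first retained line
        self.lines = list(lines)            # the retained text
        self.faults = 0

    def view(self, start, count):
        if start < self.first or start + count > self.first + len(self.lines):
            self.faults += 1                # retained span is insufficient
            self.lines = self.fetch(start, count)
            self.first = start
        off = start - self.first
        return self.lines[off:off + count]
```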
A model of full retention can be illustrated with the idea of the WA having a built-in implementation of PHIGS [11]. Such a WA would retain the complete structure, the model, of a 3-D graphics image. The WA would be able to redisplay the 3-D image independently after any window modification mentioned above, or after such window parameter changes as movement of the viewpoint or modification of the display attributes. Moreover, it would be possible for the WA to provide multiple independent views of the same image — independent of the application. This is the approach taken (for graphics) in the Virtual Graphics Terminal Service (VGTS) [28,34] and its successor, The Workstation Agent (TheWA) [30]. It is also an approach that can be taken in NeWS (formerly SunDew) [21], where PostScript [1] is supported rather than PHIGS.
The tradeoffs between these approaches are significant. For example, complete retention allows for rapid reposting of the window when it is moved, changes size or is uncovered — in contrast to the sometimes excruciating slowness with which a screen is refreshed when no information is retained. This is especially advantageous when the application is running on a machine remote from that running the WA (cf. [28,29,34]). However, the method used for this retention is highly dependent on the types of applications being supported and may lead to a high degree of redundancy — data structures in the application plus display file in the WA plus frame buffer, for example. Consequently, the idea of ``forcing'' such complexity upon all applications is inappropriate.
Fortunately, as demonstrated by TheWA and NeWS, it is possible for a single workstation agent to implement multiple retention models simultaneously. Indeed, the approach is to provide multiple classes of virtual output devices for each real output device. For example, TheWA supports one class of virtual output device that emulates the store of a VT-100 and one class that emulates a PHIGS-like structured display file. However, adding this much complexity to the workstation agent may be inappropriate in some situations, especially if memory space is a concern.
6. Concurrency and multi-process structuring
With the proliferation of networks and multiprocessor workstations, concurrency and parallelism have been of increasing concern to systems developers.
Until the advent of window systems, however, the typical user had little cause to be aware of any underlying concurrency — as provided by any multi-tasking operating system, for example. She did one thing at a time. Only when window systems made it not just possible, but easy, for a user to interact with multiple applications (more-or-less) simultaneously did concurrency have a significant impact on her model of the system.
7 By ``parallelism'' we mean truly parallel execution, possible only by using multiple processors. By ``concurrency'' we mean ``logical parallelism'', that is, achieving the same semantic effect as parallel execution but without requiring multiple processors. Thus, from a semantic point of view, ``concurrency'' subsumes ``parallelism''.
Thus, even casual users are becoming accustomed to at least one fundamental concept of concurrency, namely, multiple threads of control. The issues at hand, however, are whether concurrent programming mechanisms and practices can assist in the implementation of interactive software, and, if so, how. The principal mechanisms of interest are processes (threads of control) and the mechanisms to prevent processes from interfering with each other (mutual exclusion), to permit them to synchronize their dialogue (synchronization), and to permit them to exchange data (communication). The selection of and subsequent use of those mechanisms should be determined entirely by the choice of concurrency model — or paradigm for multi-process structuring [13].
We assume that the underlying operating system already supports processes and discuss only the issues surrounding multi-process implementation of (sub)systems outside the kernel. Of course, concurrent systems of this sort predate window systems and UIMS's by at least a decade, so our intent is not to give a comprehensive overview of the field — for which the reader may wish to refer to the excellent tutorial by Andrews and Schneider [4]. Rather, we limit our discussion to the basic motivations for and application of multi-process structuring in the context of a UIMS.
6.1 Motivations
Historically, there have been four principal motivations for composing software systems out of multiple interacting processes:
1. Modular decomposition: The decomposition of large systems into smaller, self-contained pieces, with the possible advantages of improved portability, configurability, maintainability, and . . .
2. Protection: The information hiding that comes with modular decomposition yields increased protection, especially if processes (modules) use disjoint address spaces.
3. Distribution: If processes have disjoint address spaces, and communicate only through messages, it is straightforward to distribute the various processes across multiple (loosely-coupled) processors.
4. Performance: If processes can be distributed across multiple processors (whether loosely- or tightly-coupled), and can run in parallel, performance may be enhanced.
Unfortunately (for the case of multi-process structuring), most if not all of the benefits of modular decomposition can also be obtained with contemporary data abstraction languages. Moreover, many early concurrent programming mechanisms, especially message-based systems, gained a reputation for poor performance and difficult-to-get-right programming.
However, more recent concurrent programming environments have, in general, overcome their performance and programming problems. That does not necessarily mean that passing a message is as cheap as a procedure call, for example, but that overall system performance degradation, if it occurs, is outweighed by the advantages of multi-process structuring. In particular, data abstraction by itself does not help when the system in question must deal with multiple threads of control — for example, input from multiple devices or output from multiple applications. In that case multiple processes, one for each thread, say, are preferred over one process that needs to multiplex itself between the threads — from an ease-of-programming point of view [13,19,27,31]. Without employing multiple processes it is also impossible to distribute a software system across multiple processors and, therefore, impossible to take advantage of any available parallelism.
Finally, suppose there were only two modules to worry about, the application and the workstation agent. There are three basic ways to ``connect'' these two modules:
1. Implement the workstation agent entirely in the kernel, accessing it via system calls.
2. Provide some kernel support, but link the bulk of the workstation agent with the application.
3. Implement the workstation agent as a server process, accessing it via interprocess communication. (Still requires some kernel support for low-level device access and for IPC.)
Without repeating all the arguments, method 1 is simply incompatible with current trends in operating system design, and should not be considered for any but the most specialized applications environments. Experience with SunWindows provides ample evidence that method 2 suffers from three severe disadvantages [37]:
1. A high degree of redundancy — if shared libraries are not supported,
2. Complex synchronization problems — multiple clients attempting to repaint the screen at the same time, for example,
3. Inadequate support for remote (distributed) applications.
That leaves method 3, and the success of Andrew [22], X [38], NeWS [21], and other server-based window systems — for UNIX — leaves little doubt as to the superiority of that method over the others — for any multi-tasking environment.
Therefore, we see no realistic alternative to employing multiple processes in the construction of interactive software — in the context of emerging hardware environments. The real issue is how to do so in such a way as to gain the indicated advantages while avoiding the twin perils of poor performance and ``difficult-to-get-right'' programming.
6.2 Mechanisms and paradigms
Historically, the mechanisms and paradigms for concurrent programming have fallen into three broad classes, based on their underlying memory architecture:
1. Shared memory
2. Disjoint address spaces
3. Hybrid
The first two classes represent two ends of the spectrum, with competing advantages and disadvantages. In shared-memory-based systems the interface to the various process interaction mechanisms is the procedure call. Data need never be copied, and creating, destroying and switching between processes in the same address space do not require significant memory map manipulation; hence the expression ``lightweight process.'' In disjoint-address-space-based systems the interface is the message, data needs to be copied, and creating, destroying and switching between processes requires a significant amount of memory map manipulation; hence the expression ``heavyweight process.'' Consequently, shared-memory-based mechanisms typically offer superior performance. Disjoint-address-space-based mechanisms, on the other hand, offer transparent distribution and better protection.
8 All remote procedure call mechanisms are built on top of messages, by whatever name.
Achieving a more balanced mix of advantages and disadvantages suggests a hybrid system. In such systems,
teams of (lightweight) processes share a single address space and may employ shared-memory-based mechanisms to interact with each other. Processes on separate teams interact via messages.
A team, then, resides on a single host (including shared-memory multiprocessors) and its constituent processes can interact efficiently by virtue of their shared address space. However, the software system as a whole can still be distributed (and possibly made more reliable) by placing different teams on different hosts.
9 The team is the equivalent of a heavyweight process — as in UNIX, say.
A typical server, for example, might employ separate processes on the same team for each outstanding ``connection'' (open resource) — each dialogue, or each display file, or each input stream, for example. In addition, helper processes might be employed as timers or to perform operations concurrently with the main server, such as repainting the contents of one window concurrent with processing updates to the display file associated with another window. In either case, each process can be ``single-threaded'' — taking a request and processing it to completion before asking for another request — and therefore be almost as easy to program as a typical sequential program. The choice of message-passing paradigm — in particular, whether asynchronous or synchronous — can lead to additional simplification; the reader is referred to [4] for an overview of the relevant issues.
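A single-threaded per-connection worker of this kind can be sketched with lightweight threads and queues; the sentinel-based shutdown and the request format are illustrative choices, not features of any of the systems cited.

```python
import queue
import threading

def connection_worker(requests, replies):
    """Hypothetical per-connection process on a server team: single-threaded,
    taking one request and processing it to completion before asking for
    another."""
    while True:
        req = requests.get()
        if req is None:                     # shutdown sentinel
            break
        replies.put(("done", req))          # process to completion, then loop

requests, replies = queue.Queue(), queue.Queue()
worker = threading.Thread(target=connection_worker, args=(requests, replies))
worker.start()
requests.put("open-window")
requests.put("repaint")
requests.put(None)
worker.join()
```

Because each worker handles one request at a time, it reads almost like a sequential program; concurrency arises only from running many such workers on one team.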
The hybrid ``teams of processes'' paradigm was pioneered in the Thoth operating system [13,15] and employed in all of its descendants, including the V-System [6] and Harmony [20]. Examples of interactive software constructed for these systems are discussed in [5,7,9,24,25,41,42]. The same basic paradigm has also been adopted for the Argus [32], Eden [3], Verdix Ada [43], and Mach [36] systems, as well as by the innumerable groups who have developed ``lightweight process packages'' for UNIX — including those employed internal to the X and NeWS window servers. It should come as no surprise, then, that it is the basic paradigm we advocate for multi-process structuring.
10 Some of us also believe that there is reason to support the sharing of memory between teams, as might be the case for the semantic support component discussed in [16], for example. However, this issue was not discussed in sufficient depth as to offer it as a consensus view.
6.3 Key performance issues
If the reader remains unconvinced as to the viability of the ``teams of processes'' paradigm for multi-process structuring of interactive software, it is probably due to remaining doubts as to the efficiency of interprocess communication between disjoint address spaces. We offer several observations to assuage those doubts. First, the reader should remember that the ability to distribute software across multiple processors, without fine-tuning the software for the particular hardware environment, can in fact improve performance — by virtue of parallelism or access to more powerful processors. Second, increasingly efficient techniques are being developed for exchanging messages between processes on the same machine. For example, numerous systems exist which, running on Motorola 68020-class hardware, can execute complete send-receive-reply transactions in under 1 millisecond. More importantly, evidence is accumulating that, whatever the costs of interprocess communication, it represents but a small piece of overall computation time (cf. [14,17,29]).
The remaining observations pertain to the situation where large numbers of separate events must be exchanged, as when many input or output events are being generated. If a separate message is sent for every event, then performance (in a single-machine environment) will naturally be slower than if each event resulted in a procedure call. Two fundamental techniques limit this degradation. First, wherever possible, multiple events should be batched in a single message. Second, wherever possible, messages should be pipelined; that is, replies should be eliminated. Experiments with the VGTS have shown order-of-magnitude performance improvement due to application of these techniques [29]. Similar observations have been made with respect to X and Andrew [22,37].
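The batching technique can be sketched as a generator that packs events into messages; the batch size is an arbitrary illustrative parameter.

```python
def batched(events, max_batch=8):
    """Batching: pack several events into each message.  Combined with
    pipelining (sending messages without waiting for per-message replies),
    this limits the degradation of sending one message per event."""
    batch = []
    for ev in events:
        batch.append(ev)
        if len(batch) == max_batch:
            yield batch                     # one message carries many events
            batch = []
    if batch:
        yield batch                         # flush the final partial batch

# 20 events travel in 3 messages instead of 20.
messages = list(batched(range(20), max_batch=8))
```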
6.4 Application to a UIMS
Applying the above techniques to a UIMS, we postulate the existence of at least three distinct teams (address spaces). One team contains the workstation agent, one contains the dialogue manager and there is one team for each application including the workstation manager. Within the workstation agent team, separate (lightweight) processes may be used for each device, or even for each input or output stream. However, each has access to the same queues, for example, by virtue of their shared address space. Similarly, within the dialogue manager team, separate processes may be used for each dialogue, but each has access to the shared databases needed to maintain user profiles, dialogue specifications and the like. The resulting architecture is precisely that of Figure 1(b), where ovals correspond to teams and the interconnections correspond to message paths (possibly hidden by stream interfaces or the like).
If, for efficiency reasons, three disjoint address spaces prove too many, we believe that a merger of the workstation agent and dialogue manager would be preferable, in most cases, to a merger of the application and the dialogue manager. This merger still permits the UIMS to be run on one host, while the application(s) are distributed. From our experience, the required communication bandwidth within the UIMS is considerably higher than the necessary communication bandwidth between applications and the UIMS. Moreover, as in the case of SunWindows, merging the dialogue manager with the applications typically means merging each application with (a copy of) the dialogue manager, leading to a high degree of redundancy and potential synchronization problems. On the other hand, our experience may not bear out in the event that the dialogue manager becomes more tightly coupled with the application, as proposed in [16].
Fortunately, we believe that the cost of using disjoint address spaces in many systems is already sufficiently low as to render the efficiency argument moot, and that this trend will continue. Experience with the VGTS, TheWA, X, Andrew, NeWS and Adagio all support this observation, at least with respect to the use of disjoint address spaces for applications and the workstation agent.
8. References
[1] Adobe Systems Incorporated. PostScript Language Reference Manual. Addison-Wesley, 1985.
[2] Aguilar, L., Garcia Luna Aceves, J.J., Moran, D., Craighill, E.J. and Brungardt, R. Architecture for a multimedia teleconferencing system. In Proc. SIGCOMM '86 Symposium on Communications Architectures and Protocols, (August 1986). ACM, New York, 126-136.
[3] Almes, G.T., Black, A.P., Lazowska, E.D. and Noe, J.D. The Eden system: A technical review. IEEE Transactions on Software Engineering SE-11, 1, (January 1985), 43-59.
[4] Andrews, G.R. and Schneider, F.B. Concepts and notations for concurrent programming. ACM Computing Surveys 15, 1, (March 1983), 3-43.
[5] Beach, R.J., Beatty, J.C., Booth, K.S., Plebon, D.A. and Fiume, E.L. The message is the medium: Multiprocess structuring of an interactive paint program. Proceedings of SIGGRAPH'82 (Boston, Mass., July 26-30, 1982). In Computer Graphics 16, 3 (July 1982), 277-287.
[6] Berglund, E.J. An introduction to the V-System. IEEE Micro, (August 1986), 35-52.
[7] Berglund, E.J. and Cheriton, D.R. Amaze: A multiplayer computer game. IEEE Software 2, 3, (May 1985), 30-39.
[8] Bolt, R.A. The Human Interface: Where People and Computers Meet. Lifetime Learning Publications, 1984.
[9] Booth, K.S., Cowan, W.B. and Forsey, D.R. Multitasking support in a graphics workstation. In Proc. 1st International Conference on Computer Workstations, (November 1985), IEEE, 82-89.
[10] Borufka, H.G. and Pfaff, G. The design of a general-purpose command interpreter for graphics man-machine communication. In Man-Machine Communication in CAD/CAM. Sata, T. and Warman W. (eds.), North-Holland, 1981.
[11] Brown, M. and Heck, M. Understanding PHIGS: The Hierarchical Graphics Standard. Megatek Corporation, San Diego, CA, 1985.
[12] Buxton, W. Lexical and pragmatic considerations of input structures. Computer Graphics 17, 1 (January 1983), 31-37.
[13] Cheriton, D.R. The Thoth System: Multi-Process Structuring and Portability. North-Holland/Elsevier, 1982.
[14] Cheriton, D.R. The V Kernel: A software base for distributed systems. IEEE Software 1, 2, (April 1984), 19-42.
[15] Cheriton, D.R., Malcolm, M.A., Melen, L.S. and Sager, G.R. Thoth, a portable real-time operating system. Communications of the ACM 22, 2, (February 1979), 105-115.
[16] Dance, J.R. et al. Report on run-time structure for UIMS-supported applications. In this issue, Computer Graphics 21, 2 (April 1987).
[17] Fitzgerald, R. and Rashid, R.F. The integration of virtual memory management and interprocess communication in Accent. ACM Transactions on Computer Systems 4, 2, (May 1986), 147-177.
[18] Forsick, H.C. Explorations in real-time multimedia conferencing. In Proc. 2nd International Symposium on Computer Message Systems, (September 1985). IFIP, 299-315.
[19] Gentleman, W.M. Message passing between sequential processes: The reply primitive and the administrator concept. Software—Practice and Experience 11, 5, (May 1981), 436-466.
[20] Gentleman, W.M. Using the Harmony operating system. Technical Report NRCC-ERB-966, Division of Electrical Engineering, National Research Council of Canada, May, 1985.
[21] Gosling, J. SunDew: A distributed and extensible window system. In Methodology of Window Management, F.R.A. Hopgood, et al. (eds.), Springer-Verlag, 1986, 47-58.
[22] Gosling, J. and Rosenthal, D. A window manager for bitmapped displays and UNIX. In Methodology of Window Management, F.R.A. Hopgood, et al. (eds.), Springer-Verlag, 1986, 115-128.
[23] Hopgood, F.R.A., Duce, D.A., Fielding, E.V.C., Robinson, K., and Williams, A.S. (eds.) Methodology of Window Management. Springer-Verlag, 1986.
[24] Lantz, K.A. An experiment in integrated multimedia conferencing. In Proc. CSCW '86: Conference on Computer-Supported Cooperative Work, (MCC Software Technology Program, December, 1986). 267-275.
[25] Lantz, K.A. Multi-process structuring of user interface software. In this issue, Computer Graphics 21, 2 (April 1987).
[26] Lantz, K.A. On user interface reference models. SIGCHI Bulletin 18, 2, (October, 1986), 36-42.
[27] Lantz, K.A., Gradischnig, K.D., Feldman, J.A. and Rashid R.F. Rochester's Intelligent Gateway. Computer 15, 10, (October 1982), 54-68.
[28] Lantz, K.A. and Nowicki, W.I. Structured graphics for distributed systems. ACM Transactions on Graphics 3, 1, (January 1984), 23-51.
[29] Lantz, K.A., Nowicki, W.I., and Theimer M.M. An empirical study of distributed application performance. IEEE Transactions on Software Engineering SE-11, 10, (October 1985), 1162-1174.
[30] Lantz, K.A., Pallas, J., and Slocum, M. TheWA beyond traditional window systems. Internal Memo, Distributed Systems Group, Department of Computer Science, Stanford University.
[31] Liskov, B. and Herlihy, M. Issues in process and communication structure for distributed programs. In Proc. 3rd Symposium on Reliability in Distributed Software and Database Systems, (October 1983). IEEE, 123-132.
[32] Liskov, B.H. and Scheifler, R. Guardians and actions: Linguistic support for robust distributed programs. ACM Transactions on Programming Languages and Systems 5, 3, (July 1983), 381-404.
[33] Microsoft Windows Software Development Kit, Microsoft Corporation, 1986.
[34] W.I. Nowicki. Partitioning of Function in a Distributed Graphics System. PhD thesis, Stanford University, 1985.
[35] Pike, R. Graphics in overlapping bitmap layers. ACM Transactions on Graphics 2, 2, (April 1983), 135-160.
[36] Rashid, R.F. Threads of a new system. UNIX Review 4, 8, (August 1986), 36-49.
[37] Rosenthal, D.S.H. Toward a more focused view. UNIX Review 4, 8, (August 1986), 54-63.
[38] Scheifler, R.W. and Gettys, J. The X window system. To appear in ACM Transactions on Graphics, April 1987.
[39] Steinhart, J.E. Display management reference model: Preliminary first draft. Technical Report ANSI X3H3.6/86-41, ANSI Committee X3H3.6, September, 1986.
[40] Programmer's Reference Manual for the Sun Window System, Sun Microsystems Inc., 1985.
[41] Tanner, P.P., MacKay, S.A., Stewart, D.A. and Wein, M. A multitasking switchboard approach to user interface management. Proceedings of SIGGRAPH'86 (Dallas, Texas, August 18-22, 1986). In Computer Graphics 20, 4 (August 1986), 241-248.
[42] Tanner, P.P., Wein, M., Gentleman, W.M., MacKay S.A. and Stewart D.A. The user interface of Adagio: A robotics multitasking multiprocessor workstation. In Proc. 1st International Conference on Computer Workstations, (November 1985). IEEE, 90-98.
[43] Verdix Ada Development System, Version 5.1, Verdix Corporation, 1985.
[44] Zimmermann, H. The ISO reference model. IEEE Transactions on Communications COM-28, 4, (April 1980), 425-432.