Abstracts from the 1986 Workshop on Interactive 3D Graphics

Frank Crow, Workshop Program Co-Chairman
Xerox Palo Alto Research Center

The October 23-24, 1986 workshop at Chapel Hill, NC was the inspiration of several members of the faculty of the Department of Computer Science of the University of North Carolina (UNC) at Chapel Hill, notably Jay Nievergelt, Fred Brooks and Henry Fuchs. The University of North Carolina has a claim to being the oldest state university in the nation. Certainly it offers a lovely traditional campus, which was nicely complemented by clear weather and fall foliage for the days of the workshop. The workshop was sponsored by ACM SIGGRAPH and NSF in cooperation with ACM SIGCHI and the IEEE Computer Society. The workshop committee included:

Frederick P. Brooks, Jr., honorary chair
Henry Fuchs, chair
Frank Crow and Stephen M. Pizer, co-program chairs
David Beard, treasurer

A mix of invited talks and submitted papers was presented, totaling 19 presentations. The largest group of talks focused on human-computer interface issues. However, there were sessions devoted to animation, CAD and assorted applications. Invited talks covered all these subjects plus human vision and graphics system architectures. A large proportion of the established core of the computer graphics community was present, inspiring the grad students and faculty involved in computer graphics at UNC to make the best of the opportunity to show off their work. The workshop served as a galvanizing force which brought everybody together in a Herculean effort to get all the myriad interactive systems at UNC functioning in time to demonstrate to the workshop attendees. The various devices demonstrated were utilized in systems for exploring everything from molecules to spaces in the new UNC computer science building currently under construction. In short, the demonstrations were a very impressive addition to the workshop.
Such a good time was had by all that there was much discussion of holding another workshop in 1987 or 1988. Brian Barsky (EE-CS, University of California, Berkeley), Scott Fisher (NASA-Ames Research Center, Moffett Field) and Tom Ferrin (Computer Graphics Lab, University of California, San Francisco) have each expressed interest in helping to organize a workshop in the San Francisco Bay Area. Contact one of them if you are interested in promoting the idea. The abstracts below are primarily supplied by the authors. However, as noted, the program chairs, Frank Crow (FC) and Steve Pizer (SMP), have amplified short abstracts, supplied summaries in cases where no abstract was available and edited abstracts too long for this space.

Keynote Address
Alan Kay
Apple Computer and MIT Media Lab
(Summarized by SMP and FC)

Alan Kay asked us to remember the words of Bob Barton, ``good ideas don't often scale,'' with respect to using the same window paradigm for everything from Macintoshes to ultra-high resolution large screens. He further suggested that good user interfaces should be usable by children under six years of age. Adults don't make good subjects because they have too much patience: ``They've learned to suffer. That's what schools are for.'' A videotape of a 22-month-old girl adroitly using MacPaint drove home his point. Kay promoted the idea of ``agents,'' computer-created creatures with personality and some ability to act on their own. As an experiment in using advanced technology to further this idea, Kay brought together a group of people including a Disney animator to spend a weekend with an E&S CT-6 real-time shaded graphics system. Out of this came some very interesting animation of a bouncing rabbit's-eye-view ramble through an infinite forest and a swim in a shallow sea in the company of a couple of realistically swimming sharks.
We were exhorted to strive for impressionistic imagery such as that seen in the dance of the Sugar-Plum fairies in Disney's Fantasia, rather than spending teraflops trying to achieve the ever-receding goal of absolute realism. Kay closed by noting that while art imitates life, computer art/animation can imitate creation itself.

Walkthrough: A Dynamic Graphics System For Simulating Virtual Buildings
Frederick P. Brooks, Jr.
Department of Computer Science
University of North Carolina at Chapel Hill

As part of our graphics research into virtual worlds, we are building a tool for an architect and his client to use for rapid prototyping of buildings by visually ``walking through'' them in order to refine specifications. Our first prototype simulated the new UNC computer science building with some 8,000 polygons. BSP-tree software on the Adage Ikonas gave a colored, shaded perspective view every 3-5 seconds while the user moved a cursor in real-time over floorplans shown on the Vector-General 3300. The current (third) version uses Pixel-Planes to generate, at 9 updates/second, view images shown on a 4' by 6' video projector screen. Active short- and long-term research questions include speed-ups, stereo, a 6-degree-of-freedom interface with eye-level defaults, realism enhancements and an interactive model-building, model-changing system.

4-D Display Of Meteorological Data
William L. Hibbard
Space Science and Engineering Center
University of Wisconsin-Madison
(Amplified by SMP)

The Man-computer Interactive Data Access System (McIDAS) developed at the University of Wisconsin-Madison Space Science and Engineering Center (UW-SSEC) collects large quantities of meteorological data in real time for storage, analysis and display on multi-frame video terminals. These data specify the state of such parameters as cloud heights, temperatures, wind velocities and pressures in three spatial dimensions plus time.
Software has been, and continues to be, developed on the McIDAS system to produce 3-D images combining (possibly transparent) shaded surfaces and vector displays from a variety of meteorological data for stereo display in short animation sequences. These animation sequences are produced in a few minutes. The user controls the space and time extents, contents and information density of the display. Numerous examples of types of presentation are set forth, and it is demonstrated how the understanding of the meteorological situation can be enhanced. New visualization features to allow the comparison of more surfaces, and much improved interaction, are necessary improvements.

The Virtual Simulator
Charles E. Mosher, Jr., George W. Sherouse, Peter H. Mills,
Kevin L. Novins, Stephen M. Pizer, Julian G. Rosenman,
Edward L. Chaney
Department of Radiation Therapy
North Carolina Memorial Hospital, Chapel Hill

We have undertaken to provide radiotherapists with a CAD tool for the design of radiation treatment beams that allows them to explore alternatives to traditional treatment geometries. The tool implements a superset of the functions of a conventional radiation therapy simulator. Since the CAD tool operates on a virtual patient model (derived from CT scans), we call it a virtual simulator. The virtual simulator provides for three-dimensional visualization of the intersection of radiation beams with patient anatomy, and for the interactive design of those beams. We have explored various control configurations for the input devices to make interactions intuitive. Orthogonally-mounted knobs (for rotations) and belts (for translations) provide a means of input where there is a natural correlation between the action the user takes and the result on the screen.
In an effort to provide an easy-to-learn paradigm for physicians to interact with the virtual simulator, we are building a miniature of a conventional simulator that will function as an analog input device for the (virtual) machine settings. Efforts have been made to incorporate high-quality 3D display techniques with interactively chosen parameters into the CAD tool so as to allow the appreciation of the 3D relationships among numerous anatomical and treatment-related objects. In addition to the wire-loop rendering that the prototype implementation uses (provided with interactively chosen color, visibility and loop density for each object), a fast Phong renderer is being incorporated to provide smooth-shaded images with interactively chosen colors and transparency after a short wait. Interactive smooth-shaded rendering is being explored with Henry Fuchs's Pixel-Planes graphics engine. Stereo display of wire loops, tiled objects and smooth-shaded renderings is also being explored.

Manipulation Within Rotations (Invited Talk)
Michael Pique
Research Institute of Scripps Clinic

An architect sits at a graphics workstation placing desks within an office: several screen windows show perspectives from different viewpoints, several more show floor plans oriented in different directions. He slides the mouse cursor up to a desk on the floor plan that has east at the top, clicks the selection button and drags the desk right. If the system is to be at all usable, the mouse should move the desk, in the mouse's window, in the direction the mouse moves, regardless of the orientation of the picture in which the desk is embedded. This manipulation is only a translation; the difficulties are even greater when the manipulations nested within the rotations are rotations themselves.
How can we, as computer graphics system designers, make the desk move correctly in rotated windows, and how can we make sure that at the same time that it moves correctly in one window it will move correctly in the other windows? First we outline an overall design philosophy for geometric manipulations, then examine briefly a manipulation's characteristics: nesting, scope, pivot and axis constraints. We show how a mnemonic notation helps us discover that a simple matrix operation can make manipulations (both rotations and translations) nested within rotations easy to control. Finally, we mention some practical considerations to increase calculation speed and control numerical error.

3D Scan-Conversion Algorithms For Voxel-Based Graphics
Arie Kaufman
State University of New York at Stony Brook
Eyal Shimony
Ben-Gurion University of the Negev, Israel

An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled) and quadratic objects (optionally filled) like those used in constructive solid geometry (e.g., spheres and cylinders). All algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB.
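The incremental, integer-only style of these algorithms can be illustrated with the simplest case, 3D line scan-conversion. The sketch below is not Kaufman and Shimony's code, just a common 3D extension of Bresenham's algorithm in the same spirit: the inner loop uses only additions, subtractions and comparisons, and writes one voxel per step along the dominant axis.

```python
def line_voxels(p0, p1):
    """Incremental 3D line scan-conversion (a 3D Bresenham variant).
    Returns the voxels of the line from p0 to p1, inclusive; the inner
    loop performs only integer additions, subtractions and tests."""
    # Permute axes so the dominant (longest) axis comes first.
    deltas = [abs(b - a) for a, b in zip(p0, p1)]
    order = sorted(range(3), key=lambda i: -deltas[i])
    inv = [order.index(i) for i in range(3)]
    a0 = [p0[i] for i in order]
    a1 = [p1[i] for i in order]
    dx, dy, dz = (abs(b - a) for a, b in zip(a0, a1))
    sx, sy, sz = (1 if b >= a else -1 for a, b in zip(a0, a1))
    x, y, z = a0
    ey, ez = 2 * dy - dx, 2 * dz - dx   # error terms for the two minor axes
    out = []
    for _ in range(dx + 1):
        # Un-permute before emitting the voxel in original axis order.
        out.append(tuple((x, y, z)[i] for i in inv))
        x += sx
        if ey > 0:
            y += sy
            ey -= 2 * dx
        if ez > 0:
            z += sz
            ez -= 2 * dx
        ey += 2 * dy
        ez += 2 * dz
    return out
```

A real Cubic Frame Buffer implementation would write each voxel directly into 3D memory instead of collecting a list, but the incremental control structure is the same.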
The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. The CUBE architecture is also presented.

Virtual Environment Display System
S.S. Fisher, M. McGreevy, J. Humphries, and W. Robinett
NASA Ames Research Center
(Amplified by SMP)

A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems and human factors research. The system configuration consists of LCD display screens and wide-angle optics for each eye, and a 6-degree-of-freedom magnetic tracker, mounted in a helmet. Experience with exploration of a world of line drawings, including modification of that world using hand gestures sensed by an instrumented glove, is discussed. Application scenarios, including telerobotics, space station systems management and human factors research, are described.

Constraints in Constructive Solid Geometry
Jaroslaw R. Rossignac
Manufacturing Research Department
IBM Thomas J. Watson Research Center

The success of solid modeling in industrial design depends on facilities for specifying and editing parameterized models of solids through user-friendly interaction with a graphical front-end. Systems based on a dual representation, which combines Constructive Solid Geometry (CSG) and Boundary representation (BRep), seem most suitable for modeling mechanical parts.
Typically they accept CSG-compatible input (Boolean combinations of solid primitives) and offer facilities for parameterizing and editing part definitions. The user need not specify the topology of the boundary, but often has to solve three-dimensional trigonometric problems to compute the parameters of the rigid motions that specify the positions of primitive solids. A front-end that automatically converts graphical input into rigid motions may be easily combined with boundary-oriented input, but its integration in dual systems usually complicates the editing process and limits the possibilities for parameterizing solid definitions. This report proposes a solution based on three main ideas: (1) enhance the semantics of CSG representations with rigid motions that operate on arbitrary collections of sub-solids regardless of their position in the CSG tree, (2) store rigid motions in terms of unevaluated constraints on graphically selected boundary features, (3) evaluate constraints independently, one at a time, in user-specified order. The third idea offers an alternative to known approaches, which convert all constraints into a large system of simultaneous equations to be solved by iterative numerical methods. The resulting front-end is inadequate for solving problems where multiple constraints must be met simultaneously, but provides a powerful tool for specifying and interactively editing parameterized models of mechanical parts and mechanisms. An implementation under way is based on the interpreter of a new object-oriented programming language, enhanced with geometric classes. Graphic interaction is provided through a geometrical engine which lets the user manipulate shaded images produced efficiently from the CSG representation of solid models.

Constructing Three-Dimensional Geometric Objects Defined By Constraints
Beat Bruderlin
Institut für Informatik, ETH, Zürich

A system has been developed for automatically constructing three-dimensional geometric objects, which are defined by their topology and by geometric constraints. An algorithm written in Prolog first finds a construction symbolically and then evaluates it numerically by procedures linked to Prolog. A solid modeler is used for sketching the object.

Interactive Design Of 3-D Computer-Animated Legged Animal Motion
Michael Girard
Computer Graphics Research Group
Ohio State University

We present a visually interactive approach to the design of 3-D computer-animated legged animal motion in the context of the PODA computer animation system. The design process entails the interactive specification of parameters which drive a computational model for animal movement. The animator incrementally modifies a framework for establishing desired limb and body motion as well as the constraints imposed by physical dynamics (Newtonian mechanical properties) and temporal restrictions. PODA uses the desired motion and constraints specified by the animator to produce motion through an idealized model of the animal's adaptive dynamic control strategies.

Multi-Dimensional Input Techniques And Articulated Figure Positioning By Multiple Constraints
Norman I. Badler, Kamran H. Manoochehri, and David Baraff
University of Pennsylvania

A six-degree-of-freedom input device presents some novel possibilities for manipulating and positioning three-dimensional objects. Some experiments in using such a device in conjunction with a real-time display are described. A particular problem which arises in positioning an articulated figure is the solution of three-dimensional kinematics subject to multiple joint position goals. A method using such an input device to interactively determine positions, together with a constraint satisfaction algorithm which simultaneously achieves those constraints, is described.
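The abstract does not give the constraint-satisfaction algorithm itself. As a minimal illustration of the underlying problem (posing a chain of joints so its end reaches a goal position), here is a cyclic-coordinate-descent sketch for a planar chain; this handles only a single end-effector goal, whereas Badler et al.'s method is precisely about satisfying several positional goals simultaneously, which this toy does not attempt.

```python
import math

def fk(angles, lengths):
    """Forward kinematics: positions of every joint (and the end
    effector) of a planar chain with the given relative joint angles."""
    x = y = th = 0.0
    pts = [(x, y)]
    for a, l in zip(angles, lengths):
        th += a
        x += l * math.cos(th)
        y += l * math.sin(th)
        pts.append((x, y))
    return pts

def ccd_solve(angles, lengths, goal, iters=100):
    """Cyclic coordinate descent: repeatedly rotate each joint, from
    the tip inward, so the end effector swings toward the goal."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            jx, jy = pts[i]          # the joint being adjusted
            ex, ey = pts[-1]         # current end-effector position
            cur = math.atan2(ey - jy, ex - jx)
            tgt = math.atan2(goal[1] - jy, goal[0] - jx)
            angles[i] += tgt - cur   # align effector ray with goal ray
    return angles
```

For a reachable goal this converges quickly; the key-frame positioning problem in the paper replaces the single goal with a set of simultaneous joint-position constraints.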
Examples which show the power and efficiency of this method for key-frame animation positioning are demonstrated.

Interactive Tools To Support Animation Tasks (Invited Talk)
Frederic I. Parke
Computer Graphics Laboratory
New York Institute of Technology
(Summary by FC)

Parke described the evolution of systems for aiding in motion design at NYIT. He first described BBOP, written by Garland Stern. BBOP uses E&S Picture Systems to do real-time playback of line-drawing motion sketches. The motion is limited to rigid-body actions, which motivated the development of follow-on systems, EM and GEM, originally designed and written by Pat Hanrahan. The more advanced systems allow plastic motions, such as flesh stretching over bending joints, and expressions, such as a smile. The animation systems are currently being ported to a real-time shaded graphics system, the Trillium 1100.

Direct Manipulation Techniques For 3D Objects Using 2D Locator Devices
Gregory M. Nielson
Arizona State University
Dan R. Olsen, Jr.
Brigham Young University
(Summary by FC)

A set of interaction techniques was presented which requires only a two-dimensional locator and a single event device. Direct manipulation techniques were described for the interactive tasks of specifying, in 3D, a point, a translation, a rotation about any axis or a scaling. Various cursor representations were discussed for establishing a frame of reference for specifying 3D locations. 2D locator motion was interpreted in the following way to get motion along three orthogonal axes: left-right motion moved the cursor along the x-axis; up-down motion moved the cursor along the y-axis; diagonal motion (upper-right/lower-left) moved the cursor along the z-axis. The plane of locator motion was thus divided hexagonally. More constrained translations, as well as rotations and scalings, were specified by selecting a plane of motion, an edge as axis of rotation or a point as center of scaling motion.
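One plausible reading of that mapping (the summary does not give the exact sector geometry) is to assign each 2D motion to whichever of three fixed screen directions it is most nearly parallel to, with the sign of the projection choosing the direction along that axis; the sector boundaries then carve the plane into six wedges. The axis directions below are assumptions for illustration.

```python
import math

# Assumed screen-space directions for the three axes: left-right -> x,
# up-down -> y (y up, math convention), and the upper-right/lower-left
# diagonal -> z.
AXIS_DIRS = {
    'x': (1.0, 0.0),
    'y': (0.0, 1.0),
    'z': (math.cos(math.radians(45)), math.sin(math.radians(45))),
}

def classify_motion(dx, dy):
    """Map a 2D locator motion (dx, dy) onto one signed 3D axis by
    picking the axis direction with the largest-magnitude projection;
    this divides the plane of locator motion into six sectors."""
    best_axis, best_dot = None, 0.0
    for axis, (ux, uy) in AXIS_DIRS.items():
        d = dx * ux + dy * uy
        if abs(d) > abs(best_dot):
            best_axis, best_dot = axis, d
    # Signed amount of motion along the chosen axis.
    amount = math.copysign(math.hypot(dx, dy), best_dot)
    return best_axis, amount
```

For example, a drag to the right maps to +x, a drag downward to -y, and a drag toward the upper right to +z.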
The notion of control precision was introduced in an effort to qualify the applicability of the above techniques, which the authors were quick to admit were not equally effective in all situations.

Skitters And Jacks: Interactive 3D Positioning Tools
Eric Allan Bier
Xerox Palo Alto Research Center

Let scene composition be the precise placement of shapes relative to each other, using affine transformations. By this definition, the steps of scene composition are the selection of objects to be moved, the choice of transformation and the specification of the parameters of the transformation. These parameters can be divided into two classes: anchors (such as an axis of rotation) and end conditions (such as a number of degrees to rotate). I discuss the advantages of using cartesian coordinate frames to describe both kinds of parameters. Coordinate frames used in this way are called jacks. I also describe an interactive technique for placing jacks, using a three-dimensional cursor, called a skitter.

Understanding Key Constraints Governing Human-Computer Interfaces (Invited Talk)
Stuart Card
Xerox Palo Alto Research Center
(Summary by FC)

Card pointed out that considerably more attention is paid to the machine side of human-machine interaction than to the human side. Two things are needed to develop the human side: (1) an analytic theory enabling predictions and (2) abstractions that become the designers' tools for thought. Helpful results can arise from trying to find the key constraints of an interaction, the things that make a difference. Two extended examples were offered: the mouse and window systems. Card described experimental results from various input devices which showed the mouse to be as good as anything for a rough pointing device. Time taken to point to a number of locations was shown to conform to Fitts' Law, which says that positioning time grows with the logarithm of the distance to the target divided by the size of the target.
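Quantitatively, Fitts' Law is usually written T = a + b log2(D/W + 1), where D is the distance to the target, W its width, and a and b are constants fit per device and user. A small sketch (the constants below are illustrative, not from Card's data):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted pointing time (seconds) under Fitts' Law,
    T = a + b * log2(D/W + 1). The intercept a and slope b are
    illustrative placeholders; real values are fit empirically
    for each pointing device and user."""
    return a + b * math.log2(distance / width + 1.0)
```

The model captures the experimental finding: doubling the target width, or halving the distance, cuts the index of difficulty and thus the predicted positioning time.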
Card also discussed experiments, carried out jointly with Austin Henderson, on the window paradigm for system interaction. He proposed an analogy to the virtual memory model used in modern computer systems. He explained that, particularly when switching between tasks, the user would be ``thrashing,'' spending all his/her time manipulating windows rather than getting work done. Card and Henderson have developed a system called Rooms which tries to alleviate this problem by allowing the user to impose a hierarchically structured environment of screens on his/her world. Each screen or ``room'' provides a set of related windows for carrying out a given task. The last example points out that building a system which capitalizes on key constraints can shake out further constraints masked by the clumsiness of earlier systems, thereby enhancing the underlying theory.

Vision And The Graphical Simulation Of Spatial Structure (Invited Talk)
W.A. van de Grind
Neuroinformatics Group, Medical Physics, University of Amsterdam
The Netherlands Ophthalmic Research Institute in Amsterdam
(Edited by SMP)

Vision research and 3D graphics technology will strongly influence each other. I define ecological optics, the discipline trying to describe the visual information available to an active (mobile, structure-seeking) observer, and present some psychophysical studies inspired by ecological optics, made possible by modern computer graphics systems, and relevant to the development of graphics systems `tuned' to human perceptual capacities. Flat displays simulating 3D layouts and objects have a kind of dual reality. The active observer can easily obtain information on flatness by moving the head or trying to undo motion blurring with pursuit eye movements (in cases where there is motion blurring, as in films). On the other hand, the display will also contain information on 3D structures and specify depth.
A brief discussion is given of the less generally known but powerful depth cues related to motion about an object and their possible relation to internal models of solid shape (the visual potential). A graphics system that would provide the active observer with a highly realistic visual world requires high resolution at the center of gaze, but strongly decreasing resolution with increasing eccentricity in the visual field. The necessary feedback of information on the center of gaze need not be more precise than the control systems inside the observer responsible for the movements. More generally, the limits of perception set limits to the required quality of the display system, and the limits of organismic movement control set limits to the required quality of the interaction with the machine. These limits in relation to 3D graphics simulation are explored and, where possible, quantified.

User Interfaces for Geometric Modeling
A. R. Forrest
University of East Anglia, UK

One of the biggest obstacles to wider adoption of geometric modeling systems for three-dimensional objects is the relatively poor state of user interfaces. In geometric design, two forms of interface are required: one which permits rapid evaluation of the three-dimensional nature of an object and its relationship with other objects, and one which permits precise positioning and shaping of an object. Many systems provide one or the other but fail to provide both. The paper will address issues relating to both forms of interface, particularly in the context of current display technology. We first evaluate the pros and cons of various attempts at true three-dimensional interaction before concentrating on the use of conventional displays for three-dimensional design. Issues of concern are speed of interaction, the precision and predictability of the interface, the information required to convey meaningful three-dimensional data and the fidelity of the image to the stored geometric model.
The paper discusses work in the Computational Geometry Project, University of East Anglia, aimed at improving user interfaces, and in particular considers two systems for the design of mechanical objects. One implements a pseudo-English language for describing assemblies of three-dimensional primitives, with the ability to describe and maintain constraints between these primitives in a natural way. The second system takes the user's isometric wire-frame sketch and attempts to generate, using user intervention where automatic means fail, a winged-edge data structure modeling the three-dimensional object. Both systems rely to an extent on the concept of constraints, and the paper concludes with a re-examination of the role of constraint-based systems in computer-aided geometric design.

Describing Free-Form 3D Surfaces For Animation
Eben Ostby
Pixar
(Amplified by FC)

A system for interactively describing and modifying free-form surfaces is presented. The system is based on bicubic patches, using a variant of the Coons patch. Although it is not a full-fledged mechanical CAD system, it has been used to construct complex surface descriptions. Patches may be joined by coalescing points along common edges, or separated by adding points, all while second-order continuity is maintained. A 6-degree-of-freedom digitizer can be used to "sketch in" shapes. A dense point set can be digitized from an object and fit with a surface, or a grid may be traced over the object and then fit. Using a Vax 11/750 and E&S Picture System, patches may be updated at about 6 frames per second. The digitizer may also be used to specify camera positions, pick a control point or patch, and act as a tool for modifying surfaces. The stained-glass knight from Young Sherlock Holmes was used as an example of the system in use. The system is also intended as a testbed for further experimentation.

Special Purpose Computer Arrays for Graphics and Other Applications (Invited Talk)
James Clark
Silicon Graphics Inc.
(Summary by FC)

Clark advocated quasi-general-purpose arrays of computers as the most cost-effective means to high-performance graphics, and to other applications as well. Such arrays can be quite inexpensive, offering orders-of-magnitude improvement over the cost of supercomputers and the performance of microcomputers. As an example, Clark used the Geometry Engine pipeline in the Iris workstation. A typical computer spends about 10% of its cycles (or less) doing arithmetic, as opposed to moving data, etc. Thus it can be characterized as 10% efficient at doing arithmetic. A typical special-purpose architecture is about 30% efficient, still having to fetch arguments and feed them to an arithmetic unit. The path to higher efficiencies (in the realm of 85-90%) lies in a two-pronged method: (1) increase the percentage of cycles spent in arithmetic by heavily pipelining the arithmetic (thus slowing things down), then (2) use many arithmetic units in parallel to get the speed back. This is the philosophy inherent in the Geometry Engine. There has been much discussion of n-cube architectures (Hypercube, etc.) recently. However, much can be done with 1-cube (pipeline) and 2-cube (planar array) architectures. Clark described 2-cube architectures for ray-tracing, differential equation solving, high-speed scan conversion, z-buffering and pixel-by-pixel CSG operations. For example, the ray-tracing architecture uses an array of processors wherein each processor represents some set of pixels. The environment of objects flows through the array, affecting those pixels whose rays intersect an object. Secondary rays are stored, and the environment recirculates until all rays are satisfied. The major point was that such arrays, using programmable elements, can handle large generic classes of algorithms, making them quasi-general-purpose: more useful than special-purpose architectures and much more efficient than truly general-purpose architectures.
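Clark's efficiency argument can be made concrete with a toy throughput model. Only the 10% and 85-90% efficiency figures come from the talk; the clock rate and the 4-way replication below are illustrative assumptions.

```python
def effective_arith_rate(clock_hz, efficiency, n_units=1):
    """Toy model of the argument: arithmetic throughput is the clock
    rate times the fraction of cycles actually doing arithmetic,
    times the number of arithmetic units running in parallel."""
    return clock_hz * efficiency * n_units

# A 10%-efficient general-purpose machine versus a heavily pipelined
# 90%-efficient unit replicated 4 ways (clock and replication factor
# are hypothetical; efficiencies are the talk's estimates).
cpu = effective_arith_rate(10e6, 0.10)
pipe_array = effective_arith_rate(10e6, 0.90, n_units=4)
```

Under these assumptions the pipelined array delivers 36 times the arithmetic throughput at the same clock, which is the two-pronged trade the talk describes: pipeline to raise efficiency, replicate to recover speed.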