Panel #1
Where do User Interfaces Come From?
Chair: Christopher F. Herot, Javelin Software Corporation
Panelists:
Stuart Card, Xerox Corporation
Bruce Tognazzini, Apple Computer, Inc.
Kent Norman, University of Maryland
Andrew Lippman, Massachusetts Institute of Technology

Introduction
Everyone who builds a graphics system has to build a user interface. In most cases, the design of such interfaces is based primarily on intuition. Proponents of this approach claim that attempts to codify principles of user interface design stifle innovation, that human factors research is too narrow to prescribe solutions, and that rigorous testing is too time consuming. Others claim that the fields of Human Factors and Psychology have much to offer and need not be cumbersome or expensive. The relative importance of intuitive design and empirical studies has been debated since the early days of computer graphics. The members of this panel were chosen for their reputations for holding strong viewpoints on this subject, but as their statements below indicate, the discussion has now progressed to a more mature phase in which there is an attempt to combine both approaches. There is a recognition that human factors and psychological research can be useful if it is directed at discovering models of human cognition that apply to the application at hand. Interface designers have recognized the value of some form of evaluation of prototype designs short of acceptance or rejection in the marketplace. Researchers in empirical studies have begun to develop quick and inexpensive methods for doing these evaluations. Finally, the whole concept of interface design is evolving from the notion of grafting something onto a system to one of designing the entire system to fit a model of how a person uses a computer.

Statement from Stuart Card
If it can be said that many of the great breakthroughs of technology were the result of intuitive leaps, it can equally be said that many were not. The question of whether there can ever be anything better than intuitive design of human-computer interfaces is really a version of the question of the relative effectiveness of traditional ``cut and try'' engineering methods vs. science for advancing technology in this area. Unfortunately, the history of different technologies shows nearly all possible patterns: cases where science was of little or no importance in advancing the technology (e.g., the bicycle), cases where science was very important (e.g., the atomic bomb), and all manner of complex interactions in between. So we cannot argue for or against the primacy of intuitive design on general grounds, but must understand better the conditions under which cut-and-try methods or science are likely to be important contributors to technological progress. As a sort of intellectual first-aid until the historian arrives, let me suggest that science is most likely to be effective (1) when technology gets stuck, as it does from time to time, on problems that require more understanding than cut-and-try methods provide, (2) when science can identify the key constraints that underlie success for the technology, and (3) when science can provide tools for thought for the designer, especially by helping the designer reconceptualize the design space. It is important to distinguish in this discussion between the product development context, where the aim is making money, and the research context, where the aim is advancing the state of the art.
Methods that are too time-consuming for product development, such as controlled experiments or theory building, can nonetheless be effective in the research context and, if successful, lead to commercial products. Few companies would have as a task in the development of a cryptographic product figuring out a new method for factoring large numbers, but this task would make sense in an industrial research laboratory or a university department. Likewise, we have to distinguish between intuitive cut-and-try design (which usually means incremental improvements over existing designs), informal empirical observations (e.g., rapid prototyping), evaluation experiments, and theory building. The various methods for gaining empirical and analytical information about human interface design vary in the reliability of the information, the amount of insight it produces, and the generalizability of the information. For example, informal observation of a rapid prototype may clearly demonstrate that users have difficulty with some part of the system without giving the designer the faintest clue as to why. Or a comparison of two methods of doing some task may show that one is faster without showing why, or whether the difference will hold up in practice or after system revision. Debate on cut-and-try design vs. empirical or analytical methods must therefore be grounded in whether the context is product development or research and in just what sorts of methods are being discussed.

Two examples that illustrate the interaction of science and technology for human-computer interface design are research on the mouse and the Rooms system. In the case of the mouse, numerous input devices had been constructed, but there was no way of deciding what was a good device other than empirically comparing every device to every other device for every program they were used in. Research showed that pointing speed with the mouse depended only on distance and target size (and not, for example, on what one was pointing at or the direction of pointing). The research also showed that the speed of the mouse was limited by the human, not by the mechanics of the mouse itself. This research helped lead to the commercial introduction of the mouse, to proprietary ideas of how to make other devices, to understanding the key constraints that determine pointing, and to tools for conceptualizing designs for the mouse (it helps to make the targets large or nearby). In the case of the Rooms system, a window manager for switching among many virtual workspaces, an analysis of window user overhead showed that the real problem is severe space contention because of small screens. This can be modeled in terms of working-set models and helps suggest the design of the Rooms system by analogy with operating system memory preloading policies.

In both of the above cases, empirical and analytical science was used to augment cut-and-try engineering methods. In both cases, the issues were of long standing and the insight was of use in conceptual design. At PARC, we have found the coupling between science and design useful enough that we now aim in research projects for the following quadruple (first suggested by John Brown): (1) a crisp statement of a problem, (2) a theory analyzing the problem, (3) an artifact in which the theory is embedded, and (4) an ``entailment'' of the theory, that is, an application to a different problem showing that the theory has generality.
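The mouse pointing result described above (speed depending only on distance and target size) is conventionally summarized by Fitts's law. The formulation below is a standard restatement added here for reference rather than text from the panel; the constants a and b must be fitted empirically for each device:

    T = a + b \log_2(D/W + 1)

where T is the time to reach the target, D is the distance to the target, and W is the target width. The formula makes the design advice quoted above explicit: pointing time falls as targets are made larger or brought nearer.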
Several projects in the cognitive sciences (with commercial or potentially commercial payoff) are now reasonable examples of this four-part paradigm. Intuitions do not arise out of thin air; they come from experience, data, and knowledge of theory. In the case of the mouse, of Rooms, and of the other cognitive science projects above, the intuitions of the designers are greatly improved by the understanding made possible through the research. Nothing aids intuitive design like a good theory.

Statement from Bruce Tognazzini
Many of the great breakthroughs of science and invention were the result of intuitive leaps. The real power of western science comes from the extraction of what is important in a new discovery, and in building upon that extraction. A few decades ago, the tomograph X-ray machine was invented, letting doctors view longitudinal ``slices'' of the human body. Like most intuitive inventions, how it works seems remarkably obvious once you see it work. Unfortunately, from the patient's point of view, it is rather terrifying to see it work, tearing down the length of the patient's body while its X-ray source, appearing to weigh several tons, careens around its axis over the patient's head. Ungainly as it was, this mechanical contraption had within it a very important concept, one that was translated into the quiet sophistication of the next generation, the Computer Axial Tomograph, or CAT scan, a machine that has revolutionized diagnostic medicine. Western science performed that translation. Computer science is too new not to depend on the creative talent of intuitive minds: the visual interfaces used by Apple, Xerox, and others, when developed in the '50s and '60s, smashed every contemporary assumption of human interface design. It is unlikely that we have now reached such a doddering state of maturity that such leaps will no longer be an important factor in further evolution. On the other hand, we have reached a state of maturity where we need to look at what we have accomplished and learn from it, if only to prepare for the next leap. Thus, I find myself (to my surprise) to be a moderate on the whole issue: I think that the tension between the intuitives and the empirical codifiers (for want of a better term) is the very stuff from which the future will be made. For without the intuitives we can only evolve into entropy; without the empirical codifiers, we can only have a disconnected series of fascinating starts.

Statement from Kent Norman
In the history of various sciences, empirical research has served either to ``confirm the obvious'' or to ``dispel the myth.'' Either result contributes to our scientific knowledge base. In the same way, in computer graphics and human/computer interaction, empirical research should serve the function of bearing evidence that a particular design is either the right way to do it or the wrong way. Critics of empirical research point out that more often than not system designers just don't have the time for iterative human-factors testing or even for a search of the relevant literature. The unfortunate result may be that down the road users just don't have the time for such untested systems. The consequence is that corporate research funds are in the end spent to analyze what went wrong rather than how to have designed it right in the first place. The point is that an empirical result will occur whether it has been experimentally planned or not. It is argued that the safest bet is to gather those data as early as possible.
Intuitive design is generally the result of intuitions based on anecdotal evidence, intuitions about cognitive functions, and sheer creativity. Results in the psychology of intuitive statistics indicate that while people are generally good at making gross estimates, there are substantial and predictable errors. Consequently, we rely on calculators for statistics rather than on heads. Results in the psychology of thinking and problem solving indicate that creativity is a function of divergent thinking rather than rote application. Consequently, we rely on heads for new ideas rather than on calculators. Designers are good at generating innovative alternatives. While they often make the right choices about software functionality, graphic layout, etc., they are nevertheless subject to biases such as the ethnocentric fallacy: designers think that users think as they do. One of our studies on the layout of menus reveals several discrepancies between how programmers would design menus and how novice users would prefer them to be displayed. Usually designers make the right choices; however, discrepancies also exist between the menu layouts programmers produce and the performance data.

At present there is an uneasy relationship between system design and empirical research. Part of this stems from a conflict of interest. Planned empirical research is motivated by two different sources. Research may be theory driven, in the sense that studies are motivated by a basic need for diagnostic information about a theoretical issue that may or may not bear on design questions. Basic research on theories such as cognitive control and cognitive layout helps to provide ways of thinking about design as well as guidelines and design principles. On the other hand, research may be design driven, in the sense that studies are motivated by a practical need for diagnostic information about whether Design A or Design B should be implemented, which may or may not bear on theoretical questions. Such research is the mainstay of product testing in industry. The conflict arises due to the priority of theory versus product. A second, more serious problem is the need for the development of research methods and experimental designs specifically tailored to human/computer interaction. Off-the-shelf methods are ill-suited to the particular problems inherent in research on human/computer interaction. Instead, researchers need to pursue efficient designs capable of studying multiple design factors while at the same time controlling nuisance variables such as individual differences and sporadic system variables. In addition, standardized performance measures and user evaluations are needed as a means of comparing different implementations. Finally, managers must decide to make a serious investment in qualified researchers in order to gain the edge on system design. It is no longer sufficient to produce an innovative product; one has to provide empirical evidence that it is the right innovation.

Statement from Andrew Lippman
The notion that there is such a thing as a ``user interface'' is probably the most serious problem in the field today. The word interface carries with it the connotation that one designs a ``system'' to meet some performance criteria and then grafts an ``interface'' onto it to make it useful. Whether the dictates of the interface come from human factors research or are intuitive is beside the point. The notion is akin to designing a user-friendly policeman who smiles as he hands you a speeding ticket.
No interface can make that task a pleasure. The assumed separability of the interface and the task is basically incorrect. Only in the domain of computer systems, where functionality has historically had precedence over usability, has this simple predicate been violated. A more direct approach is possible, and it is amenable to the incorporation of human factors research as well as intuition and design. There is certainly a place for all of these components. The important notion is that of ``Systems Design.'' Perhaps the most productive approach one can take is to define a useful goal and then assume that technology can be bent to meet it. When the goal places constraints on computer technology, recent history has demonstrated that it is a mistake to underestimate the power of the next generation machine. Similarly, the next generation of display hardware and input technology is neither so far away nor as expensive as is often assumed.

One way to approach the design of useful systems is to consider models from existing media. For example, at the Architecture Machine Group, a personalized information retrieval system called Newspeek is modeled upon the style of presentation and interaction associated with a large-format daily newspaper. The screen is divided into columns, each of which contains part of a ``lead story'' and a headline. Gestures on the screen scroll through these articles and highlight similar portions or neighboring ones. The editors and reporters associated with print newspaper production are computer programs in Newspeek, and they take their cues from the same gestures used to read any particular edition. Thus the newspaper is ``programmed'' as it is read; it evolves to reflect the interests of the individual reader. Similarly, an electronic manual was designed based upon the styles of access associated with a printed book. It contained a table of contents, chapters, and pages; text was typeset on the fly as the book was read, and illustrations were interspersed throughout. The form of the book was extended by allowing the page to become a workspace rather than a primitive element. It could be queried for definitions of unfamiliar terms and for elaboration. The illustrations were sync-sound movies that could be coordinated with the text. In both of these examples, an existing information resource was extended rather than replaced. Successful information resources were mapped into electronic forms, and the displays and databases were built to accommodate successful styles of use.

Another approach involves modeling the mechanisms and language of interactions drawn from everyday life. In some cases, such as the book and the newspaper, this is not so difficult; in others, there is no well understood history. While we know a little about how humans converse, we have yet to build a good computer conversationalist. However, Schmandt's Phone-Slave, which engages a telephone caller in a synthetic dialogue, goes quite far by exploiting the simple fact that most people will answer a question directly posed to them. Depending on how much the system knows about the caller, each dialogue can take one of several turns and twists that makes it somewhat individual. A current challenge in this area is the notion of interactive movies. It is generally assumed that an interaction is better as the degrees of freedom and fidelity provided by the computer increase; movies are therefore a logical extension to graphically and aurally parsimonious, typewritten dialogues.
However, we don't have either a technology or a language for interacting with moving images that approaches the simplicity and utility of those used for static printed material, and the movie itself may be ``anti-interactive.'' For the most part, network television, the most ubiquitous example of moving images, has not made the leap to interactivity even though the television receivers and distribution systems contain many of the necessary components.

Bibliography
Gano, S. ``Forms for Electronic Books.'' Masters thesis, Department of Architecture, Massachusetts Institute of Technology, Cambridge, MA, June 1983.
Lippman, A., and Backer, D.S. ``Personalized Aides for Training: An Assault on Publishing.'' Proceedings of the Fourth Annual Conference on Video Learning Systems, Society of Applied Learning Technology, Arlington, Virginia, 1982.
Lippman, A., and Bender, W. ``News and Movies in The 50 Megabit Living Room.'' Globecom, April 1987.
XXXX. ``Computing with Television: Paperback Movies.'' Talk given at the Images, Information & Interfaces: Directions for the 1990s symposium, New York, NY, November 19, 1986.
XXXX. ``Optical Publishing.'' Computer Society International Conference (COMPCON), San Francisco, CA, February 1985.
XXXX. ``Imaging and Interactivity.'' 15th Joint Conference on Image Technology, Tokyo, Japan, November 1984.
XXXX. ``Video Instrument Control.'' Instrument Society of America (ISA), International Conference and Exhibit, Philadelphia, PA, October 1982.
XXXX, and Arons, B. ``Phone Slave: A Graphical Telecommunications Interface.'' Proceedings, Society for Information Display, San Francisco, CA, 1984.
XXXX, and Backer, D. ``Future Interactive Graphics: Personal Video.'' National Computer Graphics Association Conference, Baltimore, MD, 1981.

Panel #2
SIGKIDS
Co-Chairs: Creighton Helsley, Rockwell International; Coco Conn, Homer & Associates
Panelists:
Scott Crow - 12th grade, Grant High School, Sherman Oaks, CA
Robert Holtz - 8th grade, Chaminade Prep, Northridge, CA
Joshua Horowitz - 4th grade, Open School, Hollywood, CA
Eddie Lew - 11th grade, Hollywood High School, Hollywood, CA
Ben Novida - 12th grade, Garfield High School, Los Angeles, CA
Joe Savella - 12th grade, Garfield High School, Los Angeles, CA
Discussant: Margaret Minsky, MIT

Summary
From a panel discussion and video display of the work of the ``kid'' panel members, we consider the benefits of having access to computer graphics from a younger perspective. Each of the students on the panel has implemented programs involving computer graphics. Four of the panelists are semi-finalists in the 1987 Rockwell International Los Angeles Unified School District Computer Science Competition. The individual works are outlined below.

Scott Crow built ``Star Creator'' for the Rockwell International competition. ``Star Creator'' is both a program and an editor that enables a programmer to incorporate into any program a scrolling or stationary star display on a graphics screen. Scott did all his own work in creating star backgrounds, which can be easily integrated with any other program.

Eddie Lew built ``Super Slot,'' a combination hardware and software project, for the Rockwell International contest. It is designed to emulate a standard three-reel, three-pay-line slot machine, similar to those used in casinos. It plays one to three quarters and has twelve stops per reel.
This project is designed to work with an IBM Personal Computer equipped with the IBM Enhanced Graphics Adapter (with a 256K video buffer) and the Enhanced Color Display.

Robert Holtz worked on his first Apple at age 7. Now at 14 he has a software company, R J Software, that markets several programs he has developed. Among these is the Artist program and screen dump for the Apple II GS. He has also set up a bulletin board. Robert focuses particularly on animation. He has put together several skits about Snappy, a character he created, on a video paint system that he manipulates to provide animation. Robert is currently working on a new game for Apple.

Joshua Horowitz uses LOGO and Video Works for his graphics and animation at the Open School in Hollywood. The Open School is the host site for the Apple Vivarium project, and Josh takes advantage of the Macintosh computers. As a result of his experience with LOGO, he recently held a guest speaker spot at the Los Angeles LOGO Conference. For SIGGRAPH '87, he will show his existing work with LOGO and perhaps a project on his new Apple II GS as well.

Ben Novida's project for the Rockwell International contest was ``Mr. Egghead.'' Although ``Mr. Egghead'' is a program designed to entertain computer enthusiasts, it is also designed to introduce beginners to the concepts of using the computer. These concepts include manipulating the keyboard, printing the screen contents to a printer, and saving and loading files.

Joe Savella developed a program to compose music called ``Compusynth'' for the Rockwell contest. ``Compusynth'' involves animation, 3-D graphics, page flipping, easy operation and sound manipulation. It offers a variety of commands, including two unique programmable operations. ``Compusynth'' comes with a tempo adjustment that can increase or decrease the relative speed of a composition, and six different sound effects like those found on Casio portable keyboards. To add visual effects, there is a piano keyboard that works exactly like a player piano and plays whatever music you compose.

Panel #3
Traditions and the Future of Character Animation
Chair: John Lasseter, Pixar
Panelists:
Brad Bird, Director
Alex Carola, Director, Graphoui Studio, Brussels
John Musker, Director, Walt Disney Animation
Frank Thomas, Directing Animator

Introduction
Character animation using the computer is becoming more exciting and more accepted today. But this is not simply a new art form; it is another medium of animation with new and different potential. Other new media such as clay, sand, puppet, and cut-out animation have appeared in the past. Each of these has evolved into its own particular genre. But this evolution would have taken much longer if nothing had been learned from the many years spent in the development of traditional animation. Hopefully, computer character animation will benefit from the history preceding it. As the basics of traditional character animation are understood and applied, new methods in computer animation will evolve. Then the computer as a medium of animation will be more solid because its roots will have already been proven in the Walt Disney Studios as well as other studios around the world. The panelists will present their contributions to the field of traditional animation and show excerpts from their work. They will discuss their thoughts on the past, present, and future of traditional character animation.
They will also contribute their thoughts on how computer character animation will fit into the future of traditional cel animation. Through their wide-ranging experiences with the Disney Studios past and present, high quality character animation for television, and character animation in Europe, we will get a better understanding of the state of the art of traditional animation today. Possibly, we will gain insight into the future of computer character animation.

Biographical Notes
John Lasseter joined the Pixar Computer Animation Group (formerly the Lucasfilm Computer Animation Group) in 1984 after five years as an animator at the Walt Disney Studio. At Disney, he worked on ``The Fox and the Hound,'' ``Mickey's Christmas Carol,'' ``The Brave Little Toaster,'' and ``The Wild Things Computer Animation Test,'' a combination of hand-drawn Disney character animation with computer generated backgrounds. At Pixar, John most recently has written, directed and animated the computer animated short film ``Luxo Jr.,'' which received an Academy Award nomination for Best Animated Short Film for 1986. It has also won awards at the 1987 Berlin Film Festival, the 1986 Canadian International Animation Festival, the 1987 San Francisco Film Festival, NCGA's Computer Graphics '87, and the Forum of New Images in Monte Carlo. John also animated ``The Adventures of Andre and Wally B.'' and the stained glass knight in ``Young Sherlock Holmes.'' In March 1986, John was awarded the ``Raoul Servais Animation Award'' for his work in 3D computer generated character animation at the Genk International Animation Festival in Genk, Belgium. John received his BFA in Film from the California Institute of the Arts, where he attended the Character Animation Program. He received two Student Academy Awards in Animation, in 1979 and 1980, for his films ``Lady and the Lamp'' and ``Nitemare.''

Brad Bird is best known for writing and directing ``Family Dog,'' the innovative animated episode of Steven Spielberg's Amazing Stories, which aired in February of this year. Brad received his education at the California Institute of the Arts, in the Character Animation Program. After CalArts, he worked as an animator at the Walt Disney Studio on ``The Fox and The Hound'' and ``The Small One.'' As a freelance animator, he has worked on the feature films ``Animalympics'' and ``Plague Dogs.'' He has also directed the development of an animated feature film based on Will Eisner's ``The Spirit'' comic book series. Brad's writing credits include ``The Main Attraction,'' a live-action episode of Amazing Stories, and he has also co-written ``Captain Eo'' and the upcoming Spielberg feature film ``Batteries Not Included.'' Brad is now working as a freelance writer and director.

Alex Carola graduated in 1978 from the animation school of La Cambre in Brussels, Belgium. He was a founding member of Collectif Shampoang Traitant, a group of experimental animation filmmakers. From 1979 to 1981, he worked in Rome, Italy on many short films, including Pino Zac's film ``Capricio Italino,'' ``Ciel!,'' ``Asteroide,'' and ``Flashes Sante,'' a series of medical prevention films.
From 1982 to 1986, Alex was an animator and director at the Graphoui Studio in Brussels, where he worked on the TV series ``Yakare.'' He directed several pilot films for TV series, and he directed 260 one-minute episodes of the popular Belgian TV series ``Quick and Flupke.'' In 1986, Alex established his own production company, Flow S.C., specializing in 2D and 3D computer animation for television and advertising.

John Musker works as a director for the Walt Disney Studio. His latest film, ``The Great Mouse Detective,'' was considered by many critics to be one of Disney's best animated features in decades. He has worked at Disney since 1977. John animated on the films ``The Fox and the Hound'' and ``The Small One.'' He worked as a writer and director on ``The Black Cauldron'' before moving over to develop ``The Great Mouse Detective.'' John is now co-writing and will co-direct the upcoming Disney animated feature ``The Little Mermaid,'' based on Hans Christian Andersen's classic tale. John received a BA and MA in English from Northwestern University and attended the Character Animation Program at CalArts.

One of Disney's famous Nine Old Men of animators, Frank Thomas has worked at the Disney Studio since the mid-1930s. His work on films such as ``Snow White and the Seven Dwarfs,'' ``Pinocchio,'' ``Bambi,'' and ``Peter Pan'' has been important in the development of the art of animation. In 1978, Frank retired from Disney and authored, with Ollie Johnston, ``Disney Animation: The Illusion of Life,'' which has become the definitive book on Disney animation's history and technique. Also in 1978, Frank received the ``Pioneer in Film'' award from the University of Southern California chapter of Delta Kappa Alpha National Honorary Cinema Fraternity. He has also received honors from the American Film Institute. Frank lectures frequently on animation and the computer's application to character animation.

Bibliography
Adamson, J. *Tex Avery: King of Cartoons.* Da Capo Press, 1975.
Adamson, J. *The Walter Lantz Story.* Putnam, 1985.
Blair, P. *Animation.* Walter T. Foster, 1949.
Bocek, J. *Jiri Trnka: Artist and Puppet Master.* Artia, 1965.
Brasch, W. M. *Cartoon Monickers.* Bowling Green Univ. Press, 1983.
Canemaker, J. *The Raggedy Ann & Andy.* Bobbs-Merrill, 1977.
Carbaga, L. *The Fleischer Story.* Nostalgia Press, 1976.
Collins, M. *Norman McLaren.* Canadian Film Institute, 1976.
Crafton, D. *Before Mickey: The Animated Film 1898-1928.* MIT Press, 1982.
Culhane, J. *Walt Disney's Fantasia.* Abrams, 1983.
Culhane, S. *Talking Animals and Other People.* St. Martin's Press, 1986.
Edera, B. *Full Length Animated Feature Films.* Hastings House, 1977.
Feild, R. D. *The Art of Walt Disney.* Macmillan, 1942.
Finch, C. *The Art of Walt Disney.* Abrams, 1973.
Friedwald, W., and Beck, J. *The Warner Brothers Cartoon.* Scarecrow, 1981.
Halas, J. *Graphics in Motion.* Van Nostrand Reinhold, 1981.
Halas, J., and Manvell, R. *Art in Movement.* Hastings House, 1970.
Halas, J., and Manvell, R. *Design in Motion.* Studio, 1962.
Halas, J., and Manvell, R. *The Technique of Film Animation.* Hastings House, 1959.
Heath, B. *Animation in Twelve Hard Lessons.* R.P. Heath Productions, 1972.
Heraldson, D. *Creators of Life: A History of Animation.* Drake's, 1975.
Holloway, R. *Z is for Zagreb.* A.S. Barnes, 1972.
Kitson, C. *Fifty Years of American Animation.* American Film Institute, 1972.
Lasseter, J. ``Principles of Traditional Animation Applied to Computer Animation.'' Proceedings of SIGGRAPH '87, *Computer Graphics*, July 1987.
Maltin, L. *Of Mice and Magic.* Plume, 1981.
Manvell, R. *The Animated Film.* Hastings House, 1954.
Manvell, R. *The Art of Animation.* Hastings House, 1980.
Peary, G., and Peary, D. *The American Animated Cartoon.* Dutton, 1981.
Richard, V. T. *Norman McLaren: Manipulator of Movement.* Univ. of Delaware Press, 1982.
Russett, R. *Experimental Animation.* Van Nostrand Reinhold, 1976.
Shale, R. *Donald Duck Joins Up.* UMI Research Press, 1982.
Snow, C. *Walt: Backstage Adventures with Walt Disney.* Windsong, 1980.
Thomas, B. *The Art of Animation.* Simon and Schuster, 1958.
Thomas, F. ``Can Classic Disney Animation Be Duplicated on the Computer?'' *Computer Pictures*, Vol. 2, Issue 4, pp. 20-26, July/August 1984.
Thomas, F., and Johnston, O. *Disney Animation: The Illusion of Life.* Abbeville Press, 1981.
Whitaker, H., and Halas, J. *Timing for Animation.* Focal Press, 1981.
White, T. *The Animator's Workbook.* Watson-Guptill, 1986.

Panel #4
Tool Kits - A Product of their Environments
Chair: Ken Perlin, R/Greenberg Associates
Speakers:
Jim Blinn, Jet Propulsion Laboratory
Tom Duff, AT&T Bell Laboratories
Craig Reynolds, Symbolics, Inc.
Bill Lorensen, GE Corporate R&D Center

Summary
Are some environments inherently better than others for building tool kits? The panelists, all noted builders of tool kits who work in different software environments, have distinctly different and publicly stated views on this question. Three-dimensional computer graphics involves such diverse tasks as shape modeling, motion design, and rendering. Handling these tasks generally requires something more coordinated than an assortment of unrelated tricks and techniques. Also, something more flexible is required than a closed monolithic system with fixed capabilities. A number of researchers have found it useful to create a semantically coherent environment, or ``tool kit,'' within which new capabilities can be fitted to existing ones flexibly yet gracefully. The tool kit designer is trying to build a language that models a preferred way of working, or even of looking at the world. Thus there are as many different flavors of tool kit as there are research goals or methodologies. The advantages and disadvantages of the various environments available to build on have provoked marked disagreement among leading researchers in the field. This panel compares software environments within which different tool kits have been built. The panelists represent very different points of view.

Tom Duff believes that C and UNIX [1] provide an ideal environment in which to build a graphics tool kit. He notes that most environments suffer from over-integration, quickly becoming unmanageable, whereas: ``The UNIX text processing tools avoid this syndrome by cutting the text processing problem into many small sub-problems with a small program to handle each piece. Because all the programs read and write a simple common data representation, they can be wired together for particular applications by a simple command language [2].'' Jim Blinn believes that this aspect of UNIX is vastly overrated, and maintains that most operating systems are ultimately equivalent, since the tool kit designer eventually builds his or her own preferred semantic layer anyway. He has said, ``UNIX doesn't have a patent on that sort of philosophy, you know. Basically, just because you learn about something in some environment doesn't mean it was invented there.
People who start off in UNIX find that it has some good ideas, and so they think, `Ah, this must be the UNIX philosophy.' They never stop to think that maybe it's just a fundamentally good idea that works for any operating system. I can tell you I was using tools long before I had any contact with UNIX [3].'' Dr. Blinn works in a FORTRAN/VMS environment.

The approach at Symbolics is to embed everything in the LISP programming language, taking particular advantage of LISP's generality and generic nature. The interesting thing here is that the properties of LISP are very different from those of UNIX. LISP users work in a unified programming language/operating system. Instead of many small interconnecting tools, the tool designer works with a single integrated run-time database. Compilation and loading of new procedures is possible at run time, and interpreted and compiled commands can be intermixed within a common language.

Bill Lorensen represents the object-oriented approach. In this view, all data is viewed as procedural, with sets of procedural types forming semantic entities. A tool kit is built from layers of such semantic entities. The philosophy is to allow for very rich semantics without losing the clarity of structure that a good tool kit needs. Sometimes these views are combined effectively. For example, Reynolds has written a modeling and animation system [4] in LISP that is distinctly object-oriented in structure.

A number of practical questions test these views. What types of tools are facilitated by having a particular environment? Are these tools really difficult or impossible to duplicate in other environments? For example, what important tools can you create in LISP or in an object-oriented environment that you cannot create in UNIX, or vice versa? Can all of these things be done in FORTRAN under VMS? If so, would this be very difficult? We will also touch on the relative efficiency, portability, and ease of learning of the various approaches. Perhaps the preference for one's own environment arises because one's very goals become molded by one's environment; those things that are easier to accomplish seem more reasonable. The panel will try to determine to what extent the researchers' tool-building goals have themselves been influenced by what is easy or hard to do in their particular environment.

References
1. Kernighan, B., and Pike, R. The UNIX Programming Environment. Prentice Hall, Englewood Cliffs, 1984.
2. Duff, T. ``Compositing 3-D Rendered Images.'' Computer Graphics 19, 3, July 1985.
3. Cook, R. ``Interview with Jim Blinn.'' UNIX Review, September 1986.
4. Reynolds, C. ``Computer Animation with Scripts and Actors.'' Computer Graphics 16, 3, July 1982.

Panel #5
Supercomputer Graphics
Chair: Richard Weinberg, University of Southern California
Panelists:
Donald P. Greenberg, Cornell University
Michael Keeler, San Diego Supercomputer Center
Nelson Max, Lawrence Livermore Laboratory
Craig Upson, National Center for Supercomputing Applications
Larry Yaeger, Apple Computer (formerly of Digital Productions)

Statement from Richard Weinberg
Supercomputers are proving to be an invaluable tool for science, engineering and computer animation. Although only a few years ago it was very difficult for researchers in the United States to gain access to supercomputers, the creation of five National Science Foundation supercomputer centers has made them widely available.
Because of the enormous computing capacity of these systems, and their use in multi-dimensional, time-varying physical systems simulations, graphics capabilities are a major concern to their users. However, the most effective use of these systems requires high performance graphics systems coupled to the supercomputers over very high speed networks in integrated software environments. What lies on the horizon for users of these systems, and what remains to be done?

Statement from Donald P. Greenberg
Visualization is necessary in supercomputing for the problem definition and result interpretation of temporal and spatial phenomena. To improve the potential for scientific discovery it is necessary to correlate multi-dimensional, time-dependent parameters; the static display of data of reduced dimensionality can lead to misleading conclusions. Today's advanced graphics workstations provide the most efficient and cost effective environments for the graphics display operations, as contrasted with graphics processing on the supercomputer. By computing these display operations remotely on the workstation, object data instead of image data can be transmitted, thus reducing the bandwidth requirements for the interactive steering of the computations. New workstation hardware provides hundreds of megaflops of performance for the standard kernels of the display pipeline. As computational power becomes more available, so will the complexity of the environments simulated. Standard, direct illumination models are not sufficiently accurate to represent complex spatial phenomena, particularly when color is used to abstractly depict scalar engineering parameters. Global illumination effects will have to be included to maintain three-dimensional perception. Using current software techniques, development time for the interactive graphics programming of scientific applications is excessive. There is a need to reduce the attention paid to input/output graphical operations by standardizing high level, user friendly, modular graphics environments, thus allowing scientists to concentrate their efforts on science.

Statement from Michael Keeler
The primary focus of the San Diego Supercomputer Center is to provide a complete and balanced supercomputing environment for a very large number of research-oriented users. This environment extends over a large, high speed, nationwide communications network, and includes the hardware and software resources at the central facility as well as those at the remote user's terminal or workstation. We believe a systems approach based on network computing and distributed processing will provide the best way to use most of the available resources. Graphics intensive applications, especially, will benefit from this approach. As an example, the ideal solution to many problems is to have multiple processes running concurrently on both a front-end workstation and the supercomputer, with interprocess communication via remote procedure calls. This is a highly complementary combination of technologies that permits each machine to do what it does best (interactive manipulation vs. intensive computation) and makes intelligent use of available bandwidth by reducing data traffic to small amounts of concentrated information. And most importantly, the user is presented a clean and efficient interface to the entire system. We are keenly aware of the tremendous need for advanced graphics facilities that will serve a large number of people across a wide array of disciplines.
Our approach is one of providing fundamental tools and general purpose utilities that take advantage of our unique resources and make it easier for users to tailor custom applications to the system. We continue to integrate more capabilities into the network, providing remote access to such things as high quality image generation software as well as film, video and laser disk recorders for animation. We want to maximize the effectiveness and availability of expensive hardware and software development by sharing them throughout a large community of users. It is also essential to educate and train users and share expertise. The supercomputer centers can serve as focal points for this process by providing the new technologies and techniques in graphics to researchers who can apply them as analytical tools in their own endeavors.

Statement from Craig Upson
Usage of computer graphics and animation falls into two major categories: entertainment-oriented and science-oriented. These two uses have distinctly different requirements in terms of hardware, software and people to accomplish the work. In scientific applications there are two major simulation steps: first, the simulation of the physical process, such as chemical bonding, the advection of a fluid, or the motion of gravity-induced waves; and second, the visual representation of the simulated phenomena. There is little doubt that a supercomputer is the machine of choice for the first step. For entertainment computer animation, there is (currently) little need for the first step of physical simulation, a simplification that makes the supercomputer quite suitable for entertainment production. The only remaining question is that of economics; that is, can one charge enough to cover the cost of such a large machine? For the production of scientific films, there is a major I/O bottleneck if the simulation machine is not the visualization machine. This is further complicated by the fact that, in general, the supercomputer is a heavily shared resource and thus is difficult to use for interactive image computing and display. The approach of computing the physical simulation on the supercomputer and the visualization of the computation on another machine with lesser computing capacity, but perhaps with less system load, also has its advantages and disadvantages. The most promising approach is a network consisting of one or more supercomputers connected to one or more near-supercomputers dedicated to visualization. This approach is feasible if the bandwidth between machines is high enough, the near-supercomputer has a large enough compute capacity, and identical rendering software exists on both machines. This allows a distributed approach to visualization in the computational sciences.

Panel #6
Computer Graphics in Fashion
Chair: Jane Nisselson, NYIT Computer Graphics Lab
Speakers:
Jerry Weil, AT&T Bell Laboratories
Ronald Gordon, Microdynamics
Jim Charles, Esprit de Corp.

Introduction
Among the many design fields in which computer graphics is making its mark is the fashion industry. The crossover between these two fields is being stimulated by two key areas of development. The first is research in computer graphics on techniques for simulating the physical properties of flexible geometric objects such as cloth. The second is the application of standard computer graphics techniques in fashion industry systems for design preview and pattern CAD/CAM (computer aided design and manufacturing).
This panel presents current research on techniques for representing clothing in computer animation, applications of computer graphics in fashion industry systems, and the role of computer graphics in the work of a fashion designer.

The evolution of computer graphics techniques has rapidly expanded the scope of its visual realm from rigid geometrical objects to those which are naturalistic and organic. Naturally, as methods developed for simulating flexible surfaces such as those characterizing the human body, the problem of simulating clothing came into focus. Garments are characterized by neither rigid surfaces nor simple geometrical construction. Accurately representing a garment requires techniques for modeling and animating flexible materials as well as accounting for the effects of physical forces such as gravity. While such high-end simulations are being developed in the field of computer graphics, systems using less sophisticated methods for representing clothes are already being used in the fashion industry. The purpose of these systems is to provide (1) an efficient means for visualizing garment designs and (2) an optimal way to create a design that can be readily used in a CAM system. Such systems face the aesthetic and technical challenges of how to simulate and improve upon traditional methods used in the fashion industry. It is in the area of design that the question arises: can fashion benefit from computer graphics applications? Potential cost benefits from CAD/CAM systems alone may not be an adequate motive for a designer to make use of the technique. It must be kept in mind that one shortcoming of a video image of a garment is that a person cannot touch the fabric or have direct exposure to its colors and quality. To model a garment on the computer requires a method for generating a physically accurate model. But fashion is not merely a functional event whose conjunction of color, movement, and shape can be translated into an algorithm. Computer graphics must be able to create visual effects that evoke the immediacy of a garment and capture the imagination in a way that is true to fashion. Whether for creating industrial systems, animations of figures, or fashion videos, as computer graphics continues to delve into the tools of the fashion trade, its imagery will be shaped by fashion's emphatic visual delivery.

Statement from Jerry Weil
Only recently has the computer graphics community addressed the need for modeling cloth, which in the past was modeled as a rigid surface. The rapid progress towards making realistic models will eventually have a great impact on the fashion industry. Imagine a completely interactive, real-time system for designing garments. Such a system could be used to design, cut and manipulate fabric entirely on the computer screen. This fabric could be draped over other objects, such as a mannequin; thus a designer could interactively create clothes in a realistic, three-dimensional environment. Although current computer hardware is not fast enough to allow real-time manipulation of realistic looking fabric, I will describe the model for such manipulation. In it, the fabric can be parameterized by factors such as its stiffness, elasticity and weight. By using simple physical equations, forces can be calculated over discrete points on the surface of the cloth. These forces dictate the way in which each point will move.
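The following is a minimal sketch, in Python, of the kind of point-mass force calculation Weil describes: a grid of point masses connected by springs. The grid size, spring constant, mass, time step, and pinned corners are illustrative assumptions, not parameters of his system.

    # Minimal mass-spring cloth sketch (illustrative only; all constants are
    # assumptions, not parameters from Weil's system). Each grid point is a
    # point mass; spring forces from neighboring threads plus gravity dictate
    # how each point moves.

    N = 10                    # grid resolution (assumed)
    REST = 1.0 / (N - 1)      # rest length between neighboring points
    STIFFNESS = 50.0          # stands in for fabric stiffness/elasticity
    MASS = 0.01               # per-point mass, stands in for fabric weight
    GRAVITY = (0.0, -9.8, 0.0)
    DT = 0.005                # integration time step

    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def length(a): return sum(x * x for x in a) ** 0.5

    # positions and velocities of the cloth points, indexed by grid coordinate
    pos = {(i, j): (i * REST, 0.0, j * REST) for i in range(N) for j in range(N)}
    vel = {k: (0.0, 0.0, 0.0) for k in pos}
    pinned = {(0, 0), (0, N - 1)}        # two corners held fixed (assumed)

    def step():
        forces = {k: scale(GRAVITY, MASS) for k in pos}      # gravity on every point
        for (i, j) in pos:
            for (di, dj) in ((1, 0), (0, 1)):                # structural neighbors
                n = (i + di, j + dj)
                if n in pos:
                    d = sub(pos[n], pos[(i, j)])
                    stretch = length(d) - REST
                    f = scale(d, STIFFNESS * stretch / max(length(d), 1e-9))
                    forces[(i, j)] = add(forces[(i, j)], f)  # equal and opposite
                    forces[n] = sub(forces[n], f)            # spring forces
        for k in pos:
            if k in pinned:
                continue
            # explicit Euler update with a little damping for stability
            vel[k] = scale(add(vel[k], scale(forces[k], DT / MASS)), 0.995)
            pos[k] = add(pos[k], scale(vel[k], DT))

    for _ in range(200):
        step()
    print(pos[(N // 2, N // 2)])         # where the cloth's center has settled

Collision tests against other solids, as described next, would be added as an extra constraint on each point's updated position.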
Furthermore, the movement of each point on the surface can be tested for collision against other solids, which makes it possible to model fabric draped over other objects. In the current implementation of such a system, instead of moving the cloth in real time, a series of positions for selected points on the cloth's surface must be predetermined by the user. The movement of the remaining points is then determined according to the defined physical equations. There is great potential for this work in the fashion industry. When this process takes place in real time, under the designer's control, he would be able to see how the clothing looks and moves before a single cut is made in the actual fabric.

Statement from Ron Gordon
Computer graphics may be used in the apparel industry to quickly evaluate new fashions. An initial place where computer graphics can be applied is in the design of new textiles. Prints, as well as woven or knit fabrics, may be simulated thread by thread and the resulting coloration evaluated on screen. An interface to numerically controlled knitting or weaving machines lets the designer obtain a sample swatch of the proposed fabric. A paint or sketching package may be used to draw and edit new designs, supplementing the pen and paper traditionally used for illustrations. In this way, the proposed garment can be visualized without a stitch. Once completed, the designer can obtain a hard copy print of the creation with either a color calibrated thermal printer or film. Color accuracy, ease of use, and turn-around time are important parameters for this application. Experience in using a color graphics design system to aid the development and evaluation of new fashions will be presented. Technical issues surrounding accurate color matching will be reviewed.

Statement from Jim Charles
The initial idea of using these ominous machines called computers to design with struck fear into the hearts and pocketbooks of many people in the design room. Most of these fears were based on the presumption that computers would take away the designers' jobs or that they would be mercilessly tied to this ruthless machine with a mind of its own and never free to make creative decisions again. Another response, given by those considering themselves a bit up-to-date and open minded, was the presumption that computers had arrived on the scene to assume a large part of their work and could quite possibly solve all their problems. Certainly the computers could boondoggle or dazzle management for at least a season or two. In my first attempt to introduce CAD into the design room, I bought three Macintosh computers because of their reputation for flexible, art-oriented programs. We scheduled an orientation class for the textile artists and designers who were brave enough to attend. Being the arrogant leaders of California style and design, we were confronted by a teacher who seemed to us to be the epitome of the computer nerd. He proceeded to confuse, confound and completely poison the sweet taste of even the most inoffensive little design-oriented computer that we could begin with. Quel désastre! We were also introduced to some charming computer graphics done by a computer illustrator. But these gave CAD a definite aesthetic label. My boss now associated computer-designed art with pixelated, jaggy-lined images. We were to get those machines out of the design room; they were certainly not the right thing on which to base our future designs.
At about this same time, the head of our systems department had received the results of an intensive industry study on which CAD systems were best suited to the garment industry. From these I selected the one of my preference. Now that I had discovered my dream machine, how could I resurrect the CAD subject in the proper light and convince management that it was a good idea, one that could save hundreds of thousands of dollars a year? I arranged several demonstrations. One major consideration in making such a transition was how our decision fit into the big picture. We soon found out that other companies, such as Levi Strauss & Co., were already receiving the systems that I wanted. After my initial campaigning, my boss called me into her office and said, ``I don't understand it but I can see that it's the way of the future. Okay boys, go get us a computer.'' Once I started learning to operate the system, I found it easy and only sometimes frustrating. My experience using the Macintosh made it easier for me to learn a new system. But what are the benefits? The old way is still being used and is at hand for the deadline crunches with multicolor. As for the computer system, the first recognizable benefits are the ease of manipulating color and the ability to make decisions on the production of fabrics and garment styles which can then be merchandised. I expect many more to come.

Panel #7
The Physical Simulation and Visual Representation of Natural Phenomena
Chair: Craig Upson, National Center for Supercomputing Applications
Panelists:
Alan Barr, Computer Science Department, Caltech
Bill Reeves, Pixar
Robert Wolff, Jet Propulsion Laboratory, Caltech
Stephen Wolfram, Center for Complex Systems Research

Introduction
Within computer graphics and animation, two major sub-disciplines have developed: graphics for entertainment or artistic purposes, and graphics for scientific or technical purposes. Computer scientists working on entertainment applications have emphasized the correct visual representation of natural phenomena and have developed ad hoc physical simulations to accomplish this. Computational scientists working in the physical sciences have devoted their efforts to the underlying physics for simulating phenomena, with little emphasis on the visual representation. In recent years these two approaches have begun to reach the limits of their ability to work without each other. The realism required in entertainment animation is beyond that obtainable without physically realistic models. Similarly, numerical simulations in the physical sciences are complex to the point of being incomprehensible without visual representations.

Statement from Robert S. Wolff
Most physical systems are described by many parameters (e.g., temperature, density, pressure, magnetic field, velocity, etc.). Depending upon the system, the actual value of each of these parameters can vary considerably over space and time. Moreover, knowledge of each parameter throughout a system is often severely limited by the available data. Based on these sparse data sets, one constructs ``models'': either numerical simulations with ``best-guess'' initial and boundary conditions, or ``heuristic'' models, which rely on the investigator's physical intuition to tie together bits and pieces of observational data with basic physical principles. Unfortunately, even the best physical intuition cannot provide a dynamical picture of a 3-dimensional, time-dependent, multi-parameter physical system without some graphical aids.
Until a few years ago, such descriptions were confined to traditional artists' renderings of the physicist's concepts. Although these renderings are generally very visually appealing, they were based more on belief than on any sort of numerical simulation, and were as often misleading as informative. However, in the last several years advances in entertainment computer graphics have made it possible to use data, analytic models, and numerical simulations to produce near-realistic visualizations of astrophysical phenomena. A case in point is Jupiter's Magnetosphere: The Movie (J.F. Blinn, R.S. Wolff, 1983), in which representations of spacecraft observations as well as analytic and numerical models of the plasma and magnetic fields in the Jovian system were employed to visualize the morphology and dynamical structure of Jupiter's magnetosphere. Using this same paradigm, computer graphics techniques developed for the entertainment industry (e.g., fractal representations, texture mapping, particle systems) could be used to model volcanic activity on Hawaii, dust storms on Mars, or the accretion of matter into a black hole, among other things. Unfortunately, the scientific community as a whole has largely ignored the many significant advances in the field of computer graphics over the years, and as a result the general state of the art of science data analysis is not much beyond that which was available 20-30 years ago.

Statement from Alan Barr
Our research at Caltech centers on the goal of computed visual realism. We are developing fundamental mathematical and computational methods for synthesizing high fidelity complex images through:
1. new rendering methods which more accurately model the physical interaction of light with matter;
2. new geometric modeling techniques which simulate the shapes found in nature and manufacturing; and
3. methods for automatically setting up and solving stable equations of motion for constrained mechanical and biophysical systems.
While making films which visually simulate natural phenomena, we encountered an unexpected difficulty: a fundamental dichotomy between pure animation and pure simulation. In an animation project, the animator knows what he wants and controls the motion of the objects at a very detailed level to achieve the aesthetic goal. In a simulation project, an artificial universe is created with its own rules of physics. The objects within the universe act as if they had a mind of their own, and appear to choose what they wish to do rather than what you might want them to do. Even worse, you can't arbitrarily change their behavior, because it would not be simulation any more; you have to get the objects to do what you want by suitably selecting the initial conditions that properly affect the behavioral equations of motion. For instance, in the 1984 SIGGRAPH Omnimax film, we had difficulties with the swimming creatures. However, the problems were not due to the difficulties of setting up or deriving and solving the swimming equations. Problems resulted because the creatures were somewhat unmanageable and kept swimming off screen, away from the camera and away from each other. We had to aim and control them so they would arrive where and when we wanted them.

Statement from William T. Reeves
Typically, images of natural phenomena are expensive to generate with computer graphics.
Two productions involving natural phenomena that we have done in the last several years are the trees and grass background from The Adventures of Andre and Wally B. and the ocean waves from the film Flags and Waves. The trees and grass models were pure ad hoc simulations. The waves were based on a simple model which ignored many parameters. In both, our purpose was to make an interesting film. The trees and grass required about 8 man-months to develop and the waves took about 6 man-months. The trees and grass averaged about two hours per frame on a VAX 11/750, and the waves averaged one hour per frame on a Computer Consoles Power 6/32 (about ten times faster than an 11/750). I think it's obvious that the best means of making images of a natural phenomenon is to simulate it fully and then make pictures of the resulting data. If possible, this approach should be followed by both the entertainment and scientific disciplines. The problem is that quite often it's not possible. The computing resources necessary to do the simulation sometimes require supercomputers or beyond. Not many of us have access to this class of computing resource, and if we did there would still be a tradeoff between using it for simulating the phenomenon and for rendering the phenomenon. For some phenomena, an accurate physical model does not exist. For other phenomena, where an accurate physical model does exist, the problem is that the data it calculates may not be something you can render. And finally, sometimes you don't want an object to be physically accurate. Animation is full of ``impossible'' things like squash and stretch and exaggeration. From the visual representation perspective, where the sole goal is to make an image, I think it is perfectly acceptable to fake it for any of the reasons above. An important corollary is ``don't get caught.'' Analyze what is important visually and what isn't. Spend most of the time making the important things accurate (perhaps by really simulating them) and cheat on the rest. Panel #8 Software Tools for User Interface Management Chair: Dan R. Olsen Jr., Computer Science Department, Brigham Young University Panelists: David J. Kasik, Boeing Computer Services Peter Tanner, Computer Science Department, University of Waterloo Brad Myers, Computer Science Department, University of Toronto Jim Rhyne, IBM Watson Research Center Background The subject of User Interface Management Systems (UIMS) has been a topic of research and debate for the last several years. The goal of such systems has been to automate the production of user interface software. The problem of building quality user interfaces within available resources is a very important one as the demand for new interactive programs grows. Prototype UIMSs have been built and some software packages are presently being marketed as such. Many papers have been published on the topic. There still, however, remain a number of unanswered questions. Is a UIMS an effective tool for building high-quality user interfaces or is the run-time cost of abstracting out the user interface too high? Why are there not more UIMSs available and why are they not more frequently used? Is simple programmer productivity alone sufficient motivation for learning and adopting yet another programming tool? What is the difference, if any, between a ``user interface toolbox,'' a windowing system and a UIMS? What are the differences between a UIMS and the screen generation languages found in fourth-generation languages? In fact, exactly what is a UIMS?
In order to discuss these questions and to reassess the state of the UIMS art, SIGGRAPH sponsored a workshop on these issues [1]. The panelists represent the four workshop subgroups, each of which addressed these questions from a different point of view. Goals and Objectives for User Interface Software (David Kasik) In establishing the goals and objectives for user interface software, this group defined a characterization of a User Interface Management System (UIMS). The definition looks at the UIMS as a tool that benefits two different audiences: the end user of an application and the team responsible for the design and implementation of the application. Both audiences levy different requirements on the UIMS and each benefits from the technology in terms of improved productivity. As well as defining a UIMS, the group established a morphology that helps determine the applicability of software labeled as a UIMS to a particular task. The criteria listed are also useful for categorizing available UIMSs to determine areas for future research. The criteria are aimed at the same two audiences as the definition and the characterization that the group established initially. While the early discussions focused on the UIMS as a technology, the participants in the group looked at the role of the UIMS in software development. In other words, the first audience affected by UIMS technology is composed of application programmers. The group examined how a UIMS fits into or modifies ``traditional'' software engineering methodology. The principal effect lies in the ability to let the application development team focus on the user interface as one of the first components of a successful application. This discussion led to the identification of some holes in the current UIMS technology, especially during the specification and testing/maintenance phases of application development. The effective implementation of UIMS technology also hinges on accommodation of other computing technologies like artificial intelligence. Reference Models, Window Systems and Concurrency (Peter Tanner) Systems which support user interfaces must not only accommodate but take advantage of many factors that can enrich the human-computer interaction. Effective use of distributed systems, support for collaborative work, concurrent input, devices such as eye trackers or other head-, body- or foot-mounted devices, and less common media such as voice, touch, and video must all be provided for. This group worked on defining a generalized, expanded role for what are currently called ``window systems.'' The result, the workstation agent (WA), is responsible for managing the hardware devices, including sharing devices among several applications and, through the use of virtual devices, changing the form of the input to better match the needs of its clients, the applications. The WA's position in the schema of an interactive system is best described as one of several layers. The group identified four such layers: 1. hardware devices, 2. workstation agent, 3. dialogue managers, and 4. applications, one or more of which will be the workstation manager(s). The hardware devices provide the ``raw'' interface to the user and include both input and output devices. The workstation agent is responsible for managing those devices, presenting a device-independent but media-dependent interface to the clients. It also provides the basic support for multitasking; it must multiplex and demultiplex the devices between multiple clients.
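As an illustration of this layering, the following is a minimal, hypothetical Python sketch (the class and method names are invented here and are not part of the workshop's proposal): hardware devices feed a workstation agent, which routes device-independent events to dialogue-manager clients that in turn drive the application.

    class Device:
        """Hardware layer: a raw input device producing events."""
        def __init__(self, name):
            self.name = name

    class WorkstationAgent:
        """Manages devices and multiplexes their events among clients."""
        def __init__(self, devices):
            self.devices = devices
            self.clients = []            # dialogue managers sharing the devices

        def register(self, client):
            self.clients.append(client)

        def dispatch(self, device, raw_event):
            # Convert the raw, device-specific event into a device-independent
            # (but still media-dependent) form, then route it to a client.
            event = {"device": device.name, "data": raw_event}
            for client in self.clients:
                if client.wants(event):
                    client.handle(event)
                    break

    class DialogueManager:
        """Implements interaction techniques and drives the application."""
        def __init__(self, application):
            self.application = application

        def wants(self, event):
            return event["device"] == "mouse"   # crude routing rule for the sketch

        def handle(self, event):
            # Convert the media-dependent event into a media-independent
            # application call.
            self.application.select(event["data"])

    class Application:
        """Application layer: modules operating on the application database."""
        def select(self, item):
            print("application selected:", item)

    # Usage: one agent sharing a mouse among its clients (here, a single one).
    mouse = Device("mouse")
    dm = DialogueManager(Application())
    wa = WorkstationAgent([mouse])
    wa.register(dm)
    wa.dispatch(mouse, (120, 45))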
The multiplexing (or scheduling) ``policy'' is determined by the workstation manager. The dialogue managers are the implementation of the interaction techniques that convert the media-dependent interface to the media-independent interface required by the application. They provide for the invocation of application modules and for the handling of their responses. The application layer is, of course, the set of modules that interact with the database specific to the application. Along with these modules, the layer contains one or more workstation managers responsible for setting the policy for sharing devices and allocating resources. On the output side, many existing window systems do not provide any image retention possibilities, requiring clients to repaint windows in numerous situations such as when the screen is rearranged or a window is moved. There are a number of retention possibilities: 1. no retention (as is currently common), 2. device-dependent, such as a bitmap (as in the Teletype 5620), 3. partially structured, such as a text file, and 4. complete, such as a PHIGS or PostScript display file (as in NeWS). There are obvious advantages to integrating image retention into the workstation agent, but it must be done in a manner that can take advantage of whatever hardware rendering support is offered on a number of displays. However, complete retention is inappropriately complex for many applications, and may lead to too high a degree of graphical data structure redundancy for other applications. With the recent advent of window systems, users are becoming accustomed to accessing several applications simultaneously. Accessing a single application through several windows, and using several simultaneously available devices to control application parameters, will become more common. Concurrent programming mechanisms supporting lightweight processes with rapid inter-process communication and synchronization will be necessary to implement the workstation agent and dialogue managers at the level of concurrency that will be required. Run Time Structure of UIMS-Supported Applications (Brad Myers) This group was responsible for generating a new model for the internal structure of Dialog Managers and how they interface to application programs. Traditionally, there has been an attempt to have a strong separation between the Dialog Manager and the application programs. The motivation for this is to provide appropriate modularization and the ability to change the user interface independent of the application. Unfortunately, this separation has made it difficult to provide the semantic feedback and fine-grained control which are necessary in many modern user interfaces. Therefore, the group proposed the beginnings of a model which provides for tighter coupling of the application and the Dialog Manager through the ``Semantic Support Component'' (SSC). The group discussed a number of possible organizations for the SSC, but emphasized that this is an area for future research. The group also discussed two proposals for the internal architecture of the Dialog Manager: the traditional lexical-syntactic layering, and a homogeneous object space. The lexical-syntactic division is based on language models, and includes interaction modules such as menus, string input, etc., at the lexical level, and sequencing at the syntactic level. This division is often ad hoc, however.
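A minimal sketch may make the lexical-syntactic division concrete (the function names and the tiny state machine below are invented for illustration, not taken from any particular UIMS): lexical-level interaction techniques produce tokens, and a syntactic-level sequencer consumes them and invokes the application's semantic actions.

    def menu_technique(choice):
        # Lexical level: an interaction technique such as a menu returns a token.
        return ("MENU", choice)

    def string_input_technique(text):
        # Lexical level: string input returns another kind of token.
        return ("STRING", text)

    class Sequencer:
        """Syntactic level: a small state machine over lexical tokens."""
        def __init__(self, application):
            self.application = application
            self.state = "await_command"
            self.pending = None

        def accept(self, token):
            kind, value = token
            if self.state == "await_command" and kind == "MENU":
                self.pending = value
                self.state = "await_argument"
            elif self.state == "await_argument" and kind == "STRING":
                self.application.invoke(self.pending, value)   # semantic action
                self.state = "await_command"

    class Application:
        def invoke(self, command, argument):
            print(f"{command}({argument!r})")

    # Usage: "rename" chosen from a menu, then a string typed in.
    seq = Sequencer(Application())
    seq.accept(menu_technique("rename"))
    seq.accept(string_input_technique("chapter-3"))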
The homogeneous object model, by contrast, is based on object-oriented languages like Smalltalk, and is more flexible but provides less support. Which of these is the most appropriate internal architecture, or whether something entirely different is better, is an important future research area. Tools and Methodologies (Jim Rhyne) For the past few years, UIMS research has concentrated on the tools and methodologies for implementing user interfaces. While many problems in this area remain unsolved, the Task Group on Tools and Methodologies felt that progress was also needed on tools to support requirements and design. There was consensus in the group that no single methodology would suit all interface designs and designers, and that tools to support design should be flexible and modular. We treated design in two stages. Requirements Analysis and Conceptual Design (RA/CD) is a tight coupling of the tasks of gathering data about the intended use of the interface and of creating a design strategy well matched to the intended use. Design and Development encompasses the detailed design of the interface and choices about its implementation. Tools presently in use to support these activities include: drawing packages, rapid prototyping systems, user models, and interaction technique libraries. Tools needed but not available include: design databases, design languages, consistency analyzers, and intelligent design assistants. Above all, these tools must be compatible with one another, so that designers at all stages can select tools appropriate to their needs. Reference 1. Computer Graphics, April 1987 Panel #9 Managing a Computer Graphics Production Facility Co-Chairs: Wayne Carlson, Cranston/Csuri Productions Larry Elin, Abel Image Research Moderator: Larry Elin, Abel Image Research Panelists: Wayne Carlson, Cranston/Csuri Productions Pete Fosselman, McDonnell Douglas Corp. Ken Dozier, Interactive Machines, Inc. Carl Rosendahl, Pacific Data Images Introduction The computer graphics industry, as no other, combines the disciplines of computer science, mathematics, art, engineering, and business, and presents graphics facilities managers with unique problems not faced by their counterparts in other industries. The growing proliferation of computer graphics, for CAD/CAM and for other art and engineering uses, has created a new wave of managers who share the same fundamental problems: how to effectively manage the facility and personnel in order to succeed in the world of graphics. This management distinctiveness is due to a number of factors: the intricacies of the high-level software needed to produce top-of-the-line graphics images, the dynamically changing hardware used in the graphics community, the creative personalities of the graphics personnel, the dynamics of the entertainment, engineering, and artistic community that the industry serves, and the lack of basic management training in supervisors who have come up the ladder from the production teams. A good number of these management issues are directly related to the creative process itself, as well as to the kind of creative personality that is the lifeblood of our industry. A recent article [1] outlines the difference between managing creative people and managing employees in traditional work situations: 1. The employees are often better trained and more able than their managers. 2. The personality of the creative person is typically less self-organized and more temperamental. 3. Creative people's motivations are usually different from those of other workers.
This same article quotes Bertolt Brecht: ``The time needed for rehearsal is always one week more than the time available.'' This is particularly characteristic of the creative process in computer graphics production. Because of the nature of image or software design, testing, and creation, deadlines that can be met in a traditional manufacturing environment are difficult to meet in the graphics industry. The analysis of a recent management seminar listing [2] revealed that the outlines for courses in facility personnel management, project management, and operations management differed significantly from the needs of the computer graphics supervisor. It is clear that the textbook examples of management techniques and philosophies do not always fit the environment in which we operate. Summary of Panel Presentations It is just these differences in management issues that this panel addresses. The panel consists of the general management of two of the premier commercial production houses, a special projects manager from a world-renowned graphics software and production facility, a representative from an aerospace company utilizing graphics production techniques for in-house use, and a manager of a graphics hardware manufacturing firm. These panelists, with their diverse backgrounds and organizational structures, bring a wealth of experience to be shared with interested observers, many of whom will face similar issues at their respective facilities. The panelists concur with each other in certain areas of management and disagree widely in others. Each panelist will present an important issue that he faced in his management experience. He will try to focus on an issue which differs from the conventional management approach. The following topics suggest many of the critical issues in managing a computer graphics facility: 1. Motivating creative employees. 2. Managing the finances of a graphics facility. 3. Keeping up with the changes in technology. 4. The R & D issue: will it produce usable results? 5. Coordinating sales/production personnel. 6. Managing job stress. 7. Dealing with the demanding client. 8. Training technical personnel. 9. Schedules and the creative process. 10. Budgets and the development process. The creative and sometimes unexpected techniques used by these panelists in solving the management problems they encounter may provide insights for new CGI facility managers. Other managers may be pleased to find that they are not alone in their problems. This panel presents a stimulating and enlightening view of life in the commercial computer graphics production facility. References 1. ``How to Manage Creativity.'' Management Today, June 1983. 2. ``The Management Course.'' Course Catalog of the American Management Association, January-October 1987. Panel #10 A Comparison of VLSI Graphic Solutions Chair: Ed Rodriguez, National Semiconductor Panelists: Brent Wientjes, Texas Instruments Tom Crawford, Advanced Micro Devices Jack Grimes, Intel Corp. John Blair, National Semiconductor Discussant: Jack Bresenham, Winthrop College Summary Graphics capability, once required only by high-end CAD applications, is now becoming standard equipment on new desktop systems which are used for a wide variety of applications. Recent developments in VLSI graphic devices are bringing sophisticated graphics techniques to more affordable levels.
In order to take advantage of the high level of functional integration achievable with today's technology, some architectural decisions must be made at the VLSI components level. These VLSI devices have become more than simple building blocks; they have become sub-system designs. Because of the many different graphics applications, architectural solutions which appear ideal for some applications are far from optimal for others. Unfortunately, because of the level of complexity, subtle features and/or limitations of a particular design are easily overlooked in quick evaluations. Careful scrutiny by the system designer is required to select the appropriate device for his/her specific needs. This panel brings together four of the major semiconductor companies addressing bit-mapped graphics requirements, as well as a leading industry expert in rasterization. Each design has had to incorporate architectural trade-offs, reflecting different implementation philosophies. Only the system designer can adequately judge the significant impact each trade-off has on his/her particular application. A few architectural differences are immediately obvious among the four approaches. The following are a few of the main differences: · planar vs. packed-pixel · programmable vs. ``hard-wired'' functions · open vs. closed frame buffer access · hardware vs. software windowing · partitioned vs. integrated architecture It is the intent of this panel session to provide hardware and software designers with a comparison of these VLSI offerings by clarifying the different architectural philosophies and trade-offs associated with each respective implementation. Bibliography Asal, M., Short, G., Preston, T., Simpson, R., Roskell, D., and Guttag, K. ``The Texas Instruments 34010 Graphics System Processor.'' IEEE Computer Graphics and Applications (October 1986) 24-39. Carinalli, C., and Blair, J. ``National's Advanced Graphics Chip Set for High-Performance Graphics.'' IEEE Computer Graphics and Applications (October 1986) 40-48. Greenberg, S., Peterson, M., and Witten, I. ``Issues and Experience in the Design of a Window Management System.'' University of Calgary, Alberta, Canada, September 1, 1986. To appear in Proc. of Canadian Information Processing Society, Edmonton, Alberta, Canada, October 21-24, 1986, 12 pages. Myers, B. ``A Complete and Efficient Implementation of Covered Windows.'' IEEE Computer (September 1986) 55-67. Shires, G. ``A New VLSI Graphics Coprocessor: The Intel 82786.'' IEEE Computer Graphics and Applications (October 1986) 49-55. XXXX. ``Controller Chip Puts Text and Graphics on the Same Bit Map.'' Electronic Design, June 1985. Panel #11 Computer-Aided Industrial Design: The New Frontiers Chair: Del Coates, San Jose State University Panelists: David Royer, Ford Motor Company Bruce Claxton, Motorola Corporation Fred Polito, frogdesign Introduction Although industrial designers are relatively few in number, they affect product development and sales crucially because they determine aesthetic issues. If you wonder how crucial their role is, ask yourself when you last bought a car, furniture, or virtually any product without considering how good it looked. Probably never. Even military shoppers are known to insist on good-looking hardware. Accordingly, industrial designers are indispensable to more than a third of all U.S. corporations, especially manufacturers of consumer goods. And their role is growing in importance as more manufacturers come to realize the dividends of good design.
As Christopher Lorenz, Management Editor of The Financial Times, notes: ``From Tokyo to Detroit, from Milan to London, companies have begun to realize that they must stop treating design as an afterthought, and cease organizing it as a low-level creature of marketing.... Instead, they have elevated it to fully-fledged membership of the corporate hierarchy, as it has been for decades in design-minded firms such as Olivetti and IBM. The design is being exploited more and more to create competitive distinctiveness, not only for premium products but also in the world of mass marketing.'' Industrial designers are increasingly striving to apply computer technology. The members of this panel represent such pioneering efforts at Ford Motor Company, Motorola, and frogdesign, a trend-setting international consulting firm. Inadequacies of Current Systems As the CIM (computer-integrated manufacturing) revolution gains momentum, industrial designers will be forced to adopt computer technology. Yet the means for doing so seem woefully inadequate to many industrial designers, including frogdesign's Fred Polito. Despite greater attention to their needs recently, no commercially available system (or combination of systems) adequately meets the needs of industrial designers. Polito shows the considerable extent to which frogdesign uses its turnkey CAD system, despite its limitations, and suggests how it could be improved. Industrial designers belabor two shortcomings more than any others: · The sketching or conceptualization problem. The designer cannot rapidly model and visualize concepts during the earliest, most creative phases of the design process, especially when the concepts are as complex as an automobile. Normally, designers are forced to work ``off the tube'' with conventional media while the computer idles uselessly. The computer seldom enters the process until after traditional sketches, drawings and, sometimes, even tangible three-dimensional models have been created. · The rendering problem. CAD systems cannot produce ``photographic'' images that unambiguously simulate various materials and finishes. Shortcomings stem from the fact that realism requires simulation of two distinct kinds of reflection: diffuse reflection (also called ``cosine'' reflection) and specular (mirror-like) reflection, which accounts for the appearance of gloss. Typical rendering software models only diffuse reflection. (Specular effects are approximated through modified diffuse reflection and by texture-mapping of hypothetical environmental reflections.) Despite claims to the contrary, no CAD system currently available models true specular reflection. And although specular reflection has been employed to a limited extent by animators, true specular reflection is not readily available in animation systems either. Beneficial Side-Effects In contrast to these objections, Bruce Claxton is decidedly upbeat about Motorola's CAD experiences. He is enthusiastic about benefits that more than make up for any limitations in conceptualization and visualization. CAD has brought about a degree of consolidation and coordination of the diverse personnel and organizations involved in product design and development that might not have been achieved otherwise. This is due simply to the fact that everyone working on a project now refers to the same database.
Industrial design has become a more integral part of the design process, and the enhanced good will among departments whose objectives seem less disparate now is of inestimable value. The shortened development time and improved product quality that ensued are more important than how realistically the computer can simulate the appearance of a product or whether designers can do the whole project ``on the tube.'' Meeting the Computer Halfway Rather than lamenting the lack of a perfect system, industrial design students working at San Jose State University's CADRE Institute are exploring semi-CAD techniques that combine the best capabilities of existing computer systems with their own special skills and traditional media. Results demonstrate that, although the tools are not yet optimal, computers have immense potential for increasing quality and productivity. One student, for instance, designed wine bottles with a 3-D modeling system ordinarily used for animation. She began on paper because the modeler was not spontaneous and interactive enough. But, once a concept was in digital form, she was able to create perspective views of it more quickly and accurately than she could on paper. She could design a label on a 2-D paint system and ``wrap'' it onto the bottle's surface by transferring it to the 3-D system. Renderings of such bottle/label combinations look very realistic in the label portion because the diffuse reflection algorithm simulates relatively non-glossy surfaces quite well. Because the system does not employ a ray-tracing algorithm, it merely suggests the glossy quality of glass and cannot show the refractive distortions of transparent material. However, a designer with reasonable rendering skills needn't be stymied. By transferring the image from the 3-D system back to the paint system, the designer can touch it up with specular and refractive effects. Furthermore, the bottle's image can be combined with a real background image (a table setting, for instance) scanned in with a video camera to create a realistic rendering far beyond the capabilities of most designers or the practicalities of normal media. Less Compliant Applications Some industrial design applications, like those of the automobile industry addressed by Ford's Dave Royer, leave less room for compromise and are driving the technology toward the farthest frontier. The ability to sketch with the computer is important because subtle aesthetic nuances of form seldom can be captured by keystrokes or other relatively analytical methods of input. Although today's rendering algorithms simulate glossy surfaces well enough for animation, they give misleading impressions of actual surface geometry. This can be crucial in the case of an automobile's subtle surfaces. The specular reflection of the horizon on a car's surface, for instance, is a key design element as important aesthetically as the car's profile or any other line. The car's apparent proportions, even its apparent size, can change when specular reflection is missing. In the final analysis, a design cannot be properly judged aesthetically without specular reflection. Royer describes Ford's efforts to address these issues. Panel #12 Pretty Pictures Aren't So Pretty Anymore: A Call for Better Theoretical Foundations Chair: Rae A. Earnshaw, University of Leeds, UK Panelists: Jack E. Bresenham, Winthrop College David P. Dobkin, Princeton University A. Robin Forrest, University of East Anglia, UK Leo J.
Guibas, Stanford University and DEC Systems Research Center Summary Analysis and exposition of the theoretical bases for computer graphics and CAD are becoming increasingly important. Software systems and display hardware are now so sophisticated that the deficiencies in the underlying models of displayed images can be more clearly seen. Consistency, accuracy, robustness and reliability are some key issues upon which the debate now focuses. Vendors and pragmatists feel satisfied if the results are ``correct'' most of the time. Theoreticians are horrified at the lack of rigor and formalism in many of the ad hoc approaches adopted by current work. This panel challenges the assumption that the pragmatic approach is adequate. As parallel processing techniques bring the disciplines of computer graphics and image processing together, each can learn lessons from the other. In addition, it is becoming clear that interface modeling, expert systems techniques, software engineering and VLSI design are all areas where a more formal and rigorous approach can produce significant benefits. All these impinge upon computer graphics to some extent. Computer graphics is rapidly moving from a discipline based on pragmatics to one based on formal methods. Vendors must be prepared to subject their systems and software to validation criteria. All the panel members are participants in the NATO Advanced Study Institute in Italy, 4-17 July 1987. The theme of this 1987 Institute is ``Theoretical Foundations of Computer Graphics and CAD.'' Results arising from the Institute provide a basis for discussion. Dr. Earnshaw presents the virtues of the industry/vendor position: the end justifies the means, and pragmatists need not be overly concerned with theory. Dr. Bresenham outlines the industry position and the benefits and disadvantages of theory and practice. Professor Dobkin proposes the need for formal definitions for computational geometry and rendering algorithms. Professor Forrest argues for consistency, accuracy, robustness, and reliability in graphical and geometric computations. Professor Guibas argues for the use of tools for the design and analysis of algorithms. Panel #13 Integration of Computer Animation with Other Special Effects Techniques Chair: Christine Z. Chang, R/Greenberg Associates Panelists: Tad Gielow, Walt Disney Studios Eben Ostby, Pixar Randy Roberts, Abel/Omnibus Drew Takahashi, Colossal Pictures Introduction Computer animation is used increasingly as a special effects technique in film and video production. Integrated with other imaging techniques such as motion control, stand animation, live action and optical printing, it can be a useful tool as well as a final element. Among other things, information can be shared with other computer-controlled cameras, or images can be generated for rotoscoping or for mattes. In fact, the final special effect may be largely dependent on computer imagery yet not contain a single computer-generated pixel. Special Effects Via Integration of Computer Systems (Christine Chang) Many principles of computer-generated animation are shared with other computerized imaging techniques. The motion of the camera, aimpoint and/or subjects is manipulated in three-dimensional space in CGI, motion control, animation stand and optical printer systems. By establishing a communications link between computer systems, motion information may be shared. Moreover, one system can create and preview a motion to be filmed on another.
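As a purely illustrative sketch of such sharing (the frame-record format below is invented for this example; production systems of the period used their own private formats), a camera move previewed on one system could be written out as frame-by-frame camera and aimpoint records for another system to read back.

    def export_move(path, frames):
        """frames: list of (frame_number, camera_xyz, aimpoint_xyz) tuples."""
        with open(path, "w") as f:
            for n, cam, aim in frames:
                f.write("%d %f %f %f %f %f %f\n" % ((n,) + tuple(cam) + tuple(aim)))

    def import_move(path):
        moves = []
        with open(path) as f:
            for line in f:
                vals = line.split()
                n = int(vals[0])
                cam = tuple(float(v) for v in vals[1:4])
                aim = tuple(float(v) for v in vals[4:7])
                moves.append((n, cam, aim))
        return moves

    # Usage: a two-frame dolly move previewed in CGI, then handed to another system.
    move = [(1, (0.0, 0.0, 10.0), (0.0, 0.0, 0.0)),
            (2, (0.0, 0.0, 9.5), (0.0, 0.0, 0.0))]
    export_move("camera_move.txt", move)
    assert import_move("camera_move.txt")[0][0] == 1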
In addition, since each technique maintains repeatability, matte passes as well as ``beauty'' passes can be produced. CGI offers the additional capability of manipulating and processing mattes to create elements for new optical effects. All elements can then be combined via optical printer to yield the final special effect. Of course, it remains true that successful special effects rely on good design. With an integrated system, the potential of each technique is maximized and new capabilities are realized. Computer Graphics and Cel Animation (Tad Gielow) Disney has been using computers and computer animation as a tool in special visual effects and in feature animation for several years. It is a unique approach and, as seen in the climax of ``The Great Mouse Detective,'' can contribute to the drama of a scene. This presentation will provide insight into the marriage of computer-generated backgrounds with classic Disney animation. Invisible Computer Graphics (Eben Ostby) The desired goal of an effects shot is sometimes independent of the means of its production. In some of the effects we have produced for motion pictures, the goal is to produce images that look computer-generated; in others, the fact that a computer was used to produce the images should remain obscure. In the latter case, special care must be taken to integrate the effect with the rest of the movie. Work done at Pixar and Lucasfilm to accomplish this has involved: · proper anti-aliasing of rendered frames · accurate motion-blur · highly detailed models · digital compositing · laser frame recording Additional attention must be paid to these frames in the process of combining the computer-generated shots with the remainder of the film. Integration of Computer Graphics with Live Action (Randy Roberts) The integration of computer graphics and live action is attractive both economically and aesthetically. In the past, use of pure computer-generated imagery in film production has had problems with cost-effectiveness. Mixing it with live action has made more economic sense. Whether using live backgrounds with CGI animation or shooting actors for rotoscoping CGI motion, the live action has made production more feasible. The live action offers images and motion which would be prohibitively expensive to generate or define on the computer, if possible at all. Aesthetically, the real world offers an infinite complexity of textures and fluid motion. The subtle combination of computer graphics with other techniques creates richer images. CGI should not constantly be calling attention to itself. In the future, art direction will emphasize images whose technical origins are not obvious. Yes, Kids, Mixing and Matching Can Be Fun (Drew Takahashi) For years I have been lamenting the fact that various special effects production tools each did marvelous things by themselves, but when you tried to combine them, their inability to talk to each other made the process rather cumbersome. While much has been done of late to address this problem, we at Colossal have found ourselves quite happy with a very different approach which we call Blendo. Blendo is not a style or technique but an attitude. It is an inclusive approach which seeks to combine techniques and styles to create a playful feel. Rather than striving to create a convincing reality, we instead celebrate complexity and contradiction. This attitude was originally born of the limitations of our studio's low-tech beginnings.
Where others were using opticals to insert live people into animation, we shot black-and-white stills, tinted them and shot them combined with artwork under the animation stand. What we ended up with was very different from an optical solution. The stylized motion (by shooting the stills on 2's, 3's or even 6's) and the hand-applied swimming color actually allowed the photographer's reality to participate in the animated reality without seeming ``matted in.'' Aesthetically, it said something different. As the studio's technical capabilities grew, we often chose techniques such as these, not out of necessity but by choice. This sensibility is not exclusive to low-tech solutions but can easily work with various high-tech approaches. While much of our work at Colossal is made up of more exclusive and invisible effects (actually, sometimes no effects at all), this attitude, Blendo, is perhaps the most appropriate contribution Colossal can make in the context of this discussion.