1. Introduction
Charter and history
The purpose of this report is to collect and set down our experience and intentions in the area of programming environments. This material was prepared by a working group consisting of the following members of the PARC Computer Science Laboratory:
L. Peter Deutsch
James J. Horning
Butler W. Lampson
James H. Morris
Edwin H. Satterthwaite (Xerox SDD)
Warren Teitelman
with the occasional participation of Alan Perlis (Yale University).
We quickly decided that the right way to proceed was to address what we felt was the real question requiring resolution: producing a catalog of programming environment capabilities, with justified evaluations of the value, cost, and priority of each.
The working group was given a one-month deadline and held eight two-hour meetings. Needless to say, there were many areas we had to treat superficially because of the time constraint, and some areas in which we realized we simply could not reach agreement. We also realize that we have undoubtedly overlooked some significant issues and viewpoints. However, we expected much more in the way of intractable disagreement than we actually experienced. This suggests that we were successful at avoiding religious debates and concentrating instead on the technical issues.
How should we compare programming environments?
Before considering particular features that we feel contribute to "good" programming environments, it is important to consider how we can tell that one programming environment is "better" than another.
Any evaluation must be in some context. There is no reason to believe that any one programming environment could be optimal for all kinds of programming in all places at all times. In our discussions, we have focused our attention on the foreseeable needs within CSL in the next few years, with particular attention to experimental programming. We have taken experimental programming to mean the production of moderate-sized systems that are usable by moderate numbers of people in order to test ideas about such systems. We believe that it will be important to conduct future experiments more quickly and at lower cost than is possible at present.
It is difficult to quantitatively compare programming environments, even in fixed contexts. A large number of qualitative comparisons are possible, but the more convincing ones all seem to fall into two categories, both based on the premise that the purpose of a programming environment is, above all, to facilitate programming.
First, a good programming environment will reduce the cost of solving a problem by software. The cost will include the time of programmers and others in design, coding, testing, debugging, system integration, documentation, etc., as well as of any mechanical support (computer time, etc.). Since our human costs continue to be dominant, one of the most convincing arguments in favor of a particular programming environment feature is that it speeds up some time-consuming task or reduces the need for that task (e.g., it might either speed up debugging or reduce the amount of debugging needed). Bottleneck removal is the implicit argument in much that follows.
Second, a good programming environment will also improve the quality of solutions to problems. Measures of quality include the time and space efficiency of the programs, as well as their usability, reliability, maintainability, and generality. Arguments relevant to this category tend to be of the form "this feature reduces the severity of a known source of problems," or "this feature makes it easier to improve an important aspect of programs." Thus a feature might be good because it reduces the frequency of crashes in programs or because it makes it convenient to optimize programs' performance.
(These two categories could be reduced to one by noting that there is a tradeoff between cost and quality. Thus by fixing cost, we could compare only quality; by fixing quality, we could compare only cost. This seems to complicate, rather than simplify, a qualitative evaluation of features, so we will not seek to unify these two kinds of argument.)
In the discussion that follows, we have attempted to relate our catalog of "important features" to these more general concepts of what makes a "good environment" whenever the connection is not obvious. In some cases, we have not been entirely successful, because our experience tells us that something is essential, but we haven't been able to analyze that experience to find out why. In all cases, our arguments are more intuitive than logically rigorous. Strengthening these arguments would be an interesting research topic.
We have been largely guided by experience with three current programming environments available to the PARC community: Interlisp, Mesa, and Smalltalk [Teitelman, 1978; Mitchell et al., 1979; Ingalls, 1978]. Both what we know about the strengths and what we know about the limitations of these environments have been taken into consideration. It is of course dangerous to generalize too boldly from the intuitions and preferences of users: it is virtually impossible to be certain that the useful features have been clearly distinguished from those that are merely addictive.