Inter-Office Memorandum
To: CSL        Date: February 19, 1981
From: R. Ayers, F. Baskett, P. Deutsch, C. Geschke,        Location: Palo Alto
      J. Warnock, J. Wick, J. Morris (editor)
Subject: Internal Review of Cedar        Organization: Various
XEROX
Filed on: [Ivy]<CedarDocs>review.memo
During the week of January 19, 1981 we reviewed the state of the implementation of the Cedar project. The overall impression we received from the presentations was one of competence and solidity, both in design and in execution. If one looks at a particular area, he is impressed with both the scope and the ability of the persons involved. We are quite confident about the success of this project after hearing the review. Because the presentations were almost entirely retrospective, we did not hear much about plans or designs for the future. In the following sections we shall try to ‘‘fill in the blanks’’ that were left open by the presentations.

Global Organization
Our impression is that Cedar is a bottom-up design process that is in the tool building and feature enabling stage. What we did not get from the review was the feeling that there is a unified target at which the tools were aiming. Cedar needs a bit more planning and control at the project level than was evident. Consider defining the contents of a first release; i.e. what absolutely must be present in Cedar for it to constitute an initial version of a programming environment that people might attempt to use to accomplish useful work without relying much on old Mesa or Alto stuff? It was unclear how all the pieces were supposed to fit together, and who was depending on what. When many people are using Cedar and large packages begin to depend on each other, things will get much more complicated, and operational problems will begin to show up, e.g. version control. Note that this wasn’t much of a problem when Mesa was first developed, since there were only 4.5 implementors/users; with 15 people it will be a problem.
One unifying aspect of the system that we found lacking was a consistent user’s model of the environment analogous to those found in LISP, Smalltalk, and UNIX. Programming decisions are made with some model of what the system will do. If the model is very complex, then errors in those decisions are frequent. If the user of Cedar must carry around either multiple models or a single very complex model of what is going on, then Cedar will fail in its stated objectives. Is the Cedar model the Abstract Machine? Is it a document metaphor? Is it a compile-load-go model? Is it an interpretive model? Or is it all of the above in some random combination? What is the best one-paragraph description of how Cedar works?
Our opinions were mixed on the issue of backward compatibility with the Mesa world. On the one hand: You are currently depending on a lot of existing Mesa code, both Pilot and Alto based, plus a substantial pile of microcode for two different processors. Needless incompatibility will have a huge cost. Most of the proposed low-level changes can be made in an upward compatible way, perhaps by temporarily sacrificing some performance (e.g., interface records, stoppable processes). You should continue this strategy until a usable Cedar environment is at least as functional and stable as the current Mesa one. On the other hand: We are alarmed at the appearance of increased complexity—both where we expected it (language and runtime) and where we didn’t (n string packages). Some of this complexity seems to be due to the policy of retaining upward compatibility at the source and machine code level. You should construct a list of the things Cedar could drop if compatibility were given up.
Missing Components
The priorities and tasks in the original EPE report should be re-evaluated in the light of what has been accomplished so far. There are many things on that list that need to be done. Some tasks that were assigned difficulty zero in the report still require some work.
Users
Cedar needs users. We have the impression that Cedar is close enough to being able to support some users, for example the JaM system, and has enough to offer them to make it worth their trying it. The question ‘‘Why aren’t we, in fact, using this programming environment to develop Cedar?’’ would yield useful answers. A similar question was asked of the Tools environment folk in SDD a while back; it proved a useful catalyst and giver-of-direction. Forcing functions of this type would be beneficial.
Basic tools
There is the RTTypes (formerly Things) interface, which is fairly complete, but there is apparently no coherent work on a debugger, on an interpreter, on an editor, or on a general user interface. We had no confidence from what we heard in the presentations that any one individual really thought he was going to provide the commonly used debugging facilities. It might be instructive and scary to count how many lines of code there are in the existing Mesa debugger. Expecting ISL’s Tioga system to provide an editor soon is wishful thinking.

Interactive Response
Perhaps the most important EPE functionality of Cedar that seems to be slipping through the cracks is interactive response or quick turnaround for small program changes. This functionality is difficult to pursue, because it isn’t really one tidy thing but rather a number of improvements and inventions in several areas. We view it as crucial to the success of the programming environment. There isn’t a well-defined plan for how to get from here to there. We would like to see a paper of the flavor ‘‘It now takes x seconds to make the one-line fix. It should take only z seconds. We gain the (x-z) seconds via the following improvements.’’
The consequences of neglecting this problem were evident from the discussion on DOCs: As individuals become users of the Cedar environment and notice the absence of quick turnaround they solve the problem in their own way, for instance, by arranging for an interpreted form of their data. The difficulty here lies not in the arrangements themselves, which, individually, may be excellent, but rather in the possibility of a fragmented ad hoc solution to the turnaround issue that, as it grows, will lack overall structure. We hope that a uniform way of dealing with that style of interaction will be agreed upon by all, and that the resulting tools will be used everywhere.

File System
A major problem is the apparent confusion about local and remote file systems and their possible interaction with data base work and file servers. It is all very vague and fuzzy here, and there does not appear to be anyone working on making something happen. Of course, something will happen eventually, but if it is a continuation of the IFS/FTP complex then Cedar will have ducked one of the major problems in the distributed systems arena. A universal file system that worked would also improve productivity, communication among people, and reliability. This question must be resolved soon, before a lot of client code gets written. Can the problem be partitioned into smaller chunks than the whole universal file system? If so, what are the smaller pieces?
Dolphin Performance
The issue of how well Cedar should perform on Dolphins compared with Dorados is an important one, but we had no uniform opinion on it. Here are two individual ones:
Dolphins are barely faster than Altos and are a very inappropriate model for what we should be thinking of as our hardware of the future. If we decide we can’t live with just Dorados, consider loading up with Dandelions with 64K chips, and selling our Dolphins to the outside world as Lisp machines. (Deutsch)
Why does Cedar have to be so slow that it might not run on Dolphins? It should be an important goal to trim and tune it so that it will run satisfactorily on Dolphins. Cedar is supposed to be a programming environment on top of which one can easily build interesting applications. If the programming environment uses up all the cycles, you aren’t going to be able to build very interesting applications. Furthermore, a reasonable tuning facility should be part of any good programming environment. What better way to insure that you have one than by using one to clean up your own act? (Baskett)

Area-specific points

Language

The work on the language is outstanding. It appears that it is getting better with the appearance of the abstract machine interface and various extensions. Not only do these endeavors extend the power of the system, but they also potentially simplify matters. More emphasis should be placed on programming in the checked subset, so that clients can get a feel for what it’s like. We heard nothing about compiler factorization, which we believe is required for the Debugger. We also didn’t understand how the Debugger is keeping up with the language changes; is someone modifying XDebug/CoPilot to know about atoms and lists and such? Some interim facilities will be needed until the full-blown Cedar debugger arrives. We found the performance data on the Cedar runtime confusing and inconclusive; too much comparing of apples and oranges.
There are some very troubling loose ends. The Pilot interfaces all use POINTERs rather than REFs; we are skeptical that this can be concealed at the next level of client. The absence of persistent closures and VAR parameters means that a wide variety of programs will have to remain in the unsafe language; this is an invitation to disaster.
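To make the first point concrete, here is a minimal sketch of the contrast; the Buffer interface and its fields are hypothetical and are not taken from any Pilot definitions module:

    -- A POINTER-based interface in the current Pilot style: any client that
    -- dereferences a Handle must be compiled in the unsafe language, and must
    -- remember to call Free.
    UnsafeBuffer: DEFINITIONS =
      BEGIN
      Buffer: TYPE = RECORD [length: CARDINAL, firstPage: CARDINAL];
      Handle: TYPE = LONG POINTER TO Buffer;
      Create: PROCEDURE RETURNS [Handle];
      Free: PROCEDURE [h: Handle];
      END.

    -- The same interface expressed with collected REFs stays inside the checked
    -- subset; there is no Free, and dereferences can be checked.
    SafeBuffer: DEFINITIONS =
      BEGIN
      Buffer: TYPE = RECORD [length: CARDINAL, firstPage: CARDINAL];
      Handle: TYPE = REF Buffer;
      Create: PROCEDURE RETURNS [Handle];
      END.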

Architecture/Microcode
It seems clear that the Cedar programming environment or its successors will eventually run in a world where all pointers are 32 bits long rather than the 16 found in the Alto world. Substantial progress has already been made in moving into a 32 bit world for data pointers at the source level. Any new program should initially use REFs or possibly LONG POINTERs. Converting old programs and improving the microcode performance for long values remains to be done. Converting control links and frame pointers to 32 bit quantities is a much more substantial undertaking and should be deferred. Eventually, however, programs will run into various ceilings—64K of address space for frame pointers and 1024 global frame indices. Adding the multiple MDS feature to Pilot is an intermediate solution to the problem and should be evaluated as the shoe pinches. Significant time may be purchased by the aforementioned conversion of old programs to avoid short pointers.
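At the source level the choice looks roughly like this (the Node record and the variable names are hypothetical):

    Node: TYPE = RECORD [value, count: INTEGER];

    p:  POINTER TO Node;        -- 16 bits, MDS-relative; subject to the 64K ceiling
    lp: LONG POINTER TO Node;   -- 32 bits; depends on the long-pointer microcode paths
    r:  REF Node ← NEW[Node ← [value: 0, count: 0]];  -- 32 bits and collected; preferred for new programs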
Kernel and Pilot
It is vitally important that the project make a commitment to moving everything into the Pilot world as soon as possible. Maintaining multiple versions of things, and having to maintain backward compatibility, is a large and needless time-sink; it also puts off the forcing function needed to make Pilot work and to keep someone around who understands it.
Moving to Pilot is difficult for two reasons. First, it is a relatively new system and you do not have an implementor of Pilot or equivalent expert in house to resolve show-stopping problems quickly. Second, many of the useful tools that exist in the Alto world do not have equivalents running on Pilot. The Cedar facilities are supposed to fill this vacuum, but until they exist no one will move to the new environment spontaneously.
We were surprised to hear that the decision on read/write vs mapped files was still up in the air. If it is, don’t you have to reconsider your choice of Pilot? Or do you implement read/write on top?
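If the answer is the latter, a byte-at-a-time Read can in principle be layered over mapping. A rough sketch follows; Handle, position, and MapPage are placeholders for whatever the real file and space machinery provides, not actual Pilot names:

    pageBytes: CARDINAL = 512;
    PagePtr: TYPE = LONG POINTER TO PACKED ARRAY [0..pageBytes) OF CHARACTER;

    ReadChar: PROCEDURE [f: Handle] RETURNS [c: CHARACTER] =
      BEGIN
      page: PagePtr ← MapPage[f, f.position/pageBytes];  -- find or map the page holding the current position
      c ← page[f.position MOD pageBytes];
      f.position ← f.position + 1;
      END;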

Graphics
Performance seemed to be the major issue here; converting to REF ANY won’t help much. There was some question whether the goals of the graphics package and those of Cedar were properly aligned; apparently the package is much too slow, and has troubling properties with respect to roundoff/truncation phenomena. The structure seems very nice.

DOCs and User interfaces
It would seem to be more important to go on to the editing operations, rather than expanding the display facilities to include more document types. Crowther expressed mixed feelings based on his use of documents. His comment that much of what he thought of as the interface had migrated out of the definitions module into the ATOMs and REF ANYs merits some thought.

The user interface should have a representation entirely in terms of procedure calls, which better techniques (menus, command languages, ...) would make more palatable.
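For instance (all names hypothetical, with ROPE standing for whichever string type wins out), the user-visible operations would live in an ordinary interface, and menus and command languages would be reduced to different ways of composing the same calls:

    UserOps: DEFINITIONS =
      BEGIN
      CompileFile: PROCEDURE [name: ROPE] RETURNS [errors: CARDINAL];
      LoadAndRun: PROCEDURE [name: ROPE];
      SetSearchPath: PROCEDURE [directories: LIST OF ROPE];
      END.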
A design goal was to have the language span a continuum of styles of use ranging from early to late binding. A system in which some important interfaces use styles based on completely different models of the world (albeit some early, some late bound) illustrates that this goal has not been achieved.
Library
Although there is currently little central maintenance, that will probably need to change as incompatibilities and new releases of Cedar occur. Someone will have to look after rolling the packages over to the new version. The primary reason for the nonsuccess of the SDD library known as <MesaLib> is that nobody takes care of it. Obviously, the Modeller will help with this.
Data Bases
A point which must not be overlooked in the context of transaction-based file servers is that Juniper is just not usable as it stands.