Date: 18 Dec. 1981 3:17 pm PST (Friday)
From: Lynch.PA
Subject: Re: Star Performance: Some Methods and Results
In-reply-to: Your message of 12 Dec. 1981 10:16 pm PST (Saturday)
To: Ayers, GCurry.ES, DaveSmith, PerformanceInterest^, Laaser, StarInterest^.ES, Lipkie^.ES, BLee.ES, Morrison.PA, Ladner, Lauer, Wick, JWeaver, Liddle
cc: Lynch
Reply-To: Lynch

I have a completely different kind of suggestion about Star performance. Ron Smeybe from Dallas notes that Star is much better in actual use than in demos, since the first use of anything (and everything in our demos is first use) is much slower than repeated use. I have noticed the same thing, even in displaying the aux menu. Sure enough, when a context change is required it takes time, and the disk arm can be heard chugging away.

This is not a thrashing situation (unless you believe that RAM should be big enough to hold ALL of the code); it is a program-loading situation. Any program load that keeps the disk arm constantly in motion will be SLOW, and neither packaging nor improving sequential disk transfers will help it.

It is clear that we can do substantially better, but it is not clear just what the problem is. It is not even clear whether this is a Star problem or a Pilot problem. An early step in attacking it is to determine the sequence of file locations being visited during a load and to analyze just why that sequence happens.

This disk-arm banging is a very significant factor in the speed of Star. I suggest that you all listen for the disk arm during significant waits; you will find it churning away.

Bill

Date: 31-Dec-81 17:03:33 PST (Thursday)
From: Daniels.PA
Subject: Impact of new instruction set
To: PerfProject^.ES
Reply-To: Daniels
cc: Weaver, Liddle, Wick, Sweet, Daniels

I have run a simple experiment to get some feel for how much the new instruction set will buy in terms of performance. For the test cases I tried, it averages about a 50% speedup in the steady-state case and about 14% when switching contexts. Full results are filed on [McKinley]Perf>Inst Set Impact.

-- Andy. --
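As a rough sketch of the analysis step Bill proposes above (determining the sequence of file locations visited during a load and summarizing the arm movement it implies), the Python fragment below tallies how often successive references force a seek and how far the arm moves. The trace format (one cylinder number per line, in visitation order), the file name load-trace.txt, and the function names are assumptions made purely for illustration; they are not part of Star, Pilot, or any tool mentioned in this thread.

    #!/usr/bin/env python3
    """Illustrative sketch only: summarize disk-arm movement from a trace of
    disk locations visited during a program load."""

    import sys
    from collections import Counter


    def summarize_seeks(cylinders):
        """Given the ordered cylinder numbers touched, report how often the
        arm had to move and how far, versus how often successive references
        stayed on the same cylinder (no seek)."""
        seeks = 0
        total_distance = 0
        distances = Counter()
        for prev, cur in zip(cylinders, cylinders[1:]):
            dist = abs(cur - prev)
            if dist > 0:
                seeks += 1
                total_distance += dist
                distances[dist] += 1
        refs = len(cylinders)
        print(f"references examined:     {refs}")
        print(f"seeks (arm moved):       {seeks}")
        if refs > 1:
            print(f"fraction needing a seek: {seeks / (refs - 1):.0%}")
        if seeks:
            print(f"average seek distance:   {total_distance / seeks:.1f} cylinders")
            print("most common seek distances:", distances.most_common(5))


    if __name__ == "__main__":
        # Hypothetical input: a text file with one cylinder number per line,
        # in the order the locations were visited during the load.
        trace_file = sys.argv[1] if len(sys.argv) > 1 else "load-trace.txt"
        with open(trace_file) as f:
            cylinders = [int(line.split()[0]) for line in f if line.strip()]
        summarize_seeks(cylinders)

If most references in such a trace land on different cylinders, that would support the diagnosis above: the load is seek-bound, and neither packaging nor faster sequential transfers would remove the waits.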