*start* 01291 00024 US Date: 29 July 1981 10:50 am PDT (Wednesday) From: Stewart.PA Subject: More IFSs for CSL To: Taft, Morris, Ornstein cc: Baskett, MBrown, Gifford, Stewart Another modest proposal. We need another IFS very soon.
1) Ivy is often full these days, because the limit of 8 FTP users is quickly exhausted by PreCascade users (who don't close their FileTool connections I guess -- the problem can only get worse with the new Gifford Special directory system).
2) For a similar reason, no one can get any work done when Ivy is broken.
I do not necessarily propose any more disk space, just more cycles. Here are some possibilities:
Break Ivy into two pieces, each with half the number of disks, and duplicate a few very important directories: Cedar, Cedarlib, APilot, Mesa.
Use Juniper B for the new IFS. (I hesitate on this one because I want to use Juniper B for a Voice file server...)
Use Juniper itself. We can make better use of the machine as an IFS than as a Juniper (faster). The Cedar DB can (yes?) use Leaf instead of Pine, or a local file perhaps.
Move all of Juniper's files onto the new IFS.
Run IFS on a Dorado, with several Tridents. (If the Cedar users are going to use IFS services heavily, let them provide a machine.)
*start* 01051 00024 US Date: 10 Aug. 1981 11:59 am PDT (Monday) From: Taft.PA Subject: Re: More IFSs for CSL In-reply-to: Stewart's message of 29 July 1981 10:50 am PDT (Wednesday) To: Stewart cc: Taft, Morris, Ornstein, Baskett, MBrown, Gifford I've been hoping not to need to establish another IFS, since much of the Cedar load will move to Alpine when it is ready, and since splitting the existing Ivy file system would be a big administrative hassle. However, if the load on Ivy is getting unbearable, we may have no choice. IFS runs only on Altos, not on D-machines emulating Altos. Modifying IFS to run on a Dorado would require a lot of work (probably at least a month of my time) and is not worth the effort.
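The connection-exhaustion problem Stewart describes (8 FTP slots held open by idle FileTool connections) can be sketched in modern terms. This is an illustrative Python sketch, not PARC code: the 8-connection limit comes from Stewart's message, the 15-second idle timeout is the fix MBrown floats later in this thread, and everything else (names, structure) is an assumption.

```python
# Sketch: a capped connection table whose slots are reclaimed after an
# idle timeout, so clients that never Close don't exhaust the server.
# MAX_CONNECTIONS and IDLE_TIMEOUT come from the thread; the rest is
# hypothetical illustration.
import time

MAX_CONNECTIONS = 8
IDLE_TIMEOUT = 15.0  # seconds without a keystroke or mouse button

class ConnectionTable:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_activity = {}  # connection id -> time of last activity

    def open(self, conn_id):
        self.reap()
        if len(self.last_activity) >= MAX_CONNECTIONS:
            raise RuntimeError("IFS is full")  # what Ivy's users were seeing
        self.last_activity[conn_id] = self.now()

    def touch(self, conn_id):
        # Called on every keystroke or mouse button from this client.
        self.last_activity[conn_id] = self.now()

    def reap(self):
        # Drop connections that have been idle longer than the timeout.
        now = self.now()
        for conn_id, t in list(self.last_activity.items()):
            if now - t > IDLE_TIMEOUT:
                del self.last_activity[conn_id]
```

With this policy, eight idle FileTool users stop blocking the ninth: the reaper frees their slots after 15 quiet seconds instead of holding them until an explicit Close.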
Of your other alternatives, converting the existing Juniper server to be an IFS sounds the most attractive to me, so long as the present Juniper users aren't made too unhappy by this. We don't have the hardware to build another file server; and, like you, I am reluctant to give up the Juniper B machine for that purpose. Ed *start* 00429 00024 US Date: 10 Aug. 1981 12:53 pm PDT (Monday) From: Ornstein.PA Subject: Re: More IFSs for CSL In-reply-to: Taft's message of 10 Aug. 1981 11:59 am PDT (Monday) To: Taft cc: Ornstein Maybe we ought to ask in Dealer this week:
1. When do the Alpine folks expect to have a real server alive?
2. Who would be hurt if we converted Juniper to an IFS?
3. How badly do people feel we NEED another IFS?
Severo *start* 00894 00024 US Date: 12 Aug. 1981 10:45 am PDT (Wednesday) From: MBrown.PA Subject: Re: Another IFS... In-reply-to: Taft's message of 11 Aug. 1981 5:31 pm PDT (Tuesday) To: Taft cc: Stewart, Morris, Ornstein, Baskett, MBrown, Gifford I have heard a rumor that Bringover uses two connections when it runs. Maybe this can be fixed. Maybe the file tool can be fixed to release connections sooner, say after 15 seconds without a keystroke or mouse button. Since the cost in valuable person-time (Ed Taft's especially) of establishing a new IFS server is high, I think we should really establish the need before doing it. Decommissioning Juniper would reduce our ability to develop database applications in the time from now until Alpine comes along. I am in favor of reducing Juniper to a single T-300 drive, if necessary, by moving customers of normal files (FTP) to Ivy. --mark *start* 00549 00024 US Date: 13 Aug. 1981 12:31 pm PDT (Thursday) From: Taft.PA Subject: IFS 1.34 To: IFSAdministrators^ Reply-To: Taft Version 1.34 of the IFS software is released. The only change is that a bug in the backup system has been fixed. The symptom of the bug was that the server would perpetually refuse connections because "IFS is full".
This bug was most easily provoked by Leaf activity concurrent with the nightly backup. As usual, please obtain [Maxc]IFS.run, IFS.syms, and IFS.errors. The documentation is unchanged. *start* 01340 00024 US Date: 19 Aug. 1981 6:54 pm PDT (Wednesday) From: Taft.PA Subject: Planning for new IFS To: RWeaver cc: Fiala, Kolling, Ornstein, Taft Would you please make new accounts and disk usage listings for Ivy (unless your latest ones are less than a month old). Then break down the disk space used according to organization (CSL, ISL, SSL, and other), and within organizations break down individual vs. project accounts. Also, break down the CSL project total according to Cedar-related and non-Cedar projects. The idea is to split Ivy into two roughly equal parts such that the load will be reasonably balanced between the two parts. My initial proposals are to put all personal accounts on one IFS and project accounts on the other, or to put all CSL personal and project accounts on one IFS and all non-CSL accounts on the other. However, there may be other arrangements that are better; you may think of one, since you are more familiar with the Ivy accounts situation than I am. Also, many files on Juniper will be moved to one or the other of the IFSs, so that Juniper can be converted from two T-300s to one. You should get together with Karen to get a rough idea of what these files are, and fold them into the breakdowns I described above. (Remember that one IFS page is equivalent to two Juniper pages.) Ed *start* 00949 00024 US Date: 20 Aug. 1981 10:49 am PDT (Thursday) From: Mitchell.PA Subject: Re: Name that file server In-reply-to: Your message of 19 Aug. 1981 3:30 pm PDT (Wednesday) To: Taft cc: Mitchell Ed, We are going to purchase a number of used Dysan T300 packs from XCS at $375 each. This is a bargain. The question is how many to buy: we are going to install a new IFS with 3 drives, and next year we are buying 6 drives for IFSs.
If you could tell me the normal ratio of (numberOfPacks/IFSDrive), that would help to determine how many packs to get. Here are my entries for the new IFS name:
IBM (I can see it in Datamation now: "rumor has it that Xerox PARC has installed an IBM file server...")
Itty (we could name its Alto "Bitty" and the system could be the Itty Bitty IFS)
Icy (Datamation rumor: " . . . Xerox PARC has installed a cryogenic file server")
Icon, Ikon (obvious reference)
Idea
Ink or Inky (since it is in ISL)
*start* 01557 00024 US Date: 20 Aug. 1981 1:27 pm PDT (Thursday) From: kolling.PA Subject: Juniper files To: JuniperUsers^, CSL^, ISL^ cc: RWeaver, kolling, taft, fiala, ornstein Reply-To: kolling In the next few weeks, many of you will have to move your files which are now on Juniper to the new IFS or to Ivy, so that we can remove a Trident disk drive from Juniper. In order to adjust the space allocations for these IFSes properly, we need to know how much room to allow for these files. Presently, there are many "junk" files on Juniper, so we are not going to automatically hand out new allocations that will allow you to move all your Juniper files directly to the IFSes. Instead, if you "own" a Juniper directory, please send an estimate of the allocation you really need to me, by the end of Tuesday. If I don't hear from you by then, I will assume 0, and you will be in deep water when the time comes to move your files. If you need to keep your files on Juniper instead of an IFS (i.e., you are using transactions, etc.), please let me know. If you have too many non-junk files to easily estimate the space that you really need for them, I have a small program which will find the total space in use for a set of files; see me for details. Gory details of how to move your files will be sent out later.
If you have lots of junk on Juniper and want to start cleaning it up now, you will find it more convenient to use FTP rather than Chat, as Juniper Chat has a bug which will abort your connection from time to time. Thanks, Karen *start* 00481 00024 US Date: 20 Aug. 1981 1:35 pm PDT (Thursday) From: Taft.PA Subject: Re: Name that file server In-reply-to: Your message of 20 Aug. 1981 10:49 am PDT (Thursday) To: Mitchell cc: Taft The ratio of packs to drives used by an IFS is about 3:1. We presently appear to have 7 uncommitted T-300 packs. Our present inventory of T-300 packs is:
5 - Ivy primary
8 - Ivy backup
1 - Ivy scratch
2 - Juniper primary
8 - Juniper backup
2 - Juniper test
7 - unused
Ed *start* 00609 00024 US Date: 20 Aug. 1981 5:10 pm PDT (Thursday) From: Hains.EOS Subject: Indigo To: Taft.PA cc: Eossupport, Hains Ed, Not be I to stand in the way of progress. Feel free to acquire the Alto name "Indigo" if it fits in so nicely. Fortunately, I have a government publication that I got from Dusty Rhodes that has thousands of names of colors, so we won't run out. If I see anything clever that's close to purple, I'll let you know. Would you please ask them to change ours to "Yellow" when they make the switch. Thanks, chuck P.S. We also own "Purple" if you'd rather have that. *start* 02553 00024 US Date: 20 Aug. 1981 6:40 pm PDT (Thursday) From: Taft.PA Subject: Re: Name that file server In-reply-to: My message of 19 Aug. 1981 3:30 pm PDT (Wednesday) To: CSL^, ISL^ Reply-To: Taft The volume of suggestions was truly astounding, but has finally tailed off. Here is a summary. There were many good suggestions, as you can see. The number of people suggesting a particular name is given in parentheses.
Holly - as in "the Holly and the Ivy" (a line from a Christmas carol)
Iago (2) - from the Bard
Ibidem
IBM - I can see it in Datamation now: "rumor has it that Xerox PARC has installed an IBM file server..."
Ice (3)
Iceplant
Ichor - blood of the gods
Icky
Icon (4)
Icy (2) - Datamation rumor: " . . . Xerox PARC has installed a cryogenic file server"
Ida - a mountain in Crete
Idea (2) - a replica of a pattern, an image recalled by memory
Ideal (2) - a standard of perfection, beauty, or excellence
Idem
Ides
Idi
Idio - meaning "personal", as in idiosyncratic
Idle (2)
Idol (2) - any likeness of something
Idyll - pastoral poem
Igloo (2) - cold storage
II
Ikon
Ilex (4) - holly. Then we can have "The Holly and the Ivy". (traditional Christmas carol)
Ill
Impala - do we want an old Chevy?
Index - a list of restricted or prohibited material
Indi - resembling indigo, say, purple, for example
Indian
Indra - Vedic deity of thunder and rain
Indy
Ink (2) - because of ISL's function
Inky
Instar - to stud or cover with stars
Io (3) - one of the moons of Jupiter
Ion (3)
Iota (3)
Iou
Ipso
Irene
Iron (3) - in honor of Star; strong and healthy: robust
Ishtar
Island
It
Italic
Itea - hollyleaf sweetspire (so says the WGB)
Item
Itty - we could name its Alto "Bitty" and the system could be the Itty Bitty IFS
Ivan
Iwis - meaning "certainly" or "knows for certain"
Ixia - African corn lily
Kelp
Ream - print shop analogy
Rhus - poison oak and poison ivy
Sumac - another poison
Ysaye - just a name
The following names were suggested but were already assigned to other machines:
Ibis
Ibid (2) - another IFS
Icarus - IC tool
Indigo (3) - because it will be in the Purple Lab
Intrepid
Isis
Isle
Ivory (3) - because it's easy to confuse with Ivy
Oak (3) - another poison
HOWEVER: the present owner of "Indigo", Chuck Hains, has generously agreed to donate the name to us. I think Indigo is clever without being either corny or obscure; and besides, it's simply a nice-sounding name. Bob Taylor agrees. Therefore, the new file server will be called ======>> Indigo <<====== Thank you for all your good suggestions. Ed *start* 01035 00024 US Date: 26 Aug.
1981 3:07 pm PDT (Wednesday) From: RWeaver.PA Subject: IFS Accounting Program Specification To: Boggs, Taft cc: Fiala, RWeaver I would like to have the IFS accounting program be able to produce a project group listing alphabetical by project group and alphabetical by account name within each project group. Additionally, I would like this listing to itemize the owner of each account and the current disk usage and allocation for each user with project group totals. I would also like to have the option of running the program for all groups or selected groups. It seems that in order to do this, two things must be added to the user account information block (i.e. Project Group and Owner entries). There is a User Group entry now but this often has no bearing on the project to which an individual is assigned. I appreciate your consideration of this request and will be happy to discuss any alternate course which will assist me in managing the Ivy and Indigo file servers. Ron... *start* 02711 00024 US Date: 26 Aug. 1981 3:50 pm PDT (Wednesday) From: Taft.PA Subject: Re: IFS Accounting Program Specification In-reply-to: RWeaver's message of 26 Aug. 1981 3:07 pm PDT (Wednesday) To: RWeaver cc: Boggs, Taft, Fiala, Schroeder, Birrell, Levin, Kolling I think this is a reasonable request. The current IFS User Groups are not intended to be used for accounting purposes, but rather just as a basis for file protections. What is needed is a way to associate IFS directories and users with accounting groups (or "project groups", as I guess they're called on Maxc). The question is how to implement it. One way (and probably the easiest way) is to change IFS to keep, for each directory, the name of the accounting group the directory belongs to. Then change the Accountant program to retrieve this additional information from the IFS, and produce the summaries you require. 
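The summary pass Taft describes (the Accountant retrieving a per-directory accounting group and producing RWeaver's listings) can be sketched as follows. This is a hypothetical modern Python illustration, not the BCPL or Mesa Accountant: the record fields (name, owner, group, usage, allocation) paraphrase RWeaver's request and are assumptions, not the real IFS data layout.

```python
# Sketch: group accounts by accounting ("project") group, sort groups and
# account names alphabetically, and compute per-group usage/allocation
# totals -- the listing RWeaver asks for. Field names are illustrative.
from collections import defaultdict

def group_summaries(accounts):
    """accounts: iterable of dicts with keys
    name, owner, group, usage, allocation (disk pages)."""
    groups = defaultdict(list)
    for acct in accounts:
        groups[acct["group"]].append(acct)
    report = {}
    for group in sorted(groups):  # alphabetical by project group
        members = sorted(groups[group], key=lambda a: a["name"])
        report[group] = {
            "accounts": [(a["name"], a["owner"], a["usage"], a["allocation"])
                         for a in members],
            "total_usage": sum(a["usage"] for a in members),
            "total_allocation": sum(a["allocation"] for a in members),
        }
    return report
```

Restricting the run to selected groups, as RWeaver also wants, would just filter the input before grouping.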
In the long term, a better scheme is to use Grapevine to maintain accounting group membership (just as we eventually intend to use Grapevine to maintain the group memberships for IFS protections). I'm strongly in favor of this; however, it will require more work. There are two ways to do this: a) Modify the present Accountant program, which is written in BCPL. In addition to putting in the code for generating summaries (which should be quite straightforward), this will require implementing and installing a BCPL version of the GrapevineUser package. While we will eventually have to do this for IFS, it will require a fair amount of work; and it's likely that a GrapevineUser package for IFS will be considerably different from a GrapevineUser package for a standalone program such as Accountant. b) Transliterate the Accountant program into Mesa. The main program itself is only 3 or 4 pages of code; it depends on several packages (primarily Pup and FTP) for which corresponding packages exist in Mesa. My guess is that an experienced Mesa programmer could transliterate the Accountant program into Mesa in 2 days or less, and that the new features (which would depend on the existing Mesa GrapevineUser package) would take an additional 3 days or so. Now, I'm not in a position to undertake this project right now, though I might be persuaded to do it as a weekend hack. Perhaps we could interest someone else in doing it...... In any event, this is unlikely to get done in time to help you with the Ivy/Indigo reconfiguration. If lack of any automatic aids is an intolerable burden on you then let's talk about it further. Ed p.s. I've enlarged the distribution of this message to include other people who might have ideas or suggestions or (heaven forbid) spare programmer cycles. *start* 01302 00024 US Date: 1 Sept. 
1981 10:45 am PDT (Tuesday) From: Pier.PA Subject: Purple Lab planning To: RWeaver, Taft, ABell cc: Geschke, Pier, Ornstein I have just learned that power installation for Indigo IFS is scheduled for next week or so. This conflicts with overall plans for the Lab which have been dormant for a while. We need to get a comprehensive floorplan with power installation that encompasses the possible uses of the Lab NOW, before Facilities goes in (details below). I propose that we meet on Wednesday at 9:30 AM in the ISL conference room to resolve this and give Ron Weaver the outline of a floor plan for Facilities to implement. Details: There are three Dorados installed now, with rack space for a fourth 110V Dorado. There are a few Dolphins temporarily installed in the room. Our last plan called for installing outlets for 20 Dorados and 20 Dolphins; Dolphins would likely have to be rack mounted in order to get that many into the Lab. We now need 208 power for Dorados, and also for T300s for the Indigo IFS. It is unlikely that the full complement of machines could go into that room. There is enough air conditioning for them, but the power and noise constraints are probably more severe. However, we need to allow plugs and power cables for that number. *start* 01008 00024 US Date: 1 Sept. 1981 1:49 pm PDT (Tuesday) From: Pier.PA Subject: Re: Purple Lab planning In-reply-to: Ornstein's message of 1 Sept. 1981 12:09 pm PDT (Tuesday) To: Ornstein cc: Pier, Taft Yes. Mike was supposed to have a plan for this room in place long ago. I have talked to him about it several times over the past 6 months. Nothing got done. I believe he hasn't done anything for the usual reason/problem with Mike: he wants it to be perfect and immutable, while I want it to be flexible. In any case, he is going on vacation and Ron seems much more suited to marshalling this effort. Ed told me Ron was supposed to consult with me before getting Facilities to do anything, but I haven't heard from him. 
That's OK, as long as we can settle up in the next two days before I leave as well. Alan Bell also has a stake in this as half of that space "belongs" to Lynn's group. I expect him to be difficult in his own inimitable style. You can tell how much I'm enjoying this job. Ken *start* 01206 00024 US Date: 2 Sept. 1981 11:56 am PDT (Wednesday) From: Kolling.PA Subject: moving To: rweaver, taft cc: Kolling, mbrown, fiala My impression of the "big move" is that Ivy and Indigo will be thrown up in the air one weekend and come down with Ivy's old files redistributed between the two. Then a new one-drive empty juniper server (JuniperX or something) will be established on the spare juniper machine, to coexist with the existing juniper server for about a week, and all the files on Juniper will be moved (by me or you and users) to Ivy, Indigo, and JuniperX, as appropriate. At the end of that one week or so process, JuniperX will become Juniper and the old Juniper packs will get tucked into a cabinet for six months or so and then get thrown into the scratch pool or whatever. I think we will need a week to let the files migrate off Juniper because Brownieing a large directory takes overnight, even with no other connections in use. Is this what you plan? I thought you and I could set up Brownies for the big directories that are going to move en masse, like , on our machines in the evenings, and other users would have to port their files off themselves. Karen *start* 01806 00024 US Date: 2 Sept. 1981 4:54 pm PDT (Wednesday) From: Pier.PA Subject: Purple Lab Meeting Summary To: RWeaver, Taft, ABell, Ornstein, Overton, Winfield cc: Geschke, Pier Planning thru end of 1982, purple lab to contain max of 20 Dorados, 18 Dolphins, 3 Altos (2 IFS, 1 Midas), and 8 T-300.
Machine distribution estimates:
Dorados: 8 VLSI, 10 ISL, 2 "extra"
Dolphins: 12 VLSI, 6 ISL
IFS: 1 CSL/ISL, 5 drives; 1 VLSI, 3 drives
Total power consumption approximately 73 KW:
Dorados: 4 @ 110 VAC, 16 @ 208 VAC, 2.5 KW each including disks
Dolphins: 0.8 KW each
Altos: 0.6 KW each
T-300: 0.8 KW each
The existing power service has a 400-amp (per phase) main breaker currently operating at less than 35 amps per phase. More than half of this is the 3 Dorados presently plugged in. The existing two panels are completely full (i.e., there are no more breaker positions); however, it appears that a large number of the existing circuits are not being used. Floor Plan: Propose: close off the north wall door, fill in the door swing recess with false floor. Install 5 T300, two Altos, then 3 more T300 along the north wall. Within the next few weeks, get enough 208 single phase and 110 single phase power installed to accommodate at least 5 T300s and an Alto along the existing north wall. Get the room power distributed so that 10 Dorado racks can be installed bolted together across the middle of the room with the back edge of this rack row about 10 feet in from the north wall. A similar row of two-high or three-high Dolphins (TBD) should be installed with their back edge 17 feet from the north wall, that is, 7 feet from the back edge of the Dorado row. 3 Alto terminals and the existing Midas Altos will be located in the southeast corner where the existing Midas Alto stands now. *start* 00345 00024 US Date: 2 Sept. 1981 4:59 pm PDT (Wednesday) From: Pier.PA Subject: PS to Purple Lab Meeting Summary To: RWeaver, Taft, ABell, Ornstein, Overton, Winfield cc: Geschke, Pier Air Conditioning: seems adequate for current plan. Noise: possibly a major problem. Noise absorbent material or curtains need to be investigated. *start* 00585 00024 US Date: 8 Sept.
1981 10:47 am PDT (Tuesday) From: Taft.PA Subject: Name switch; new IFS To: NetSupport.wbst cc: Hains.eos, RWeaver, Taft PARC/CSL and PARC/ISL will shortly be installing another IFS, to take some of the load off Ivy. The new IFS will be called Indigo, and will be located in the "Purple Lab" (35-2016). Chuck Hains is the present owner of the name Indigo, but has generously agreed to give it up. Please make the following changes:
DELETE:
ContinentalOp = 3#44#
Indigo = 12#136#
ADD:
Indigo = 3#44#
Mocha = 12#136#
Thanks very much. Ed *start* 01675 00024 US Date: 21 Sept. 1981 1:19 pm PDT (Monday) From: RWeaver.PA Subject: IVY Accounts To: Admin^.pa, PARC-Place^.pa, CSL^.pa, ISL^.pa, ICL^.pa, OtherPA^.pa, OtherParc^.pa cc: Reply-To: RWeaver IVY is to be split into two servers, IVY and INDIGO. Ivy will consist primarily of personal accounts and Indigo will consist entirely of project accounts. I plan to have the new file server Indigo up and running on the 6th of October. Below is a list of project accounts which will be moved from Ivy to Indigo. AIS, AlphaLaurel, AlphaMesa, Alpine, Alto-1822, AltoFonts, AltoTape, AmHer, APilot, Audio, BasicDisks, Bravo, Butterfield, Callup, Cascade, CDS, Cedar, CedarDB, CedarDev, CedarDocs, CedarGraphics, CedarKernel, CedarLang, CedarLib, CedarPublic, CedarUsers, CedarViewers, CHC, Chipmonk, Cholla, CoPilot, CSL-Archives, CSL-Notebook-Entries, CSL-Notebook, CSLDocs, D0, D0Docs, D0Logic, D0Source, DA, Datastudy, Default-User, Defunct, Dict, DocDoc, Dorado, DoradoBuildDocs, DoradoDocs, DoradoDrawings, DoradoLogic, DoradoSource, Dover, EDSupport, Fonts, Forms, Griffin, Guest, IFS, InterDoc, ISLDocs, JaM, Jasmine, JDS, Juniper, KRL, Laurel, Library, Maleson, Maxc, Oracle, Palm, PIE, Pilot, Pimlico, Poplar, Portola, Press, PressFonts, Puffin, Registrar, RWPilot, Sakura, SPG, Spruce, SSS, Straw, TapeServer, TeleSil, Tex, Tioga, University, VAX, WF, XMesa All personal accounts (and some project accounts) will remain on Ivy.
Personal accounts will be created on Indigo with zero allocation, enabling current Ivy users to login without having to resort to the Guest account. I'll be happy to answer any questions regarding this change. Ron... *start* 00510 00024 US Date: 21 Sept. 1981 1:38 pm PDT (Monday) From: RWeaver.PA Subject: IVY News To: AllPA^.pa cc: Reply-To: RWeaver If you have an interest in what's happening with Ivy when it is split into two file servers, and did NOT receive my previous message (Subject: IVY Accounts) which was sent to selected DLs, please respond to this message and I will send you a copy of that earlier message. If you don't know what Ivy is, or know and are a non-user, you may ignore this message. Ron... *start* 01587 00024 US Date: 26 Sept. 1981 12:04 pm PDT (Saturday) From: Taft.PA Subject: LeafLogin To: Wobber cc: Levin, Schmidt, Taft Eric is starting to use Leaf for remote file access by some of his System Modelling programs. He has run into a serious performance problem: opening a file takes so long that using Leaf to access large numbers of files is impractical. As far as I can tell, the reason for the bad performance is that Leaf is doing a Login on every open. As you probably know, Login is very slow because it uses the disk stream mechanisms to access the user's DIF. One possible way to fix this is for the Leaf server to remember the current user name and password on a per-connection basis (much as the FTP server does now) and do the Login only if the client's credentials change. However, it seems to me that in the long term, a better solution is for the Login and Connect primitives to keep a cache of recently-used [name, password] pairs. This will be essential when IFS is converted to use Grapevine for authentication; it seems better to centralize this caching rather than having each server (FTP, Leaf, etc.) do it separately. So that is what I propose to do.
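The cache Taft proposes (recently-used [name, password] pairs, so the slow DIF lookup runs only on a miss) can be sketched in modern Python. This is an illustrative sketch, not the BCPL IFS code: the LRU policy, capacity, and the `slow_login` stand-in for the disk-stream DIF check are all assumptions.

```python
# Sketch: an LRU cache in front of an expensive login check, so repeated
# opens with the same credentials touch the disk only once.
# slow_login stands in for IFS's disk-stream DIF lookup (hypothetical).
from collections import OrderedDict

class LoginCache:
    def __init__(self, slow_login, capacity=32):
        self.slow_login = slow_login   # the expensive check against the DIF
        self.capacity = capacity
        self.cache = OrderedDict()     # (name, password) -> bool

    def login(self, name, password):
        key = (name, password)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        ok = self.slow_login(name, password)
        self.cache[key] = ok
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return ok
```

Centralizing this in Login/Connect, as Taft suggests, means FTP and Leaf both benefit without each keeping its own per-connection copy; a real server would also want entries to expire when a password changes.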
I wouldn't even need to tell you about this, except that in examining LeafLogin I am extremely puzzled about what is going on. Rather than just calling Login, it is calling ReadDIF and Password directly and is doing some strange dance involving encrypted passwords. Can you explain what is going on here? Would any clients be affected if I changed it to do a simple Login? Thanks. Ed *start* 00567 00024 US Date: 28 Sept. 1981 2:36 pm PDT (Monday) From: Wobber.PA Subject: Re: LeafLogin In-reply-to: Taft's message of 26 Sept. 1981 12:04 pm PDT (Saturday) To: Taft cc: Wobber, Levin, Schmidt That encrypted password stuff is a kludge ... I agree. The intent was, I think, to allow a LOGON using either the encrypted or unencrypted password. I don't think anyone uses it (95% sure). If they are using it, they really shouldn't be ... but it might be worth a call to Butterfield. You're welcome to do away with it as far as I'm concerned. /Ted *start* 00895 00024 US Date: 28 Sept. 1981 3:58 pm PDT (Monday) From: Taft.PA Subject: Re: LeafLogin In-reply-to: Wobber's message of 28 Sept. 1981 2:36 pm PDT (Monday) To: Wobber cc: Taft, Levin, Schmidt I guess Steve didn't really understand about encryption..... Allowing the client to login with the (constant) encrypted form of the password is no safer than sending the password in the clear. In fact, it is less safe, since theft of the encrypted form of the password (say, by pawing around on the owner's Alto disk) now enables the thief to gain access. (The whole idea of one-way encrypting passwords in the first place was so as not to require extraordinary measures to protect their encrypted form.) Anyway, I'd like to remove this "feature", but I do want to give clients warning. Can you give me a list of all people responsible for Leaf client implementations? Thanks. Ed *start* 00908 00024 US Date: 29 Sept. 1981 11:25 am PDT (Tuesday) From: Wobber.PA Subject: Re: LeafLogin In-reply-to: Taft's message of 28 Sept. 
1981 3:58 pm PDT (Monday) To: Taft cc: Wobber, Levin, Schmidt Ed - I would be a happy man if I had a list of all the people responsible for Leaf client implementations. Here are those I know about (I wager you know about them too):
Smalltalk Steve Weyer (?)
Grapevine/Laurel Roy Levin/Doug Brotz
BravoX ---- (Does not use 'encryption'.)
Alto Games Clint Parker (.WBST)
OfficeTalk (Nobody would touch it. Does anyone use it??!!)
Lisp (?) Bill VanMelle
Other people who have coded Leaf clients in the past or who might know other people who are using Leaf include:
Bob Dattola (.WBST)
Bob Lansford (.EOS)
Skip Ellis
Sidney Marshall (.WBST)
Kerry LaPrade (.EOS)
Brian Reid
Dan Swinehart
Please do not assume that this list is complete. /Ted *start* 01492 00024 US Date: 29 Sept. 1981 12:00 pm PDT (Tuesday) From: Taft.PA Subject: Leaf Login To: Weyer, Levin, Brotz, VanMelle, Ellis, Reid, Swinehart, CParker.wbst, Dattola.wbst, Marshall.wbst, Lansford.eos, LaPrad.eos cc: IFSAdministrators^, Wobber, Taft Reply-To: Taft This message is addressed to implementors of Leaf client software. Please pass it on to anyone I have forgotten. I am about to embark on a general cleanup and unification of IFS's authentication and access control facilities. It is likely that the next release of IFS will (optionally) use Grapevine for these functions; this will simplify administration, provide more flexible file protections, and enable elimination of Guest accounts. As part of this cleanup, I would like to eliminate a (possibly unused) facility in Leaf. As part of a LeafOpen request, the client passes credentials consisting of a user name and password. The Leaf server optionally permits the password to be "encrypted" rather than presented in the clear. This feature was ill-conceived and, as it turns out, is LESS secure than presenting the password in the clear! Therefore I would like to eliminate the "encrypted" password option.
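Taft's security argument (in his 28 Sept. reply to Wobber and restated here) is that a server willing to accept the constant one-way-encrypted password as a credential makes the stored form password-equivalent: stealing it off an owner's Alto disk is then enough to log in, defeating the whole point of one-way encryption. A modern sketch of the flaw, with `hashlib.sha256` standing in for the Alto-era one-way encryption (an assumption, not the historical algorithm):

```python
# Sketch: why accepting the stored "encrypted" password as a credential
# is worse than cleartext. sha256 is a stand-in for the period's one-way
# encryption; user names and passwords here are invented.
import hashlib

def encrypt(password):
    # Constant one-way form of the password.
    return hashlib.sha256(password.encode()).hexdigest()

STORED = {"taft": encrypt("ivy1981")}   # what the server keeps on disk

def login_cleartext_only(user, password):
    # Sound scheme: the stored hash alone cannot be replayed.
    return STORED.get(user) == encrypt(password)

def login_accepting_encrypted(user, credential):
    # The ill-conceived option: credential may be the encrypted form itself,
    # so a stolen hash works as well as the password ("pass the hash").
    return STORED.get(user) in (encrypt(credential), credential)

# A thief who reads STORED never learns the password, but the permissive
# server lets the stolen hash in anyway.
stolen_hash = STORED["taft"]
```

Under the cleartext-only rule the stolen hash is useless; under the permissive rule it grants access, which is exactly why Taft wants the option removed.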
(It's possible that IFS will eventually introduce encryption-based authentication using Grapevine facilities; but the protocol for this will be considerably different than the present Leaf facility for "encrypted" passwords.) Please let me know if this change will cause any problems. Ed *start* 00698 00024 US Date: 29 Sept. 1981 1:39 pm PDT (Tuesday) From: DonWinter.EOS Subject: Re: Leaf Login In-reply-to: Taft.PA's message of 29 Sept. 1981 12:00 pm PDT (Tuesday) To: Taft.PA cc: DonWinter, Hains, Marlett, Schroeder.PA "It is likely that the next release of IFS will (optionally) use Grapevine for these functions; this will simplify administration, provide more flexible file protections, and enable elimination of Guest accounts." Does this mean that in order to use remote File Servers we will HAVE to be using a Grapevine server for our registry? How else will we be able to login to (say) IVY, if our Names and Passwords are not authenticated on the entire Internet? Don *start* 03488 00024 US Date: 29 Sept. 1981 2:14 pm PDT (Tuesday) From: RWeaver.PA Subject: Ivy/Indigo Schedule (Tentative) To: IvyAccounts^.pa cc: RWeaver Reply-To: RWeaver This is a tentative schedule for reconfiguring Ivy into two file servers, Ivy and Indigo. We won't be able to begin until some construction work (now under way) is completed, so the schedule may slip. Project files will be loaded onto Indigo from Ivy backup disk packs beginning early Wednesday morning, Oct. 7 at 06:00. From this time forth the Project directories on Ivy should be considered READ-ONLY, since any files written after that time will not be moved to Indigo and will therefore be lost. 
Affected project directories are: AIS, AlphaLaurel, AlphaMesa, Alpine, Alto-1822, AltoFonts, AltoTape, AmHer, Audio, Bravo, Butterfield, Callup, Cascade, CDS, Cedar, CedarDB, CedarDev, CedarDocs, CedarGraphics, CedarKernel, CedarLang, CedarLib, CedarPublic, CedarUsers, CedarViewers, CHC, Chipmonk, Cholla, CoPilot, CSL-Archives, CSL-Notebook-Entries, CSL-Notebook, CSLDocs, D0, D0Docs, D0Logic, D0Source, DA, Datastudy, Default-User, Defunct, Dict, DocDoc, Dorado, DoradoBuildDocs, DoradoDocs, DoradoSource, Dover, EDSupport, Fonts, Forms, Griffin, Guest, IFS, InterDoc, ISLDocs, JaM, Jasmine, JDS, Juniper, KRL, Laurel, Library, Maleson, Maxc, Oracle, Palm, PIE, Pilot, Pimlico, Poplar, Portola, Press, PressFonts, Puffin, Registrar, RWPilot, Sakura, SPG, Spruce, SSS, Straw, TapeServer, TeleSil, Tex, Tioga, University, VAX, WF, XMesa. It is anticipated that the reload may take between 10 and 20 hours, so don't expect to be able to write any files to these accounts on Indigo until Thursday morning. On Thursday morning, October 8, Ivy will be taken down to reload Personal directories onto new system disk packs (thereby eliminating all the above project directories). This will begin at 06:00 and could take another 10 to 20 hours. Ivy will be down during this period. In conjunction with this reload, the following Project accounts will be loaded onto Indigo: APilot, BasicDisks, DoradoDrawings, DoradoLogic. Indigo will be up but will be somewhat slowed down by the loading operation. On Friday, October 9, the following Juniper Personal accounts will be copied to Ivy: Baskett, Crowther, Gosper, Karlton, Kolling, Mitchell, Orr, Petit, Schmidt, Stewart, Stone, Sturgis, Suzuki, Wyatt. And, the following Juniper Project accounts will be copied to Indigo: Audio, Beads, CedarText, Chameleon, DMS, Grapevine, Maleson, Music, PIC, WordList.
If time permits on Friday, Juniper will be taken down and compressed into a one-drive system (eliminating the above directories); otherwise it will occur on Monday, October 12th. To summarize: Wednesday, October 7: Ivy will be up, but you should not store files on PROJECT directories. PERSONAL directories may be used normally. Thursday, October 8: Ivy will be down. Indigo will be up, and will contain all PROJECT directories except APilot, BasicDisks, DoradoDrawings, and DoradoLogic. You may store and retrieve project files freely. PERSONAL directories will exist (so you may access Indigo under your own name) but will be empty. Friday, October 9: Ivy and Indigo will be in normal operation; Ivy will contain all personal directories and Indigo will contain project directories. Juniper should be considered READ-ONLY on Friday (this may extend until Monday) while many of its directories are copied to Ivy or Indigo. *start* 01215 00024 US Date: 29 Sept. 1981 2:24 pm PDT (Tuesday) From: Taft.PA Subject: Re: Leaf Login In-reply-to: DonWinter.EOS's message of 29 Sept. 1981 1:39 pm PDT (Tuesday) To: DonWinter.EOS cc: Taft, Hains.EOS, Marlett.EOS, Schroeder Whether or not you make use of this capability on your own IFS will of course be your own decision. I expect most organizations (particularly within OPD) will immediately adopt it for their IFSs, because the security implications of the current Guest arrangement are extremely uncomfortable. We will certainly adopt it for Ivy and Indigo, since maintaining parallel registration data bases (one for Ivy, one for Indigo, and one for Grapevine) is becoming administratively intractable. Whether or not we eventually eliminate the Guest accounts on Ivy and Indigo is a separate issue. I don't expect to complete the IFS changes for several months, by which time I hope EOS will have converted to Grapevine already. 
There are many advantages to using Grapevine for authentication and access control, and new software will increasingly come to depend on its facilities. (For example, I'm sure you've already encountered this aspect of the current Gateway software.) Ed *start* 01088 00024 US Date: 29 Sept. 1981 6:13 pm EDT (Tuesday) From: CParker.WBST Subject: Re: Leaf Login In-reply-to: Taft.PA's message of 29 Sept. 1981 12:00 pm PDT (Tuesday) To: Taft.PA cc: CParker Ed, I do not use the encrypted password in any of my Leaf software so I don't object to its elimination. While we're on the subject of passwords, did you know that the Leaf server will accept a username and password of zero as legal (like a guest account)? I found this out when I generated my boot version of Flash which uses Leaf to access files on an IFS. I was expecting the program to ask me for a name and password when I tried to retrieve a file, but it never did. Upon investigation, I found out that the IFS was quite willing to accept the null name and password provided by the bootbuilder. Also, do you know who the person is to talk to about passwords on Alto disk packs? Did you know that the TEXT string for your password is stored on the disk? I didn't think that you wanted it there; if you did, what is the purpose of encoding the password? Thanks, - Clint *start* 01529 00024 US Date: 29 Sept. 1981 3:36 pm PDT (Tuesday) From: Kolling.PA Subject: Re: Ivy/Indigo Schedule (Tentative) In-reply-to: Your message of 29 Sept. 1981 2:14 pm PDT (Tuesday) To: IvyAccounts^.pa, JuniperUsers^ cc: RWeaver, Kolling, Taft, Karlton, Crowther Reply-To: Kolling Some cautions with regard to the transfer of files from Juniper to Ivy and Indigo: 1. 
If you have Juniper files which are moving to an IFS, please note that before <<<>>> you must either (a) have your Juniper directory trimmed down to fit into the new space allocation which you previously requested on Ivy/Indigo or, alternatively, (b) give Ron a machine-readable list of the files you want moved and which will fit into your allocation. (If you delete a lot of Juniper files, USE FTP, NOT CHAT, because JuniperChat has a bug which will abort your connection from time to time.) 2. The Juniper directories Ron enumerated will be moved onto the directories of the same names, except the dictionary file on will wind up on . If you expected something else to happen to your files during the transfer, please see me or Ron as soon as possible. 3. Since the transfer from Juniper to Ivy/Indigo involves writing into previously established directories in most cases, beware of resulting name conflicts (the files will all be there, but version numbers, etc.) You may wish to rename your files on either Ivy or Juniper in advance of October 9th to avoid such problems. Thanks, Karen *start* 03556 00024 US Date: 30 Sept. 1981 9:40 am PDT (Wednesday) From: Schroeder.PA Subject: Re: Leaf Login In-reply-to: DonWinter.EOS's message of 29 Sept. 1981 5:03 pm PDT (Tuesday) To: DonWinter.EOS cc: Schroeder, Hains.EOS, Marlett.EOS, Taft, VerSchage.henr, Birrell I find the attitude implied by your last message quite puzzling. Research labs make progress on systems questions by evolving things. Are you suggesting that we shouldn't push on? In my opinion CSL has always been extremely responsible about making sure that people who have adopted its software had a reasonable path to follow to keep things working. Why are you so convinced that EOS will get left in the lurch? I said that we had no plans to make the IFS mail servers incompatible with Grapevine. Right now there are 9 IFS mail servers and 5 Grapevine servers in the message system. Ed's message was entirely coincidental. 
He's talking about access controls for IFS. He wants to add code so that an IFS optionally can depend on Grapevine to implement user authentication and access control lists. The Juniper file server already uses Grapevine for these functions. As I understand it, there is no reason that RoseBowl would need to use this feature. If we choose to get rid of our "guest" account, and EOS people still need to get at our file servers, then an account for their access could be set up. Finally, while we certainly are not trying to force you to use Grapevine, let me try to describe here the benefits of converting to Grapevine for message service. They are all second order, since as you point out IFS does a fine job of getting the message bits through, but other organizations have found the conversion to be rewarding. 1) For sending a message, Grapevine can give the sender immediate feedback about incorrectly spelled recipient names when the recipient (individual or DL) is in a Grapevine registry (ES, PA, WBST and soon DLOS). This reduces considerably the number of messages you get back 10 minutes later saying that the recipient specified doesn't seem to exist. 2) Sending performance is improved, especially when some recipients are DLs, since DL expansion is done by the Grapevine server, not Laurel. 3) Users can interactively add and remove themselves from various net-wide distribution lists. They can also find out interactively which lists exist. These DLs are beginning to provide an important channel for communication among the members of widely dispersed interest groups. I think that this facility will have a profound effect on the cohesiveness of the Xerox computing community. 4) Administration of the system is much simpler. 5) In conjunction with the IFS changes Ed mentioned, EOS users (say) can be given access under their EOS name to files on foreign IFSs and vice versa. 
The authentication and access control mechanisms become net-wide, with all file servers able to recognize and authenticate names from all registries. 6) If your IFS is starting to load up (ours was) then the Grapevine server removes a significant source of connections, delaying the day you need to have two IFSs. 7) Message delivery and retrieval service is more available. If your message server is unavailable then Laurel automatically finds another one to send through. Everyone has inboxes on multiple servers. I understand that Altos are important resources. The reason I brought up the subject now is that there appear to be a small number of used machines of the right configuration available for reduced prices. *start* 01294 00024 US Date: 30 Sept. 1981 12:00 pm PDT (Wednesday) From: DonWinter.EOS Subject: Re: Leaf Login In-reply-to: Schroeder.PA's message of 30 Sept. 1981 9:40 am PDT (Wednesday) To: Schroeder.PA cc: DonWinter, Hains, Marlett, Taft.PA, VerSchage.henr, Birrell.PA I meant no reflection on CSL. We have been dumped-on previously by one or more factions of SDD (OPD) on the subject of it being our own fault if we hadn't been prepared to keep up-to-date. Of course research must go on evolving things -- but some of us have to predict all capital equipment acquisitions a year in advance. This means that anything we might purchase NOW would have to be with money identified a year ago, whereas anything we identify now cannot be purchased until 1982. Thus the current availability of used Altos (and some of them are non-XM Altos, to boot) is of little import without funds lying around. We have been surprised before, by "instantaneous" developments on the net. Supporting the (casual) network and message system access of EOS's Publishing System probes at (for example) UMI can be quite trying under these circumstances, whether it is self-inflicted by our own blindness or not. We simply don't wish to be similarly caught again. 
Sorry for any offense this may have caused. Don *start* 00452 00024 US Date: 30 Sept. 1981 1:47 pm PDT (Wednesday) From: Taft.PA Subject: Re: Leaf Login In-reply-to: Your message of 29 Sept. 1981 6:13 pm EDT (Tuesday) To: CParker.WBST cc: Wobber, Taft The reason you are able to log into Erie with null username and password is that Erie has a directory whose name and password are the empty string!! So it's not a bug in Leaf after all. (I'm still removing the "encryption" hack, however.) Ed *start* 01081 00024 US Date: 30 Sept. 1981 1:59 pm PDT (Wednesday) From: Taft.PA Subject: Funny Erie accounts To: Allen.wbst cc: Taft You might be interested to know that Erie has a directory whose name and password are both empty. This has the effect of permitting someone to log in without giving any user name or password. Files in that directory have full names of the form "<>filename!1". Maybe this is intentional, and this is simply another form of Guest account on Erie. I'm pointing it out just in case this directory was created accidentally. (The reason I noticed this is that Clint Parker told me that the Leaf server permits one of his game programs to log in with empty name and password, and he thought this was a bug in the Leaf server!) There is also a directory whose name is "filename!1". This was almost assuredly an accident; and the IFS software now prohibits creating directory names of this form. You may want to delete this. Ed *start* 02743 00024 US Date: 1-Oct-81 14:48:42 PDT (Thursday) From: Murray.PA Subject: IFS Boot Server To: Taft cc: Boggs, Murray FYI: BRIDGEHOUSE is 2 2400 baud and 1 9600 baud hops from Aztec. ------------------------------ Date: 1 Oct. 1981 9:38 am (Thursday) From: KELLOND.WBST Subject: Gateway bootfile propagation To: Murray.pa, Hoffarth.wbst cc: Chilley.rx, KELLOND Reply-To: KELLOND Slow booting between gateways does not appear to work in all conditions. 
The attached typescript file entries taken from BRIDGEHOUSE show that we are having problems when booting over slow lines (2400 baud). The slow booting seems to work sometimes and always works for the PUP-Network.Directory. I know that we can use FTP to obtain copies from a file server and then place these on the Gateway. Can someone provide me with some advice on this problem? Geoff -------------------------------------------------------------------------- Typescript file from BRIDGEHOUSE --------------------------------------------------------------------------- 28-Sep-81 12:56:45 Alto Gateway of 1-Apr-81 7:14:20 in operation. 28-Sep-81 12:57:17 BootServer: Kal.boot (#1003) is not on this disk. 28-Sep-81 12:57:18 BootServer: Flash.boot (#1016) is not on this disk. 28-Sep-81 12:57:19 BootServer: Reversi.Boot (#1020) is not on this disk. 28-Sep-81 12:57:19 BootServer: Maze.Boot (#1021) is not on this disk. 30-Sep-81 17:18:45 Typescript file reset. 30-Sep-81 19:31:29 Found Pup-Network.Directory!14988 on WGC. 30-Sep-81 19:32:13 Recv aborted because: EFTP Open Failed after page 0, pages left=1183. 30-Sep-81 19:32:15 Found Pup-Network.Directory!14988 on WGC. 30-Sep-81 19:32:58 Recv aborted because: EFTP Open Failed after page 0, pages left=1183. 30-Sep-81 19:34:00 Found Pup-Network.Directory!14988 on WGC. 30-Sep-81 19:34:42 Recv aborted because: EFTP Open Failed after page 0, pages left=1183. 30-Sep-81 19:34:44 Found Pup-Network.Directory!14988 on WGC. 30-Sep-81 19:38:48 Current PupNameLookup Directory version is 14988. 30-Sep-81 19:38:54 Got Pup-Network.Directory ok. Length=143.106, pages left=1183. 30-Sep-81 12:02:06 Found Kal.boot (#1003) on Aztec. 30-Sep-81 12:03:31 Recv aborted because: EFTP Timeout after page 1, pages left=1276. 30-Sep-81 12:05:39 Found Kal.boot (#1003) on Aztec. 30-Sep-81 12:07:03 Recv aborted because: EFTP Timeout after page 1, pages left=1276. 30-Sep-81 12:08:31 Found Maze.Boot (#1021) on Polaris. 30-Sep-81 12:23:23 Time Reset from WGC. 
Correction was 0. 30-Sep-81 12:25:41 Maze.Boot (#1021) was created on 11-Jul-81 4:10:30. 30-Sep-81 12:25:42 Got Maze.Boot ok. Length=81.346, pages left=1193. ---------------------------------------------------------------- *start* 00610 00024 US Date: 1 Oct. 1981 3:04 pm PDT (Thursday) From: Taft.PA Subject: Re: IFS Boot Server In-reply-to: Murray's message of 1-Oct-81 14:48:42 PDT (Thursday) To: Murray cc: Taft, Boggs IFS's boot server timeouts are appropriate for local network booting (i.e., very short), so long-range booting is unlikely to work. The timeouts are small because currently we allow only one boot server at a time to run, and we don't want it to be tied up very long. I suppose I can detect boot requests from remote networks and allow a second boot server to run in that case. Let me think about it. Ed *start* 05411 00024 US Date: 2 Oct. 1981 9:00 am PDT (Friday) From: Baskett.PA Subject: Re: Ivy Disk Space In-reply-to: Fiala's message of 1 Oct. 1981 10:23 am PDT (Thursday) To: Fiala cc: RWeaver, Taft, Boggs Yes, I understand that things will be easier after Indigo is a full bore operation. And, as I said in my second note, I don't want enough space to do copydisks to Ivy. Howard Sturgis's backup program is a very nice, convenient, economical alternative. It allows you to automatically filter out temps and *$ and *.bcd, etc. I recommend it to you for Alto partitions. But 1918 pages with an allocation of 2000 was just not enough. I'm ok now. Thanks. Forest ------------------------------------------------------------ Date: 11 May 1981 4:10 pm PDT (Monday) From: Sturgis.PA Subject: a new version of backup To: Thacker, Baskett cc: Sturgis You might want to try this new version. 1) It fixes at least one bug (the one you have does not look at write dates, and some of the usual software only changes write dates). 2) The cutOffTime is now in text form. 3) The filters (there are now two) are on files, so they can be edited. 
4) Installation is supposed to be easy. I would appreciate any comments. Howard --------------------------- Date: 11 May 1981 4:04 pm PDT (Monday) From: Sturgis.PA Subject: How to use backup.run To: Sturgis Backup.run can be used to save recently created files on a remote file server. (It can also be used to print recently edited source files.) The command file doBackup.cm is used to orchestrate the activity. A number of other files control individual aspects. During normal operation the user types @dobackup which causes the following events: the file cutofftime.backup is read and defines cutOffTime the local disk directory is scanned and files are selected as follows: the create time or write time must be later than cutOffTime the file name must pass the filter defined by acceptingFilter.backup the file name must not pass the filter defined by rejectingFilter.backup A list of files passing this test is written on FileList.backup, and then FTP is called as follows: FTP host dir/c subDir store/c @fileList@ Finally, a new cutofftime is placed on cutofftime.backup via COPY. INSTALLATION: In order to install backup.run on your Alto or Alto partition: Obtain Backup.run from [Juniper]backupsources type backup $initialSuffix: backup, saveAt: [host]subDir. carriage return (note the ($), the (,), the square brackets, the angle brackets, the (:), and the (.).) backup.run will execute and write out a number of files. Among these files are: dobackup.cm CutOffTime.backup RejectingFilter.backup AcceptingFilter.backup (You may use a suffix other than backup, e.g. print. In this case you will want to edit doprint.cm to remove the FTP command and insert an appropriate print command. If you use another suffix, "backup" will be replaced by "other" in the above file names and in the information below. Of course, backup.run is still backup.run.) 
dobackup.cm has the following form: backup.run $cutOffTimeFileName: "CutOffTime.backup", nextCutOffTimeFileName: "NextCutOffTime.backup", fileListName: "FileList.backup", acceptingFilterFileName: "AcceptingFilter.backup", rejectingFilterFileName: "RejectingFilter.backup", showNamedValues: "FALSE". FTP host DIR/c subdir Store/c @FileList.backup@ COPY CutOffTime.backup _ NextCutOffTime.backup If desired, the entire FTP command can be replaced by other commands, e.g. printing commands. CutOffTime.backup Initially contains the time at which backup.run was first run. Subsequent calls will update CutOffTime.backup to the time of the new execution (by COPY CutOffTime.backup _ NextCutOffTime.backup). If it is initially desired to save files created earlier than the time of installation, then the user should edit CutOffTime.backup. This file has no carriage return, is of fixed length with fixed length fields. The month field must be one of: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, or Dec. The time zone field must be one of AST, EST, CST, MST, PST, YST, HST, ADT, EDT, CDT, MDT, PDT, YDT, or HDT. If the call on FTP fails, the error exit from FTP will cause the copy of NextCutOffTime.backup to be ignored. Thus, the CutOffTime remains unchanged for a later try at backing up. RejectingFilter.backup The initial value of this filter will remove most uninteresting items from the backup. Contains a sequence of filter items separated by single blanks and terminated by a single carriage return. Each filter item is a text string containing at most one asterisk. A file name satisfies the filter item if it is identical to the filter item with the asterisk replaced by some string of characters. The client may add or delete filter items with an editor. AcceptingFilter.backup Contains a sequence of filter items as for the rejection filter. Initially contains the single item "*", thus all files pass the initial Accepting filter. 
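The filter rule described above (a filter item contains at most one asterisk; a file name satisfies it if the name equals the item with the asterisk replaced by some string) can be sketched in modern terms. This is a hypothetical illustration, not Sturgis's actual code; the function names are invented for the example.

```python
# Sketch of the backup.run filter rule described above (illustrative only).
# A filter item has at most one '*'; a name matches if it equals the item
# with the '*' replaced by some (possibly empty) string of characters.

def matches_item(name: str, item: str) -> bool:
    """True if `name` satisfies the single filter `item`."""
    if "*" not in item:
        return name == item          # no asterisk: exact match required
    prefix, suffix = item.split("*", 1)
    # Name must be long enough that prefix and suffix don't overlap.
    return (len(name) >= len(prefix) + len(suffix)
            and name.startswith(prefix)
            and name.endswith(suffix))

def passes_filters(name: str, accepting: list[str], rejecting: list[str]) -> bool:
    """A file is selected for backup if it matches some item of the
    accepting filter and no item of the rejecting filter."""
    return (any(matches_item(name, it) for it in accepting)
            and not any(matches_item(name, it) for it in rejecting))

# Example mirroring the defaults described above: accept everything ("*"),
# reject .bcd files and names ending in '$' (typical temporaries).
accepting = ["*"]
rejecting = ["*.bcd", "*$"]
print(passes_filters("Backup.mesa", accepting, rejecting))   # True
print(passes_filters("Compiler.bcd", accepting, rejecting))  # False
```

In the real system the accepting and rejecting item lists live in AcceptingFilter.backup and RejectingFilter.backup and are edited as plain text.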
Normal executions of backup.run will write two additional files: FileList.backup (the files to be backed up) NextCutOffTime.backup (the new CutOffTime) ------------------------------------------------------------ ------------------------------------------------------------ ------------------------------------------------------------ *start* 00321 00024 US From: Taft.pa Date: 5-Oct-81 19:15:40 PDT Subject: Leaf on Ivy To: Levin, Schmidt cc: Schroeder, Birrell, Boggs, Wobber, Taft Ivy is now running a new version of the IFS software with the changes for cached Login, faster Password, and faster LeafOpen. Let me know if there are any problems. Ed *start* 03641 00024 US Date: 6 Oct. 1981 1:41 pm PDT (Tuesday) From: RWeaver.PA Subject: Re: Ivy/Indigo Schedule In-reply-to: Your message of 29 Sept. 1981 1:35 pm PDT (Tuesday) To: IvyAccounts^.pa cc: Reply-To: RWeaver This is a revised schedule for reconfiguring Ivy into two file servers, Ivy and Indigo. The construction work (now under way) will be completed tomorrow. Project files will be loaded onto Indigo from Ivy backup disk packs beginning early Thursday morning, October 8 at 06:00. From this time forth the Project directories on Ivy should be considered READ-ONLY, since any files written after that time will not be moved to Indigo and will therefore be lost. 
Affected project directories are: AIS, AlphaLaurel, AlphaMesa, Alpine, Alto-1822, AltoFonts, AltoTape, AmHer, APilot, BasicDisks, Bravo, Butterfield, Callup, Cascade, CDS, Cedar, CedarDB, CedarDev, CedarDocs, CedarGraphics, CedarKernel, CedarLang, CedarLib, CedarPublic, CedarUsers, CedarViewers, CHC, Chipmonk, Cholla, CSL-Archives, CSL-Notebook, CSLDocs, D0, D0Docs, D0Logic, D0Source, DA, Datastudy, Default-User, Defunct, Dict, DocDoc, Dorado, DoradoBuildDocs, DoradoDocs, DoradoDrawings, DoradoLogic, DoradoSource, Dover, EDSupport, Fonts, Forms, Griffin, Guest, IFS, InterDoc, ISLDocs, JaM, Jasmine, JDS, Juniper, KRL, Laurel, Library, Maleson, Maxc, Oracle, Palm, PIE, Pilot, Pimlico, Poplar, Portola, Press, PressFonts, Puffin, Registrar, Sakura, SPG, Spruce, SSS, Straw, TapeServer, TeleSil, Tex, Tioga, University, VAX, Voice, Walnut, WF, XMesa. Note that Laurel and Guest will not be deleted from Ivy, and therefore do not have the READ-ONLY restriction. It is anticipated that the reload may take between 10 and 20 hours, so don't expect to be able to write any files to these accounts on Indigo until Friday morning (October 9). On Monday morning, October 11, Ivy will be taken down to reload Personal directories onto new system disk packs (thereby eliminating all the above project directories). This will begin at 06:00 am and could take another 10 to 20 hours. Ivy will be down during this period. In conjunction with this reload, the following Project accounts will be loaded onto Indigo: APilot, BasicDisks, DoradoDrawings, DoradoLogic. Indigo will be up but will be somewhat slowed down by the loading operation. On Tuesday, October 12, the following Juniper Personal accounts will be copied to Ivy: Baskett, Crowther, Gosper, Karlton, Kolling, Mitchell, Orr, Petit, Schmidt, Stewart, Stone, Sturgis, Suzuki, Wyatt. And, the following Juniper Project accounts will be copied to Indigo: Audio, Beads, CedarText, Chameleon, DMS, Grapevine, Maleson, Music, PIC, WordList. 
If time permits on Tuesday, Juniper will be taken down and compressed into a one-drive system (eliminating the above directories); otherwise it will occur on Wednesday, October 13. To summarize: Thursday, October 8: Ivy will be up, but you should not store files on PROJECT directories. PERSONAL directories may be used normally. Monday, October 11: Ivy will be down. Indigo will be up, and will contain all PROJECT directories except APilot, BasicDisks, DoradoDrawings, and DoradoLogic. You may store and retrieve project files freely. PERSONAL directories will exist (so you may access Indigo under your own name) but will be empty. Tuesday, October 12: Ivy and Indigo will be in normal operation; Ivy will contain all personal directories and Indigo will contain project directories. Juniper should be considered READ-ONLY on Tuesday (this may extend until Wednesday) while many of its directories are copied to Ivy or Indigo. *start* 02360 00024 US Date: 8 Oct. 1981 6:38 pm PDT (Thursday) Sender: Taft.PA Subject: Ivy/Indigo progress From: RWeaver, Taft To: IvyAccounts^.pa Reply-To: RWeaver, Taft The new Indigo file server was successfully installed and is now in operation. 
Files in the following project directories were moved from Ivy to Indigo: AIS AlphaLaurel AlphaMesa Alpine Alto-1822 AltoFonts AltoTape AmHer Bravo Butterfield Callup Cascade Cattell CDS Cedar CedarDB CedarDev CedarDocs CedarGraphics CedarKernel CedarLang CedarLib CedarPublic CedarUsers CedarViewers CHC Chipmonk Cholla CSL-Archives CSL-Notebook CSLDocs D0 D0Docs D0Logic D0Source DA Datastudy Default-User Defunct Dict DocDoc Dorado DoradoBuildDocs DoradoDocs DoradoSource Dover EDSupport Fonts Forms Griffin Guest IFS Interdoc ISLDocs JaM Jasmine JDS Juniper KRL Laurel Library Maleson Maxc Oracle PIE Pilot Pimlico Poplar Portola Press PressFonts Puffin Registrar Sakura SPG Spruce SSS Straw TapeServer TeleSil Tex Tioga University VAX Voice Walnut WF XMesa You should now consider Indigo the "official" repository for these directories; you may store and retrieve files on these directories in the normal fashion. The corresponding Ivy directories still exist but are OBSOLETE! They will disappear without a trace on Monday, so don't store any new files into them. The remaining directories (all personal directories and a few project directories) are still on Ivy, and may be used normally. Additionally, all personal directories have been replicated on Indigo with disk limits of zero; this is so that you may access the project directories under your own name. At the moment, free space on both Ivy and Indigo is very low because the above project directories are duplicated on the two file servers. On Monday, Ivy will be down all day while it is rebuilt as a smaller file system, eliminating those directories. This will free up one disk drive which will be added to Indigo (we hope early Monday morning). As part of this process, the following additional project directories will be transferred from Ivy to Indigo: APilot, BasicDisks, DoradoDrawings, DoradoLogic, Palm By Monday evening, we hope to have completed all the file shuffling between Ivy and Indigo. 
Transfer of files from Juniper to Ivy and Indigo will begin Tuesday; final details of this will be announced Monday. *start* 00736 00024 US Date: 8 Oct. 1981 6:44 pm PDT (Thursday) From: Taft.PA Subject: Ivy/Indigo To: RWeaver cc: Taft 1. I goofed and destroyed the Palm directory; so it must be added to the list of directories to move on Monday. 2. I think as a last-minute check it would be a good idea to make a list of all Ivy directories (by "List <*>!1" to Chat) and make sure that EVERY directory is listed in either IndigoAccounts#.bravo (which we used today) or IvyAccounts#.bravo (which we will use Monday). If any directory is missing then when we are done it will be lost, and that would be just too bad... (You might do this tomorrow, and then declare a moratorium on new Ivy/Indigo accounts until Tuesday to avoid confusion.) Ed *start* 00410 00024 US Date: 8 Oct. 1981 7:02 pm PDT (Thursday) From: Taft.PA Subject: Ivy/Indigo To: RWeaver cc: Taft 3. I started Indigo backup and initialized the first backup with ID "Indigo1" and name "Indigo Backup 1". When you begin the new rotation of Ivy backup packs, you should assign them IDs of "Ivy1", "Ivy2", etc. 4. I changed the default disk-limit for new users on Indigo to be zero. Ed *start* 00348 00024 US Date: 9 Oct. 1981 9:28 am PDT (Friday) From: Taft.PA Subject: Laurel directory To: Brotz cc: Schroeder, Levin, RWeaver, Taft We've decided to leave the Laurel directory on Ivy rather than moving it to Indigo. Among other things, this is to avoid the need for users to edit the RunPath entry in their Laurel.profiles. Ed *start* 01139 00024 US Date: 12 Oct. 1981 2:49 pm PDT (Monday) From: Taft.PA Subject: Re: Ivy/Indigo progress In-reply-to: Your message of 9 Oct. 1981 9:05 am PDT (Friday) To: TonyWest, Israel, DaveSmith cc: RWeaver, Taft You expressed interest in how we accomplished the move. Well, it isn't really very interesting and can be described in a couple of sentences. 
IFS has a backup system that incrementally backs up files onto a rotation of backup disk packs. So at the beginning of the move, all of Ivy's files were also present on one or more of the backup packs. What we did was simply to "reload" selected directories from Ivy's backup into Indigo's virgin file system. Then we created a new virgin file system on Ivy and "reloaded" the remaining directories from Ivy's backup into it. All of this was pretty simple, though time-consuming. The hard part was identifying the set of directories to be moved from Ivy to Indigo to achieve the balance that we wanted. Ron had to do a great deal of administrative work manually, as the IFS software provides very little help in organization and administration of directories. Ed *start* 01979 00024 US Date: 12 Oct. 1981 6:59 pm PDT (Monday) From: RWeaver.PA Subject: Ivy & Indigo File Transfer Complete To: IvyAccounts^.pa, JuniperAccounts.pa cc: RWeaver Reply-To: RWeaver INDIGO: The following is a list of Project directories which have now been moved to Indigo and no longer exist on Ivy: AIS, AlphaLaurel, AlphaMesa, Alpine, Alto-1822, AltoFonts, AltoTape, AmHer, APilot, BasicDisks, Bravo, Butterfield, Callup, Cascade, CDS, Cedar, CedarDB, CedarDev, CedarDocs, CedarGraphics, CedarKernel, CedarLang, CedarLib, CedarPublic, CedarUsers, CedarViewers, CHC, Chipmonk, Cholla, CSL-Archives, CSL-Notebook, CSLDocs, D0, D0Docs, D0Logic, D0Source, DA, Datastudy, Default-User, Defunct, Dict, DocDoc, Dorado, DoradoBuildDocs, DoradoDocs, DoradoDrawings, DoradoLogic, DoradoSource, Dover, EDSupport, Fonts, Forms, Griffin, Guest, IFS, InterDoc, ISLDocs, JaM, Jasmine, JDS, Juniper, KRL, Laurel, Library, Maleson, Maxc, Oracle, Palm, PIE, Pilot, Pimlico, Poplar, Portola, Press, PressFonts, Puffin, Registrar, Sakura, SPG, Spruce, SSS, Straw, TapeServer, TeleSil, Tex, Tioga, University, VAX, Voice, Walnut, WF, XMesa. 
Personal directories exist on Indigo with zero page allocation, thus allowing login to access these project directories. IVY: The reload of personal directories onto Ivy is now complete. JUNIPER: All of Juniper should now be considered read-only for the next few days. So far the following directories have been copied to Indigo: Audio, Beads, CedarText, Chameleon, DMS, Grapevine, Maleson and Music. PIC and WordLists will be copied on Tuesday morning (13 Oct.). The copying of the personal directories on Juniper to Ivy will take place on Tuesday. This will take most of the day. On Wednesday morning (14 Oct.) Ivy will be taken down briefly to move its backup pack to Juniper B so that Juniper A can be collapsed down to a single T300 disk drive. A final message will be sent out when this has been accomplished. *start* 00856 00024 US Date: 18 Oct. 1981 4:14 pm PDT (Sunday) From: Taft.PA Subject: IFS boot server To: Murray cc: Boggs, Taft Ivy is now running a boot server that can do fast and slow boots simultaneously. Booting a machine on the same Ethernet is considered a fast boot, and uses short timeouts of 1 second initial and 4 seconds continuing (the same as before). Booting a machine on any other network is considered a slow boot, and uses timeouts of 15 seconds initial and 60 seconds continuing. If you can think of a good way to test the slow booter, then by all means do so. I don't know how to provoke Alto gateways into doing long-range probes. How urgently is this needed at the site that first encountered the problem of boot updates always timing out? I wasn't planning to release a new IFS for a while, but I could change my mind. Ed *start* 01089 00024 US Date: 5 Nov. 1981 1:49 pm (Thursday) From: Chilley.RX Subject: Jaws IFS is now running in the UK To: IFSAdministrators^.PA cc: , Chilley Reply-To: Chilley The Welwyn Hall IFS "Jaws" ('cos there are lots of bytes!) is now up and running. Mail is enabled (registry "RX") and the Leaf server is also enabled. 
The configuration is: Alto II with 128Kb memory 2 T-300 disk drives (one being used in backup mode) The IFS software being run is 1.34.1 and I have allocated enough directory space to run approximately four T-300s as the primary file system. I guess we went through all the usual hassles -- having the IFS in with our Penguin, forgetting to read the documentation -- but the system is now operational and in use by members of the RX Ethernet community. I would like to thank all of these people who have helped us establish this service, in particular Art Axelrod, Gail Allen and Rich Hoffarth at Webster, and ask for their, and your, indulgence in the future when I get a problem and call for help. Carl Chilley -- IFS administrator (junior grade!) *start* 01370 00024 US Date: 18 Nov. 1981 7:22 pm PST (Wednesday) From: Taft.PA Subject: Leaf bug To: Wobber cc: Taft I've been making some major changes to the way IFS does authentication and access control. While running all the various servers through their paces, I found a reproducible protocol for crashing the Leaf server. The error is "Leaf virtual pbi out of sequence". After struggling for a while trying to figure out how I could have introduced this bug (since I didn't touch any of the Leaf protocol or Sequin code), I tried the same thing with the currently-released IFS (1.34), which failed in the same way! It seems that I can only reproduce this if the AllocSpy is running at the same time. (I'm suspicious that there is at least one place in the Leaf server that is losing storage, which is why I was running AllocSpy in the first place). However, I doubt this is really a direct interaction between the Leaf server and the AllocSpy; rather, the AllocSpy is either perturbing the timing or changing the pattern of system PBI allocation. I really don't know how to start looking for this. I'm wondering whether you would be willing to take a look. 
A Swat sysout is on [Ivy]Leaf.sysout (symbols are [Maxc]IFS.syms); alternatively, if you would like to come over here, I can easily set things up for on-line debugging. Thanks. Ed *start* 00554 00024 US Date: 19 Nov. 1981 6:42 pm PST (Thursday) From: Taft.PA Subject: Grapevine Guest To: Birrell, Schroeder cc: Taft The fact that Guest.pa is registered in Grapevine with an untypeable password has an unfortunate side-effect: on an IFS that is using Grapevine for authentication, the IFS's Guest account becomes unusable. For the short term, I have set Guest.pa's password to "Guest". Once everything has shaken down, we will eliminate the IFS Guest accounts entirely, and Guest.pa's password can revert to being untypeable. Ed *start* 01590 00024 US Date: 19 Nov. 1981 7:25 pm PST (Thursday) From: Taft.PA Subject: IFS meets Grapevine To: PaloAlto^ cc: IFSAdministrators^ Reply-To: Taft The Ivy file server is running a new version of the IFS software that consults Grapevine for user authentication (Login and Connect). That is, when you log in, Ivy asks Grapevine to check your user name and password rather than checking them against information kept in Ivy itself. Two consequences of this change are visible to users: 1. You must use your Grapevine password when logging into Ivy. Since most people use the same password on all servers, this is not likely to bother anyone. 2. You no longer need your own Ivy account in order to log into Ivy; any valid Grapevine name and password is acceptable. That is, you may use your full Grapevine name (e.g., Jones.PA or Smith.ES) to access Ivy. This arrangement is intended to replace the present Guest account; however, Guest will remain valid for a while. (Note: existing Ivy accounts whose names, suffixed with ".PA", do not match any Grapevine name will continue to work as they always have.) If no problems arise, the new version of IFS will also be installed on the Indigo file server within a few days. 
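The authentication change announced above amounts to delegating the password check to Grapevine while letting old local accounts keep working. A rough Python model of that policy (all names and the two tables are illustrative stand-ins; the real IFS is Alto Mesa/BCPL code):

```python
# Sketch of Grapevine-delegated login as described in the message above.
# GRAPEVINE stands in for the registration servers; LOCAL_ACCOUNTS for
# the IFS's own account table (e.g. old Ivy accounts whose names match
# no Grapevine name, and the temporary Guest account).

GRAPEVINE = {"Jones.PA": "sesame", "Smith.ES": "tulip"}
LOCAL_ACCOUNTS = {"Guest": "Guest"}

def login(name, password, default_registry="PA"):
    """Check a login against Grapevine, falling back to local accounts."""
    # An unqualified name is suffixed with the server's default registry.
    rname = name if "." in name else name + "." + default_registry
    if rname in GRAPEVINE:
        return GRAPEVINE[rname] == password
    # Existing local accounts continue to work as they always have.
    return LOCAL_ACCOUNTS.get(name) == password
```

In this model, `login("Jones", "sesame")` succeeds via the (simulated) Grapevine check because the default registry qualifies the name, `login("Smith.ES", "tulip")` shows that any valid full R-Name is acceptable, and `login("Guest", "Guest")` still uses the local table.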
Sometime thereafter, I will enable another new feature: IFS will use Grapevine to check user group membership. This will replace the present arrangement whereby the association between user names and group numbers is managed independently on each file server. I hope to distribute this new software to all other IFS sites by the end of the year. *start* 00558 00024 US Date: 20 Nov. 1981 9:37 am PST (Friday) From: Taft.PA Subject: How to make IFS groups listing To: RWeaver cc: Boggs, Taft The IFS groups listing is made by a new version of the Accountant program, presently stored as [Indigo]Accountant.run. When started, the program now asks three questions instead of two: Generate accounts listing? Generate group membership summary? Generate disk usage listing? If you answer "yes" to the second question, it asks for a file name and produces a listing like the one I showed you. Ed *start* 00927 00024 US Date: 20 Nov. 1981 2:32 pm PST (Friday) From: TManley.Es Subject: Re: IFS meets Grapevine In-reply-to: Taft.PA's message of 19 Nov. 1981 7:25 pm PST (Thursday) To: Taft.PA cc: TManley Ed, The way I understand this, the Grapevine will be responsible for user authentication for the IFS's; will the Grapevine also have the file protections and group membership authentication? If so, will we have to use Maintain to change or update groups and protection? Also, if the local Grapevine goes down, will the next Grapevine take up the IFS authentications? And finally, what of folks who don't have Laurel accounts? Now for some good stuff: we (SDSupport) now have a Star File Server; it is named Dragon Seed. It consists of a Dandelion and one T-80. Presently it backs up on floppies (a pain in the neck). It has been in operation now for about two weeks. You can contact Dones.es for more info. //Ted *start* 01213 00024 US Date: 20 Nov. 1981 2:59 pm PST (Friday) From: Taft.PA Subject: Re: IFS meets Grapevine In-reply-to: Your message of 20 Nov.
1981 2:32 pm PST (Friday) To: TManley.Es cc: Taft When all the Grapevine features are switched on, Grapevine will be responsible for authentication and group membership, but IFS will still be responsible for file protections. That is, to specify what groups have what access to a file you will talk to the IFS; but to specify who is in those groups you will run Maintain and talk to Grapevine. IFS uses the Grapevine location algorithms, so it will find any Grapevine server that is up and has the necessary information. IFS also keeps a local cache of information obtained from Grapevine recently, so even if all the Grapevine servers are down, the IFS can "coast" for a while on its cached information. The new facilities are designed to let you have a mixture of Grapevine and non-Grapevine user and group names, with the non-Grapevine names being managed in the old ways. I suspect, though, that it will be administratively more manageable to convert to using Grapevine exclusively, even if that requires registering a lot of new names in Grapevine. Ed *start* 00538 00024 US Date: 23-Nov-81 11:07:16 PST (Monday) From: Newman.ES Subject: No glitch in IFS 1.34.8L In-reply-to: Your message of 21 Nov. 1981 2:14 pm PST (Saturday) To: Taft.PA cc: Newman Looks like you fixed the problem; I can't make Ivy get out of synch anymore. Two questions about the IFS update: 1) Will everyone's directory name change (e.g. from to )? If not, how do you keep Newman.WBST out of my directory, and vice versa? 2) Will Maxc also use Grapevine for authentication eventually? /Ron *start* 00910 00024 US Date: 23 Nov. 1981 11:46 am PST (Monday) From: Taft.PA Subject: Re: No glitch in IFS 1.34.8L In-reply-to: Your message of 23-Nov-81 11:07:16 PST (Monday) To: Newman.ES cc: Taft 1. Each IFS will have a default registry which is automatically applied to directories that don't have explicit registry names. 
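The default-registry rule just described can be sketched as a small function (a hypothetical helper for illustration, not actual IFS code):

```python
def directory_name(user, server_default_registry):
    """Map a (possibly qualified) user name to an IFS directory name.

    The registry suffix is stripped only when it matches the server's
    default registry; otherwise the name stays fully qualified.
    """
    if "." in user:
        simple, registry = user.rsplit(".", 1)
        if registry.upper() == server_default_registry.upper():
            return simple
    return user
```

So on a server whose default registry is "ES", `directory_name("Newman.ES", "ES")` yields "Newman", while on a server whose default is "PA" the same user maps to "Newman.ES" -- which is what keeps Newman.ES and Newman.WBST in distinct directories.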
So your directory name on El Segundo file servers will remain Newman (though you will be able to login as Newman.ES if you want to); but if you have your own directories elsewhere, say on Palo Alto file servers, those directory names will need to be changed to Newman.ES. Assuming you don't actually store any files on those directories, most likely they will simply be deleted, since you will be able to login as Newman.ES regardless of whether or not you have a directory. 2. Maxc will probably never use Grapevine; it's a lot of work and probably not worth the effort. Ed *start* 01315 00024 US Date: 23 Nov. 1981 4:25 pm PST (Monday) From: Taft.PA Subject: IFS 1.35 documentation pre-release To: IFSAdministrators^ Reply-To: Taft I am now in the process of shaking down IFS version 1.35, whose major new feature is use of Grapevine registration servers for authentication and access control. While use of this feature will be the IFS administrator's option, the IFS and Grapevine designers hope that it will eventually be adopted everywhere. This represents a major improvement in information security, as well as being more flexible and easier to administer than the old scheme. I hope to release IFS 1.35 before the end of the year. However, I am making the documentation available for you to read now. This is intended both to enable you to do some advance planning for use of the new facilities and to give me some feedback on the adequacy of the documentation. (It is conceivable that your comments could lead to functional changes in the software also, though I hope this will not be necessary.) Please obtain and read: "How to use IFS", [Indigo]HowToUse.press "IFS Operation", [Indigo]Operation.press You should read the "How to use IFS" document first, since it contains an explanation of IFS's use of Grapevine which is not repeated in "IFS Operation". *start* 01528 00024 US Date: 26 Nov. 
1981 4:17 pm PST (Thursday) From: Taft.PA Subject: B-Tree speedup To: Boggs, McCreight cc: Taft From watching the performance of the IFS directory operations, I surmised that a lot of time was being spent in B-Tree insertions. Careful inspection of the B-Tree code led me to suspect that much of this was actually being spent in the storage allocator. Even in simple cases, the code in UpdateRecord was allocating a bunch of big temporary data structures, sucking all the entries out of the B-Tree page of interest, and pouring them back in along with the new entry. I changed this so that if the new entry fits in the page where it belongs, UpdateRecord simply slides the records beyond the insertion point out of the way and copies the new entry in. This means that InsertRecords, where all the hard work is done, is never called at all in the simple case, which I guessed was very common. The improvement in performance is perceptible when using IFS, particularly when running the backup system. I decided that the easiest way to measure the improvement was to build an IFSScavenger with the new B-Tree package and have it rebuild the directory for the Test IFS pack. The old IFSScavenger took 6:12 to do this; the new IFSScavenger takes 3:28. Ed p.s. One of the B-Tree overlays overflowed 1024 words as a result of this change. So I changed BTreeBug to take an error code instead of a string as its second argument, and thereby got rid of a whole lot of strings in the code. *start* 04459 00024 US Date: 27 Nov. 1981 11:22 am PST (Friday) From: Taft.PA Subject: Ivy/Indigo groups To: RWeaver cc: Schroeder, Taft I went over the work you did to establish Grapevine groups for the Ivy/Indigo groups. In general, I think you did a very good job; and you certainly did most of the hard work. I've made a few changes, and I've noted a few other things I think you should follow up on. 
Changes I made: 1) User group 7, "CIS", for which you created a new group CISAccess^.pa, seemed clearly to be those members of the CIS group who happen to have Ivy/Indigo accounts. Therefore I changed group 7 to refer to the existing group CIS^.pa, and I deleted CISAccess^.pa. (In general, it's best not to create parallel "organization" groups, since it's hard to keep them in sync.) 2) User group 14, "Administration", is not referenced by any existing directory protection, so I don't know what the Ivy/Indigo group is for. I made it refer to the existing group Admin^.pa, which contains most of the members of group 14 as well as a number of others. (More on this later.) 3) User group 16, "GSL", is not referenced by any existing directory protection. I believe you recently created GSL^.pa. I changed its Remark to be "PARC General Sciences Laboratory" (from "Ivy/Indigo user group 16", or whatever it is you had put there). In future, you should register GSL people in this group rather than in OtherPARC^.pa. (More on this later.) 4) User group 18, "IEC", consists of members of the Integrated Ethernet Controller working group. After consultation with Alan Bell, I created a new Grapevine group IEC^.pa, with owner ABell.pa, with initial members taken from a private distribution list that has been used for internal communication among members of the group. You should notify Alan that this action has been taken and that he should maintain IEC^.pa in the future (or delegate someone else to do it). 5) For user group 21, "Sakura", I created a new Grapevine group Sakura^.pa with owner Suzuki.pa. You should notify Nori that this action has been taken and that he should maintain Sakura^.pa in the future. Instances in which I made no change, but you should follow up: 1) User group 19, "DocDoc", is referenced by [Indigo], but the directory appears now to be inactive (no new files stored on it for 3 months). 
I don't see any point in creating a Grapevine group for this unless Horning (the owner) believes that people besides himself will require connect/write access to this directory in the future. 2) User group 22, "OBR-8", is referenced by [Indigo]. There has been no activity on this directory for several weeks, so I doubt that the invalidation of this group will inconvenience anyone between now and whenever you deal with it. If OBR-8 is the name of some project, it may be appropriate to create a new Grapevine group for it. Instances in which I made some changes, but you should still follow up: 1) User group 20, "OSL/ORA", consists of most of the members of OSL who happen to have Ivy/Indigo accounts. Therefore, I created a new Grapevine group OSL^.pa, Remark "PARC Optical Sciences Laboratory", and used it for group 20. It is conceivable that this is inappropriate, and that access to Ivy/Indigo group 20 should be restricted to members of the "Oracle" project (whatever that is). If so, you should create a new Grapevine group for that project and use it (instead of OSL^.pa) for group 20. You should check this with Don Curry. 2) I added GSL^.pa and OSL^.pa as members of the organizational group PARC^.pa. Enough members of these laboratories have Grapevine accounts that they deserve their own organization groups rather than being lumped into OtherPARC^.pa. 3) I removed all members of GSL^.pa and OSL^.pa from OtherPARC^.pa, and I also moved a few other people from OtherPARC^.pa to their appropriate organizations (e.g., Rawson.pa to OSL^.pa, Kinne.pa to Admin^.pa, and Haeberli.pa to VLSI^.pa). However, OtherPARC^.pa is still somewhat of a mess; it contains a number of people who belong in specific PARC organizations, and even a few people who aren't in PARC any more (e.g., Roberts.pa, who has been in SDD for at least a year).
You should go through this group and pare it down so as to contain just those people who are members of PARC organizations that don't have their own Grapevine groups (if there still are any) or who are truly unclassifiable. Ed *start* 00993 00024 US Date: 27 Nov. 1981 11:37 am PST (Friday) From: Taft.PA Subject: Zero allocation accounts To: RWeaver cc: Fiala, Boggs, Taft You should begin preparing to eliminate most of the zero allocation accounts from Ivy and Indigo. Assuming the new IFS software remains stable, you should plan to start deleting them in another week or two. In general, the accounts to be eliminated are all the accounts on Indigo that exist solely to give people access to files-only directories, as well as all non-PARC zero allocation accounts on both Ivy and Indigo. Exceptions are: 1) Accounts with "wheel" capability. This capability is remembered only by IFS and is not represented in Grapevine; so don't destroy the Indigo accounts for RWeaver, Taft, etc. 2) Accounts for people who aren't registered in Grapevine (if there still are any). Note, however, that GailAllen can be eliminated, since Gail can now login as Allen.Wbst; there are probably other accounts like this. Ed *start* 01534 00024 US Date: 27 Nov. 1981 12:00 pm PST (Friday) From: Taft.PA Subject: IFS meets Grapevine (cont'd) To: PaloAlto^ cc: IFSAdministrators^ Reply-to: RWeaver, Taft Ivy and Indigo are now using Grapevine for both authentication and access control. This means that not only is Grapevine consulted to check your name and password when you log in, but also your membership in user groups is represented as Grapevine groups rather than being determined by local information in Ivy and Indigo. In general, the Grapevine groups have been set up to include all the members of the former Ivy/Indigo user groups; so you should be able to access all the directories and files you were able to before. 
To find out the association between Ivy/Indigo user groups and Grapevine group names, use Chat to access Ivy or Indigo and then issue the "Show System-Parameters" command. An updated "How to use IFS" manual is filed as [Indigo]HowToUse.press. It contains a complete explanation of the new IFS authentication and access control mechanisms and of how IFS and Grapevine interact. Please read the documentation before asking any questions; however, we do want to be informed of any inadequacies of the documentation. The Guest accounts on Ivy and Indigo still exist, but we intend to eliminate them in the near future. As explained in the previous (November 19) message on this subject, you should be able to access Ivy and Indigo under your own R-Name and password, i.e., the ones by which you are known to Grapevine. *start* 00531 00024 US Date: 28 Nov. 1981 2:58 pm EST (Saturday) From: Marshall.WBST Subject: Re: IFS meets Grapevine (cont'd) In-reply-to: Taft.PA's message of 27 Nov. 1981 12:00 pm PST (Friday) To: RWeaver.PA, Taft.PA cc: Marshall I just tried using Indigo with the new grapevine validation and discovered that I get a different message depending on whether I use an invalid name or a valid name with an invalid password. This is insecure - only a single message should respond to unsuccessful validation attempts. --Sidney *start* 00804 00024 US Date: 28 Nov. 1981 12:57 pm PST (Saturday) From: Taft.PA Subject: Re: IFS meets Grapevine (cont'd) In-reply-to: Marshall.WBST's message of 28 Nov. 1981 2:58 pm EST (Saturday) To: Marshall.WBST cc: RWeaver, Taft I appreciate your concern. However, since Grapevine R-Names are public, I believe that from a security standpoint there is nothing to be gained by hiding the distinction between an invalid name and an invalid password. I'm aware that time-sharing systems often do this, but I believe the practice stems from concerns about privacy, not about security. 
That is, a commercial time-sharing vendor may feel that it's no business of one customer to know who the other customers are. In this case, it is appropriate to hide the user names as well as the passwords. Ed *start* 01322 00024 US Date: 1 Dec. 1981 9:42 am PST (Tuesday) From: Taft.PA Subject: IFS printer name To: Schroeder, Birrell cc: Taft After I sent my response to the following, it occurred to me that IFS might use the individual's "connect site" as the name of the default printer. What do you think? Do you have any other intended ways of using the connect sites of ordinary individuals? --------------------------- Date: 1 Dec. 1981 9:21 am PST (Tuesday) From: Lynch.PA Subject: IFS authentication via Grapevine To: Taft cc: Lynch Is there any way that various IFS directory parameters can be obtained from Grapevine analogous to the login parameters? A case in point which has just come up is the default printer used by the press and print commands. It would be useful if an entire user profile could be carried forward. Bill --------------------------- Date: 1 Dec. 1981 9:37 am PST (Tuesday) From: Taft.PA Subject: Re: IFS authentication via Grapevine In-reply-to: Your message of 1 Dec. 1981 9:21 am PST (Tuesday) To: Lynch cc: Taft A good idea. Unfortunately, Grapevine has a fixed set of fields associated with each R-Name, so there is no place in Grapevine to put arbitrary information such as the name of an individual's printer. Ed ------------------------------------------------------------ *start* 00357 00024 US Date: 1 Dec. 1981 2:01 pm PST (Tuesday) From: Birrell.pa Subject: Re: IFS printer name In-reply-to: Taft's message of 1 Dec. 1981 9:42 am PST (Tuesday) To: Taft cc: Schroeder There are a few candidates for individuals' connect-sites, including a convenient scratch "instance name" for RPC exports and EtherPhone addresses. Andrew *start* 00304 00024 US Date: 5 Dec.
1981 4:43 pm PST (Saturday) From: Taft.PA Subject: IFS name server To: Murray, Birrell, Boggs, Schroeder cc: Taft I changed the IFS name server so that while it is digesting a new network directory it will continue to service lookup requests using the old one. Ed *start* 01987 00024 US Date: 9 Dec. 1981 7:15 pm PST (Wednesday) From: Taft.PA Subject: IFS 1.35 alpha test To: IFSAdministrators^ Reply-to: Taft, Boggs I am now soliciting volunteers to alpha test IFS 1.35, whose major new feature (Grapevine authentication and access control) was described in previous messages. This software has been running on Ivy and Indigo for approximately three weeks with no unsolved problems. If no new problems turn up, I plan to release IFS 1.35 on January 4, 1982. The new facilities represent a major improvement in information security; they also ease administration, particularly for a site with multiple file servers, since the authentication and access control data base is (logically) centralized in Grapevine rather than replicated in each of the IFSs. If you simply install and run the new software, it will operate in the old way, i.e., not using Grapevine. The Grapevine facilities have to be explicitly enabled. I suggest you run the new IFS in non-Grapevine mode for a short time, then switch to using Grapevine for authentication only, and finally switch to using Grapevine for access control also. The last step may require a considerable amount of administrative work and planning to get the IFS's existing group structure set up in Grapevine; it has to be done carefully, since it is irreversible. Documentation: please obtain and read: "How to use IFS", [Indigo]HowToUse.press "IFS Operation", [Indigo]Operation.press You should read the "How to use IFS" document first, since it contains an explanation of IFS's use of Grapevine which is not repeated in "IFS Operation". The documentation includes a complete step-by-step procedure for converting an existing IFS to use Grapevine.
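(Aside: the name-server change in the 5 December message above -- continuing to serve lookups from the old network directory while a new one is digested -- is in essence an atomic reference swap. A minimal Python sketch of the pattern, with illustrative names only:)

```python
import threading

class NameServer:
    """Serve lookups from the old directory while a new one is digested."""

    def __init__(self, entries):
        self._directory = dict(entries)   # the directory currently served
        self._lock = threading.Lock()

    def lookup(self, name):
        # Readers always see a complete directory -- either old or new.
        return self._directory.get(name)

    def install(self, entries):
        # "Digest" the new network directory off to the side...
        new_directory = dict(entries)
        # ...then swap it in; no lookup ever sees a half-built table.
        with self._lock:
            self._directory = new_directory
```

The point of the design is that digestion cost is paid off-line: lookups never block on, or observe, the partially built replacement.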
Software: please send me a message, and then obtain: [Indigo]1.34.15>IFS.run [Indigo]1.34.15>IFS.syms [Indigo]1.34.15>IFS.errors Please report any problems you find in either the software or the documentation. *start* 02124 00024 US Date: 11 Dec. 1981 1:03 pm PST (Friday) From: Birrell.pa Subject: Remote mail files To: Schroeder cc: Taft, Brotz Peter was asking why they're not available yet, and I said it was because of IFS overload worries. Ed's message sounds like it might be worth allowing, say, CSL+ISL+LRG to use them experimentally. What do you think? --------------------------- Date: 11 Dec. 1981 10:50 am PST (Friday) From: Taft.PA Subject: Re: Remote mail files In-reply-to: Birrell's message of 11 Dec. 1981 8:31 am PST (Friday) To: Birrell cc: Deutsch, Taft, LaurelSupport Let me clarify my position on this. My concern is that large numbers of Laurel users accessing remote mail files MIGHT present an excessive load on an IFS. I know of several time and space inefficiencies in the Leaf server implementation; heavy Leaf traffic might interfere excessively with FTP service, and might also lead us back to the bad old days of memory deadlocks (a frequent source of crashes before the Extended Memory emulator was implemented). However, to my knowledge, nobody has ever tested an IFS with large numbers of simultaneous Leaf connections. I would be happy if someone would conduct a controlled experiment to prove or disprove my concerns; but until this has been done, I am strongly opposed to making the remote mail file capability generally available. Ed --------------------------- Date: 11 Dec. 1981 11:00 am PST (Friday) From: Deutsch.PA Subject: Re: Remote mail files In-reply-to: Taft's message of 11 Dec. 1981 10:50 am PST (Friday) To: Birrell cc: Taft, LaurelSupport I suggest releasing locally (i.e. 
in Palo Alto only) a Laurel which supports remote mail files, together with instructions as to how to transfer one's mail file back and forth, and with a caveat that support may be withdrawn if the IFS load becomes excessive. In view of the disappointingly slow progress in moving into the world where people treat local disks as caches for IFS files, Laurel seems like the most promising source of a test load for IFS/Leaf. ------------------------------------------------------------ *start* 01430 00024 US Date: 11 Dec. 1981 7:23 pm PST (Friday) From: Taft.PA Subject: IFS 1.35 alpha test (cont'd) To: IFSAdministrators^ Reply-to: Taft, Boggs This message is for IFS administrators who are or will be alpha-testing IFS. 1. A new alpha-test version is released. It fixes several bugs in the CopyDisk server, including bogus errors involving connect names and passwords (reported by Don Winter). Please obtain: [Indigo]1.34.16>IFS.run [Indigo]1.34.16>IFS.syms [Indigo]1.34.16>IFS.errors 2. Even if you aren't (yet) using Grapevine authentication or access control, it is a good idea to set the default registry name appropriately. This is because in some circumstances Laurel presents user-names that are qualified by registry name. The old IFS's FTP server simply stripped off the registry name and threw it away. The new IFS, however, treats the registry name as part of the user-name, and strips it off only if it matches the IFS's default registry. 3. I forgot to announce that new versions of the IFSScavenger and Accountant programs accompany this release. The IFSScavenger has minor internal changes to conform to the new IFS release; also it can rebuild the directory about twice as fast as before. The Accountant has changes that are described in "IFS Operation". Please obtain: [Indigo]IFSScavenger.run [Indigo]IFSScavenger.syms [Indigo]Accountant.run *start* 00708 00024 US Date: 14 Dec.
1981 1:28 pm PST (Monday) From: Masinter.PA Subject: new IFS To: Taft, Boggs cc: Masinter, Weyer, Fikes We have been in the situation at times where one or the other gateway is down. In that case, we have still been able to have access to Phylum. My understanding of the new IFS is that if you haven't talked to Phylum for twelve hours, and the gateway is down, that you won't be able to login to Phylum (since we have no Grapevine server on this net). Is this true? If so, we might be reluctant to turn on Grapevine authentication, because of the potential further disruption of services. I don't think we have a spare Alto to turn into a Grapevine server. Larry *start* 01957 00024 US Date: 14 Dec. 1981 2:16 pm PST (Monday) From: Taft.PA Subject: Re: new IFS In-reply-to: Masinter's message of 14 Dec. 1981 1:28 pm PST (Monday) To: Masinter cc: Taft, Boggs, Weyer, Fikes When you login to an IFS for the first time, the server authenticates you via Grapevine and caches the result locally. When you login again within the next 12 hours, IFS just checks the cache and does not talk to Grapevine. After the 12-hour timeout, IFS attempts to authenticate you via Grapevine; but if it can't contact Grapevine at that time it looks in the cache and authenticates you that way instead. So even if Grapevine is inaccessible for an extended period, you will be able to login to the IFS so long as you logged in once before while Grapevine was accessible. In other words, the cache is never flushed, but rather is updated (if possible) with fresh information whenever the old information has become stale. (This applies only to users who have personal directories on the IFS. Cache entries for other users are handled differently, and are displaced from the cache in an LRU fashion.) Access control is a slightly different matter. Whenever you are successfully authenticated via Grapevine, your user group membership is reset to empty.
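The caching policy just described (fresh entries trusted for 12 hours; stale entries refreshed from Grapevine when possible, otherwise used anyway; never flushed) can be modeled roughly in Python. The 12-hour constant comes from the message; everything else here -- the class, names, and the ConnectionError convention -- is illustrative, not the Mesa implementation:

```python
import time

TWELVE_HOURS = 12 * 60 * 60  # seconds

class AuthCache:
    """Model of the IFS login cache: refreshed when possible, never flushed."""

    def __init__(self, grapevine_check):
        self._check = grapevine_check  # callable(name, pwd) -> bool; may raise
        self._cache = {}               # name -> (password, time_last_checked)

    def login(self, name, password, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(name)
        if entry is not None and now - entry[1] < TWELVE_HOURS:
            # Fresh entry: answer from the cache, don't talk to Grapevine.
            return password == entry[0]
        try:
            ok = self._check(name, password)   # stale or missing: ask Grapevine
        except ConnectionError:
            # Grapevine unreachable: "coast" on the stale entry, if any.
            return entry is not None and password == entry[0]
        if ok:
            self._cache[name] = (password, now)  # update; entries never flushed
        return ok
```

After one successful login while Grapevine is reachable, a user in this model can keep logging in indefinitely while Grapevine is down -- the behavior Masinter's scenario turns on.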
Subsequently, when you attempt to access files or directories requiring membership in some group, IFS asks Grapevine and remembers the result on a per-group basis. So it's possible that you could be denied access to certain files because Grapevine was inaccessible. This would occur only if you had NOT exercised your user group membership since the last time you were authenticated via Grapevine. I submit that if you find you are cut off from the rest of the internet sufficiently often for it to be a serious inconvenience, then you should have your own Grapevine server to assure adequate local mail service -- independent of whether or not Phylum is using Grapevine. Ed *start* 02939 00024 US Date: 13 Jan. 1982 10:50 am PST (Wednesday) From: Taft.PA Subject: IFS 1.35 To: IFSAdministrators^ Reply-to: Taft, Boggs Version 1.35 of the IFS software is released. This software has been running on Ivy and Indigo for approximately seven weeks and on four other servers for approximately three weeks with no reported problems. The major new feature of this version is use of Grapevine registration servers for authentication and access control. While use of this feature will be the IFS administrator's option, the IFS and Grapevine designers hope that it will eventually be adopted everywhere. This represents a major improvement in information security, as well as being more flexible and easier to administer than the old scheme. Additionally, there are some performance improvements in the FTP, CopyDisk, and Leaf servers; and all known bugs have been fixed. Other changes: a Show System-Parameters command has been added; the Create and Change Directory-Parameters commands now have the same set of sub-commands; reloading the file system from backup now properly restores all system parameters instead of resetting them to default values. If you simply install and run the new software, it will operate in the old way, i.e., not using Grapevine. 
The Grapevine facilities have to be explicitly enabled. I suggest you run the new IFS in non-Grapevine mode for a short time, then switch to using Grapevine for authentication only, and finally switch to using Grapevine for access control also. The last step may require a considerable amount of administrative work and planning to get the IFS's existing group structure set up in Grapevine; it has to be done carefully, since it is irreversible. Documentation -- please obtain and read: "How to use IFS", [Maxc]HowToUse.press "IFS Operation", [Maxc]Operation.press You should read the "How to use IFS" document first, since it contains an explanation of IFS's use of Grapevine which is not repeated in "IFS Operation". The documentation includes a complete step-by-step procedure for converting an existing IFS to use Grapevine. Software -- please obtain: [Maxc]IFS.run [Maxc]IFS.syms [Maxc]IFS.errors [Maxc]Sys.errors If you have been running the alpha-test version (1.34.16), you should convert to the released version at this time. New versions of the IFSScavenger and Accountant programs accompany this release. The IFSScavenger has minor internal changes to conform to the new IFS release; also it can rebuild the directory about twice as fast as before. The Accountant has changes that are described in "IFS Operation". Please obtain: [Maxc]IFSScavenger.run [Maxc]IFSScavenger.syms [Maxc]Accountant.run For IFS wizards, there are also new versions of: "IFS Software Maintenance", [Maxc]IFSSoftwareMaint.press "IFS Directory Operations", [Maxc]IFSDirOps.press *start* 01403 00024 US Date: 18 Jan. 1982 4:43 pm PST (Monday) From: Taft.PA Subject: Resetting group names To: IFSAdministrators^ Reply-To: Taft It may arise that you have previously associated a group name with some group number (using the "Group" sub-command of "Change System-parameters"), and you later decide to eliminate the group name (i.e., make the group number not be associated with any name).
In order to do this, you must assign an empty name to the group number. That is, issue the "Group" sub-command, specify the group number, and then when it asks you for the group name (suggesting the existing one as a default), strike control-W to erase the existing name and then just strike RETURN rather than typing a new name. Lacking adequate documentation, the people at EOS attempted to find the correct procedure for this by trial-and-error, and hit upon the idea of assigning the name "None". Unfortunately, not only does this not work, but it crashes the IFS and leaves the group name table in a messed-up state requiring surgery by a wizard. "None" is a reserved word in the context of group names, and the IFS software ought to disallow associating it with a group number. This will be fixed in the next release; but I don't intend to release a new IFS just to fix this bug. In the meantime, system administrators beware: don't assign "None" to be the name of any group. Ed *start* 01042 00024 US Date: 19-Jan-82 17:36:36 PST (Tuesday) From: Newman.ES Subject: IFS 1.35 access control questions To: Taft.pa cc: Hanzel, Newman We are trying to design the access control implementation for the NSFiling file server, and would appreciate the answers to a few questions about IFS 1.35's access control: 1) The IFS document says that IFS doesn't remember unsuccessful group memberships, but if I ask for directory parameters, they include "Group non-membership hint". Does IFS use this to "skip over" certain groups when evaluating access to a file, coming back to them only after checking the others? 2) If more than one group is on a file's access list, what order does IFS choose to evaluate group membership in? 3) Does IFS remember successful group memberships of all users, or only those users who have accounts? (e.g. 
on Indigo, showing the directory parameters of Taft.pa returned some information about "Group membership", but showing the directory parameters of Newman.es returned nothing at all) /Ron *start* 01988 00024 US Date: 20 Jan. 1982 10:34 am PST (Wednesday) From: Taft.PA Subject: Re: IFS 1.35 access control questions In-reply-to: Newman.ES's message of 19-Jan-82 17:36:36 PST (Tuesday) To: Newman.ES cc: Taft, Hanzel.ES 1) Exactly right. The reason this is done is that the Grapevine "IsMember" operation typically takes much longer when the answer is "no" than when it is "yes". So IFS leaves until last testing those groups for which the answer was "no" in the recent past. 2) The order of testing groups is descending numerical order by group number. This is fairly arbitrary; I chose this order so that "World" (group 63) would be tested first. 3) This is complicated, and is largely an artifact of how IFS worked before Grapevine authentication and access control were introduced. For users who have their own directories on the IFS, there is a Directory Information File (DIF) whose name is !1. In an IFS that doesn't use Grapevine, this contains the truth information about password and group membership. In an IFS that does use Grapevine, this information is still maintained permanently, but is updated whenever Grapevine is consulted (which is controlled by a timeout). The Show Directory-parameters command reads the DIF, so it doesn't work for users who don't have local directories. In addition, there is a cache of recently-used authentication and group membership information. A cache entry may or may not correspond to a local DIF; if it does, changes to the cache also write through to the underlying DIF. (The reason entries are cached in this case is that access to the DIF is relatively slow and can't be tolerated on every file access.) At present, the cache is relatively small (~25 entries), so on a busy IFS an entry will fall out of it fairly quickly.
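Taft's description of the cache in item 3 -- a small set of recently-used entries, invalidated by a timeout, writing through to the per-directory DIF where one exists -- might be sketched as follows. This is an illustrative modern sketch, not the actual Alto-era IFS code; all names, the eviction rule, and the timeout value are assumptions.

```python
import time

# Hypothetical sketch of the IFS authentication/membership cache described
# above: a small cache of recently-used answers, refreshed from Grapevine
# only after a timeout, with changes written through to the local DIF.
CACHE_TIMEOUT = 300.0   # seconds before Grapevine is consulted again (assumed)
CACHE_LIMIT = 25        # "at present, the cache is relatively small (~25 entries)"

class MembershipCache:
    def __init__(self, grapevine, dif_store):
        self.grapevine = grapevine    # callable: (user, group) -> bool
        self.dif_store = dif_store    # dict: user -> DIF record (absent if no local directory)
        self.entries = {}             # (user, group) -> (is_member, checked_at)

    def is_member(self, user, group, now=None):
        now = time.time() if now is None else now
        key = (user, group)
        hit = self.entries.get(key)
        if hit is not None and now - hit[1] < CACHE_TIMEOUT:
            return hit[0]                      # fresh cache entry: no Grapevine traffic
        answer = self.grapevine(user, group)   # consult Grapevine on miss or expiry
        if len(self.entries) >= CACHE_LIMIT and key not in self.entries:
            # evict the oldest entry; on a busy server entries cycle quickly
            oldest = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[oldest]
        self.entries[key] = (answer, now)
        if user in self.dif_store:             # write through to the underlying DIF
            self.dif_store[user].setdefault("groups", {})[group] = answer
        return answer
```

The timeout bounds how stale a cached answer can get, which is the only invalidation the scheme needs; a DIF record survives the cache eviction and so preserves the last-known answers permanently.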
I am considering making it much larger (indeed, perhaps a B-tree with no built-in size limit) so as to make IFS more resistant to inaccessibility of Grapevine servers. Ed *start* 00746 00024 US Date: 21 Jan. 1982 5:06 pm PST (Thursday) From: Taft.PA Subject: Leaf server bug To: Wobber cc: Levin, Schmidt, Taft Roy and Eric have discovered a new way to crash the Leaf server: send it a Read request for more than 32K bytes. This usually causes the IFS's memory to be seriously clobbered, though once we did get a useful sysout. Looking at ReadLeaf, I see that there are some instances of "gr" and "Min" that should clearly be "ugr" and "Umin". I imagine similar problems arise in WriteLeaf. I'm prepared to fix these without help from you. My question to you is: can you think of any other places in the Leaf server where the choice of signed vs. unsigned operations might have been made inappropriately? Ed *start* 00625 00024 US Date: 22-Jan-82 16:56:09 PST (Friday) From: Wobber.PA Subject: Re: Leaf server bug In-reply-to: Taft's message of 21 Jan. 1982 5:06 pm PST (Thursday) To: Taft cc: Wobber, Levin, Schmidt I can't see any other places where similar problems might exist (outside of ReadLeaf and WriteLeaf). There are a few more inappropriate uses of Min/Max (found by checking the import lists), but I looked at them all and couldn't see any way the sign error could occur. On the other hand, it probably couldn't hurt to get rid of those instances. It's unlikely that Min/Max is ever right in the Leaf server code. /Ted *start* 02049 00024 US Date: 25 Jan. 1982 4:17 pm PST (Monday) From: Taft.PA Subject: Dovers vs IFSs To: JDWright.xrcc cc: Taft You asked me to tell you what I know about IFS maintenance problems caused by the file server's being near a Dover (or other xerographic printer or copier). We haven't had any experience with this problem in our laboratory (PARC/CSL), so I can't speak from direct experience.
SDD-North had two IFSs in the same room as a Dover for at least a year, and there was no end of problems with the disk drives. (I know, because that was when the IFS software was fairly new, and I was sometimes called upon to help put together trashed file systems; often the problem turned out to be with the disk drives.) Later SDD-North moved to a new building with a separate machine room; now they run 4 or 5 IFSs (including the two they had before), and my impression now is that those IFSs run with very high reliability. Jerry is the person to contact if you want more details. More recently, the PARC Science Center has operated an IFS, "Phylum", in the same room as a Dover, and likewise has encountered serious problems, including head crashes on several occasions. One of their technicians has told me that the worst problem is not toner but vaporized fuser oil, which he has to clean out of the disk drives periodically. Richard is the person currently struggling with this problem. We (PARC/CSL) operate two IFSs, Ivy and Indigo. They run in air-conditioned machine rooms containing electronic equipment only and with relatively light human traffic. I would estimate that we have a total of nearly 20 drive-years of experience with the T-300s. To date we have had only one mechanical failure involving damage to recording surfaces: a minor head crash that was attributed to a manufacturing defect (repair was covered by warranty). We perform no preventive maintenance besides changing filters. I attribute our good reliability primarily to the clean environment and little human contact. Ed *start* 00852 00024 US Date: 27 Jan. 1982 9:20 am PST (Wednesday) From: Taft.PA Subject: Re: GV: Caching In-reply-to: Murray's message of 26-Jan-82 23:24:38 PST (Tuesday) To: Murray cc: Schroeder, Birrell, Redell, BLyon, Taft The experience with IFS's use of Grapevine is that a substantial amount of caching on the client side is required for good performance. 
Accessing Grapevine for every authentication or group membership check would quickly swamp the Grapevine servers as well as being a source of poor IFS performance. And having a substantial cache makes IFS relatively insensitive to momentary overload or unavailability of Grapevine servers. I started to write a memo describing IFS's use of Grapevine in some detail, but never finished it. I might be motivated to do so if I thought it might be helpful in the NS filing redesign. Ed *start* 02575 00024 US Date: 27 Jan. 1982 9:22 am PST (Wednesday) From: Birrell.pa Subject: Re: GV: Caching In-reply-to: Murray's message of 26-Jan-82 23:24:38 PST (Tuesday) To: Murray cc: Schroeder, Birrell, Redell, BLyon, Taft For IFS, we (primarily Ed Taft) concluded that the appropriate strategy was to cache things in the file servers. In addition to performance improvements, this allowed the IFS to continue working even when it couldn't contact Grapevine. It seems fairly unlikely that you'll be able to survive when all Clearinghouse servers are down, so the "continue working" argument probably doesn't apply. The IFS caching strategies are fairly complicated - consult Ed. Caching in Grapevine is quite attractive on the surface, but we haven't investigated it much. The major attraction, of course, is that the benefits of the cache would be there for all clients of the service. Cache invalidation is quite difficult; the only solution we know that doesn't require operations proportional to the number of R-Server instances is the time-out strategy used in IFS. If you rely on time-outs for invalidation, you have the problem that the response you give to an enquiry may be incorrect; on the other hand, our database update algorithm already permits this so perhaps the incorrectness added by caching doesn't make things any worse. Remember, too, that caching in the R-Servers means that ACL enquiries from a client still incur the communication overhead, but that's probably small enough.
This strategy also substantially increases the load on the R-Servers. Complicated access control enquiries can be extremely expensive, particularly the full IsMemberClosure operation that Grapevine offers. That operation requires that the R-Server consider each name to see whether it's a group; this involves a B-Tree lookup (say 50 msec); for 1000 names that's 50 seconds, which is far too long. The observed times in Grapevine are actually much worse than that. The best optimisation we can offer for IsMemberClosure is to be able to syntactically distinguish "interesting" names (e.g. by their containing an ^). It's probably essential to have some optimisation of IsMemberClosure even in the presence of caching (IFS currently invokes "UpArrowClosure", which uses the syntactic distinction trick). My conclusions are that caching is definitely necessary; that cache invalidation should be by time-out as in IFS; that for your world the cache might as well be in the Clearinghouse; and that someone must make sure the closure ACL enquiries perform well enough. Andrew *start* 02415 00024 US Date: 28-Jan-82 10:12:49 PST (Thursday) From: LaCoe.es Subject: IFS Meets Grapevine To: OSBU^, OPD-Other^.es Reply-To: LaCoe.es cc: Birrell.pa, Schroeder.pa, Taft.pa, LaCoe The Rain, Sun and Wind file servers are running a new version of the IFS software that consults Grapevine for user authentication and access control (Login and Connect). This means that not only is Grapevine consulted to check your name and password when you log in, but also your membership in user groups is represented as Grapevine groups rather than being determined by local information in Rain, Sun and Wind. In general, the Grapevine groups have been set up to include all the members of the former Rain, Sun and Wind user groups; so you should be able to access all the directories and files you were able to before.
To find out the association between Rain, Sun and Wind user groups and Grapevine group names, use Chat to access Rain, Sun and Wind and then issue the "Show System-Parameters" command. Two consequences of Grapevine authentication are visible to users:
1. You must use your Grapevine password when logging into Rain, Sun and Wind. Since most people use the same password on all servers, this is not likely to bother anyone.
2. You no longer need your own IFS account in order to log into Rain, Sun or Wind; any valid Grapevine name and password is acceptable. That is, you may use your full Grapevine name (e.g., Jones.ES or Smith.PA) to access the El Segundo file servers. This arrangement is intended to replace the present Guest account; however, Guest will remain valid for a while. (Note: existing Rain, Sun and Wind accounts whose names, suffixed with ".ES", do not match any Grapevine name will continue to work as they always have.)
An updated "How to use IFS" manual is filed as [Rain]HowToUse.press. It contains a complete explanation of the new IFS authentication and access control mechanisms and of how IFS and Grapevine interact. The Guest accounts on Rain, Sun and Wind still exist, but we intend to eliminate them in the near future. As explained earlier in this message, you should be able to access Rain, Sun and Wind under your own R-name and password, i.e., the ones by which you are known in Grapevine. If you have any problems logging into the IFS's or accessing the IFS group directories, please let me know. Thank you. Joyce ext. 8+823+5654 *start* 01453 00024 US Date: 8 Feb. 1982 2:47 pm PST (Monday) From: Sanders.ES Subject: IFS-GV To: Taft.pa cc: Sanders Here's an excerpt of a msg we received from a concerned user. I presume they are referring to going into IFS via Laurel. With the IFS-GV conversion I can see that a user can go and access/read another's directory; what about in creating or storing.........
(I would think a user would not want anyone else creating/storing in their directories, but reckon the need has occurred in our users' lives at one time or another) --------------------------- There is a possible problem with the new GV login method. It affects Laurel, or at least it used to. Sometimes someone will give out their password so that someone else may retrieve or store something in their account. If that happens and the user doesn't change the user/password back to their own, then if they ask for new mail or send out messages, they will retrieve the other person's mail or send out mail in the other person's name. For my own protection I changed passwords, giving my IFS and Laurel accounts different passwords so this wouldn't happen. If this is still a problem, the users should be made aware of it. Obviously the solution is to keep passwords private, but in some areas this isn't always too practical and one can make a mistake. If this is no longer a problem, please let me know. ------------------------------------------ Anxious to hear from you, ~Lili
1982 10:37 am PST (Friday) From: Taft.PA Subject: Changing IFS password to be empty To: Sanders.ES cc: Schroeder, Birrell, Taft Mike told me that you were having problems making passwords be empty, as I had suggested for dealing with Oly's mystery accounts. Unfortunately, I forgot to give you one important piece of information. There are two ways for an IFS wheel to change a user's password:
(1) ! Change Password directory oldPassword newPassword
(2) ! Change Directory-parameters directory
    !! Password newPassword
Method (1) is how ordinary users change their own password. An empty newPassword is rejected in this case so as to prevent users from accidentally making their directories inaccessible. Method (2) is the only way to change a directory's password to be empty. That is:
! Change Directory-parameters directory
!! Password
Sorry for the confusion. Ed *start* 02444 00024 US Date: 16 Feb. 1982 12:57 pm PST (Tuesday) From: Fikes.PA Subject: IFS Question Regarding "World" To: Taft cc: Putz Ed, I don't understand what's going on here. "World" has been set to USRegistries^.internet in Phylum (and show system parameters confirms that), as has group 0. Therefore, I would expect them to behave the same. Can you provide some clarification? richard --------------------------- Date: 16 Feb. 1982 11:33 am PST (Tuesday) From: Putz.PA Subject: Phylum group access To: Fikes cc: PhylumUsers^ Reply-To: Putz I made some tests, and found that logged into Phylum as Smalltalk-User, I could access files with "World" access (group 63) but not files with "USRegistries^.internet" access (group 0). This indicates that "World" is not identical to "USRegistries^.internet". Phylum users who use Smalltalk might want to change the protection of certain files to include "World" if they would like to access them from Smalltalk without logging in.
I have changed the file protection on my account (and files) from R: Owner USRegistries^.internet; W: Owner; A: Owner to R: Owner World; W: Owner; A: Owner so I can access them from Smalltalk. -- Steve --------------------------- Date: 12 Feb. 1982 1:47 pm PST (Friday) From: Putz.PA Subject: Have I got this right? To: Fikes cc: Deutsch, Putz Tell me if this is accurate:
BEFORE: Phylum group
0: Honest-to-goodness Phylum accounts, but excluding Guest, etc.
63 (World): All Phylum accounts
Most people's default file protections were: R: Owner 0; W: Owner; A: Owner
NOW: Phylum group
0 (USRegistries^.internet): All Grapevine accounts
63 (World): All Phylum accounts + All Grapevine accounts
Most people's default file protections have been changed to: R: Owner USRegistries^.internet; W: Owner; A: Owner
The new group 0 excludes Phylum accounts (like "Smalltalk-User") which are not also Grapevine accounts. --------------------------- Date: 13 Feb. 1982 8:39 am PST (Saturday) From: Fikes.PA Subject: Re: Have I got this right? In-reply-to: Putz's message of 12 Feb. 1982 1:47 pm PST (Friday) To: Putz cc: Fikes, Deutsch Not quite. Group 0 and "World" are now the same, namely USRegistries^.internet, which means any legitimate Grapevine account in the US. richard ------------------------------------------------------------ ------------------------------------------------------------ *start* 00990 00024 US Date: 16 Feb. 1982 3:55 pm PST (Tuesday) From: Taft.PA Subject: Re: IFS Question Regarding "World" In-reply-to: Fikes' message of 16 Feb. 1982 12:57 pm PST (Tuesday) To: Fikes cc: Taft, Putz The difference is fairly obscure. "World" is actually defined to be the union of USRegistries^.internet and all users who have their own directories on Phylum. Group 0, on the other hand, is just USRegistries^.internet. Smalltalk-user is not a member of USRegistries^.internet because it is not registered in Grapevine as Smalltalk-user.PA. Therefore Smalltalk-user is not a member of group 0.
However, it is a member of "World" by virtue of there being a directory on Phylum. From this, it should be clear that there is no point in having group 0 equal to USRegistries^.internet. But this is all academic anyway, since Smalltalk-user is no longer needed and should be deleted -- people should log in as themselves, not as some fictitious user. Ed *start* 01019 00024 US Date: 17 Feb. 1982 10:06 am PST (Wednesday) From: Fikes.PA Subject: Re: IFS Question Regarding "World" In-reply-to: Taft's message of 16 Feb. 1982 3:55 pm PST (Tuesday) To: Taft cc: Fikes, Putz Thanks, that clears it up. We are keeping the Smalltalk-user directory for a nonstandard reason. Namely, the Smalltalk-76 system, which uses that directory, is no longer being maintained but still has users. Hence, if we do away with the directory, then either a new version of Smalltalk-76 needs to be created (which the Smalltalk folks don't want to do) or the system becomes unusable. I chose to keep the directory around. Regarding Group 0. Previous to this new IFS release, Phylum was set up so that the standard default read access for files was group 0, which was defined to be all Phylum directories. Hence, most existing files on Phylum have group 0 read access rather than world. That's why I kept group 0 and defined it to be USRegistries^.internet. Sound right to you? richard *start* 00865 00024 US Date: 19 Feb. 1982 6:58 pm PST (Friday) From: Taft.PA Subject: IFS bug to watch out for To: IFSAdministrators^ Reply-To: Taft I just discovered a bug in the IFS boot server which apparently has been around for quite some time. If you enable the boot server on an IFS that has never run the boot server before (and consequently has an empty boot file directory), then it will crash. To get around this, first do the following. Store some random file (say, User.cm) onto Boot>Foo-bar. (It's important that this name have a hyphen in it!) Then enable the boot server.
When you see that Boot> is starting to fill up with real boot files, you can delete Boot>Foo-bar. This bug will be fixed in IFS 1.36; but it's unlikely that any new IFS releases will be made for a long time unless a more serious bug is found. *start* 00591 00024 US Date: 5 March 1982 7:00 pm PST (Friday) From: Taft.PA Subject: Don't use the "What" command To: IFSAdministrators^ Reply-To: Taft Thanks to Lili Sanders of PSDSupport, a bug has been discovered in the "What" command. (In case you've never used it, this command identifies disk packs of unknown vintage, in case you've lost the pack's label.) The bug occasionally causes IFS to crash within 15 seconds after the "What" command is issued, though the command itself may run to completion successfully. I suggest that you stop using this command until IFS 1.36. Ed *start* 01002 00024 US Date: 23 March 1982 8:29 am PST (Tuesday) From: Taft.PA Subject: New access control policies To: IFSAdministrators^, GVAdmin^ cc: Birrell, Elkind, Schroeder, Taft, Murphy Reply-to: Birrell, Elkind, Schroeder, Taft The continuing growth of the Xerox Research Internet has led to the need for more careful and systematic policies for controlling access to electronic information. We have developed a set of policies and procedures that are intended to improve information security. The responsibility for implementing these policies falls most directly on the IFS and Grapevine administrators. There are several items that require attention very urgently; we request that you (and possibly your managers) begin action on these as soon as possible. Please obtain and read carefully either [Maxc]AccessControls.press or [Indigo]AccessControls.press; and if you have any questions, please reply to this message. 
Andrew Birrell Jerry Elkind Mike Schroeder Ed Taft *start* 03602 00024 US Date: 26 March 1982 2:08 pm PST (Friday) From: Taft.PA Subject: Conversion of SDD-North IFSs to Grapevine To: Lauer, Johnsson, Wick cc: Taft I don't know who runs your IFSs these days, so I'm directing this to you. SDD-North is the last major holdout in conversion of IFSs to use Grapevine for authentication and access control, and is about two months behind everyone else. (There are a few other sites, such as Leesburg and Corporate Headquarters; but they don't have widespread user communities spanning multiple registries.) Once or twice a week, I receive messages such as the first attachment (my reply follows). It is precisely this sort of problem that the new mechanisms are intended to eliminate. What makes this more urgent is that there is a recently-issued set of access control policies and procedures, developed by members of the Grapevine and IFS projects in consultation with Jerry Elkind. An announcement was distributed to all IFS administrators early this week. I'm forwarding you a copy (third attachment), just in case you haven't yet been told about it by your IFS administrator. --------------------------- Date: 26-Mar-82 12:20:32 PST (Friday) From: Newman.ES Subject: IFS 1.35 on Idun vs. fully qualified names To: Taft.pa cc: Newman If I try to retrieve files on Idun while logged in as "Newman.es", it rejects the login name. I can only use Idun if I am logged in as just "Newman". Whenever I've complained to SDD-North people about this, they claim my problem doesn't exist, and say something like "the IFS gurus have assured us that the IFS is supposed to ignore registries if Grapevine authentication is disabled". Can you clarify? /Ron --------------------------- Date: 26 March 1982 1:49 pm PST (Friday) From: Taft.PA Subject: Re: IFS 1.35 on Idun vs. 
fully qualified names In-reply-to: Your message of 26-Mar-82 12:20:32 PST (Friday) To: Newman.ES cc: Taft The information you are getting from the SDD-North "IFS gurus" is false. The OLD version of IFS ignored registries. The NEW software deals uniformly on the basis of fully-qualified R-Names; but if you log in with an unqualified R-Name it is assumed to belong to the IFS's default registry. Idun's default registry is PA. When you log in as "Newman" on Idun, it thinks of the name as Newman.PA, which is different from Newman.ES. Of course, the reason you can't log in as Newman.ES is that Idun (and the other SDD-North IFSs) have still not converted to using Grapevine for authentication. Ed --------------------------- Date: 23 March 1982 8:29 am PST (Tuesday) From: Taft.PA Subject: New access control policies To: IFSAdministrators^, GVAdmin^ cc: Birrell, Elkind, Schroeder, Taft, Murphy Reply-to: Birrell, Elkind, Schroeder, Taft The continuing growth of the Xerox Research Internet has led to the need for more careful and systematic policies for controlling access to electronic information. We have developed a set of policies and procedures that are intended to improve information security. The responsibility for implementing these policies falls most directly on the IFS and Grapevine administrators. There are several items that require attention very urgently; we request that you (and possibly your managers) begin action on these as soon as possible. Please obtain and read carefully either [Maxc]AccessControls.press or [Indigo]AccessControls.press; and if you have any questions, please reply to this message. Andrew Birrell Jerry Elkind Mike Schroeder Ed Taft ------------------------------------------------------------ *start* 02610 00024 US Date: 26-Mar-82 14:17:24 PST (Friday) From: Newman.ES Subject: Re: IFS 1.35 on Idun vs. 
fully qualified names To: Taft.pa cc: Newman Could you please strongly hint to whoever runs the SDD-North IFS's that they should convert to Grapevine authentication? They have resisted doing so, based on the (apparently false) belief that IFS 1.35, when run without Grapevine authentication, ignores registries. ------------------------------ Date: 23-Mar-82 17:22:12 PST (Tuesday) From: Newman.ES Subject: Grapevine authentication on Northern IFS's To: Johnsson.pa cc: Karlton.pa, Guarino.pa, Davirro.pa, Newman Could someone please turn Grapevine authentication on for Idun, Igor, and Iris?  I believe this can be done separately from turning on Grapevine group checking. Your IFS's currently allow a login name of "Newman" but reject "Newman.es" when doing file transfer, chat, etc. In Cascade 8.0k, this is an annoyance; in 8.0l, with the new FileTransfer that does an automatic Profile.Qualify with the registry, it will become disabling. Nobody down here will be able to access the Northern file servers. /Ron ------------------------------ Date: 23-Mar-82 17:44:05 PST (Tuesday) From: Johnsson.PA Subject: Re: Grapevine authentication on Northern IFS's In-reply-to: Newman.ES's message of 23-Mar-82 17:22:12 PST (Tuesday) To: Newman.ES cc: Johnsson, Karlton, Guarino, Davirro The 8.0l FileTransfer change will not change anything. At least that's what we're told by the IFS gurus. If it becomes disabling, we'll have to deal with it. ------------------------------ Date: 23-Mar-82 17:53:07 PST (Tuesday) From: Newman.ES Subject: Re: Grapevine authentication on Northern IFS's In-reply-to: Johnsson.PA's message of 23-Mar-82 17:44:05 PST (Tuesday) To: Johnsson.PA cc: Newman, Karlton.PA, Guarino.PA, Davirro.PA Well, I know that in 8.0k, if I fill in "Newman.es" as my login name in the FileTool, and try to retrieve a file from Idun, I get the message "Incorrect user-name. for Idun". 
This is causing problems right now, in that attempts to send Hardy mail through private dl's stored on Iris, Igor, or Idun (e.g. "@[iris]Starspec.dl") will fail. Hardy uses the fully qualified name to retrieve the indirect file, and this fails. If Adobe is changed to use the fully qualified name, we'll be unable to use Adobe Edit on the Core Software system. I like the idea of all programs using the fully qualified names; why must SDD-North have the only IFS's that do not use Grapevine authentication? /Ron ---------------------------------------------------------------- *start* 00554 00024 US Date: 26-Mar-82 15:00:51 PST (Friday) From: Johnsson.PA Subject: Re: Conversion of SDD-North IFSs to Grapevine In-reply-to: Taft's message of 26 March 1982 2:08 pm PST (Friday) To: Taft cc: Lauer, Johnsson, Wick The problem is that this caught us at a particularly bad time. We would still be running 1.34 except that we stumbled into one of the bugs that required 1.35 for a fix. We even rolled back to 1.34 once but were told that "setting a default registry makes 1.35 behave just like 1.34". We're working on it. Richard *start* 00974 00024 US Date: 2 April 1982 1:32 pm PST (Friday) From: Taft.PA Subject: Group 57 To: Sanders.es cc: Taft The problem with group 57 is a minor bug in the IFS software. When you specify the group number, IFS is actually prepared to accept either a number or a name at that point. You have two existing groups whose names begin with "5700". The bug is that IFS thinks you are trying to type one of these names (and dings at you because you haven't typed enough to distinguish one of them uniquely), and doesn't realize you might be trying to type a number. I'll fix this in the next IFS release; in the meantime you will just have to avoid trying to use group 57. On a related issue, you haven't yet established the World group membership as USRegistries^.internet. See sections 6.2 and 6.3.7 of "IFS operation". 
You might want to recheck the conversion procedure in section 6.3, just to make sure you haven't accidentally left out any other steps. Ed *start* 00638 00024 US Date: 5 April 1982 1:21 pm PST (Monday) From: Taft.PA Subject: IFS bug: duplicate group names To: IFSAdministrators^ Reply-To: Taft Another bug has turned up in the IFS 1.35 group name logic. For various reasons, it is illegal to associate the same group name with two different group numbers. The software is supposed to prevent you from doing this, but due to a bug this check does not work. Therefore, be careful not to assign the same group name to two different group numbers. Doing so causes IFS to crash, and to keep crashing until the duplicate is removed by octal-editing the Info file! Ed *start* 00940 00024 US Date: 12 April 1982 6:55 pm PST (Monday) From: Gifford.PA Subject: Interaction of CIFS and IFS To: Schroeder, Taft cc: Gifford Mike and Ed, At our meeting, two issues were discussed. The first, which I was to investigate, was how to provide some sort of concurrency control for CIFS directories. The second was the proposal that IFS add entries directly to a dir.dir file or to a log indicating which files had been created and deleted. It seems simple to provide some sort of concurrency control by renaming the dir.dir file when someone updates it. This would serve as notice to the world that the directory was busy, until the "lock owner" renamed it back to dir.dir. A kludge, true, but it works. If we need this feature I'll implement it. On the issue of a log file built by IFS - it would be nice to know when files are created and deleted via FTP. If Ed constructs a log file, I'll parse it. Dave *start* 00887 00024 US Date: 16 April 1982 2:43 pm PST (Friday) From: Fikes.PA Subject: IFS Archiving To: IFSAdministrators^, Dekleer, Kaehler, Taft, Boggs Reply-To: IFSAdministrators^, Dekleer, Kaehler, Taft, Boggs The IFS I am administrator for (Phylum) is continually low on disk space.
I assume other IFS's have this problem, and I would like to know how other administrators deal with it. It seems to me that the critical problem is that there is no reasonable way of archiving inactive IFS files. Hence, people tend to use up whatever space is available and quite reasonably resist the hassle required to archive (i.e., move the files to maxc). Does anyone have a facility that eases the archiving burden? (Ed and Dave: is there anything that IFS Administrators could do to help make the creation of an archiving facility in IFS more palatable to you?) thanks, Phyques *start* 00749 00024 US Date: 19 April 1982 8:33 am EST (Monday) From: Sperry.WBST Subject: Re: IFS Archiving In-reply-to: Fikes.PA's message of 16 April 1982 2:43 pm PST (Friday) To: IFSAdministrators^.PA, Dekleer.PA, Kaehler.PA, Taft.PA, Boggs.PA cc: Sperry Reply-To: Sperry Here in Webster we have decided that the easiest way to handle that problem is just to add another T-300 drive to the file-server. The alternatives, from our point of view, are much less attractive and for the most part administratively infeasible. In addition, our accountant runs indicate that we are the biggest users on our own file-server. (i.e. Maintaining copies of directories such as Mesa, Cedar, Pilot, SmallTalk, etc. really eats up the space.) Bob Sperry *start* 00665 00024 US Date: 19 April 1982 9:25 am PST (Monday) From: Lansford.EOS Subject: Re: IFS Archiving In-reply-to: Fikes.PA's message of 16 April 1982 2:43 pm PST (Friday) To: IFSAdministrators^.PA, Dekleer.PA, Kaehler.PA, Taft.PA, Boggs.PA cc: Lansford Reply-To: Lansford Liberal and frequent application of the Chat or Telnet "Delete *, confirm, keep n" commands will do wonders. Phylum in particular seems to have multitudinous copies of some very large files. This technique has kept Rose running for two years with on the order of 10% free space. There also exists a tape server with a semi-human interface. But I don't think you want this.
Bob *start* 00984 00024 US Date: 19 APR 1982 1041-PST From: PIPES.PA Subject: Re: IFS Archiving To: IFSAdministrators^, Dekleer, Kaehler, Taft, Boggs cc: PIPES, Cude In response to the message sent 16 April 1982 2:43 pm PST (Friday) from IFSAdministrators^, Dekleer, Kaehler, Taft, Boggs Our method of archiving is fairly crude. What we generally do is create a mini IFS with a T-80 or a T-300. Copy complete directories via Brownie. Remove the packs and save them. In the last 4 years or so, I can only recall one user requesting something back from the archive packs. We try to encourage users to use some other means for saving old stuff. That usually works, especially if you make 'em wait some appreciable amount of time before actually archiving. One other scam that Mr. Cude and I use is to hide space as it becomes available. Doing this has allowed us to have 10 to 15k pgs. available for emergencies. The major problem with hiding space is the backup. /jp *start* 01751 00024 US Date: 19 April 1982 11:36 am PST (Monday) From: Deutsch.pa Subject: IFS accounting program To: Boggs, Taft cc: Fikes I would like to request that the IFS accounting program be changed to print the file names as well as the total # of pages. In the absence of this information, and in the absence of a user interface (either in IFS itself or in FTP) that sorts files by date of last access, it is very tedious for users with hundreds of files (who are the typical users with obsolete files!) to discover which ones need archiving. --------------------------- Date: 16 April 1982 10:53 am PST (Friday) From: Fikes.PA Subject: Re: Cleanup Your Phylum Directory In-reply-to: Deutsch's message of 15 April 1982 4:41 pm PST (Thursday) To: Deutsch cc: Fikes, Adele, Burton, DeKleer, Ingalls, Kaehler, Kaplan, Krasner, Masinter, McCall, Moreland, Robinson, Treichel, VanLehn, Williams One has to be careful about apparently obvious truths.
The accounting program supplied with IFS lists for each directory the number of pages that have not been accessed in 30, 60, 180, 365, and 730 days. Nothing about which files those are. I have no other facilities available to me. I see no alternative but for people to go through the hassle that is required to archive their files until either someone around here creates an archive facility or we somehow convince Taft and Boggs to include an archive facility in IFS. The last time I talked with them about an IFS archive facility, they had no plans to create one. Without an archive facility, adding disk drives to Phylum is a nonsolution, since people will simply wait longer before moving their inactive files to Maxc. Phyques ------------------------------------------------------------ *start* 00706 00024 US Date: 20 April 1982 4:18 pm PST (Tuesday) From: Boggs.PA Subject: Re: IFS Archiving In-reply-to: PIPES' message of 19 APR 1982 1041-PST To: PIPES cc: IFSAdministrators^, Dekleer, Kaehler, Taft, Boggs, Cude Reply-To: Boggs @Change Attributes (of files) @@No Backup @@ will cause the backup system to skip (which may describe many files, e.g. *). You might want to do that on files being used to hide free pages and on replicated public directories like . It's not likely that Ed or I will ever add archiving to IFS. Ed is busy working on the replacement for IFS and I am working on the replacement for Alto/D0 gateways.
/David *start* 01534 00024 US Date: 23 April 1982 10:32 am PST (Friday) From: Fikes.PA Subject: IFS-Gateway Interaction I Don't Understand To: Taft cc: Fikes, boggs Ed, Tom Lipkis from ISI is doing some work with us and he has a Phylum account as follows: @show directory-parameters (of directory) Fikes_ lipkis Used 2 out of 5000 disk pages Default file protection: R: USRegistries^.Internet Owner; W: Owner; A: Owner Create permission: Owner Connect permission: Owner Group membership: USRegistries^.Internet World Group non-membership hint: None Group ownership: None Default printing server: Yoda Last Grapevine authentication at 21-Apr-82 9:34:21 As you can see, he successfully logged in and obtained Grapevine authentication two days ago. However, I cannot find out, via Maintain, what Grapevine knows about him. I tried the "Type Entry" command with lipkis.pa, lipkis@usf-isif.arpagateway, etc. and always got the response that the name was not recognized. What's going on? How is Grapevine authenticating him? thanks, Phyques P.S. Regarding the IFS archiving discussion that has been going on around here. It seems that the pain of archiving could be alleviated a great deal by adding an option to the chat list command that would list the files in the order of their last access (or if no access, write). That would give the user a text file of candidate files to archive which he could then edit to produce the appropriate pupftp and chat command files. Any chance of getting that facility added to chat? *start* 01466 00024 US Date: 23 April 1982 11:29 am PST (Friday) From: Taft.PA Subject: Re: IFS-Gateway Interaction I Don't Understand In-reply-to: Fikes' message of 23 April 1982 10:32 am PST (Friday) To: Fikes cc: Taft, boggs Since Phylum's default registry is PA and Lipkis.PA doesn't exist in Grapevine, the Phylum account "Lipkis" is being treated as a non-Grapevine name. That is, Lipkis can log into Phylum but nowhere else.
The reason Lipkis is shown as being a member of World is that in IFS 1.35 all users who have their own directories on the IFS are automatically members of World, regardless of whether or not they are in USRegistries^.Internet or are even registered in Grapevine at all. I am changing this in IFS 1.36, which I expect to release soon. The message "Last Grapevine authentication at 21-Apr-82 9:34:21" is misleading. The time displayed is really that of the last successful authentication attempt; if the name is not registered in Grapevine but does correspond to a local directory, and the password matches that directory's password, then the authentication attempt is successful. I will change the wording of the message. I'll add your List command suggestion to the wish list, but it's unlikely ever to be done. Sorting the directory would require managing an auxiliary buffer of unlimited size and use of an external sort procedure; no such mechanisms presently exist in IFS, and adding them would be a fair amount of work. Ed *start* 04096 00024 US Date: 27 April 1982 9:48 am PDT (Tuesday) From: Taft.PA Subject: IFS support for CIFS To: Gifford cc: Schroeder, Boggs, Taft Here is a tentative proposal: The CIFS directory consists of an initial enumeration, whose format is private to CIFS, followed by a log of incremental changes (file creations and deletions) maintained by IFS. The directory and log together are stored as one or more versions of the file Dir.Dir in the subdirectory being described. When a file is created, deleted, or renamed, IFS looks for the highest numbered version of an existing Dir.Dir file in the same subdirectory. If one exists, IFS appends an entry to the file. The entry is "+filename" for a create and "-filename" for a delete; filename is the IFS file name, with directory and subdirectories stripped off but version number included. A rename is recorded as a create of the new name and a delete of the old.
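Taft's entry convention above (a create logged as "+filename", a delete as "-filename", a rename as a create of the new name plus a delete of the old) can be rendered as the following sketch. The function name and the Python rendering are illustrative only; the real implementation was in Mesa inside IFS.

```python
# Sketch of the Dir.Dir incremental-log entry convention described above.
# log_entries is a hypothetical name; filenames include the version number
# ("Foo.mesa!3") but have directory/subdirectory prefixes stripped off.

def log_entries(op, old, new=None):
    """Return the log lines IFS would append for one directory change."""
    if op == "create":
        return ["+" + old]
    if op == "delete":
        return ["-" + old]
    if op == "rename":  # recorded as create-of-new plus delete-of-old
        return ["+" + new, "-" + old]
    raise ValueError("unknown operation: " + op)
```

For example, renaming Foo.mesa!3 to Bar.mesa!1 would append the two entries "+Bar.mesa!1" and "-Foo.mesa!3".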
If IFS is unable to append to the existing Dir.Dir file because it is locked by some other client, it creates a new version. That is now the highest numbered version, and all subsequent log entries are appended to it. (Note that appending to a file sets its create time to "now".) Intended use: upon the first attempt to access a subdirectory, CIFS compares the create time of the highest numbered version of Dir.Dir on the IFS with that of the one on the local disk. If the copy on the IFS is newer, CIFS enumerates all versions of Dir.Dir (in ascending order of version number) and merges them together to produce a new local Dir.Dir. If the client has sufficient credentials, CIFS then stores the updated Dir.Dir file as a new version on the IFS and deletes all the older versions. Implementation: When a file named Dir.Dir is created (e.g., by an FTP Store), a bit is set in the directory entry for the enclosing main directory's (not subdirectory's) directory information file (DIF). Thus if the bit is set, a Dir.Dir file MAY exist in that directory or some subdirectory of it. This bit is checked by all operations that modify the directory (at negligible cost, because the DIF's directory entry is always referenced by such operations anyway); if the bit is set then the Dir.Dir file is actually looked up. Thus the cost of looking up Dir.Dir files is incurred only during accesses to directories that are known to have at least one Dir.Dir file already. (There is no automatic mechanism for clearing the bit when the last Dir.Dir file is deleted; but I can arrange to reset it eventually in some process that enumerates the entire directory, such as the backup system or the scavenger.) (The new bit is added to the DIF's directory entry in a compatible way by ripping off a bit from an existing field, probably the high-order bit of the disk page limit. In fact, I will probably shrink the disk page limit from its current 32 bits to 24 so as to leave room for future expansion.)
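The merge step CIFS would perform, replaying every version of Dir.Dir in ascending version order, amounts to the following sketch. The function name and the representation of a log as a list of "+name"/"-name" strings are assumptions for illustration, not the actual CIFS code.

```python
# Sketch of the CIFS merge of Dir.Dir versions: replay '+'/'-' log entries
# from every version, in ascending version order, to obtain the current
# set of files in the subdirectory. merge_dir_logs is a hypothetical name.

def merge_dir_logs(log_versions):
    """log_versions: one list of '+name'/'-name' entries per Dir.Dir
    version, ordered by ascending version number."""
    files = set()
    for log in log_versions:
        for entry in log:
            name = entry[1:]
            if entry.startswith("+"):
                files.add(name)
            elif entry.startswith("-"):
                files.discard(name)
    return files
```

Note how a second Dir.Dir version created after a lock conflict merges cleanly: the logs `["+A!1", "+B!1"]` followed by `["-A!1", "+A!2"]` yield the file set `{"B!1", "A!2"}`.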
Appending to the Dir.Dir file is done by an operation that bypasses access controls; so the client who creates or deletes files in a subdirectory need not have privileges to append to that subdirectory's Dir.Dir file. IFS retries a failing attempt to open an existing Dir.Dir file several times before creating a new version; this avoids needless proliferation of new versions caused by transient lock conflicts. Create and delete operations on Dir.Dir files themselves are, of course, not logged so as to avoid recursion. Remaining issues: (1) Dir.Dir seems like a rather short name to preempt for "system" purposes. Perhaps the name used on the IFS should be longer, e.g., CIFSDir.Dir, or should contain strange characters, e.g., /Dir/.Dir. (2) I'm happy to adopt a different format for the log portion of Dir.Dir, e.g., an array of records containing tag fields and Mesa strings (or some other representation that has the logical structure of a property list). This might make it easier to recognize the boundary between the initial enumeration and the log, and better accommodates future extension to represent other types of entries. Comments? Ed *start* 01459 00024 US Date: 27 April 1982 12:18 pm PDT (Tuesday) From: Gifford.PA Subject: Re: IFS support for CIFS In-reply-to: Taft's message of 27 April 1982 9:48 am PDT (Tuesday) To: Taft cc: Gifford, Schroeder, Boggs Ed, Sounds good. Here is the scheme that I have been thinking about:

Directory Reader
  If local dir.dir not current
    Read dir.dir
    Read all log.dir files
    Merge logs into local copy
Directory Reader becomes Writer
  Rename dir.dir to be a unique file name (this locks directory)
Directory Writer closes directory
  If there are new logs on the ifs
    Merge new logs in
  Delete all log files
  Store directory as dir.dir on ifs (this unlocks directory)
  Delete renamed version on ifs
Catalog command
  Delete all log files
  Recreates dir.dir from first principles

Here are the problems I see with this scheme: 1.
Someone's machine crashes and leaves a directory locked. Catalog can be used to get us out of this corner. 2. There is a small window in "Writer closes the directory" where a new log entry could be created and then lost. Once again, when catalog is run it will recover the missing entry. This seems to be compatible with your plan, except that I use separate log files so I can merge faster and implement locking by renaming dir.dir. Sure, I'll pick a different file name if you like. The format that you have picked seems fine to me. Whatcha think? Dave *start* 05873 00024 US Date: 29 April 1982 1:50 pm PDT (Thursday) From: Stewart.PA Subject: CIFS debates To: Schroeder, Gifford, Levin, Taft, TonyWest, Schmidt cc: Stewart My proposal is:

Add the link creating facility to Bringover.
Add the ReleaseAs/CameFrom flip to SModel.
Modify SModel to permit Move instead of Copy semantics.
Do not add any IFS support for dir.dir files.
Do add fast enumeration for IFS, preferably including an option to exclude '> from star-matching. This would make level-order enumeration even faster.
Include in CIFS the use of fast IFS enumeration as the fallback for non-existent dir.dir files.

I believe this set of work will meet most goals of the group. Discussion: Local-Only Files At least one participant voiced a desire to create "local only" files which are not backed up. These files were further deemed to fall into two categories; roughly those which do not need human-sensible names and those which do. Files without human-sensible names seem adequately served by Pilot temporary files. Perhaps a way is needed to create them via CIFS. (For example a special directory called, of course, /tmp.) I did not hear any good reasons why files with human-sensible names must not be backed up, only that they need not be. It seems to me that if there is a possibility that a human might want to look at one of them then they should be preserved across sessions by the usual CIFS backup machinery.
Give them names ending in '$, just like always. Constructing new release components: There was considerable discussion of the methods of constructing new release components. My understanding of this may be blurred because I arrived late: The user desires to improve the XYZ package. The user sets his working directory to, say, /ivy/jones/XYZ The user does a bringover /a [indigo]Top>XYZ.df. This action causes the creation of CIFS links in /ivy/jones/XYZ such as:

/ivy/jones/XYZ/XYZ.config!1 => /indigo/cedar/XYZ/XYZ.config!27
/ivy/jones/XYZ/XYZ.df!1 => /indigo/cedar/XYZ/XYZ.df!34

Now the user edits some of these files, say XYZ.config. Tioga will use CIFS to create a new version of XYZ.config in the current working directory, which will now look like this:

/ivy/jones/XYZ/XYZ.config!1 => /indigo/cedar/XYZ/XYZ.config!27
/ivy/jones/XYZ/XYZ.df!1 => /indigo/cedar/XYZ/XYZ.df!34
/ivy/jones/XYZ/XYZ.config!2

Now the user runs SModel on XYZ.df. SModel will observe the CameFrom clause and conclude that the two "directory" names need to be switched and the CameFrom replaced by ReleaseAs. SModel will now CIFS-enumerate the working directory, /ivy/jones/XYZ, looking for modified create dates. Now there seem to be two cases: either the CameFrom directory is /ivy/jones/XYZ (same as the working directory) or it is, say, /indigo/cedarlang/XYZ. In the former case, all SModel needs to do is update the DF file and call CIFS.Backup. In the latter case, SModel will have to update the DF file, call CIFS.Copy a number of times and then call CIFS.Backup. There is a problem with the latter case, smodelling to a place which is neither the ReleaseAs directory nor the working directory. In this case, SModel does not check to see if the "unmodified" files are actually there. This, however, is already true and makes changing the "CameFrom" site of a package difficult. This does not seem to be a CIFS problem. I guess SModel has a switch to take care of this?
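The working-directory state in the example above, where a locally created XYZ.config!2 shadows the Bringover link to the release file, can be illustrated with the following hypothetical lookup. The table format (link targets, or None for a real local file) and the function name are inventions for illustration, not CIFS's actual representation.

```python
# Sketch of resolving a name in a CIFS working directory like the one
# above: entries map 'name!version' either to a link target (a release
# file elsewhere) or to None (a real local file). The newest version
# wins, so a locally created version shadows an older Bringover link.

def resolve(working_dir, name):
    """Return where the highest version of 'name' actually lives."""
    versions = [k for k in working_dir if k.rsplit("!", 1)[0] == name]
    if not versions:
        return None
    best = max(versions, key=lambda k: int(k.rsplit("!", 1)[1]))
    target = working_dir[best]
    return target if target is not None else best
```

With the table from the example, resolving XYZ.config finds the local XYZ.config!2, while resolving XYZ.df still follows the link to /indigo/cedar/XYZ/XYZ.df!34.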
Please note that it does not matter very much if the dir.dir file for /ivy/jones/XYZ reflects the true state of affairs on /ivy/jones/XYZ, since bringover will create new links and Tioga will create new versions on top of whatever might have been there. Taft's suggestion: My understanding of Ed's idea, fleshed out a bit, is that a user really needs only one directory: the "local" machine. Suppose that a typical user walks up to a public dorado and retrieves, somehow, the "local-map" file, including all the links created by bringover during previous sessions. During the course of the session, various files are referenced via link, and thereby retrieved. Other files are created or modified and are made dirty. Any files that are SModeled are made non-dirty as they are transferred to wherever. At the end of a session, the user saves his "local-map" and any dirty files that have not been smodelled yet. It seems to me that this mode of operation is identical to using CIFS in a style wherein each user uses a single fixed working directory. A user setting his current working directory is the same as acquiring the "local-map". The fact that CIFS supports multiple directories and search paths is extra. Bringover creating links in the working directory is the same as creating them in a "local-map". SModel can either CIFS.Copy a modified file to its destination or CIFS.Rename a modified file. In the former case, the file would be "dirty" in both places and would be backed up to both places. In the latter case, the file would only be backed up to the smodel-destination, not to the remote copy of the working directory. At the end of a session, the user does a backup, which stores the working directory, with its modified links etc., and also stores any dirty files which have not been taken care of earlier. Comments and closing notes.
By restricting the recommended use of CIFS to situations which do not have multiple writers of remote directories, we will avoid the locking problems without removing the versatility and functionality of CIFS. Cedarlang and other big, public, mutable IFS directories should not present a big locking problem anyway, because CIFS does not treat Cedarlang as a single giant directory, but as many subdirectories. Implementors may be working in /Indigo/CedarLang/Runtime, and /Indigo/CedarLang/IOStream, and /Indigo/CedarLang/XYZ all at the same time. -Larry *start* 00659 00024 US Date: 29 April 1982 5:37 pm PDT (Thursday) From: Taft.PA Subject: Re: CIFS debates In-reply-to: Stewart's message of 29 April 1982 1:50 pm PDT (Thursday) To: Stewart, Schroeder, Gifford, Levin, TonyWest, Schmidt cc: Boggs, Taft David and I have worked out how to do the fast FTP enumerate in IFS. It is quite straightforward and shouldn't require more than a couple hours' work. I'll present the details at our meeting tomorrow. A pattern match that excludes '> seems like a reasonable suggestion, but I suspect it wouldn't fit very well into the simple KMP pattern matcher now used in IFS. Let me think a bit about that one. Ed *start* 02105 00024 US Date: 30 April 1982 5:07 pm PDT (Friday) Sender: Taft.PA Subject: FTP extension From: Taft, Boggs To: Stewart, Schroeder, Gifford, Levin, TonyWest, Schmidt cc: Boggs, Murray, Shoch, Taft We are proposing to make the following extension to the FTP protocol, and to implement it in IFS so as to substantially improve the performance of remote enumerations done by CIFS and the DFFiles software. Speak now, or forever... [Hal: you might want to see whether SDD is interested in tracking this. John: add this to your FTP file, in the unlikely event you ever revise the FTP spec.] We define a new property named "Desired-property", whose appearance is meaningful only in a [Directory] command from client to server (i.e., FTP enumerate).
The property value is the name of a file property, i.e., one of Author, Byte-size, Creation-date, Directory, Name-body, Read-date, Server-filename, Size, Type, Version, or Write-date. If no Desired-property property is present in the property list of a [Directory] command, then the server sends all known properties of each file enumerated. But if one or more Desired-property properties are present, then the server may send only the specified properties and omit the rest. For example, the command "[Directory] ((Server-filename *))" causes all properties to be sent for each file enumerated; but "[Directory] ((Server-filename *)(Desired-property Server-filename)(Desired-property Creation-date))" causes only the Server-filename and Creation-date properties to be sent for each file enumerated. (We anticipate that asking for just the Server-filename property will speed up FTP enumerations by at least a factor of 10, which will help CIFS remote directory enumerations considerably. Asking for any small subset of the properties should improve performance somewhat, since the number of unnecessary date and time conversions, number conversions, etc., will be substantially reduced; and besides which, the total amount of data will be reduced.) If you have any comments, please let us know right away. Ed and Dave *start* 00652 00024 US Date: 30 April 1982 8:18 pm PDT (Friday) From: Stewart.PA Subject: Re: FTP extension In-reply-to: Taft's message of 30 April 1982 5:07 pm PDT (Friday) To: Taft, Boggs cc: Stewart, Gifford Did you come to a conclusion regarding '> exclusion? I suppose this is more complicated though. What a CIFS level enumeration would want to know would be: Files at a certain level. "Directories" at a certain level. Where a directory might be something like (*exclude'>)>!* or something. Another alternative is *dir.dir without exclusion, which would (for a cataloged tree) find a representative of each "directory." 
-Larry *start* 00941 00024 US Date: 1 May 1982 2:15 pm PDT (Saturday) From: Taft.PA Subject: Re: FTP extension In-reply-to: Stewart's message of 30 April 1982 8:18 pm PDT (Friday) To: Stewart cc: Taft, Boggs, Gifford For '> exclusion, I plan to consult McCreight, whose KMP pattern matcher served as the original model for the one used in IFS. I'm pretty sure that a "pure" KMP pattern matcher can't handle not-character matching; but McCreight might know of a simple extension that can. This solves half the problem, namely the ability to enumerate all files at a given subdirectory level but exclude files at deeper levels. The other half (being able to enumerate the subdirectories themselves) is much harder, for the simple reason that subdirectories aren't files. Conceptually, a subdirectory is the set of files that share a common initial prefix ending in '>. There is no single file that represents the subdirectory as a whole. Ed *start* 01330 00024 US Date: 10 May 1982 4:16 pm PDT (Monday) From: Taft.PA Subject: IFS/IFSScavenger mods To: Boggs cc: Taft This is a sketch of the proposed mods we discussed today. IFSScavenger: 1. Unless the debug switch is set, the log file should contain only the information that the IFS administrator requires to recover from and/or notify users of file system damage. In particular, the "inaccessible page" messages should be suppressed. 2. When substantive damage to a file is detected (i.e., not just bad end hints), and the file is not deleted entirely, then a bit should be set in the leader page to indicate that the file is damaged. This bit will be added to the ILD.flags word in IFSFiles.decl. Also, the IFSScavenger should reset the file's protection to all zeroes so that any attempt to access the file will fail, prompting the user to examine it more carefully. IFS: 1. The List command should display the damaged status of the file. 2. The FTP server should send back the damaged status as a comment. 3.
An option should be added to Backup Reload to restore all files that are marked damaged in the primary file system plus all files that existed at the time of the last backup but are no longer in the primary file system. This fixes all repairable damage left behind by the IFSScavenger. Ed *start* 00744 00024 US Date: 12 May 1982 3:49 pm PDT (Wednesday) From: Taft.PA Subject: "Can't flush locked page" To: IFSAdministrators^ Reply-To: Taft There is a long-standing software problem that has been around since the first release of IFS over 5 years ago. The symptom is that the server falls into Swat with the error: CallSwat from xxxxxx Can't flush locked page This bug is reasonably well understood but extremely difficult to fix. It's my impression that the bug strikes so rarely as not to be worth the effort required to fix it. (For example, I don't believe this has happened more than about once a year on Ivy and Indigo.) If this bug strikes your IFS more frequently than once a year, I'd like to hear about it. Ed *start* 00265 00024 US Date: 12 May 1982 5:09 pm PDT (Wednesday) From: Hains.EOS Subject: Re: "Can't flush locked page" In-reply-to: Taft.PA's message of 12 May 1982 3:49 pm PDT (Wednesday) To: Taft.PA cc: Hains Ed, I don't believe it's ever hit Rosebowl. chuck *start* 00781 00024 US Mail-from: Arpanet host SU-SCORE rcvd at 13-MAY-82 0827-PDT Date: 13 May 1982 0827-PDT From: Mark Roberts Subject: Re: IFS bugs To: Taft at PARC-MAXC cc: Admin.MDR at SU-SCORE Ed, Our IFS (Lassen) has never died as the result of the 'locked page' bug. We have experienced another bug. The symptom is: The IFS thinks it is full (any attempt to open a Telnet or FTP connection fails with the reply 'Sorry - IFS is full, try again later') Using EtherWatch, I could see that nobody was using the IFS, yet the problem would persist until I restarted IFS. I made SYSOUT files in Swat each time this happened, but I have not examined them (IFS is a sizeable program, so I'm not sure where to look).
Any Ideas? Thanks, Mark ------- *start* 01094 00024 US Date: 13-May-82 8:57:14 PDT (Thursday) From: LaCoe.ES Subject: "Can't flush locked page" To: Taft.pa cc: Dones, LaCoe Ed, So far, Rain had a swat with subject message on 3/1/82 and then again on 5/11/82. I will message you if it happens again. Joyce ------------------------------ Date: 12 May 1982 3:49 pm PDT (Wednesday) From: Taft.PA Subject: "Can't flush locked page" To: IFSAdministrators^ Reply-To: Taft There is a long-standing software problem that has been around since the first release of IFS over 5 years ago. The symptom is that the server falls into Swat with the error: CallSwat from xxxxxx Can't flush locked page This bug is reasonably well understood but extremely difficult to fix. It's my impression that the bug strikes so rarely as not to be worth the effort required to fix it. (For example, I don't believe this has happened more than about once a year on Ivy and Indigo.) If this bug strikes your IFS more frequently than once a year, I'd like to hear about it. Ed ---------------------------------------------------------------- *start* 02966 00024 US Date: 13 May 1982 4:43 pm PDT (Thursday) From: Taft.PA Subject: Re: FTP extension In-reply-to: my message of 30 April 1982 5:07 pm PDT (Friday) To: Stewart, Schroeder, Gifford, Levin, TonyWest, Schmidt cc: Boggs, Murray, Shoch, Taft Here is what we actually implemented in IFS; it differs slightly from our original proposal. I plan to bring up the new software on Ivy and Indigo this evening. 1. Desired-Property We define a new property named "Desired-Property", whose appearance is meaningful only in a file-related command from client to server which gives rise to file property lists being sent from server to client (principally [Directory], [Retrieve], and [New-Store]). 
The property value is the name of a file property, i.e., one of Author, Byte-Size, Checksum, Creation-Date, Device, Directory, Name-body, Read-Date, Server-Filename, Size, Type, Version, or Write-Date. If no Desired-Property property is present in the property list of a file-related command sent from client to server, then the server sends all known properties of each file enumerated. But if one or more Desired-Property properties are present, then the server may send only the specified properties and omit the rest. For example, the command "[Directory] ((Server-Filename *))" causes all properties to be sent for each file enumerated; but "[Directory] ((Server-Filename *)(Desired-Property Server-Filename)(Desired-Property Creation-Date))" causes only the Server-Filename and Creation-Date properties to be sent for each file enumerated. 2. New-Directory As currently specified, the [Directory] command causes the server to generate mark-data pairs of the form "[Here-is-Property-List] (...property list...)" for each file enumerated. This results in the generation of large numbers of small packets, which in turn causes a lot of communication overhead. We have defined a new command, [New-Directory], mark type 14B, which causes the server to send all the property lists as a continuous stream not separated by marks. That is, the server's response to [New-Directory] is "[Here-is-Property-List] (...first property list...)(...second property list...) ..." In all other respects, [New-Directory] is identical to [Directory]. A client program making use of [New-Directory] should be prepared for the command to fail because the server doesn't implement it, and should automatically fall back to using [Directory] instead. 3. Results For a large enumeration on an IFS, a [New-Directory] command specifying only (Desired-Property Server-Filename) is at least a factor of 10 faster than before. The rate of enumeration is approximately 45 files per second. 
When additional Desired-Properties are specified, the rate of enumeration is correspondingly decreased, but is still much faster than before due to the [New-Directory] optimization and elimination of several implementation inefficiencies which we discovered. Ed *start* 00721 00024 US Date: 19 May 1982 5:37 pm PDT (Wednesday) From: Resnick.ES Subject: Security To: Taft.PA, Boggs.PA cc: Ramchandani, Resnick Ed and/or David; The software people in El Segundo have raised several questions concerning the security of their software creations. Tulsi Ramchandani suggested that if the IFSs had a transaction file (the user name, password and connecting alto address, the file name(s), a list of commands, etc.), we could at least monitor, after the fact, who is accessing what accounts. With enough of the right kind of information, we might have half a chance of catching a bad doer at his dirty work. By the way, the transaction file would be quite useful in all servers. Richard *start* 01142 00024 US Date: 19 May 1982 6:33 pm PDT (Wednesday) From: Taft.PA Subject: Re: Security In-reply-to: Resnick.ES's message of 19 May 1982 5:37 pm PDT (Wednesday) To: Resnick.ES cc: Taft, Boggs, Ramchandani.ES There is no log kept of accesses to IFS files, though I suppose adding one at this point wouldn't be too hard. (It would have been much harder in earlier days because IFS was so seriously short of memory.) I'll add it to the wish list, but I rather doubt it will ever be done. (By the way, I suggest referring to this as a "log" rather than a "transaction file", as "transaction" is already something of a reserved word with an entirely separate meaning.) A universal logging and accounting facility, to accompany our existing universal authentication and access control facility (Grapevine), seems like an attractive idea. To my knowledge no such work is now going on at PARC, though I do know of some ARPA-sponsored research that has been done along these lines elsewhere.
If you are concerned about this, I suggest that you try to influence the people implementing the Services software for OPD products. Ed *start* 01651 00024 US Date: 22 May 1982 1:01 pm PDT (Saturday) From: Taft.PA Subject: IFS alpha test To: IFSAdministrators^ Reply-To: Taft Version 1.35.4 of the IFS software is now available for testing. This software will be released as IFS 1.36 on Monday, June 7, if no problems are reported before that time. This is principally a maintenance release to fix a number of bugs found in previous releases. Functional changes are as follows: 1) In an IFS that uses Grapevine group checking, a user who is NOT registered in Grapevine but DOES have a login directory on the IFS is no longer automatically a member of World; his membership in World is now controlled just the same as membership in other groups (using the Change Group-Membership command). 2) The Backup Reload command may now be used to repair damage detected by the IFSScavenger. (Note: the version of IFSScavenger that supports this is not yet released.) 3) The FTP server supports some recent extensions to the FTP protocol that permit substantially improved performance in certain operations (particularly enumerations, which in certain cases are now over 10 times as fast as before); some changes to client software are required to take full advantage of this improvement. If you would like to alpha-test this version of IFS, please send me a message, and then obtain: [Indigo]1.35.4>IFS.run [Indigo]1.35.4>IFS.syms [Indigo]1.35.4>IFS.errors The "IFS Operation" document has been revised; please obtain: [Indigo]Operation.press Note: please do not test this version of IFS if you are not currently running the released IFS version 1.35. 
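The Desired-Property mechanism behind the FTP performance improvement in item 3 above (specified in detail in Taft's 13 May message) amounts to a simple server-side filtering rule, sketched here with illustrative names rather than the actual Mesa implementation inside IFS.

```python
# Sketch of the Desired-Property rule from the FTP extension: if the
# client's command property list contains no Desired-Property entries,
# the server sends all known properties of each enumerated file;
# otherwise it sends only the named ones. properties_to_send is a
# hypothetical name for illustration.

def properties_to_send(command_plist, file_props):
    """command_plist: (name, value) pairs from the client's [Directory]
    command; file_props: all known properties of one enumerated file."""
    wanted = [v for (n, v) in command_plist if n == "Desired-Property"]
    if not wanted:  # no Desired-Property present: send everything
        return dict(file_props)
    return {k: v for k, v in file_props.items() if k in wanted}
```

For the example in the spec, a command property list containing (Desired-Property Server-Filename) and (Desired-Property Creation-Date) would cause only those two properties to be sent per file.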
*start* 00821 00024 US Date: 27 May 1982 4:58 pm PDT (Thursday) From: Taft.PA Subject: IFS alpha test: 1.35.5 To: IFSAdministrators^ Reply-To: Taft For those of you testing (or thinking about testing) the new IFS, please use: [Indigo]1.35.5>IFS.run [Indigo]1.35.5>IFS.syms [Indigo]1.35.5>IFS.errors This version fixes another bug in the logic for setting Grapevine group names. (The bug was present in the released IFS 1.35 as well, and may explain several reported occurrences of the group name table ending up in the wrong state -- occurrences which I had previously attributed to probable operator error.) Also, in response to several questions: this IFS release fixes all known bugs, with the exception of the "Can't flush locked page" bug about which I sent a message several weeks ago. Ed *start* 02037 00024 US Date: 5 June 1982 3:20 pm PDT (Saturday) From: Taft.PA Subject: IFS 1.36 To: IFSAdministrators^ Reply-To: Taft Version 1.36 of the IFS software is released. It has been running for over a month on Ivy and Indigo and for two weeks on three other servers with no unsolved problems. This is likely to be the last IFS release for a very long time. This is principally a maintenance release to fix a number of bugs found in previous releases. All known bugs have been fixed with the exception of the very rare "Can't flush locked page" bug, which we have decided not to try to fix. Functional changes are as follows: 1) In an IFS that uses Grapevine group checking, a user who is NOT registered in Grapevine but DOES have a login directory on the IFS is no longer automatically a member of World; his membership in World is now controlled just the same as membership in other groups (using the Change Group-Membership command). 2) The Backup Reload command may now be used to repair damage detected by the IFSScavenger. (Note: the version of IFSScavenger that supports this is not yet released. 
In the absence of IFSScavenger support, the IFS Backup Reload command will restore missing files but will not replace damaged ones; you must first delete damaged files manually, using the IFSScavenger.log as a guide.) 3) The FTP server supports some recent extensions to the FTP protocol that permit substantially improved performance in certain operations (particularly enumerations, which in certain cases are now over 10 times as fast as before); some changes to client software are required to take full advantage of this improvement. (The protocol extensions are described in the revised FTP specification, filed as [Maxc]FTPSpec.press.) The software is available from the usual place: [Maxc]IFS.run [Maxc]IFS.syms [Maxc]IFS.errors The "How to use IFS" and "IFS Operation" documents have been revised; please obtain: [Maxc]HowToUse.press [Maxc]Operation.press *start* 00583 00024 US Date: 10 June 1982 9:44 am PDT (Thursday) From: Fiala.PA Subject: Re: IFS Protection Bug In-reply-to: Taft's message of 9 June 1982 6:47 pm PDT (Wednesday) To: Taft cc: Fiala, Boggs, Kolling, MBrown In your reply, I think you missed one point which I tried to make. At the moment, I think some directories are set up so that people can add new files to them but not modify old files. This allows a directory to be written into by semi-trusted individuals. However, the result is that an individual who creates a file winds up being unable to delete it. *start* 00652 00024 US Date: 30 June 1982 4:28 pm PDT (Wednesday) From: Deutsch.PA Subject: Boot servers To: Murray cc: Boggs, Taft, TonyWest I've acquired the boot server code, and although I think I can extract what I need from it, it would serve us better in the long run if you could simply implement the Sun boot protocol in the standard boot server. From the description I sent you (forwarded from a third party) it looks pretty trivial to me. 
Incidentally, I assume that the reference to the "well known socket" being 0303 rather than 0244 is wrong, and that it is the Pup type, not the socket, that is different, but I will check on this. *start* 01163 00024 US Date: 30 June 1982 4:50 pm PDT (Wednesday) From: Taft.PA Subject: Re: Boot servers In-reply-to: Deutsch's message of 30 June 1982 4:28 pm PDT (Wednesday) To: Deutsch cc: Murray, Boggs, Taft, TonyWest Hal showed me that message, but I don't have a copy of it. If I remember correctly, the changes are a different Pup type and optional identification of the boot file by file name instead of by boot file number. Although I was hoping not to change IFS again for a long time, it seems fairly clear that including support for the Sun boot protocol in the IFS boot server would be considerably more useful than doing so in the Alto Gateway. This would enable a user to boot arbitrary files stored on the IFS, as opposed to being limited to ones that have been "installed" by a system administrator. (I assume one of the standard Sun boot files is a NetExec-like program that interacts with the user and permits specification of an arbitrary boot file to be invoked by name. If my model of this is wrong, please enlighten me.) If you're interested in this, please give me complete details and I'll look into implementing it in IFS. Ed *start* 01502 00024 US Date: 30 June 1982 5:04 pm PDT (Wednesday) From: Deutsch.PA Subject: Re: Boot servers In-reply-to: Taft's message of 30 June 1982 4:50 pm PDT (Wednesday) To: Taft cc: Murray, Boggs, TonyWest Actually, the Sun comes with a ROM that includes a tiny Exec/DDT. One of the commands to this Exec is to do a network boot, with the name specified in the command. Here is the story on the Sun boot protocol. It is the same as Alto boot protocol with the following exceptions: ** The Pup type is 303B rather than 244B. 
** While the boot file ID in the Alto world is only supplied by the PupID field of the request packet, the SUN protocol permits a boot file pathname in the data field. In other words, if the PupLength field in the request packet is non-zero, the bootfile's pathname is in the PupData field of the request packet (which contains a string PupLength long). If the length field is zero, the PupID field identifies the file, as in the Alto protocol. The SUN ROM monitor ALWAYS uses this "explicit pathname" protocol. The ROM monitor network boot code makes no assumptions about the format of the string to be transmitted in the boot file request. It is included in the request packet exactly as supplied by the user (from the keyboard). We (SCG, and Alan Bell who is also getting a Sun) would be absolutely delighted if you could implement the boot protocol in IFS as you suggested, especially if it happened soon (i.e. within the next month or so). *start* 01077 00024 US Date: 30-Jun-82 23:03:28 PDT (Wednesday) From: Murray.PA Subject: Re: Boot servers In-reply-to: Deutsch's message of 30 June 1982 4:28 pm PDT (Wednesday) To: Deutsch cc: Murray, Boggs, Taft, TonyWest Fine print: There are actually several programs around PARC that include boot servers. Aside from IFS and the AltoGateways, there are also Pilot Gateways, and Peek. I'm not very interested in changing the Alto Gateways. On the other hand, it wouldn't be very hard to convince me to make the additions to the Pilot Gateway, but that would be to the Trinity version rather than the Rubicon version that Masinter is using. (It's not a big deal to update his machine unless he is also using Rubicon for something else, for example Cedar.) If you are building your own boot server, it might be simpler to start with the sources for the Peek version. Have you considered patching the SUN rom to use the protocols our servers already implement? 
We can easily assign you a specific number to get off the ground and/or a clump of numbers if that helps. *start* 01975 00024 US Date: 11 July 1982 11:07 am PDT (Sunday) From: Taft.PA Subject: Re: Boot servers In-reply-to: Deutsch's message of 30 June 1982 5:04 pm PDT (Wednesday) To: Deutsch cc: Taft, Murray, Boggs, TonyWest I started to think about the Sun boot server in more detail; and I thought I sent a message about it, but apparently I didn't. Two main questions come to mind. 1. When a boot file is requested by file name, how is the desired server identified? Is the request sent to the specific server that has the file? Or is the request broadcast, and the server identified as part of the enclosed file name? 2. How should we handle access control? Being able to boot an arbitrary file by name provides a way of gaining read access to any file in the IFS if we aren't careful. Several possible approaches come to mind: (a) Only permit booting of files from a specific directory. This centralizes the administration of boot files, and eliminates the main advantage of the booting-by-name scheme (our existing mechanism is just as good if not better). (b) Only permit booting of files whose names match some specific pattern. (c) Accept only requests from the directly-connected Ethernet for files that give read permission to World. (d) Extend the protocol so that the user's credentials are presented as part of the boot file request. Now, strictly speaking, only (d) really meets the electronic information security policy; and I think it's the most desirable approach if it's practical. If you are in a position to change the Sun's ROM executive, then I think this is the way to go. Even if you can't change the ROM executive, it could be used to boot some sort of NetExec-like program, which then obtains the user's credentials and in turn boots an arbitrary user-specified file. 
However, if you feel this is unreasonable, I am willing to go with (a), (b), or (c), or some combination of them. Any further thoughts or suggestions? Ed *start* 01404 00024 US Date: 12 Jul 1982 01:28 PDT From: Deutsch at PARC-MAXC Subject: Re: Boot servers In-reply-to: Taft's message of 11 July 1982 11:07 am PDT (Sunday) To: Taft cc: Deutsch, Murray, Boggs, TonyWest, Schiffman@SRI-KL, Hagmann, Braca I think the Sun people assumed that there would only be one boot server per Ethernet, and just broadcast the boot request packet on the directly connected Ethernet, as does the Alto. I think the questions you raise of security for booting named files are valid, and a potential can of worms. In response, I have an alternative proposal to make, that doesn't require you to implement booting from named files at all, and only makes life slightly more of a nuisance for us: if a Sun-style boot request comes along, require that the "file name" be a sequence of octal digits, and treat it as though it were an existing-style boot request. This allows us to use your current mechanisms for updating boot files, including authentication. All I then ask is that you assign some modest number of boot file IDs to us (16 is probably plenty), and arrange things so that we (Deutsch, Schiffman, Hagmann, and Braca) can update them. I assume that there is some way to arrange things so that the latency between storing a new version of a numbered boot file and being able to boot from it can be made small (seconds rather than minutes or hours). Comments? *start* 01424 00024 US Date: 12 Jul 1982 11:02 PDT From: Taft at PARC-MAXC Subject: Re: Boot servers In-reply-to: Deutsch's message of 12 Jul 1982 01:28 PDT To: Deutsch cc: Taft, Murray, Boggs, TonyWest, Schiffman@SRI-KL, Hagmann, Braca I guess I'm confused now. In what way is booting files by "names" which are octal numbers any better than the current scheme using boot file numbers? I think I can do a little better than that, at least in IFS. 
Assuming we restrict the boot server to give out "installed" boot files, then it's perfectly easy to permit the file to be identified by name as well as by number. Installing boot files on IFSs is done simply by using FTP to store them in Boot>. It's not possible to set up the access controls to quite the precision you might like. Probably the best thing that can be done is for the IFS administrator to create a new group and give that group create access to and write access to the specific boot files in question. The create permission allows you to store new versions of the boot files, and the write permission allows you to delete old versions. (This works because protections are inherited by new versions of existing files.) Currently, using FTP to store files in Boot> does not cause the boot server to recognize their existence immediately but only after a timeout of up to 8 hours; however, this should be easy to change. Ed *start* 00662 00024 US Date: 12 Jul 1982 11:58 PDT From: Deutsch at PARC-MAXC Subject: Re: Boot servers In-reply-to: Taft's message of 12 Jul 1982 11:02 PDT To: Taft cc: Deutsch, Murray, Boggs, TonyWest, Schiffman@SRI-KL, Hagmann, Braca The only advantage of booting by "names" which are octal numbers is that it doesn't require changing the Sun ROM for us to get started. Enabling the IFS boot server to give out installed files by name is a fine compromise. If you can change the IFS boot server so it accepts the names of installed files, and so it recognizes new versions immediately, that will take care of our needs eminently well. Thanks again. P. *start* 01639 00024 US Date: 12 Jul 1982 18:45 PDT From: Taft at PARC-MAXC Subject: Re: Boot servers In-reply-to: Deutsch's message of 12 Jul 1982 11:58 PDT To: Deutsch cc: Taft, Murray, Boggs, TonyWest, Schiffman@SRI-KL, Hagmann, Braca OK, I have put in the new boot server. I have tested it only superficially; but the changes were pretty simple and I don't expect any trouble. 
Ask your IFS administrator to obtain and run: [Indigo]1.36.6>IFS.run [Indigo]1.36.6>IFS.syms [Indigo]1.36.6>IFS.errors Any boot file that may be booted by number may now also be booted by name, by supplying the name string as data in a Pup of type 303B. Boot files stored on IFS have names of the form Boot>number-name!version, where number is the boot file number in octal and name is the name by which the boot file is known by users (for example, Boot>10-NetExec.boot!3). That is, a Sun user could simply ask for "NetExec.boot" (assuming that was a boot file you could run on the Sun). The boot file number is not needed when requesting the boot file, but must still be present in the IFS file name to control boot file propagation. I suggest that for initial experimentation you assign boot file numbers greater than 100000 octal for your Sun boot files; these numbers are private to each server, and boot files with these numbers never propagate to other servers. Let me know when you want some centrally-assigned boot file numbers that enable files to propagate. Storing or deleting a file in Boot> should be noticed by the boot server within 30 seconds, unless the IFS is extremely busy. Ed *start* 02047 00024 US Date: 29 July 1982 5:34 pm PDT (Thursday) From: Boggs.pa Subject: IFS Scavenger To: IFSAdministrators^ Reply-To: Boggs.pa A new version of the IFS Scavenger is now available. Retrieve: [Indigo]IFSScavenger.run [Indigo]IFSScavenger.syms. This is a maintenance release; there are no documentation changes. When it starts it should say "IFS Scavenger of July 27, 1982". Three things were changed: 1) When the Scavenger has to create or extend a critical system file (e.g. IFS.Dir), it now does it using the minimum number of page runs each as large as possible. This should eliminate the following problem: suppose a hard disk error causes the Scavenger to truncate or delete IFS.dir. Further assume that the file system is old (i.e.
the free pages are scattered all over the pack) and that the file system is nearly full (disk usage expands to fill the available space). Before this change, the recreated directory file would consist of zillions of 1 or 2 page runs. This would run IFS's file map out of space (causing weird crashes) unless you always started IFS (and the Scavenger) with several /Fs. 2) When the Scavenger finds a damaged file, it sets a bit in the file's leader page. LISTing this file from Chat will then display ** Damaged ** after the filename. A file system RELOAD (a backup system option) will automatically restore damaged files from backup. This feature was added to IFS 1.36, but I was busy doing other things at the time and didn't get around to putting the damage marking logic into the Scavenger until now. 3) The Scavenger no longer prints "[1-3] Inaccessible page nnn" messages unless the debug flag is set. If a big file (like IFS.Dir) got clobbered, these messages (one per page, along with some other info) often ran the model 31 disk (where the log is kept) out of space, causing other more important error messages to be lost. While I have your attention: Ed Taft is on vacation until at least 17 Aug, so if you need IFS help call ME at 8*923-4421. /David *start* 01252 00024 US Date: 5-Aug-82 14:29:56 PDT From: Schroeder.pa Subject: Cherry's problems To: JWhite.pa cc: Brotz, Transport^.ms My appeal for information has paid off already. Because of the first response I was able to determine that Cherry had failed to discover whether a PA user was in the IFS's "World" group. This group is defined to be "USRegistries^.internet". Further checking showed me that the small "Internet" registry is known by 12 out of the 13 Grapevine servers, with Chardonnay being the sole exception. So, I've added the Internet registry to Chardonnay by brute force means that will only work for a small inactive registry. This will make a definition for "World" exist on the same net with Cherry.
Since almost all files on Cherry have read access permitted to World, I suspect this change will make the problem you've been having occur much less frequently. I also suspect that there is a bug in the IFS code that does access control checks with Grapevine. It may not respond very well whenever the Grapevine server rejects the connection attempt. I'll have Taft look into this when he gets back.  In the meantime, the above change will alleviate (sp?) the problem. Please let me know how it goes. Mike Schroeder *start* 00460 00024 US Date: 9 Aug. 1982 10:44 am PDT (Monday) From: Fikes.PA Subject: IFS Nonfeature To: Taft,Boggs cc: Masinter, Fikes I discovered another problem involving nearly full file systems. Namely, I attempted to restore from backup a large file for which there was not enough space. The result was that Phylum went into Swat saying that one of the disk drives was full, etc. Some more graceful reaction to that situation is needed. Phyques *start* 01121 00024 US Date: 9 Aug. 1982 2:34 pm EDT (Monday) From: Denber.WBST Subject: IFS Mystery To: ifsadministrators^.pa Reply-To: Denber I recently started getting complaints from people who couldn't connect to directories other than their own on Ice, our file server. It turned out that the account "default-user" had "none" specified in the group membership field and that had been carried over into all of the accounts I created. All was fine as long as Grapevine was running, according to the explanation I got, but when it went down, IFS checked the groups to determine whether or not to allow access. Result: access denied. That problem was indeed fixed by going into everyone's account (including default-user) and changing their group membership from none to "world". Now here's the mystery: a few days after making the change, I started getting the same complaints again. Sure enough, somehow all the group memberships got changed back to "none". I changed them again, and the same thing happened once more. 
How can I get these changes to stick? We're running IFS 1.36L. Thanks. - Michel *start* 01565 00024 US Date: 20 Aug. 1982 4:07 pm PDT (Friday) From: Taft.PA Subject: Re: IFS Mystery In-reply-to: Denber.WBST's message of 9 Aug. 1982 2:34 pm EDT (Monday) To: Denber.WBST cc: ifsadministrators^.pa Reply-To: Taft When you have Grapevine authentication and access control enabled on an IFS, the "group membership" information associated with individual accounts is used as a cache of results of recent Grapevine requests. It is no longer useful to set it manually, except on a temporary basis, since it is reset every time a successful Grapevine authentication occurs. Thereafter, group membership is recomputed only as needed. IFS will remember that a user is a member of some group only if he has exercised that group membership since he was most recently authenticated by Grapevine, which is at most 12 hours previously. It should not matter whether or not your local Grapevine server is up, since IFS can locate any operating Grapevine server. Of course, if some gateway or phone line is also down and makes all the other Grapevine servers inaccessible to your IFS, you are pretty much out of luck. But then, in that situation your local users won't be getting any mail service either. The only way to improve availability of both services is to have more local Grapevine servers. I suppose IFS could do some longer-term caching of the group membership information obtained from Grapevine (as it does now for authentication information). I will add that to my IFS wish list, but I'm not likely to get to it any time soon. Ed *start* 01295 00024 US Date: 21 Aug. 1982 4:26 pm PDT (Saturday) From: Taft.PA Subject: Re: Cherry's problems In-reply-to: Schroeder's message of 5-Aug-82 14:29:56 PDT To: Schroeder cc: Transport^.ms I tested your hypothesis that IFS fails to take correct action when a Grapevine registration server rejects a connection attempt.
I disabled Cabernet's registration server and then performed an operation on Ivy that required Ivy to consult Grapevine for a group membership check. I observed Ivy with PupWatch, and things happened just as I expected they would. Ivy tried Cabernet first, but was rejected; so it then tried the next-nearest Grapevine server for the PA registry (usually Zinfandel) and was successful. I suspect that something more subtle is going wrong with Cherry, perhaps involving timeouts. I should point out, however, that once an R-Server connection is open, IFS waits for up to 2 minutes for a reply to a request (the timeouts for getting the connection open are much shorter). The only other possibility is that Cherry is having a terrible time talking to Cabernet, perhaps due to some communication difficulty involving Twinkle. I can't see any evidence of such difficulty, however; Twinkle can echo to both Cabernet and Cherry with no trouble. What next? Ed *start* 01192 00024 US Date: 22 Aug. 1982 5:12 pm PDT (Sunday) From: Taft.PA Subject: Sequin bug? To: Wobber cc: Boggs, Taft I've been hunting for a bug that once caused Ivy to crash because the PBI queues had become scrambled. The symptoms are suggestive of what might happen if you Enqueued a PBI onto the same queue twice. I examined all the PBIs in the system with some care and found a few Leaf PBIs. So I read through the Sequin code briefly and found what appears to be a bug. In IfsSequinSwap.bcpl, HandlePBI:

   case sequinDestroy:
      [
      sequin>>Sequin.state = dallyingState;
      SequinAnswer(sequin, pbi, sequinDallying);
      docase sequinNop;
      ]
   case sequinCheck:
   case sequinNop:
      SequinAnswer(sequin, pbi, sequinAck);
      endcase;

Now, SequinAnswer calls CompletePBI on the supplied pbi unconditionally. It appears, then, that if a sequinDestroy occurs, SequinAnswer (and therefore CompletePBI) is called twice on the same pbi, because of the "docase" at the end of the sequinDestroy case. Am I missing anything? If not, what should this code really do?
Why has it ever worked? Is it possible that nobody has ever used sequinDestroy until now? Ed *start* 00301 00024 US Date: 23-Aug-82 11:53:19 PDT (Monday) From: Wobber.pa Subject: Re: Sequin bug? In-reply-to: Taft's message of 22 Aug. 1982 5:12 pm PDT (Sunday) To: Taft cc: Wobber, Boggs Gad! What a bug ... I'm glad your code is so resilient. "docase SequinNop" should be "endcase". /Ted *start* 01432 00024 US Date: 14 Nov. 1982 11:36 am PST (Sunday) From: Taft.PA Subject: IFS 1.37 alpha test To: IFSAdministrators^ Reply-To: Taft A pre-release version of IFS 1.37, called IFS 1.36.10, is now available for test. This has been running on Ivy and Indigo for over a month with no problems encountered. Changes since IFS 1.36 are as follows. The boot server can now boot-load Sun workstations; also, new boot files installed by manual means (e.g., FTP) are now noticed immediately instead of after a delay of up to 8 hours. A new server, LookupFile, is included and may optionally be enabled. The backup system now checks the disk usage total of each directory, and fixes it if it is incorrect. Additionally, internal changes in the VMem software have resulted in a modest performance improvement and elimination of the long-standing "Can't flush locked page" bug. Complete information may be obtained from the revised "IFS Operation" document, which is [Indigo]Operation.press. If you would like to test this software, please send me a message, and then obtain the following files: [Indigo]1.36.10>IFS.run [Indigo]1.36.10>IFS.syms [Indigo]1.36.10>IFS.errors For correct error reporting, please be sure you have the latest Sys.errors, which may be obtained from [Indigo]Sys.errors. If no problems are reported, this software will be released as IFS 1.37 on Monday, November 29. *start* 00719 00024 US Date: 24 Nov. 1982 12:57 pm EST (Wednesday) From: dgustafson.XRCC Subject: Re: IFS 1.37 alpha test In-reply-to: Taft.PA's message of 14 Nov. 
1982 11:36 am PST (Sunday) To: Taft.PA cc: dgustafson Ed, I installed the files ifs.run,ifs.syms,ifs.errors and sys.errors from indigo on Aklak and it seemed to start okay but later on in the day some people were unable to retrieve their mail. When they started up laurel it said that they had new mail but when they bugged mail file it would come back with an error message that said "mail not retrieved unknown error". I then tried a number of other accounts and disks with the same problem. When I reinstalled ifs.135 the problem disappeared. Don *start* 01022 00024 US Date: 25 Nov. 1982 10:25 am PST (Thursday) From: Taft.PA Subject: Rename bug To: Karlton, Davirro cc: Taft I have found the Rename bug. It is caused by a race between the Rename and other directory activity. In order to provoke the bug, it is necessary for IFS to consult Grapevine for access control (relatively uncommon since IFS keeps a cache of Grapevine information) AND for some other client to change the directory while the first process is waiting for Grapevine. In this situation, the file itself is correctly renamed (i.e., the name changed in the leader page) and the old name deleted from the directory, but the pointer to the file is not correctly written into the new directory entry before the entry is inserted into the directory. Instead, the entry contains the pointer left over from the previous lookup of the new name; so you end up with two versions of the new name pointing at the same file, and the renamed file not being pointed to by any directory entry at all. Ed *start* 02774 00024 US Date: 28 Nov. 1982 1:25 pm PST (Sunday) From: Taft.PA Subject: IFS 1.37 To: IFSAdministrators^ Reply-To: Taft Version 1.37 of the IFS software is released. It has been running for over 6 weeks on Ivy and Indigo and for shorter intervals at several alpha-test sites with no unsolved problems. Changes since IFS 1.36 are as follows. 
The boot server can now boot-load Sun workstations; also, new boot files installed by manual means (e.g., FTP) are now noticed immediately instead of after a delay of up to 8 hours. A new server, LookupFile, is included and may optionally be enabled. The backup system now checks the disk usage total of each directory, and fixes it if it is incorrect. Additionally, internal changes in the VMem software have resulted in a modest performance improvement and elimination of the long-standing "Can't flush locked page" bug. Several bugs turned up during alpha testing and have been fixed. It was possible to hang the system by changing backup parameters at just the wrong time (this was a very long-standing bug). The mail server had stopped working altogether (I suspect it didn't work in IFS 1.36 either; more on this below). There was a very low-probability bug in Rename which caused occasional flakey behavior during heavy use (e.g., Brownie moving files en masse from one directory to another); this bug dates from IFS 1.35, and has had two symptoms: file system inconsistencies such as having two directory entries for the same file, and occasional crashes after Rename. Complete information may be obtained from the revised "IFS Operation" document, which is [Maxc]Operation.press or [Indigo]Operation.press. The "How to use IFS" document is unchanged. Software may be obtained from either [Maxc] or [Indigo]1.37>, and consists of files IFS.run, IFS.syms, and IFS.errors. For correct error reporting, please be sure you have the latest Sys.errors, which may be obtained from [Maxc] or [Indigo]. (Alpha-testers: if you are running IFS 1.36.10, you should convert to IFS 1.37 at this time.) While I am on the subject, I should mention that we would like to discontinue support for the IFS mail server altogether in the near future. There are only a very few sites which have not yet converted to Grapevine and are relying on IFS mail servers. 
Maintaining this software, and maintaining the Grapevine software that provides compatibility with the old MTP protocol used by IFS, is a burden which is no longer justified by the amount of use this software receives. Therefore, sites which are presently using IFS for mail service should begin planning to install a Grapevine server or to make arrangements to keep local mailboxes on some existing Grapevine server. *start* 00396 00024 US Date: 6-Dec-82 10:31:39 PST (Monday) From: LaCoe.ES Subject: IFS Rain To: Taft.pa cc: LaCoe Hi Ed, Rain was in swat this morning with: Attempt to access non-existant Leaf virtual pbi #2. The files are on [Maxc2] Rain12-6-82.swat and RainDumpRam.Run12-6-82. Do you know what the problem was? Thanks in advance for any information you can supply us. Joyce *start* 00360 00024 US Date: 9 Dec. 1982 1:38 am PST (Thursday) From: Deutsch.PA Subject: Sun boot file To: Taft The file [Phylum]boot>110010-st68k.boot seems to have gotten mysteriously truncated to 130K bytes. It started out life at about 700K bytes, and when I booted it onto the Sun this afternoon it seemed to be all there. What's going on?? *start* 00922 00024 US Date: 9 Dec. 1982 8:41 am PST (Thursday) From: Taft.PA Subject: Re: Sun boot file In-reply-to: Your message of 9 Dec. 1982 1:38 am PST (Thursday) To: Deutsch cc: Taft That is what I was afraid would happen, though I'm surprised it took so long. The boot server noticed that this was a type S boot file, which it dutifully transformed into a type B boot file by rearranging pages. Type S boot files always have about 130K bytes... Two things will need to be done. 
You will need to prefix one overhead page to the boot file, containing the stuff the boot server wants to see, namely:

structure BootHeader:
[
blank word
type word       // zero => type B, nonzero => type S
blank word 2
date word 2     // creation date of boot file (BCPL format)
]

And I will need to modify the boot server so that when it responds to a Sun boot request it skips the first page of the boot file. Ed *start* 01148 00024 US Date: 10 Dec. 1982 10:03 am PST (Friday) From: Taft.PA Subject: Boot server change To: Deutsch cc: Taft I can't make the inclusion or omission of the overhead page be a property of the boot file, for the reason I described earlier. Namely, boot file update is done by the same protocol as boot loading, and the server can't tell the difference. Boot file update requires that the overhead page be included, since it contains the creation date on which the update depends. So I am returning to my original scheme of omitting the overhead page only if the file was requested by a Pup of type "Sun boot request". Note that this request may designate the boot file either by name or by number: by name if the body of the Pup is non-empty, by number otherwise. If you prefer, I will arrange to omit the overhead page only if the file was requested by a "Sun boot request" AND the boot file contains -1 in the type word. The only requirement is that the overhead page always be present during boot file update. Independent of this, I will change the boot server not to reformat files whose types lie outside [1..377B]. Ed *start* 00452 00024 US Date: 10 Dec. 1982 3:15 pm PST (Friday) From: Deutsch.PA Subject: Re: Boot server change In-reply-to: Your message of 10 Dec. 1982 10:03 am PST (Friday) To: Taft If you change the boot server not to reformat files whose types lie outside [1..377B], there is no need to make the other change, since Sun boot files (and indeed all x.out files) have a number between 400B and 477B in the type word, which is in fact a type code.
*start* 00314 00024 US Date: 10 Dec. 1982 5:25 pm PST (Friday) From: Taft.PA Subject: Re: Boot server change In-reply-to: Your message of 10 Dec. 1982 3:15 pm PST (Friday) To: Deutsch cc: Taft I'm happy to do it this way, so long as it's OK for words 4 and 5 of the boot file to contain the creation date. Ed *start* 00799 00024 US Date: 23 Dec. 1982 9:51 am PST (Thursday) From: Deutsch.PA Subject: Re: Boot server change In-reply-to: Taft's message of 12 Dec. 1982 11:59 am PST (Sunday) To: Taft cc: Fikes Sigh, I looked through the documentation for b.out files, and they will NOT be happy with arbitrary stuff in words 4 and 5. So I'm afraid we need to go back to the original plan, which is: - don't reformat files whose type is outside [1..377B] - strip off the first page when responding to a boot request if the type is some distinguished value (I suggest -1) I'm still not sure I fully understand the way that boot file propagation gets done, but if the first page only gets stripped for a "by-name" rather than a "by-number" boot request, that's fine too. Thanks a lot for your help. P. *start* 00872 00024 US Date: 27 Dec. 1982 12:26 pm PST (Monday) From: Fikes.PA Subject: Question About IFS Date and Time To: Taft,Boggs cc: Fikes,Trow Could one of you respond to this? thanks, Phyques --------------------------- Date: 24 Dec. 1982 11:55 am PST (Friday) From: Trow.pa Subject: Wrong date on Phylum To: Fikes cc: Trow Phylum was down from 4:30 pm to 10 pm on 22 Dec due to power outage. When it was restarted, date and time was not available from the Ethernet. Consequently, it used 31 Oct 32 as its date until 10:55 am on 24 Dec when someone was finally found who could issue a Reset Time command. This probably has some interesting effects on file backup and FTP commands that check write dates. Is there a reason why it doesn't compare itself to network time once in a while? 
Jay ------------------------------------------------------------ *start* 00260 00024 US Date: 7-Jan-83 15:36:14 PST From: Paxton.pa Subject: T300 for IFS To: Ornstein, Taft cc: Pier, Paxton ISL has allocated $11K in our 83 capital budget for a T300 to beef up Indigo or Ivy. Let me know when you'd like to collect. Bill *start* 00898 00024 US Date: 22 Jan. 1983 11:57 am PST (Saturday) From: Taft.PA Subject: IFS boot server To: Deutsch cc: Fikes, Taft I have (finally) made the boot server change you requested. The server now skips sending the first page of the file if the boot was requested by a "Sun boot request" (Pup type 303B), regardless of whether the request specified the boot file by name or by number. Boot files whose types are outside [1..377B] are never reformatted. So I suggest you prefix to your standard Sun boot files a page containing a type word of your choice outside [1..377] and the date on which the boot file was built. Please install [Indigo]1.37.2>IFS.run, IFS.syms, and IFS.errors on your file server. I have tested this software to the extent of verifying that the boot server still works in the normal fashion, but have not actually tried out the new functionality. Ed *start* 01388 00024 US Date: 8 FEB 1983 1525-PST From: DEKLEER Subject: IFS BUG? To: taft cc: dekleer When I try to set the creation-date on a new-store, it works, but not on the plist it returns initially. 
(setq z (funcall x ':open x ':direction ':out ':creation-date time))
U: [NewStore]((User-name dekleer)(User-password xxx)(Directory CONS>JOHAN)(Name-body DELETE.ME)(Type Text)(End-of-line-convention CR)(Creation-date 1-Jan-83 00:00:00 PST))
U: [End-of-Command]
S: [Here-is-Plist]((Author DeKleer.PA)(Creation-date 8-Feb-83 15:21:37 PST)(Device Primary)(Directory CONS>JOHAN)(Name-body DELETE.ME)(Server-filename JOHAN>DELETE.ME!18)(Version 18)(Write-date 8-Feb-83 15:21:37 PST))
S: [End-of-Command]
U: [Here-is-File] #
(tyo 1 z)
1
(close z)
U: [Yes] <0> Transfer Complete
U: [End-of-Command]
S: [Yes] <0> Store completed
S: [End-of-Command]
Releasing ftp connection with phylum
T
(fs:directory-list x)
U: [New-Enumerate]((User-name dekleer)(User-password xxx)(Directory CONS>JOHAN)(Name-body DELETE.ME)(Version H))
U: [End-of-Command]
S: [Here-is-Plist]((Author DeKleer.PA)(Creation-date 1-Jan-83 0:00:00 PST)(Device Primary)(Directory CONS>JOHAN)(Name-body DELETE.ME)(Server-filename JOHAN>DELETE.ME!18)(Size 1)(Type Text)(Version 18)(Write-date 8-Feb-83 15:21:37 PST))
S: [End-of-Command]
Releasing ftp connection with phylum
------- *start* 01840 00024 US Date: 20 APR 1983 1515-PST From: DEKLEER Subject: IFS To: taft I'd like to throw a couple of things into the wish-list for the next time (if ever) you change IFS code for FTP. [1] The only way I know of changing properties of a file (aside from its name) is to copy the file from phylum and back again. It would be really convenient for all my imported M.I.T. software if there were some way to change the properties of a file. I've tried using rename to do so, but it only changes the name. Not the author, creation date, creation time, checksum, etc. (The standard practice at M.I.T. seems to be to create an absolutely random file with a random name and then set the properties, date and true author after or during the file is being written. --- This problem comes up most acutely for me for their software release software.)
The Symbolics/MIT people refuse to change their software for my sake, as IFS are the only file servers they have ever had to connect machines to which cannot change file properties after the file exists. [2] This is probably more work. It would be very useful to be able to store more information with the file properties; such as whether it has been/should be archived, its type, its real author, who read it last, etc. Right now I use the 16 bits of checksum to store such information (since IFS ignores those 16 bits I can use them for anything I want). It would be useful to have a few 16 bit words explicitly allocated to this purpose, or more generally an arbitrary updatable string of 8 bit bytes which can be associated with IFS files. (Putting such information into an invisible prelude defeats the purpose, because it takes too long to read when accessing an entire directory, and because parts of files cannot be read or updated in place.) Thanks, Johan ------- *start* 00934 00024 US Date: 4 May 1983 5:11 pm EDT (Wednesday) From: Sperry.WBST Subject: Possible T-300 Problem To: IFSAdministrators^.pa cc: Sperry Reply-To: Sperry We recently suffered a head crash on a T-300 and the cause of the problem was traced to the air system boot. The air system boot is a rubber duct which is used to conduct the air from the plenum outlet (i.e. the top cover for the absolute filter) to the air shroud which surrounds the pack. It is possible to inspect the boot by removing the front cover of the disk-drive, and checking the rubber boot which is located near the top left hand side. The corners of the rubber boot apparently split from the constant flexing caused by the airflow through it. On inspection, we found two other drives which had the same problem. Both drives were several years old, and it appears that they have changed the composition of the rubber on the newer drives.
Bob Sperry *start* 01740 00024 US Date: 6 May 83 12:16:47 PDT (Friday) From: Hanzel.ES Subject: IFS & multiple volumes questions To: Taft.PA cc: Hanzel Ed, I am directing this message to you without knowing an IFS expert with whom I should be speaking; please feel free to redirect this message to a more appropriate person, if necessary. We are beginning to wrestle with the implementation issues concerning multiple server volumes and the IFS provides us with a working example against which to gauge any proposed design. I am particularly interested in the way in which IFS allocates files among its online packs so as to level the load among the volumes (it really IS doing this isn't it?). That is, if an IFS is initialized as a 4-pack system, it does not simply store files as they are received onto the first volume it can, but rather distributes the load across the available volumes. Another issue is the scavenging/backup of individual packs. Is it possible to scavenge or backup a particular pack without having to do all of them? What aspects of the implementation make this possible? (It seems awfully desirable). Are there restrictions placed on the distribution of files on the volumes (e.g. all files of a given directory must coreside on the same pack)? Lastly, what about the ability to grow or shrink the number of packs? I understand that to do this the IFS must backup everything and restore to the new number of volumes instead of providing an in-place mechanism. Was this necessary because the IFS was not ambitious enough to want to support this (which I believe is reasonable not to have wanted to) or was some aspect of the design the inhibiting factor? Any input you can provide will be appreciated. 
--Frank *start* 02554 00024 US Date: Fri, 6 May 83 13:36 PDT From: Taft.PA Subject: Re: IFS & multiple volumes questions In-reply-to: "Your message of 6 May 83 12:16:47 PDT (Friday)" To: Hanzel.ES cc: Taft Allocation of files to volumes: IFS simply creates a new file on the emptiest volume. That is all. Scavenging: this is really a two-stage operation. The first stage operates on one volume at a time. It ensures that all files are well formed and that all file-level data structures are correct. It leaves behind a log file containing the names and addresses of all the files. This is roughly equivalent to a Pilot logical volume scavenge. The second stage reads the accumulated log files for all volumes and ensures that the directory is correct. The entire IFS directory is a single large file on one volume. The effect of this is that you can fix a problem involving a single volume so long as fixing the problem doesn't involve changing the directory. But if there are problems with the directory then you need to scavenge all volumes in order to get a complete log. Distribution of files on volumes: the only restriction is that a single file resides entirely on one volume. The actual location of a file is not visible at the directory level or above. Note that the requirement that a file reside on a single volume can cause real problems in storing large files into a nearly-full file system. If there are p free pages in the file system and v volumes then it's unlikely you will be able to store a new file larger than p/v pages. This problem would be alleviated by implementing multi-pack volumes. This was not practical for IFS because it was based on the Alto file system. There are provisions for multi-pack volumes in the Pilot disk data structures, but implementation of this capability has never been completed. Growing and shrinking the number of packs: to add a pack to the file system, IFS simply creates an empty volume on it and makes it available to the disk allocator.
This is supported by the IFS software, and is the standard means of enlarging a file system. To remove a pack requires reloading from backup, as you say. Doing it in place would require some sort of file permuting program, which we've never bothered to implement. By the way, there are some IFS design sketches you might be interested in.  They are stored as Press files in [Maxc]. Probably the most interesting ones are IFSFileStructure.press, IFSScavDesign.press, and IFSScavOp.press, though the others might be useful also. Ed Taft *start* 00565 00024 US Date: Tue, 17 May 83 16:54 PDT From: Taft.PA Subject: New IFS To: Fikes cc: Taft I've modified IFS to back up files that appear to have been backed up in the future, in addition to the files it would normally back up. Obtain: [Indigo]1.37.3>IFS.run [Indigo]1.37.3>IFS.syms [Indigo]1.37.3>IFS.errors To obtain a backup consisting of just the missing files, set the backup interval to a large number (say 1000 days) and the starting time to now. Of course, you should use a fresh backup pack for this. Good luck. Ed *start* 00865 00024 US Date: Mon, 29 Aug 83 09:56 PDT From: Taft.PA Subject: Clean up Indigo directories To: PARC^ Reply-To: RWeaver, Taft If you are responsible for a project directory on Indigo, please go over it now and delete unneeded files and obsolete versions. Free disk space is quite low at the moment; this morning's downtime was caused by running out of space in one disk volume. There is no immediate prospect for enlarging Indigo. However, a recent survey showed that there were massive amounts of junk; cleaning this up would help matters considerably. Lisp users are requested not to store extremely large files (such as sysouts) onto Indigo directly from Lisp. The Leaf server which Lisp uses does not recover correctly from running out of disk space while attempting to create or enlarge a file. Please store such files using FTP instead. 
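Taft's multiple-volumes reply above (new files go to the emptiest volume; with p free pages over v volumes a single-volume file larger than about p/v pages is unlikely to fit) also explains the Indigo failure mode of running out of space in one disk volume. A rough model, with hypothetical function names and a uniform-fragmentation assumption behind the p/v estimate:

```python
def pick_volume(free_pages_per_volume: list[int]) -> int:
    """IFS-style placement: create the new file on the emptiest
    volume, i.e. the one with the most free pages."""
    return max(range(len(free_pages_per_volume)),
               key=lambda i: free_pages_per_volume[i])

def largest_likely_file(total_free_pages: int, volumes: int) -> int:
    """Taft's rule of thumb: since a file must reside entirely on one
    volume, a file larger than about p/v pages is unlikely to fit
    when free space is spread evenly across v volumes."""
    return total_free_pages // volumes

free = [120, 45, 300, 80]      # hypothetical free-page counts
assert pick_volume(free) == 2  # the volume with 300 free pages
assert largest_likely_file(sum(free), len(free)) == 136
```

Note that even with 545 free pages in this toy system, a 150-page file would probably fail to store, which is exactly the nearly-full behavior Taft warns about.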
*start* 01547 00024 US Date: 23 Sep 83 17:42:38 PDT (Friday) From: Lansford.pasa Subject: IFS To: Taft.PA, Boggs.PA cc: Hains, Lansford.pasa Now that you are momentarily thinking of IFS things... perhaps you would speculate on the following: Chuck and I just returned from the U.S. Senate where WE observed that when doing a FTP window List * of certain directories, listing would proceed as expected until a certain file(s) was (were) encountered, then "command failed" would occur. This was repeatable - same file every time, no exceptions. Sometimes, but not always, there was an appreciable time (~10 seconds) between the last file name listed and the "command failed" message. The same effect did not occur when Listing via the Telnet window. The FTP was the latest release. The IFS is 1.36L. The configuration is 2 T-80s as primary + a T-80 for backup. Other features: the drives are claimed to have had regular maintenance (new filters, heads checked, etc), the packs are new, the Ether consists of two legs issuing from the IFS, believed (but not confirmed) to exceed the allowable net length. (aha?) the drives have had some hard times in the past - they were sitting in a hall where they were bumped into regularly and where a cleaning person once opened one of the drives and applied her feather duster! Presently, they are sited properly. The only time I've seen that message is when an overloaded IFS over lots of hops timed out. But on the same file each time? Your thoughts will be appreciated. Thanks, Bob *start* 00432 00024 US Date: 21 SEP 83 23:35 PDT From: MASINTER.PA Subject: Time on Phylum said EDT To: Fikes, Murray, Boggs, Taft Any idea why Phylum would suddenly start thinking it was on the east coast? I told Phylum to reset its time, and it converted back to PDT, but the hiccup was confusing; filedates were coming back with EDT on them, and when I chatted to Phylum and asked for the DayTime it said ... EDT.
Larry *start* 00892 00024 US Date: 22 Sep 83 15:07:47 PDT (Thursday) From: Murray.PA Subject: Re: Time on Phylum said EDT In-reply-to: MASINTER's message of 21 SEP 83 23:35 PDT To: MASINTER cc: Fikes, Murray, Boggs, Taft, Hoffarth.wbst Mystery solved/understood. Back in the dark ages, before Nebula was connected to the satellite time receiver, the time around the net drifted until somebody got annoyed and fixed it. "Fix"ing it involved a lot of manual effort. I wrote a hack to search the whole net, and fix all the active time servers. A day or two ago, Rich ran it. (I assume that the time in the Wbst area had drifted far enough to attract his attention.) IFS gets the time parameters whenever it gets the time. Normally (at startup) that's from the local net. This time.... Rich: Taft is working on a fix to IFS. Please don't "fix" the time until the next IFS hits the streets. *start* 01367 00024 US Date: Thu, 22 Sep 83 15:37 PDT From: Taft.PA Subject: IFS time problem To: IFSAdministrators^ Reply-To: Taft.PA A couple of days ago, an attempt was made to resynchronize all the clocks in the Research Internet, some of which had drifted by quite a lot. A program was run that located and checked all available time servers in the Internet. All clocks that were found to be too far off were reset from a single time server in Webster. Unfortunately, this exposed a problem in the IFS time maintenance code. In addition to resetting the time, IFS also set its local time parameters to the information in the time server replies. As a result, several west-coast IFSs had their time zone set to EDT. This had a number of surprising effects. You should check to see whether this has happened to your IFS. Use Chat to connect to your IFS, issue the "DayTime" command, and check that the answer includes the correct time zone.
If it doesn't, issue the commands "Enable", "Change System-parameters", and "Reset-time", which will cause the IFS to reset its time and time parameters from some other time server on the directly connected Ethernet. Needless to say, no further attempts will be made to resynchronize clocks until the problem has been fixed. I expect to have a pre-release of IFS 1.38 ready for testing fairly soon. Ed Taft *start* 02100 00024 US Date: Mon, 26 Sep 83 14:47 PDT From: Taft.PA Subject: IFS 1.38 alpha test To: IFSAdministrators^ Reply-To: Taft.PA A pre-release version of IFS 1.38, called IFS 1.37.5, is now available for test. It has been nearly a year since the last IFS release, and a small number of bugs and other changes have accumulated. There have been no major changes, and no new problems are anticipated. Nevertheless, I would like to alpha-test this software on at least a few servers outside PARC before the formal release. The following bugs have been fixed: -- Grapevine authentication and access control sometimes worked sluggishly or not at all under certain conditions of Grapevine server unavailability. This bug began occurring only when the number of Grapevine registration servers exceeded some threshold (there are currently 19). -- The FTP server failed to parse numeric time zones. -- The time server would erroneously change its time zone when commanded to reset its clock from a remote source. -- If IFS was ever operated with its clock set to a time far in the future, some files might never again get backed up (until that remote future time). IFS now backs up files that appear to have last been backed up in the future. -- Certain file properties were reported incorrectly during FTP Store, though they were actually set correctly in the stored file. Other changes since IFS 1.37 are as follows: -- The Author property may now be supplied by the client during FTP Store. 
-- The Type, Byte-size, Creation-date, and Author properties may now be changed during FTP Rename by supplying one or more of those properties in the "new" property list. -- The boot server uses a revised protocol for booting Sun workstations. There are no changes in any documentation. If you would like to test this software, please send me a message, and then obtain the following files: [Indigo]1.37.5>IFS.run [Indigo]1.37.5>IFS.syms [Indigo]1.37.5>IFS.errors If no problems are reported, this software will be released as IFS 1.38 on Monday, October 10. *start* 01172 00024 US Date: 4 Oct 83 07:59:46+0100 (Tuesday) From: Englund.RX Subject: Info about IFS 1.37.5 To: Taft.pa cc: Gittins, Englund Ed, I thought it might be interesting for you to get some feedback on our testing of IFS 1.37.5. We have had problems with time stamps changing on files when storing them from CoPilot to our IFS (Jaws) and later retrieving them, and as you will find in the message below some of our problems have been cured. By the way, our time problems only show up during Daylight Saving. ---Anders ---------------------------------------------------------------- Date: 3 Oct 83 12:01:51+0100 (Monday) From: Gittins.RX Subject: Re: IFS - Jaws In-reply-to: Marshall's message of 29 Sep 83 09:41:37+0100 (Thursday) To: Marshall cc: AllINS^, O'Flaherty Reply-To: Gittins.RX You will be pleased to know that this version has improved the time change when storing to Jaws and back; timestamps are only changed by 2 hours now instead of 3. This is good news because we know the timestamp is 2 hours out due to a pilot bug, so if this bug is fixed in Klamath .... Martin ---------------------------------------------------------------- *start* 01789 00024 US Date: 13 OCT 1983 1149-PDT From: DEKLEER Subject: New IFS To: taft I just updated my code and experimented with the new IFS on IVY. Everything works as you advertised, which is a major win and makes things a lot easier.
I just threw out a lot of kludgey code for determining the true authors of files. For a moment, I feared that you might have allowed one to set the author, but only to a legal user on that IFS. Fortunately, you did not do that and I hope you never will. Now all I have to do is convince my system administrator to install the new IFS and I will have won completely. Now that I use rename a lot more (just to change properties) I have a question about it. I want to use RENAME only to change the name, not the properties. The way I am doing it now is first doing a RETRIEVE followed by NO to get back the expanded name (DIRECTORY/ENUMERATE works too), and then renaming on exactly the same fpl. The reason I was doing that is that I am paranoid that unless RENAME is given a complete filename it might change the name of the file. I haven't experimented with this enough to tell, and you might just know the answer. Basically, I'm asking: does the following property hold for IFS: ``if all of the name-like properties of the old fpl are identical to those of the new fpl, the name does not change.'' For example, will renaming "x!H" to "x!H" increment x's version, or renaming "x*" to "x*" if there is a unique match, etc. etc. etc. I suspect IFS might have an explicit check for the case where only file properties are changed. I'm of the opinion that I should be able to change all the properties of the file (SIZE, WDAT, RDAT, CKSM, etc.), but Author and creation-date are the only critical ones I need. Thanks again, Johan. ------- *start* 02054 00024 US Date: Thu, 13 Oct 83 16:58 PDT From: Taft.PA Subject: Re: New IFS In-reply-to: "Your message of 13 OCT 1983 1149-PDT" To: DEKLEER cc: taft The canonical form of a file name is considered to be the Server-filename, with the User-name, Connect-name, Directory, Name-body, and Version properties applied as defaults when necessary, and version variables (!H, etc.) bound to specific values.
If you present a fully-specified Server-filename then the other name properties are irrelevant (except User-name and Connect-name, which also impart access rights). So if you rename "x!H" to "x!H", the name will remain the same, since both names evaluate to the same canonical form. On the other hand, if you rename "x" to "x" (with no versions specified), the version number will be incremented, since the default version for the old name is !H and for the new name is !N. Rename does not permit "*" to appear in either name, regardless of whether or not the match is unique. A pattern-match substitution capability is on my IFS wish list but will probably never be done. As for non-name file properties, they are changed by Rename if and only if they are present in the new property list. This is independent of whether or not the name is changed as well. The Checksum property actually can be set. I do not permit setting the Write-date and Read-date because they are properties of the "container", unlike the Creation-date which is a property of the "contents". They are the exclusive province of the file system and are used for internal bookkeeping purposes (e.g., file backup). I do not permit setting the Size property because that would not be a meaningful operation in the context of FTP. The true size of a file is entirely defined by the number of bytes that actually flow over the byte stream during an FTP transfer. If the file system maintains a separate "size" property (as IFS does), it is intended only as a hint, e.g., to permit efficient disk space preallocation at the destination of an FTP transfer. Ed *start* 02064 00024 US Date: THURSDAY, 13 OCTOBER 1983 22:04-PDT From: DEKLEER at PARC-MAXC To: taft at PARC-MAXC Subject: IFS Richard Fikes installed the new IFS on phylum and it works fine for me (the comments below don't affect my software enough to for you to bother changing IFS). Thanks for clarifying how rename works wrt filenames. cksm is not settable on rename. 
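Taft's version-defaulting rule above (an unversioned old name defaults to !H, an unversioned new name to !N, so renaming "x" to "x" bumps the version while "x!H" to "x!H" does not) can be modeled with a small sketch; the representation and function name are hypothetical, not IFS code.

```python
def resolve(name: str, highest_existing: int, is_new_name: bool) -> tuple[str, int]:
    """Bind a version variable to a concrete version, per Taft's rules:
    !H means the highest existing version, !N means highest + 1;
    an unversioned name defaults to !H on the old side of a rename
    and !N on the new side."""
    if "!" in name:
        body, ver = name.split("!", 1)
        if ver == "H":
            return body, highest_existing
        if ver == "N":
            return body, highest_existing + 1
        return body, int(ver)          # an explicit numeric version
    # no version given: apply the default for this side of the rename
    return name, (highest_existing + 1) if is_new_name else highest_existing

# "x!H" -> "x!H": both sides bind to the same canonical name; no change.
assert resolve("x!H", 5, False) == resolve("x!H", 5, True) == ("x", 5)
# "x" -> "x": old defaults to !H (5), new defaults to !N (6); version bumps.
assert resolve("x", 5, False) == ("x", 5)
assert resolve("x", 5, True) == ("x", 6)
```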
argument for changing size is that I thought size was the length of the file in bytes of the size indicated in the file's byte size. So actually changing the file's byte size from 8 to 16 should halve its size. My software doesn't look at size so this does not matter. argument for changing read date on a file. Often I peruse my directory and look at files to see what's in them. This operation should not affect the read date vis-a-vis archiving and deletion in the future. Hence my software distinguishes between "viewing" and "reading". The first leaves the read date unchanged. The idea is the file read date stands for the last time the file was read to do something useful. argument for changing write-date on a file. My software rarely cares about when a file was actually created, but rather when the data representing the logical file last changed. Hence, I really should be using wdat, not cdat. I suppose the problem here is there really should be 4 dates: (1) when the logical file was created, (2) when the logical file's contents were last changed, (3) when the current physical file was created, and (4) when the current physical file's contents were last changed. It's this line of argument that explains why I want to change author properties. Every file has two authors: (1) the person who created the logical file, (2) the person who created the physical file which now contains the contents of the logical file. In general I tend to care more about the properties of the logical file than the physical file. As you see I'm just a poor loser who would like a distributed file system which covered every file in the world (or at least those at Xerox, MIT, BBN and Symbolics). *start* 00930 00024 US Date: Fri, 14 Oct 83 09:44 PDT From: Taft.PA Subject: IFS To: DEKLEER cc: Taft The prevailing local standard is that the thing called Creation-date is the time at which the logical contents of the file were created.
When you copy a file verbatim from one place to another, you should carry the Creation-date over with it. This is why IFS (and other servers) allow you to set the Creation-date. There is a complete description of this in "Alto file date standard", file [Maxc]FileDates.press. In light of this, it sounds to me as if the thing you want to do to the Write-date you should instead be doing to the Creation-date. Your argument about Read-date is plausible, and I will put it on my wish list. By the way, what mail system did you use to send that last message? The Date, From, and To fields were all illegal according to the current ARPA standard (822) which we use now. Ed *start* 02128 00024 US Date: Fri, 14 Oct 83 10:10 PDT From: Taft.PA Subject: IFS 1.38 To: IFSAdministrators^ Reply-To: Taft.PA Version 1.38 of the IFS software is released. It has been nearly a year since the last IFS release, and a small number of bugs and other changes have accumulated. There have been no major changes. This software has been running for three weeks at about 6 sites with no reported problems. The following bugs have been fixed: -- Grapevine authentication and access control sometimes worked sluggishly or not at all under certain conditions of Grapevine server unavailability. This bug began occurring only when the number of Grapevine registration servers exceeded some threshold (there are currently 19). -- The FTP server failed to parse numeric time zones. -- The time server would erroneously change its time zone when commanded to reset its clock from a remote source. -- If IFS was ever operated with its clock set to a time far in the future, some files might never again get backed up (until that remote future time). IFS now backs up files that appear to have last been backed up in the future. -- Certain file properties were reported incorrectly during FTP Store, though they were actually set correctly in the stored file. 
Other changes since IFS 1.37 are as follows: -- The Author property may now be supplied by the client during FTP Store. -- The Type, Byte-size, Creation-date, and Author properties may now be changed during FTP Rename by supplying one or more of those properties in the "new" property list. -- The boot server uses a revised protocol for booting Sun workstations. There are no changes in any documentation. Software may be obtained from either [Maxc] or [Indigo]1.38>, and consists of files IFS.run, IFS.syms, and IFS.errors. For correct error reporting, please be sure you have the latest Sys.errors, which may be obtained from [Maxc] or [Indigo]. (Alpha-testers: if you are running IFS 1.37.5 or 1.37.6, you should convert to IFS 1.38 at this time.) Please install this new software at your earliest convenience. *start* 00997 00024 US Date: Sun, 13 Nov 83 11:21 PST From: Taft.PA Subject: T300 for IFS To: Mitchell, Paxton cc: Ornstein, Pier, Taft --------------------------- Date: 7-Jan-83 15:36:14 PST From: Paxton.pa Subject: T300 for IFS To: Ornstein, Taft cc: Pier, Paxton ISL has allocated $11K in our 83 capital budget for a T300 to beef up Indigo or Ivy. Let me know when you'd like to collect. Bill ------------------------------------------------------------ We have never "collected" on this offer. We recently enlarged Indigo by using a T-300 drive previously purchased, and we still have one spare plus two drives connected to the Alpine server which aren't being used yet. If the money is sitting in a corner waiting to be used to buy T-300s, perhaps we should purchase a couple of AMS315s for future use, say, with a second Alpine server. But if the money has already been put to some other use or disappeared in the course of ISL's demise then there's no reason to do anything. 
Ed *start* 00253 00024 US Date: 15 Nov 83 16:59:55 PST From: mitchell.pa Subject: Re: T300 for IFS In-reply-to: "Taft's message of Sun, 13 Nov 83 11:21 PST" To: Taft Cc: Mitchell, Paxton, Ornstein, Pier The money for T300s is apparently all gone. --Jim *start* 04416 00024 US Date: Wed, 16 Nov 83 17:26 PST From: Taft.PA Subject: "Stack overflow" crashes To: IFSAdministrators^ Reply-To: Taft.PA A bug has turned up that you should be aware of. The symptom is that IFS falls into Swat with a "Stack overflow" message, and repeatedly crashes within about 10 minutes of being restarted. The way to make this problem go away is to start IFS with the /F switch (i.e., "IFS/F"). This is a "latent" bug that has been lurking unnoticed for at least two years. Not all servers are susceptible to this problem, so do not use /F unless the problem happens to you. It is most likely to happen on servers whose disks are nearly full. This bug will be fixed in the next release of IFS if there ever is one. There are no current plans to make such a release. The technical details of this bug are sufficiently interesting that they are worth describing. One of the things IFS does is to obtain and install a new copy of the Pup network directory data base whenever a new version of it is released. This is the file that translates between machine names and addresses. Every IFS maintains a copy of it, whether or not its name server is enabled. IFS does not make use of this file in its raw form, but instead constructs a B-Tree (file Pup-network.tree) and inserts all the information from the network directory into the B-Tree. The B-Tree is a random-access file which requires that a data structure called a file map be kept in main memory. Each entry in the file map describes a "run" of file pages that are located at consecutive addresses on the disk. 
The file map is implemented in a fairly simpleminded way; in particular, the maximum number of entries that a file map can hold is determined when it is created. If a file is so badly fragmented as to consist of more runs than the file map has entries, matters take a decided turn for the worse. (There is some recovery logic that is supposed to handle this at the cost of a considerable performance degradation; but it does not work properly for reasons that are too complicated to go into here. The "stack overflow" is a symptom of the recovery logic getting into trouble.) The file map used for the Pup-network.tree file has approximately 40 entries. This was deemed to be more than enough to describe a file of 100 pages. Unfortunately, the Pup-network.tree file is not preallocated and re-used, but instead is recreated from scratch every time a new version of the network directory is distributed. (The reason for this is to permit IFS to continue operating with the old B-Tree while it is building a new one, an operation that takes about 10 minutes.) No attempt is made to cause the new Pup-network.tree file to be created contiguously. So eventually IFS may have the misfortune to create some version of this file with more than 40 fragments. This is most likely to occur in a file system that is close to being full, since free space tends to be extremely fragmented under those conditions. However, the problem just occurred on Ivy, which is only about 87% full at the moment. The reason the problem has not shown up before is that until recently only 60 out of the possible 100 pages in Pup-network.tree were actually being used. The probability of creating a 60-page file with more than 40 fragments is rather low. But the network directory has recently grown slightly, and the B-Tree storage requirements have crossed a quantum boundary from 60 to 80 pages. This seems to have changed the probabilities enough to cause the file map overflow to actually occur. 
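The run-based file map described above, and the overflow that underlies the "Stack overflow" crash, can be modeled roughly as follows; the entry count, names, and exception are illustrative (the real structure lives in BCPL inside IFS).

```python
class FileMapOverflow(Exception):
    """Stands in for the broken recovery path Taft describes."""

def build_file_map(page_addresses: list[int], max_entries: int = 40):
    """Collapse a file's disk page addresses into (start, length) runs
    of consecutive addresses; the map's capacity is fixed when it is
    created, as in IFS."""
    runs: list[tuple[int, int]] = []
    for addr in page_addresses:
        if runs and addr == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)   # extends the current run
        else:
            if len(runs) == max_entries:
                raise FileMapOverflow("file too fragmented for its map")
            runs.append((addr, 1))
    return runs

# A contiguous 80-page file needs a single run, but a maximally
# fragmented 80-page file (no two pages adjacent) needs 80 runs and
# overflows a 40-entry map.
assert build_file_map(list(range(100, 180))) == [(100, 80)]
try:
    build_file_map(list(range(0, 160, 2)))   # 80 isolated pages
    raise AssertionError("expected overflow")
except FileMapOverflow:
    pass
```

In this model, giving the map more capacity (which is what the /F switch arranges) is what avoids the overflow.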
What the /F switch does is to cause all file maps to be created 50 entries larger than normal. Our good fortune in having this feature available is not due to any prescient vision that such a problem might someday occur. Rather, it was put in to circumvent a related problem: excessive fragmentation of the file containing the IFS file directory (which is also implemented as a B-Tree). That file is never supposed to get fragmented, because it is preallocated when the file system is first initialized, and its length is never intentionally changed thereafter. However, it can become fragmented as a result of IFSScavenger actions if disk errors occur in the directory file, a problem that has occurred on several occasions. It is just lucky that /F affects all file maps and not just the one for the IFS file directory. Ed Taft *start* 00431 00024 US Date: 16 Nov 83 22:11:50 PST (Wednesday) From: Murray.PA Subject: IFS tweak To: Taft cc: Boggs, Murray Do you have a list? The Dicentra name server needs support for 2 more packet types. One is name to entry and the other is address to entry. I'll fish out the fine print when you need it. We will need this in an IFS when we try to run a Dicentra in a place without D0s or DLions running PupGateway. *start* 01144 00024 US Date: 30 Nov 83 20:03:25 PST (Wednesday) From: Murray.PA Subject: IFS Delete bug To: Taft cc: Boggs, Murray I was SModeling some Dicentra bits to my Ivy directory. I knew it would push me over my allocation, so I Chatted to Ivy, and while SModel was storing, I did a Delete xx*, Keep 1. I have done this several times before with no troubles. It found the old version of the big boot file, but I was banging on the keyboard in another window, and didn't confirm until after I noticed that SModel was printing out error messages. Then I confirmed, and it asked me to confirm another file. I didn't recognize that I had changed it recently, but since I had the bits on my disk, I typed a CR.
This repeated for lots of files until I was sure that something fishy was going on. From the Chat typescript, in case it helps:

@delete (files) osc*, @@keep (# of versions) 1 @@
 Oscar>Friends>
  OscarDicentra.boot!29 [Confirm] yes.
 Oscar>Private>
  DESTesterOthello.bcd!11 [Confirm] yes.
  DESTesterOthello.mesa!5 [Confirm] yes.
  DogWatcher.bcd!10 [Confirm] yes.
  ...
@list (files) osc*des* ? XXX
...

*start* 00298 00024 US Date: 1 Dec 83 15:10:20 PST (Thursday) From: Murray.PA Subject: Re: IFS Delete bug In-reply-to: Your message of Thu, 1 Dec 83 09:31 PST To: Taft cc: Murray Sorry. I didn't copy enough of the deleting part of the session. DESmumble was one of the files that got deleted. *start* 02689 00024 US Date: 1 Feb 84 14:54:24 GMT (Wednesday) Subject: IFS access control To: Taft.pa cc: Englund, Schwartz, Marshall, Molloy From: Anders Englund Ed, I have a question for you regarding the access control on IFS. We recently changed the Protection groups here in the RX world to reflect our new name, SDD-RX. Since that is an organisational list and all the masters of SDD dl's are kept in the ES registry, we created RX copies of all dl's, but they each have only one member, namely the corresponding dl in the ES registry. Does this mean that the IFS always wants to access the ES registry in order to check your access level? We often have problems with our transatlantic line, and when that happens we have noticed access problems on Jaws. I thought that there was some kind of caching either in the Grapevine server or in the IFS, but we haven't noticed anything like that. What do you suggest that we do in order to get around this problem? Below you will find copies of a chat session with Jaws and a log of a session in Maintain. While I have your attention, can you tell me if there is an easy way to find out the different directories to which a specific Protection group has access?
The reason I would like to find that out is so that we can find a reasonable way of listing the directories/files that we have access to and to give the Export Control people an easy way of producing a log of those files. Thanks for any help that you can give us. ---Anders

!show directory-parameters (of directory) System_ hacks
Files-only, owner is System
Used 5093 out of 15000 disk pages
Default file protection: R: SDD-RX^.RX Owner; W: SDD-RX^.RX Owner; A: SDD-RX^.RX Owner
Create permission: SDD-RX^.RX Owner
Connect permission: SDD-RX^.RX Owner
Never authenticated

!show system-parameters
Server-limit: 6; lenJobT: 6
Clock correction: +0 seconds/day
Boot server is enabled
New boot file acquisition is enabled
Name server is enabled
Time server is enabled
Press printing is disabled
Leaf server is enabled
CopyDisk server is enabled
LookupFile server is disabled
Grapevine authentication is enabled
Grapevine group checking is enabled
Default registry: RX
Protection group names:
 1 SDD-RX^.RX
 2 AllEP^.RX
 3 IFSAccounts^.ms
 4 SDD^.es
 5 SDD-RX-Dev^.RX
 6 RXContractors^.RX
 62 Owner
 63 World = *.RX

Grapevine Registration Server Maintenance Program Version of 16-Mar-83 1:17:13
Login Englund.RX ... done
GV: Type Members of group: SDD-RX^.RX ... 73#50#50 ... done
Members: SDD-RX^.es
GV: Type Members of group: SDD-RX^.ES ... Locating registration server ... couldn't contact needed server. Try later.
GV:

*start* 01809 00024 US Date: Wed, 1 Feb 84 09:56 PST From: Taft.PA Subject: Re: IFS access control In-reply-to: "Englund.rx's message of 1 Feb 84 14:54:24 GMT (Wednesday)" To: Anders Englund cc: Taft, Schwartz.rx, Marshall.rx, Molloy.rx There are two reasons you are having trouble using ES groups (directly or indirectly) as access control lists. First, you are likely to have performance problems because the nearest replica of the ES registry is on a far-away Grapevine server, as you suggest.
Second (and probably more important), I notice that you are using SDD^.es as one of your access control lists. SDD^.es is a very large and complex group, and it takes Grapevine a long time to search it for group membership (sometimes as much as several minutes). IFS uses a one-minute timeout on Grapevine inquiries. The SDD file servers do not use SDD^.es for access control, but instead use a flattened version called IFSControl-SDD^.es which the Grapevine servers can search much more quickly. IFS keeps a cache of the results of Grapevine inquiries for 12 hours. For a group membership check, it remembers only whether or not a specific user is a member of a specific group; it does not obtain or remember the entire membership of the group. I don't believe Grapevine servers perform any caching of the results of external inquiries. One possibility you should consider is to keep a copy of the ES registry on your Grapevine server. There are advantages and disadvantages to doing this, the main disadvantage being a substantial increase in trans-Atlantic update traffic. Perhaps you should discuss this with Andrew Birrell. The Accountant program, described in the IFS Operation manual, produces a list of the directories whose protections refer to each group, among other things. Ed
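The caching scheme described above (remembering only per-user, per-group verdicts, each for 12 hours) can be sketched as follows. This Python model is purely illustrative, with invented names; it is not IFS's actual code.

```python
# Hypothetical model of the IFS Grapevine membership cache (names invented).
# Only (user, group) -> yes/no verdicts are cached, never a group's full
# membership, and each verdict expires after 12 hours.

import time

CACHE_LIFETIME = 12 * 60 * 60  # 12 hours, in seconds

class MembershipCache:
    def __init__(self, query_grapevine, clock=time.time):
        self.query = query_grapevine   # (user, group) -> bool; may be slow
        self.clock = clock             # injectable for testing
        self.entries = {}              # (user, group) -> (verdict, timestamp)

    def is_member(self, user, group):
        key = (user, group)
        hit = self.entries.get(key)
        if hit is not None and self.clock() - hit[1] < CACHE_LIFETIME:
            return hit[0]              # fresh cached verdict; no network trip
        verdict = self.query(user, group)   # slow Grapevine inquiry
        self.entries[key] = (verdict, self.clock())
        return verdict
```

Such a cache masks a slow or unreachable registry only for checks made recently; the first check after an entry expires still needs the full round trip, which is consistent with the intermittent access problems reported when the transatlantic line is down.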