Alpine file page manager design, version 2
To: Alpine designers
From: Karen Kolling, Mark Brown
Subject: Alpine file page manager design, version 2
Date: May 28, 1981
Location: PARC/CSL
File: [Ivy]<Alpine>Doc>FilePageMgrDesign2.bravo
XEROX
Attributes: informal, technical, Alpine, Filing
Abstract: This memo proposes the scope of and interface to Alpine’s file page manager, and contains notes on its implementation. This is not the final word on the file page manager; the design of other parts of Alpine will test this design. This memo supersedes BufferDesign0.bravo.
Overview
The FilePageMgr is an interface to a simple file system that is a component of FileStore’s implementation. It provides files at roughly the same level of abstraction as Pilot does (no access control, no transactions, and no leader page, though a few special file properties are supported.) The FileStore implementation uses FilePageMgr for all access to the files it implements and to its log files; it does not call the Pilot File interface directly for these files. (This gives the FileStore implementation better isolation from Pilot’s file primitives.)
Data access through FilePageMgr is in the "mapped" style of VMDefs: FilePageMgr is responsible for the assignment of virtual memory to the accessible portions of files (unlike Pilot, which makes the client responsible for this.) The major departure from VMDefs is the presence of runs of pages in FilePageMgr. A run of pages is returned as a list of "chunks" (possibly shorter runs), in recognition of the fact that it may be inconvenient for the FilePageMgr implementation to arrange the entire run in contiguous virtual memory. Since a data structure that overlaps a page boundary may therefore be split across chunks, clients of FilePageMgr will find such structures inconvenient to manipulate.
Interface
FilePageMgr: DEFS = BEGIN
FilePageRun: TYPE = RECORD [
  firstPage: PageNumber,
  count: CARDINAL ← 1 ];
-- Identifies the run of pages [firstPage .. firstPage+count) within a file.
VMChunkRun: TYPE = LIST OF VMChunk;
VMChunk: TYPE = RECORD [
  pages: LONG POINTER,
  nPages: PageCount,
  chunkBase: PRIVATE WholeChunk ];
-- Represents a run of pages from a file, as mapped to virtual memory. The first VMChunk gives the first "nPages" pages, starting at the virtual address "pages"; the next VMChunk (if any) gives the next "nPages" pages, etc.
WholeChunk: TYPE [2];
-- (Really defined in the implementation of FilePageMgr; this is a handle on an entire VM chunk, passed with a VMChunkRun so that ReleaseVMChunkRun can identify the chunks to release.)
-- <Should we pass around FileIDs or Capabilities? Do all operations need to take a FileStoreID as well as a FileID?>
ReadAhead: PROCEDURE [fs: FileStoreID, f: FileID, p: FilePageRun];
-- Notifies the file page manager that the indicated pages are likely to be read soon.
ReadPages: PROCEDURE [fs: FileStoreID, f: FileID, p: FilePageRun] RETURNS [VMChunkRun];
-- Returns a VMChunkRun containing the pages [p.firstPage .. p.firstPage+p.count) of file f. The caller has read / write access to these pages. Increments the share count of all WholeChunks returned in the VMChunkRun.
UsePages: PROCEDURE [fs: FileStoreID, f: FileID, p: FilePageRun] RETURNS [VMChunkRun];
-- Semantically identical to ReadPages, except that the contents of the pages given by the FilePageRun p are undefined. (The implementation may therefore avoid actually reading the pages, if it is fortunate.)
ShareVMChunks: PROCEDURE [vm: VMChunkRun];
-- Bumps the share count of all WholeChunks in the VMChunkRun.
PinVMChunks: PROCEDURE [vm: VMChunkRun];
-- Prevents all pages in vm from being written to disk, until UnPinVMChunks is called.
UnPinVMChunks: PROCEDURE [vm: VMChunkRun];
-- Undoes the effect of PinVMChunks; a no-op if PinVMChunks was not previously called.
ReleaseVMChunkRun: PROCEDURE [vm: VMChunkRun, deactivate: {no, yes, writeBehind}, wait: BOOLEAN];
-- Indicates that the client is through with the given VMChunkRun (decrements share counts.) If wait and any of the pages are dirty, then the call will return after the pages have been written to disk; otherwise the call returns immediately. If deactivate = yes then the caller is unlikely to reference the given chunks in the near future; the FPM will throw any with share counts of 0 out of its vm. If deactivate = writeBehind then the caller expects to deactivate other "nearby" chunks in the near future; the FPM will keep track of these pages and start the writes en masse at some count that it deems appropriate.
ForceOutVMChunkRun: PROCEDURE [vm: VMChunkRun];
-- Returns when all the dirty pages in the chunk run have been written to the disk. Does not alter share count.
ForceOutEverything: PROCEDURE [];
-- Returns when all the dirty pages under control of the file page manager have been written to the disk. <More procedures along this line may be required, e.g. "ForceOutFile".>
Create: PROCEDURE [fs: FileStoreID, initialSize: PageCount, type: Type] RETURNS [file: FileID];
-- Creates a (mutable, permanent) file and returns its ID.
CreateWithID: PROCEDURE [fs: FileStoreID, file: FileID, initialSize: PageCount, type: Type];
-- Error if file with given ID already exists.
Delete: PROCEDURE [fs: FileStoreID, file: FileID, isImmutable: BOOLEAN];
-- Error if isImmutable # GetAttributes[file].immutable.
MakeImmutable: PROCEDURE [fs: FileStoreID, file: FileID];
GetAttributes: PROCEDURE [fs: FileStoreID, file: FileID] RETURNS [type: Type, immutable: BOOLEAN];
GetSize: PROCEDURE [fs: FileStoreID, file: FileID] RETURNS [size: PageCount];
SetSize: PROCEDURE [fs: FileStoreID, file: FileID, size: PageCount];
-- <The FileStoreID parameter is logically redundant for GetAttributes and Get/SetSize (SetSize does not apply to immutable files, and all copies of an immutable file have same size and attributes.)>
Unknown: ERROR [fs: FileStoreID, file: FileID];
Error: ERROR [type: ErrorType];
ErrorType: TYPE = {immutable, nonuniqueID, notImmutable, reservedType, insufficientSpaceOnFileStore, ... };
END.--FilePageMgr
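As a usage illustration (a hedged sketch, not part of the interface), a typical sequential client might read a run of pages, touch the data through each chunk in turn, and release the run with write-behind. ProcessChunk is a hypothetical client procedure, and error handling is omitted.

TouchRun: PROCEDURE [fs: FileStoreID, f: FileID] =
  BEGIN
  vm: VMChunkRun ← ReadPages[fs, f, [firstPage: 0, count: 16]];
  -- Walk the list of chunks; pages within a chunk are contiguous in vm,
  -- but successive chunks need not be.
  FOR l: VMChunkRun ← vm, l.rest UNTIL l = NIL DO
    ProcessChunk[l.first.pages, l.first.nPages];  -- hypothetical
    ENDLOOP;
  -- Decrement share counts; let the FPM batch the writes of nearby chunks.
  ReleaseVMChunkRun[vm: vm, deactivate: writeBehind, wait: FALSE];
  END;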
Implementation
FilePageMgr can be implemented using mainly the Pilot Space and File interfaces, i.e. using the Pilot swapper for data transfers. (Some use of Pilot internal interfaces may be necessary, e.g. to implement pinning.) FilePageMgr will manage a collection of spaces mapped to files that are being accessed. By maintaining these spaces it avoids doing all I/O on demand, and allows read ahead / write behind of page runs. (An alternative implementation of FilePageMgr would bypass the swapper by managing its own real memory for file page buffers, and would perform file page I/O at the Pilot Filer level. This would require a minor addition to the Filer to allow it to distinguish FilePageMgr I/Os from swapper I/Os. Such an implementation would require writers of file pages to indicate the writes they perform explicitly, whereas a swapper-based implementation relies on the dirty bit in the map.)
The leaf spaces (or uniform swap units) of a FilePageMgr space must be single pages. The reason is that Pilot writes an entire swap unit if any single page of the swap unit is dirtied. A FileStore implementation must avoid such redundant writes since it will not have logged the redundantly-written pages in advance. Single-page swap units mean that all demand-paging of these spaces will be done one page at a time. But the FilePageMgr implementation of ReadAhead and ReadPages can Activate and ForceOut larger containing spaces to get better performance.
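A hedged sketch of this idea follows, in terms of hypothetical internal procedures: FindOrCreateChunks, which maps the run into the FPM's chunk table, and StartActivate, which asks Pilot to begin swapping a chunk's containing space in without waiting for the transfer.

ReadAhead: PUBLIC PROCEDURE [fs: FileStoreID, f: FileID, p: FilePageRun] =
  BEGIN
  -- FindOrCreateChunks and StartActivate are hypothetical internals.
  FOR l: VMChunkRun ← FindOrCreateChunks[fs, f, p], l.rest UNTIL l = NIL DO
    -- Activate the whole containing space, rather than faulting it in
    -- one page at a time.
    StartActivate[l.first.chunkBase];
    ENDLOOP;
  END;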
If FileStore files have a single leader page, then this can be mapped to a single-page space, and the remainder of the file mapped (on demand) in larger chunks. These chunks may be of some fixed size, for the convenience of implementation, or might be variable. The FilePageMgr client should make no assumptions about this.
One nasty point in the implementation comes in obeying Pilot’s restrictions on changing the length of a mapped file. This will lead to corresponding restrictions in the FilePageMgr interface.
The FilePageMgr implementation will contain some sort of demon that periodically increments the ages of vm chunks which have use counts of zero and ForceOuts those over a certain age. This write does not break the mapping between the chunk and the file window.
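A minimal sketch of such a demon; chunkTable, sweepInterval, maxAge, Pause, and ForceOutChunk are all hypothetical internals of the implementation.

AgingDemon: PROCEDURE =
  BEGIN
  DO  -- runs forever
    Pause[sweepInterval];  -- hypothetical wait between sweeps
    FOR l: ChunkList ← chunkTable, l.rest UNTIL l = NIL DO
      IF l.first.useCount = 0 THEN
        BEGIN
        l.first.age ← l.first.age + 1;
        -- Write old dirty chunks; the chunk stays mapped to its file window.
        IF l.first.age > maxAge THEN ForceOutChunk[l.first];
        END;
      ENDLOOP;
    ENDLOOP;
  END;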
If the FilePageMgr needs to reclaim vm, it takes the LRU non-dirty chunk with use count = 0. (Note that sequential file accesses break the vm-file mappings at ReleaseVMChunkRun.)
The pin / unpin primitives are designed to allow a log file to be managed through this interface. It is still not clear that they suffice; for instance, there is currently no way to get Pilot to swap out a multi-page chunk while guaranteeing that the writes occur in ascending logical page order. We expect this part of the interface to evolve somewhat as the actual requirements of the log implementation become clearer.
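For concreteness, here is a hedged sketch of a log append using these primitives; it ignores the write-ordering problem just noted, and FillLogPages is a hypothetical client procedure.

AppendLogPages: PROCEDURE [fs: FileStoreID, log: FileID, p: FilePageRun] =
  BEGIN
  vm: VMChunkRun ← UsePages[fs, log, p];  -- contents undefined, so no read need occur
  PinVMChunks[vm];  -- keep partially-assembled records from reaching the disk
  FillLogPages[vm];  -- hypothetical: format log records into the pinned pages
  UnPinVMChunks[vm];
  ForceOutVMChunkRun[vm];  -- returns when the dirty pages are on the disk
  ReleaseVMChunkRun[vm: vm, deactivate: yes, wait: FALSE];
  END;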