FileStore data access code, version 0
CSL Notebook Entry
To: Alpine designers        Date: May 28, 1981
From: Mark Brown        Location: PARC/CSL
Subject: FileStore data access code, version 0        File: [Ivy]<Alpine>Doc>FileStoreDataAccessCode0.bravo
XEROX
Attributes: informal, technical, Alpine, Filing
References: [Ivy]<Alpine>Doc>FileStoreDesign3.bravo
Abstract: This memo contains Mesa-like code implementing the data access primitives of FileStore: Read/WritePages, Get/SetSize, Create/DestroyFile.
The model
A FileStore is a map:
[fileID, transID] -> [type, mutable, <other attributes, maybe in leader page>, size, values of pages numbered [0..size)]
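As a rough illustration of this model (a Python sketch, not Mesa or Alpine code; all names here are hypothetical), the FileStore map can be represented as a dictionary keyed by the (fileID, transID) pair, with each entry holding the file's attributes and page values:

```python
# Hypothetical sketch of the FileStore map model; not Alpine code.
from dataclasses import dataclass, field

@dataclass
class FileView:
    type: str                  # file type attribute
    mutable: bool              # mutability attribute
    pages: list = field(default_factory=list)  # values of pages [0..size)

    @property
    def size(self):
        return len(self.pages)

class FileStore:
    """Models the map [fileID, transID] -> [type, mutable, size, page values]."""
    def __init__(self):
        self._map = {}         # key: (fileID, transID)

    def create(self, file_id, trans_id, type_, size):
        self._map[(file_id, trans_id)] = FileView(type_, True, [b""] * size)

    def lookup(self, file_id, trans_id):
        return self._map[(file_id, trans_id)]

store = FileStore()
store.create(file_id=1, trans_id=100, type_="unspecified", size=4)
view = store.lookup(1, 100)
```

Keying the map on the transaction as well as the file is what lets different transactions observe different values for the same file, which the restrictions below then deliberately limit.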
Implementation restrictions
An implementation of FileStore inevitably places restrictions on the generality of the FileStore map. For instance, it is attractive to assume that a FileStore map contains at most two distinct values for any page (the committed value, and at most one uncommitted update value). As a rule, a more restricted map is simpler to implement and allows less concurrency of access than a less restricted map. Restrictions arise from the use of coarse-grained locking and from a lack of generality in the data structures representing a FileStore.
noTransaction applies only in a few situations. It cannot be used to create a file or to change a file's length. It may be used to read and write files, but writing is restricted to files that have no readers or writers using real transactions.
(As an alternative to using noTransaction, which is intended for use by experimental database applications that want to do everything for themselves, a client may use transactions but sacrifice some of their recovery properties for efficiency reasons. A client may turn off the deferring of updates until commit (with or without logging of updates for media recovery). Note that such "transactions" are vulnerable to crashes and also to deadlock, which might force such a "transaction" to be killed without the possibility of automatic recovery. Just how to present this to the client in a useful way is unclear.)
The data structures representing a FileStore fall into two categories: recoverable and volatile. The recoverable representation of a FileStore consists of the state of a Pilot logical volume (the base) plus the state of a log. The volatile representation of a FileStore includes other data structures, assumed lost in a crash, that make FileStore accesses more efficient in the average case.
The version 0 implementation imposes the following restrictions:
1) for all transactions, a file’s type attribute has the same value. (Since this is set by the transaction that creates the file, and this transaction logically has exclusive access to the file, this is not really much of a "restriction".)
2) for all transactions, a file’s mutable attribute has the same value. (This can be implemented by requiring a file to be locked exclusive before it can be made immutable.)
3) at any point in time, a file has at most two sizes: the committed size, and at most one uncommitted updated size.
4) at any point in time, a file page has at most two values: the committed value, and at most one uncommitted updated value.
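Restriction (4) can be sketched as follows (Python, with hypothetical names; the source does not give this code). Each page carries its committed value plus at most one pending update owned by a single transaction; commit installs the pending value and abort discards it:

```python
# Hypothetical sketch of restriction (4): a page holds the committed value
# and at most one uncommitted update value, owned by one transaction.
class Page:
    def __init__(self, committed=b""):
        self.committed = committed
        self.pending = None          # (transID, value) or None

    def write(self, trans_id, value):
        if self.pending is not None and self.pending[0] != trans_id:
            raise RuntimeError("page already has an uncommitted update")
        self.pending = (trans_id, value)

    def read(self, trans_id):
        # a transaction sees its own uncommitted write; others see committed
        if self.pending is not None and self.pending[0] == trans_id:
            return self.pending[1]
        return self.committed

    def commit(self, trans_id):
        if self.pending is not None and self.pending[0] == trans_id:
            self.committed = self.pending[1]
            self.pending = None

    def abort(self, trans_id):
        if self.pending is not None and self.pending[0] == trans_id:
            self.pending = None

p = Page(b"old")
p.write(trans_id=1, value=b"new")
```

The one-pending-update rule is what a whole-file or page write lock enforces: a second writer blocks (here, raises) rather than creating a third version.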
CreateFileWithID: PROC [transID: TransID, fileStoreID: FileStoreID, fileID: FileID, clientID: ClientID, owner: ClientID, accessListID: AccessListID, initialPageCount: PageCount, type: File.Type]
RETURNS [createOutcome: CreateOutcome, openFileID: OpenFileID] = {
openFileID ← nullOpenFileID;
-- Client is required to use a real transaction, in order to avoid orphan files.
IF transID = noTransaction THEN RETURN [createOutcome: invalidTransaction];
-- Resource control (ensure that client cannot exceed his page allocation.)
IF NOT AccessCtl.ValidateClient[clientID] THEN RETURN [createOutcome: invalidClient];
IF NOT AccessCtl.ValidateCreate[clientID, owner] THEN RETURN [createOutcome: invalidClient];
IF NOT AccessCtl.VolatileReserveSpace[clientID, initialPageCount] THEN RETURN [createOutcome: clientSpaceAllocationExceeded];
openFileID ← OpenFile.Create[clientID, transID, fileStoreID, fileID];
-- Log intent to create file, then create file at FilePageMgr level.
[] ← Lock.Lock[trans: transID, lock: [entity: [fileStoreID, fileID], subEntity: wholeFile], mode: W];
[] ← Log.Write[fileStore: fileStoreID, record: [transID, createFile[fileID, clientID, accessListID, initialPageCount, type]], forceLog: TRUE];
[createOutcome] ← FilePageMgr.CreateFileWithID[fs: fileStoreID, file: fileID, initialSize: initialPageCount, type: type];
IF createOutcome = ok THEN {
[] ← AccessCtl.PermanentReserveSpace[transID: transID, clientID: clientID, pageCount: initialPageCount];
-- This logs the fact that space has been allocated by the transaction. When the transaction commits the space database update commits also; if the transaction aborts the space is reclaimed and the space database update does not happen. This log record is not processed until file-level recovery is complete (like client-specific log records.)
[] ← FileProperties.InitProperties[openFileID, accessListID] }
-- This sets the times, byte length, access list, ... .
ELSE { --back out due to nonuniqueID, insufficientSpaceOnVolume, ...
[] ← AccessCtl.VolatileUnreserveSpace[clientID: clientID, pageCount: initialPageCount];
[] ← Log.Write[fileStore: fileStoreID, record: [transID, createFileFailed[fileID]], forceLog: FALSE] };
-- This logs the fact that the create file failed, which is necessary since we logged the creation and our failure to create does not abort the transaction.
};
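The "log intent, then act" pattern in CreateFileWithID can be sketched like this (a Python stand-in with hypothetical names, not the Alpine log interface). The createFile record is forced to the log before the base is changed, so recovery can redo the create; a later, unforced createFileFailed record cancels it without aborting the transaction:

```python
# Hypothetical sketch of write-ahead intent logging for file creation.
log = []

def log_write(record, force):
    # In a real system, force=True would flush the log to disk here.
    log.append(record)

def create_file(trans_id, file_id, create_ok):
    # Force the intent record before touching the base.
    log_write(("createFile", trans_id, file_id), force=True)
    if create_ok:                 # stand-in for FilePageMgr.CreateFileWithID
        return "ok"
    # The create failed after the intent was logged; record that fact so
    # recovery does not redo a create the transaction never completed.
    log_write(("createFileFailed", trans_id, file_id), force=False)
    return "failed"

outcome = create_file(trans_id=7, file_id=42, create_ok=False)
```

The failure record need not be forced: if it is lost in a crash, redo will simply recreate a file that undo then deletes when the transaction aborts.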
RedoCreateFileWithID: PROC [logRecord[createFile]] = {
-- See if corresponding createFileFailed record is in the log (how to bound the search?) If so, do nothing. If not, see if file is there and has right length. If so, proceed; if not, create it for sure (failure = disaster.) Don’t manipulate space accounting database.
};
UndoCreateFileWithID: PROC [logRecord[createFile]] = {
-- See if file is there, and if so, delete it. Don’t manipulate space accounting database.
};
RedoReserveSpace: PROC [logRecord[reserveSpace]] = {
-- Implementation depends on details of how space accounting database is represented. This proc is not called until file-level recovery is complete (like client-specific recovery actions.)
};
UndoReserveSpace: PROC [logRecord[reserveSpace]] = {
-- Implementation depends on details of how space accounting database is represented. This proc is not called until file-level recovery is complete (like client-specific recovery actions.)
};
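The key property of these redo procedures is idempotence: recovery may replay the same log record more than once, so applying RedoCreateFileWithID twice must leave the volume in the same state as applying it once. A Python sketch under that assumption (all names hypothetical):

```python
# Hypothetical sketch of idempotent redo for a createFile log record.
volume = {}  # fileID -> pageCount; stand-in for the base (Pilot volume)

def redo_create(file_id, initial_page_count, failed_records):
    if file_id in failed_records:       # matching createFileFailed: no-op
        return
    if volume.get(file_id) == initial_page_count:
        return                          # already there with the right length
    volume[file_id] = initial_page_count  # (re)create it for sure

redo_create(42, 8, failed_records=set())
redo_create(42, 8, failed_records=set())  # replay: must be a no-op
```

Note that, as the comments above say, neither redo nor undo touches the space accounting database; that is handled by the separate reserveSpace records after file-level recovery completes.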
GetSize: PROC [fs: FileStoreID, f: OpenFileID, lockMode: Lock.Mode]
RETURNS [pageCount: PageCount] = {
FileLock.LockOpenFileSize[openFileID: f, mode: lockMode];
-- This call first makes sure that lockMode is not an intention mode (illegal for locking the size.) Then it sees if an existing whole file lock covers the request, and if so it returns immediately. Otherwise it locks the file in the weakest mode that covers the request to lock the size, and then locks the size.
RETURN [OpenFile.Size[f]];
};
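The covering check described in LockOpenFileSize can be sketched as follows (Python, with made-up mode tables; Alpine's actual lock manager is not specified here). An intention mode is rejected, an existing whole-file lock that covers the request short-circuits, and otherwise the file is locked in the weakest covering mode before the size itself is locked:

```python
# Hypothetical sketch of the size-lock covering check; not Alpine code.
COVERS = {"W": {"R", "W"}, "R": {"R"}}   # whole-file mode -> size modes covered
WEAKEST_COVER = {"R": "IR", "W": "IW"}   # weakest whole-file mode per request

def lock_size(existing_whole_file_mode, mode, acquired):
    if mode in ("IR", "IW"):
        raise ValueError("intention mode is illegal for locking the size")
    if mode in COVERS.get(existing_whole_file_mode, ()):
        return "covered"                 # existing whole-file lock suffices
    acquired.append(("file", WEAKEST_COVER[mode]))  # e.g. IR before size R
    acquired.append(("size", mode))
    return "locked"

acquired = []
outcome = lock_size(existing_whole_file_mode=None, mode="R", acquired=acquired)
```

Taking the weakest covering whole-file mode (an intention mode) keeps a size read from blocking concurrent page-level access to the same file.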
SetSize: PROC [fs: FileStoreID, f: OpenFileID, pageCount: PageCount, lockMode: Lock.Mode] = {
IF lockMode = R THEN lockMode ← W; --being defensive
FileLock.LockOpenFileSize[openFileID: f, mode: lockMode];
oldSize: LONG INTEGER ← OpenFile.Size[f];
IF pageCount > oldSize THEN {
-- Since file is getting longer, we do not defer the work. We must add this work to the persistent and volatile undo list, in case the transaction aborts. We must go through much the same work as for creating a file, including the space accounting.
[] ← Log.Write[fileStore: fs, record: [transID, lengthenFile[fileID, clientID, oldSize, pageCount]], forceLog: TRUE];
... }
ELSE IF pageCount < oldSize THEN {
-- Since file is getting shorter, we defer the work until phase two of commit. We must add this work to the persistent and volatile intentions.
[] ← Log.Write[fileStore: fs, record: [transID, shortenFile[fileID, clientID, oldSize, pageCount]], forceLog: FALSE];
... };
OpenFile.SetSize[f, pageCount];
};
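The lengthen/shorten asymmetry in SetSize can be sketched as follows (a Python illustration with hypothetical names, not the Alpine code). Lengthening is done immediately, so an undo entry is recorded in case the transaction aborts; shortening is recorded as an intention and carried out only in phase two of commit:

```python
# Hypothetical sketch of SetSize: immediate lengthen vs. deferred shorten.
def set_size(state, new_size):
    old = state["size"]
    if new_size > old:
        state["undo"].append(("shortenTo", old))  # undo work if trans aborts
        state["size"] = new_size                  # do the lengthening now
    elif new_size < old:
        state["intentions"].append(("shortenTo", new_size))  # defer to commit

def commit(state):
    for _op, n in state["intentions"]:
        state["size"] = n             # phase two: carry out the intentions
    state["intentions"].clear()
    state["undo"].clear()

st = {"size": 10, "undo": [], "intentions": []}
set_size(st, 16)    # lengthen: takes effect immediately
set_size(st, 4)     # shorten: deferred, size still 16 until commit
```

Deferring the shorten means the freed pages are not reused until commit, so an abort needs no undo work for them; the immediate lengthen is what requires the forced log record above.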