FileStore transaction code, version 1
CSL Notebook Entry
To: Alpine designers    Date: February 20, 1981
From: Mark Brown    Location: PARC/CSL
Subject: FileStore transaction code, version 1    File: [Ivy]<Alpine>Doc>FileStoreTransCode1.bravo
XEROX
Attributes: informal, technical, Database, Distributed computing, Filing
References: [Ivy]<Alpine>Doc>FileStoreDesign1.bravo
Abstract: This memo contains Mesa-like code implementing the Transaction Coordinator and Transaction Worker sections of FileStore.
The following paragraphs provide some context for the code. The code is organized around the manipulation of volatile objects: collectible records, monitors, etc. Since this is a concurrent program there is no choice; we don’t have stable collectible records or stable monitors. But the crucial invariants of the program involve the contents of the disk, since this is all that is available during crash recovery.
The log
The log is an infinite store for log records: it is written sequentially and write-once, and it can be read by random access. The log is implemented using a circular buffer of disk pages. We make it seem infinite by (1) writing a page to tape before reusing it, and (2) never permitting a transaction to run so long that it would require access to portions of the log that now reside on tape. We achieve (2) by aborting any long-running transaction. The tape copy of the log can be used for recovery from media failures. If this recovery is not needed, the tape copy is unnecessary.
Each log record is uniquely identified by its log record ID. Depending on the structure of log records or pages, this might be the index relative to the beginning of the log of the first (or last) byte of the log record. It should be possible to do a random access read to an online log record, given the log record ID.
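For concreteness, here is one possible arrangement (a sketch only; the page size, buffer size, and procedure are assumptions, not decisions). A log record ID is taken to be the byte index of the record's first byte, and is mapped onto the circular buffer as follows:
-- Sketch: locate the online page holding a given log record.
-- Valid only while that page has not been reused; long-running transactions
-- are aborted precisely to preserve this property.
bytesPerLogPage: CARDINAL = 512;    -- assumed page size
pagesInOnlineLog: CARDINAL = 4096;  -- assumed size of the circular buffer
OnlinePageOfRecord: PROC [id: LogRecordID] RETURNS [page: CARDINAL] = {
RETURN [(id/bytesPerLogPage) MOD pagesInOnlineLog] };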
We make checkpoints in the log in order to bound the amount of work that is required during recovery. A checkpoint record contains, among other things, a list of all active transactions at the time of the checkpoint (both transactions whose outcome is not yet determined, and transactions that are committed or aborted but still have work to do.)
The restart record is the "root" of the log. It points to the most recent checkpoint. The restart record should be written in two independent places in order to tolerate failures during write. The restart record itself should be found without searching.
At restart it is also necessary to find the end of the log (the most recently written log record) by reading forward from the most recent checkpoint. There seem to be several ways of structuring log records or log pages to achieve this.
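For concreteness, the checkpoint record and restart record might carry the following information (a sketch; the type and field names are illustrative and independent of the LogRecord encoding given below):
CheckpointRecord: TYPE = RECORD [
-- every transaction active at checkpoint time (outcome undetermined, or
-- determined but with work still to do), together with its state
activeTrans: LIST OF RECORD [transID: TransID, state: TransState],
prevCheckpoint: LogRecordID ];  -- checkpoint records are linked together
RestartRecord: TYPE = RECORD [
-- the "root" of the log; written in two independent places
mostRecentCheckpoint: LogRecordID ];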
Interface
We have not attempted to define a complete Mesa interface to the log. The code below uses the following three log calls.
Log.Write: PROC [record: LogRecord, forceLog: BOOLEAN] RETURNS [LogRecordID];
Writes record to the log, and returns its ID. If forceLog, then waits until record actually appears on the disk before returning.
Log.Force: PROC [] RETURNS [LogRecordID];
Returns the ID of the last record in the log. Waits until this record actually appears on the disk before returning.
Log.FindTransState: PROC [transID: TransID] RETURNS [{committed, aborted, unknown}];
Searches online log for outcome of given transaction.
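Collected as a DEFS fragment in the style used later in this memo, these calls would read roughly as follows (a sketch, not a complete interface; the LogRecordID representation is an assumption):
Log: DEFS = BEGIN
LogRecordID: TYPE = LONG CARDINAL;  -- assumed representation: byte index into the infinite log
LogRecord: TYPE = RECORD [ ... ];  -- see below
Write: PROC [record: LogRecord, forceLog: BOOLEAN] RETURNS [LogRecordID];
-- Writes record to the log and returns its ID; if forceLog, waits until the record is on the disk.
Force: PROC [] RETURNS [LogRecordID];
-- Returns the ID of the last record in the log, waiting until it is on the disk.
FindTransState: PROC [transID: TransID] RETURNS [{committed, aborted, unknown}];
-- Searches the online log for the outcome of the given transaction.
END.--Log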
The code below uses several log record types. The precise encoding should not be taken seriously; only the information carried by each record is significant. The encoding will have an impact on the complexity of the recovery algorithm, however.
Log.LogRecord: TYPE = RECORD [
transID: TransID,
SELECT * FROM
-- coordinator
begin => [],
registerWorker => [worker: FileStoreID],
collecting => [workers: LIST OF FileStoreID, places: LIST OF FileStoreID],
committing => [],
aborting => [],
committed => [],
aborted => [],
-- worker
workerRegistered => [],
workerReady => [places: LIST OF FileStoreID],
workerCommitting => [],
workerAborting => [],
workerCommitted => [],
workerAborted => [],
-- both (one phase protocol on single FileStore)
workerCommittingAndCoordinatorCommitted => [],
workerAbortingAndCoordinatorAborted => [],
-- ...
ENDCASE ];
Recoverable states
We define the recoverable state of a transaction agent (coordinator or worker) to be the state to which it returns upon crash recovery. Only states recorded in logs are recoverable.
Recoverable states of a coordinator
Unknown. A transaction is unknown if (1) it has no COLLECTING record in the log, or (2) has either a COMMITTED or an ABORTED record in the log. (It has not entered the first phase of the commit process, or it has committed and carried out all of its intentions, or it has aborted and undone all of its pre-commit updates.)
Collecting. A transaction is collecting if (1) it has a COLLECTING record in the log, and (2) has no COMMITTING or ABORTING record in the log. (It wishes to commit, is collecting "ready votes" from workers, but has not yet decided the transaction outcome.)
Committing. A transaction is committing if it has a COMMITTING record in the log but has no COMMITTED record in the log. (It has committed but has not yet heard "ack" from all of its workers.)
Aborting. A transaction is aborting if it has an ABORTING record in the log but has no ABORTED record in the log. (It has aborted but has not yet heard "ack" from all of its workers.)
Note that the outcome of a transaction (committed or aborted) is not viewed as a recoverable state, but instead these two outcomes are merged into the unknown state. Recovery must be prepared to deal with transactions that are so old that no online record of them exists. These transactions might have committed or aborted, but our algorithms should be insensitive to this, since otherwise we would be forced to mount a tape and search for the old transaction.
We consider that part of the coordinator’s role in the system is to respond to requests for information about the outcome of specific transactions (this is the motivation for encoding the coordinator’s FileStore in a TransID.) Workers should have no part in this. As a practical matter, the system is unlikely to answer requests that cannot be resolved by searching the online log.
Recoverable states of a worker
Unknown. A transaction is unknown if (1) it has no READY record in the log, or (2) has either a COMMITTED or an ABORTED record in the log. That is, either it has not entered the first phase of the commit process, or it has committed and carried out all of its intentions, or it has aborted and undone all of its pre-commit updates.
Ready. A transaction is ready if (1) it has a READY record in the log, and (2) has no COMMITTING or ABORTING record in the log. That is, it has voted ready to the coordinator, but has not yet heard the outcome of the transaction from the coordinator.
Committing. A transaction is committing if it has a COMMITTING record in the log but has no COMMITTED record in the log. That is, it has committed but not carried out all of its intentions.
Aborting. A transaction is aborting if it has an ABORTING record in the log but has no ABORTED record in the log. That is, it has aborted but not undone all of its pre-commit updates.
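These recoverable-state definitions translate into a log scan roughly as follows. The sketch shows the coordinator case; the worker case is the same with READY in place of COLLECTING. (The BOOLEAN parameters are assumed to summarize which record types were seen for the transaction between the most recent checkpoint and the end of the log.)
CoordinatorRecoverableState: PROC [
sawCollecting, sawCommitting, sawAborting, sawCommitted, sawAborted: BOOLEAN]
RETURNS [TransState] = {
IF ~sawCollecting OR sawCommitted OR sawAborted THEN RETURN [unknown];
IF sawCommitting THEN RETURN [committing];
IF sawAborting THEN RETURN [aborting];
RETURN [collecting] };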
Discussion
The worker states committing and aborting are an optimization. They allow less undo and redo work at restart by certifying that certain transactions are completely finished. This is also one of the functions of checkpoints, but checkpoints are taken less frequently than committing and aborting records are written. (Checkpoints themselves may be viewed as an optimization, since in principle the state of the file store can be restored by reading the log. In the same sense, the file store is an optimization over doing a linear scan of the log for each read access.)
Crash recovery
During crash recovery, we read the log to determine the recoverable state of each transaction that has been active since the most recent checkpoint. Then we perform the actions described below (later we’ll have Mesa code for these too):
Coordinator
Unknown. If the transaction has no COMMITTED or ABORTED record in the log, we abort it (write ABORTED to the log, and optionally notify all of the workers whose identity has been logged); otherwise there is nothing to do.
Collecting. We call all workers again (they are identified in the COLLECTING record), asking for ready/notReady decisions.
Committing, Aborting. We call all workers again, telling them to commit or abort.
Worker
Unknown. If the transaction has no COMMITTED or ABORTED record in the log, we abort it (we must notify the coordinator); otherwise there is nothing to do.
Ready. We restore the transaction’s volatile state from the log, reacquire the transaction’s resources (locks), then wait for word from the coordinator.
Committing, Aborting. We restore the transaction’s volatile state from the log. This mainly means reconstructing the buffer pool at the time of the crash, as well as lists of other deferred updates such as file shortening and deletion. Locks do not have to be reacquired for pages (unlike the ready case), since in normal operation the buffer pool is treated as "truth". We then set in progress whatever processes are normally used to carry out intentions or undo updates. These are carefully coded to be idempotent, so they can tolerate any number of crashes.
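A rough sketch of the coordinator side of these recovery actions follows (the worker side is analogous; the parameters are assumed to be produced by an analysis pass over the log from the most recent checkpoint to its end):
RecoverCoordinator: PROC [t: TransID, s: TransState,
hasOutcomeRecord: BOOLEAN, workers: LIST OF FileStoreID] = {
SELECT s FROM
unknown =>
-- abort unless a COMMITTED or ABORTED record was seen; notifying workers is optional
IF ~hasOutcomeRecord THEN [] ← Log.Write[record: [t, aborted[]], forceLog: FALSE];
collecting => NULL;  -- call PrepareWorker at each FileStore in workers again, asking for votes
committing, aborting => NULL;  -- call FinishWorker at each FileStore in workers again with the outcome
ENDCASE => ERROR };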
Remote procedure call
The remote calls in this code are FileStore[F].PrepareWorker, FileStore[F].FinishWorker, and FileStore[F].RegisterWorker, where F is a FileStoreID. I chose this strange syntax since it cannot possibly be confused with the syntax we’ll end up with.
My assumption about a remote call is that when it returns, it has executed at the remote site one or more times. The result of the call, if any, is the result of one of the calls at the remote site.
I do not quite understand what happens when the remote call fails because the remote site doesn't respond and we time out. I would prefer not to catch a signal in this case, but then the failure must be reported through the result, and that does not work for procedures that return no results.
It may be helpful to consider the three remote procedures used here. If a call to RegisterWorker fails, then the proper response is to abort the operation that required the RegisterWorker. At a higher level, the client may then decide to abort the transaction. If a call to PrepareWorker fails, we must assume the worst: a call got through and the worker is, in fact, ready. The coordinator can decide to abort the transaction, but he retains the responsibility to notify the worker of this fact. Similarly, if a call to FinishWorker fails, we must repeat the call until it succeeds, since the worker must be assumed ready.
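For example, the coordinator's obligation after a failed FinishWorker call amounts to retrying until a call gets through. A minimal sketch, assuming a hypothetical wrapper RemoteFinishWorker that reports whether the call got through:
NotifyWorkerOfOutcome: PROC [w: FileStoreID, t: TransID, outcome: {commit, abort}] = {
-- The worker may be ready, so we retain the obligation to deliver the outcome.
-- FinishWorker tolerates duplicate calls, so retrying is safe.
DO
IF RemoteFinishWorker[w, t, outcome] THEN EXIT;  -- hypothetical: TRUE iff the call executed remotely
-- otherwise wait a while and try again (interval left unspecified)
ENDLOOP };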
Deficiencies
There is some potential for useful special-casing of readonly workers. They do not have to participate in phase two of commit, since commit and abort are the same for a readonly worker. The code does not take advantage of this.
The implementation of FinishTransaction is crude since it does not attempt to contact workers in parallel. In a real implementation, we expect to have a group of processes whose role in life is to call PrepareWorker and FinishWorker. The process doing the FinishTransaction might be responsible for one of the calls, or (more likely) none of them. In the latter case, it could wait on a CV associated with the Coordinator record, and be notified as the responses arrived (a sketch of this structure appears below).
Recovery needs to be coded to a similar level of detail. A strategy for cleaning up obsolete Coordinators and Workers needs to be specified; the current code would accumulate them indefinitely.
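The parallel structure just mentioned might be sketched as follows (the fields allReady, repliesOutstanding, and responseArrived on the Coordinator record, and the procedure QueuePrepareCall that hands a call to the pool of caller processes, are assumptions):
CollectVotes: ENTRY PROC [c: Coordinator] RETURNS [allReady: BOOLEAN] = {
-- Hand one PrepareWorker call per worker to the pool of caller processes, then
-- wait on a CV in the Coordinator record. Each caller process, on receiving its
-- reply, clears c.allReady if the vote was notReady, decrements
-- c.repliesOutstanding, and notifies c.responseArrived.
c.allReady ← TRUE;
c.repliesOutstanding ← 0;
FOR w: FileStoreID IN c.workers DO
c.repliesOutstanding ← c.repliesOutstanding + 1;
QueuePrepareCall[c, w];
ENDLOOP;
WHILE c.repliesOutstanding > 0 DO WAIT c.responseArrived; ENDLOOP;
RETURN [c.allReady] };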
CoordinatorImpl: MONITOR LOCKS t USING t: Coordinator
IMPORTS Log, CoordinatorTable EXPORTS FileStore = BEGIN
CreateTransaction: PUBLIC PROC [stableCreation: BOOLEAN] RETURNS [t: TransID] = {
c: Coordinator ← ConsCoordinator[
transID: [
coordinatorFileStoreID: FileStorePrivate.MyFileStoreID[],
idOnFileStore: FileStorePrivate.UniqueLongCardinalOnMyFileStore[]],
transState: unknown];
c.beginRecordID ← Log.Write[record: [c.transID, begin[]], forceLog: stableCreation];
c.state ← active; IF stableCreation THEN c.recoverableState ← active;
RETURN [c.transID] };
ConsCoordinator: PROC [transID: TransID, transState: TransState] RETURNS [c: Coordinator] = {
c ← NEW[CoordinatorObject ← [
transID: transID, state: transState, recoverableState: transState,
coordinator: MyFileStoreID[] ]];
CoordinatorTable.Insert[c];
RETURN[c] };
CoordinatorFileStoreIDFromTransID: PUBLIC PROC [t: TransID] RETURNS [FileStoreID] = {
RETURN [t.coordinatorFileStoreID] };
RegisterWorker: PUBLIC ENTRY PROC [t: TransID, w: FileStoreID, stableRegistration: BOOLEAN] RETURNS [success: BOOLEAN] = {
-- The caller (a Worker) should have no monitors locked.
c: Coordinator ← CoordinatorFromTransID[t: t, whereToLook: volatileOnly];
IF c = NIL OR c.state # active THEN RETURN [FALSE];
IF ~IsMember[w, c.workers] THEN {
c.workers ← Append[w, c.workers];
[] ← Log.Write[record: [t, registerWorker[w]], forceLog: stableRegistration] };
RETURN [TRUE] };
CoordinatorFromTransID: PROC [t: TransID, whereToLook: {volatileOnly, volatileOrLog}] RETURNS [Coordinator] = {
IF CoordinatorFileStoreIDFromTransID[t] # FileStorePrivate.MyFileStoreID[] THEN ERROR;
c: Coordinator ← CoordinatorTable.Lookup[t];
IF c # NIL OR whereToLook = volatileOnly THEN RETURN[c];
-- No volatile record of t exists. t has been recorded in the log unless it was created with stableCreation = FALSE. Search the online portion of the log for the outcome of t. (We should keep volatile records of recent transactions to keep the frequency of this operation down to an acceptable level. By doing some work at checkpoint time, only checkpoint records need to be looked at, and these are linked together. We can use the fact that TransIDs are generated in a fixed order to bound the search in the case that no record was made of a transaction.)
s: TransState ← Log.FindTransState[t];
SELECT s FROM
committed, aborted => NULL;
-- normal case.
unknown => s ← FileStorePrivate.ResolveUnknownTransaction[t];
-- t not found in log; appeal to a higher authority.
active, collecting, committing, aborting => ERROR;
ENDCASE => ERROR;
RETURN[ConsCoordinator[t, s]] };
FinishTransaction: PUBLIC ENTRY PROC [t: TransID, requestedOutcome: {commit, abort},
commitRecordPlaces: LIST OF FileStoreID]
RETURNS [outcome: {committed, aborted, unknown}] = {
-- To notify its coordinator of a spontaneous abort, a worker causes FinishTransaction[t, abort, NIL] (with no Worker monitors locked).
c: Coordinator ← CoordinatorFromTransID[t: t, whereToLook: volatileOrLog];
IF c = NIL THEN RETURN [unknown];
DO SELECT c.state FROM
active => EXIT;
-- Normal case.
committed => RETURN [committed];
aborted => RETURN [aborted];
collecting => WAIT c.outcomeDetermined;
-- Another activation of this procedure is at work. Wait for it to finish.
ENDCASE => ERROR; -- includes committing, aborting, unknown.
ENDLOOP;
IF c.workers = NIL THEN {
-- Funny transaction; no workers! Don’t complain.
[] ← Log.Write[record: [t, requestedOutcome-ed[]], forceLog: TRUE];
c.recoverableState ← outcome ← requestedOutcome; GOTO done };
IF commitRecordPlaces = NIL THEN commitRecordPlaces ← LIST [c.coordinator];
IF c.workers.first = c.coordinator AND c.workers.rest = NIL AND
commitRecordPlaces.first = c.coordinator AND commitRecordPlaces.rest = NIL THEN {
-- Single FileStore case; use one phase protocol. Worker will do the necessary log write.
c.recoverableState ← outcome ← FileStorePrivate.PrepareAndFinishWorker[t, requestedOutcome];
GOTO done };
IF requestedOutcome = abort THEN {
-- Abort request. Use one phase protocol.
[] ← Log.Write[record: [t, aborting[]], forceLog: TRUE];
FOR w: FileStoreID IN c.workers DO
[] ← FileStore[w].PrepareWorker[transID: t, requestedOutcome: abort,
commitRecordPlaces: NIL];
ENDLOOP;
[] ← Log.Write[record: [t, aborted[]], forceLog: FALSE];
c.recoverableState ← outcome ← aborted; GOTO done };
-- Trying to commit a multi FileStore transaction. Use two phase protocol.
[] ← Log.Write[record: [t, collecting[c.workers, commitRecordPlaces]], forceLog: TRUE];
c.state ← c.recoverableState ← collecting;
-- We are now responsible for doing the job. To do this properly we should have a number of processes whose role in life is to call PrepareWorker or FinishWorker, and wait for the return. If we wait in the monitor, others who enter will wait for us to determine the outcome. In this crude implementation, we do not wait.
outcome ← committed;
FOR w: FileStoreID IN c.workers DO
IF FileStore[w].PrepareWorker[transID: c.transID, requestedOutcome: requestedOutcome,
commitRecordPlaces: NIL] = notReady THEN outcome ← aborted;
ENDLOOP;
[] ← Log.Write[record: [t, outcome-ing[]], forceLog: TRUE];
c.state ← c.recoverableState ← outcome-ing;
FOR w: FileStoreID IN c.workers DO
FileStore[w].FinishWorker[transID: c.transID, requestedOutcome: outcome];
ENDLOOP;
[] ← Log.Write[record: [t, outcome-ed[]], forceLog: FALSE];
c.recoverableState ← outcome-ed; GOTO done;
EXITS
done => RETURN[c.state ← outcome]
};
END.--CoordinatorImpl
WorkerImpl: MONITOR LOCKS w USING w: Worker
IMPORTS Log, WorkerPrivate, WorkerTable EXPORTS FileStore, FileStorePrivate = BEGIN
PrepareWorker: PUBLIC PROC [t: TransID, requestedOutcome: {commit, abort},
commitRecordPlaces: LIST OF FileStoreID]
RETURNS [{readonlyReady, ready, notReady}] = {
PrepareWorkerEntry: ENTRY PROC [w: Worker, requestedOutcome: {commit, abort},
commitRecordPlaces: LIST OF FileStoreID]
RETURNS [{readonlyReady, ready, notReady}] = {
SELECT w.state FROM
active => NULL; -- normal case.
ready => GOTO readyReturn;
committed => RETURN [ready]; --result is arbitrary
aborted => RETURN [notReady]; --result is arbitrary
ENDCASE => ERROR;
IF requestedOutcome = commit AND PrepareToCommit[w: w, twoPhase: TRUE] THEN {
[] ← Log.Write[record: [w.transID, workerReady[commitRecordPlaces]], forceLog: TRUE];
w.state ← w.recoverableState ← ready;
GOTO readyReturn }
ELSE {
AbortWorker[w]; RETURN [notReady] }
EXITS
readyReturn => RETURN [IF w.readonly THEN readonlyReady ELSE ready];
};--PrepareWorkerEntry
w: Worker ← WorkerTable.Lookup[t];
IF w = NIL THEN RETURN [notReady];
RETURN [PrepareWorkerEntry[w, requestedOutcome, commitRecordPlaces]];
};
FinishWorker: PUBLIC PROC [t: TransID, requiredOutcome: {commit, abort}] = {
FinishWorkerEntry: ENTRY PROC [w: Worker, requiredOutcome: {commit, abort}] = {
SELECT w.state FROM
ready => NULL; -- normal case.
ENDCASE => RETURN; --we must have already finished.
IF requiredOutcome = commit THEN CommitWorker[w: w, twoPhase: TRUE]
ELSE AbortWorker[w: w, twoPhase: TRUE] };
w: Worker ← WorkerTable.Lookup[t];
IF w = NIL THEN RETURN; --we must have already finished.
FinishWorkerEntry[w, requiredOutcome] };
PrepareAndFinishWorker: PUBLIC PROC [t: TransID, requestedOutcome: {commit, abort}]
RETURNS [{committed, aborted}] = {
PrepareAndFinishWorkerEntry: ENTRY PROC [w: Worker, requestedOutcome: {commit, abort}]
RETURNS [{committed, aborted}] = {
SELECT w.state FROM
active => NULL; -- normal case.
ENDCASE => ERROR; --this is local only, and protected by coordinator’s monitor.
IF requestedOutcome = commit AND PrepareToCommit[w: w, twoPhase: FALSE] THEN {
CommitWorker[w: w, twoPhase: FALSE]; RETURN [committed] }
ELSE {
AbortWorker[w: w, twoPhase: FALSE]; RETURN [aborted] }
};--PrepareAndFinishWorkerEntry
w: Worker ← WorkerFromTransID[t: t, createWorkerIfNecessary: FALSE];
IF w = NIL THEN ERROR; --this is local only, and protected by coordinator’s monitor.
RETURN [PrepareAndFinishWorkerEntry[w, requestedOutcome]] };
PrepareToCommit: PROC [w: Worker, twoPhase: BOOLEAN] RETURNS [ready: BOOLEAN] = {
-- Here we must check to ensure that the transaction has all of the resources necessary for commitment. This is much easier if they have actually been allocated already, for instance file creation and file lengthening not deferred. If updates have been deferred using update mode locks, these must be converted to exclusive mode at this time. If twoPhase = TRUE, then all exclusive locks must be written into the log.
RETURN [ ... ] };
CommitWorker: PROC [w: Worker, twoPhase: BOOLEAN ← TRUE] = {
IF twoPhase THEN
[] ← Log.Write[record: [w.transID, workerCommitting[]], forceLog: TRUE]
ELSE
[] ← Log.Write[record: [w.transID, workerCommittingAndCoordinatorCommitted[]],
forceLog: TRUE];
-- Here we perform the volatile actions associated with transaction commitment. These are: make all buffer pool entries written by this transaction be the "truth" versions (discard old versions if present in buffer pool). Carry out other deferred updates such as file deletion and shortening. Finally, release locks held by this transaction.
[] ← Log.Write[record: [w.transID, workerCommitted[]], forceLog: FALSE];
};
AbortWorker: PROC [w: Worker, twoPhase: BOOLEAN ← TRUE] = {
IF twoPhase THEN
[] ← Log.Write[record: [w.transID, workerAborting[]], forceLog: FALSE]
ELSE
[] ← Log.Write[record: [w.transID, workerAbortingAndCoordinatorAborted[]],
forceLog: FALSE];
-- Log force is not required since transaction will be aborted in recovery (but we might do it anyway if it turns out to simplify recovery).
-- Here we perform the volatile actions associated with transaction abortion. These are: discard all buffer pool entries written by this transaction. Undo non-deferred updates such as file creation and lengthening. Finally, release locks held by this transaction.
[] ← Log.Write[record: [w.transID, workerAborted[]], forceLog: FALSE];
};
END.--WorkerImpl
WorkerImpl2: MONITOR
IMPORTS Log, WorkerTable EXPORTS FileStorePrivate = BEGIN
CreateWorker: PUBLIC PROC [t: TransID, stableCreation: BOOLEAN] RETURNS [Worker] = {
-- This implementation ensures that (1) no duplicate Workers (two Workers with the same transID) are created, (2) no monitors are locked when a remote call to RegisterWorker is made. This implementation sometimes makes a redundant remote call to RegisterWorker (in the case that two processes try to migrate the same transaction to the same FileStore at about the same time), but RegisterWorker must be prepared for duplicated calls anyway.
LogWorker: ENTRY PROC [w: Worker] = {
IF w.state = unknown THEN {
[] ← Log.Write[record: [t, workerRegistered[]], forceLog: FALSE]; w.state ← active };
IF stableCreation AND w.recoverableState = unknown THEN {
[] ← Log.Force[]; w.recoverableState ← active } };
w: Worker ← WorkerTable.Lookup[t];
IF w # NIL THEN RETURN[w];
IF FileStore[CoordinatorFileStoreIDFromTransID[t]].RegisterWorker[
t, MyFileStoreID[], stableCreation] THEN {
w ← ConsWorker[transID: t, transState: unknown];
LogWorker[w] };
RETURN [w] };
ConsWorker: ENTRY PROC [transID: TransID, transState: TransState] RETURNS [w: Worker] = {
-- This procedure never takes very long, which is good since every transaction creation passes through this monitor.
IF (w ← WorkerTable.Lookup[transID]) # NIL THEN RETURN[w];
w ← NEW[WorkerObject ← [
transID: transID, state: transState, recoverableState: transState,
coordinator: CoordinatorFileStoreIDFromTransID[transID],
worker: FileStorePrivate.MyFileStoreID[] ]];
WorkerTable.Insert[w];
RETURN[w] };
END.--WorkerImpl2
FileStorePrivate: DEFS = BEGIN
FileStoreID: TYPE = MACHINE DEPENDENT RECORD [
... ];
FileObject: TYPE = RECORD [
file: FileID,
trans: Worker,
fileLockMode: LockMode ];
TransID: TYPE = MACHINE DEPENDENT RECORD [
coordinatorFileStoreID: FileStoreID,
idOnFileStore: LONG CARDINAL ];
-- Note that a FileStoreID, not a MachineID, is needed (because of CoordinatorFileStoreIDFromTransID).
TransState: TYPE = MACHINE DEPENDENT
{ unknown, active, ready, collecting, committing, aborting, committed, aborted };
-- or something like that ... this is the union of coordinator and worker states.
CoordinatorObject: TYPE = MONITORED RECORD [
transID: TransID,
state, recoverableState: TransState,
beginRecordID: LogRecordID ← NULL,
coordinator: FileStoreID,
workers: LIST OF FileStoreID ← NIL,
outcomeDetermined: CONDITION ];
WorkerObject: TYPE = MONITORED RECORD [
transID: TransID,
state, recoverableState: TransState,
readonly: BOOLEAN ← TRUE,
coordinator: FileStoreID,
worker: FileStoreID ];
MyFileStoreID: PROC [] RETURNS [FileStoreID];
UniqueLongCardinalOnMyFileStore: PROC [] RETURNS [LONG CARDINAL];
ResolveUnknownTransaction: PROC [t: TransID] RETURNS [outcome: TransState {committed, aborted}];
PrepareAndFinishWorker: PROC [t: TransID, requestedOutcome: {commit, abort}]
RETURNS [{committed, aborted}];
-- Caller (an implementation of BFS.FinishTransaction) asserts that w is the only worker of the transaction and that w.worker = w.coordinator. Caller requests w to finish with the specified outcome, writing the COMMIT record on w.coordinator. Worker can return the opposite of the requested outcome.
CreateWorker: PROC [t: TransID, stableCreation: BOOLEAN] RETURNS [Worker];
-- If worker exists for this transaction, just returns it; otherwise creates it.
END.--FileStorePrivate
CoordinatorTable: DEFS = BEGIN
-- This is a monitor.
Insert: PROC [c: Coordinator];
-- Caller asserts that no Coordinator x in the table has x.transID = c.transID. Add c to the table.
Lookup: PROC [t: TransID] RETURNS [Coordinator];
-- Return c in table such that c.transID = t; return NIL if no such c exists.
Delete: PROC [c: Coordinator];
-- Caller asserts that c is in the table. Remove c from table.
END.--CoordinatorTable
WorkerTable: DEFS = BEGIN
-- This is a monitor.
Insert: PROC [w: Worker];
-- Caller asserts that no Worker x in the table has x.transID = w.transID. Add w to the table.
Lookup: PROC [t: TransID] RETURNS [Worker];
-- Return w in table such that w.transID = t; return NIL if no such w exists.
Delete: PROC [w: Worker];
-- Caller asserts that w is in the table. Remove w from table.
END.--WorkerTable