About locks:
1. Primitive spin locks (PSLs) are the most primitive locks. PSLs are built out of CStore, are only acquired with reschedule off, and must be pinned. A single processor system should never wait for PSLs, obviously. In a multi-processor system PSLs must never be held long, or they become a bottleneck. The most important (and currently only) PSL is the metaLock, which controls access to the global process state information (especially the meta queues).
2. General spin locks (GSLs) use CStore and directed Yield. If CStore cannot acquire the lock, then the process attempting to acquire the lock donates its cycles to the process holding the lock using directed Yield. If the process holding the lock cannot be run, directed Yield acts like undirected Yield. GSLs can be non-pinned. (A sketch of this acquire/release protocol appears just after this list.)
3. General queues (GQs) use a GSL to serialize ownership of the queue, and a circular queue of processes to hold the processes that are waiting. See Enqueue and Dequeue for details.
4. Monitor locks (MLs) are built from a GQ and an ownership word, which is used to denote the process owning the lock. See MonitorEntry and MonitorExit for details.
5. Condition variables (CVs) do not assume that a monitor is held, although one is always present. CVs also support timeouts. CVs are built from a GQ and a timeout word that holds the initial value of the timeout period. A flags word contains a bit that can enable aborts to affect the CV.
6. Interrupt condition variables (ICVs) are the mechanism for turning device signals and interrupts into ICV notifications. ICVs do not assume that a monitor is held, since none is ever present. ICVs also support timeouts. ICVs are built from a CV and a request bit for interrupt requests so we do not lose them while not waiting for the ICV. ICVs must be pinned, to allow the scheduler to remove requests from them.
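As an illustration of the GSL protocol in item 2, here is a minimal C sketch, assuming a CStore-style conditional store and a directed Yield. The names (GSL, CStore, DirectedYield, CurrentProcess) are illustrative only, not the actual DragOps interface.

typedef struct Process Process;

typedef struct {
  Process *owner;  /* NULL when free, else the process holding the lock */
} GSL;

extern Process *CurrentProcess(void);
/* Atomic conditional store: *ptr <- self iff *ptr is NULL; nonzero on success. */
extern int CStore(Process **ptr, Process *self);
/* Donate cycles to 'to'; acts like an undirected Yield if 'to' cannot run. */
extern void DirectedYield(Process *to);

void AcquireGSL(GSL *l) {
  while (!CStore(&l->owner, CurrentProcess())) {
    /* The lock is held: yield directly to the holder so that a preempted
       holder can make progress; this is what prevents livelock. */
    DirectedYield(l->owner);
  }
}

void ReleaseGSL(GSL *l) {
  l->owner = NULL;  /* only the owner releases, so a plain store suffices */
}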
External procedures
Fork: PUBLIC PROC [proc: PROC ANY RETURNS ANY, argPtr: WordSeq, nArgs: INT, nRets: INT] RETURNS [new: SecureProcess] = {
... forks a new process, which will initially call the given procedure using the given arguments. The number of words of arguments and of returns must also be supplied.
In the calling process:
F1: allocate a new process. Return NIL if we can't do this. (Error??)
-- CHH: PrincOps ProcessImpl.Fork waits until a process is available. Do we really want to change the semantics? March 12, 1987
F2: newFX ← Allocate a FX of (IF nRets > maxStackRets THEN nRets ELSE 0)+(IF nArgs > maxStackArgs THEN nArgs ELSE 0) words.
F3: copy the arguments from argPtr^ for nArgs words to newFX
(the original argument storage belongs to the caller)
F4: setup the process block & make it ready to run Forkee, having set things up so that Forkee's arguments will be correct. Forkee[proc, newFX, nArgs, nRets]
{
-- The following SIM code works if there are no args or results ...
p: Process;
simProc: PROC ← LOOPHOLE [proc];
RDI: allocate storage, copy args to a safe place, etc.
DisableInterrupts[];
AcquireMetaLock[];
p ← NewProcessInner[];
new ← SecureProcessFromProcess[p];
p.priority ← CurrentProcess[].priority;
RDI: make the process a schedulable entity that will run Forkee when it's scheduled. SIM:
C7Process.Detach[ FORK SimForkee[p, simProc] ];
EnqueueReady[p];
ReleaseMetaLock[];
EnableInterrupts[]; -- SIM only. Automatic in RDI.
SimCheckForRescheduleTrap[]; -- SIM only.
};
};
SimForkee: PROC [process: Process, proc: PROC] ~ {
SimBackground[]; -- SIM only, because we just forked
Note at this point SimCurrentProcessor[] is NIL! This is okay, since we're about to suspend ourselves. When we get switched back to, the current processor will be filled in.
SimCallAndSuspend[process.sim.lock, process, NIL, $SimForkee]; -- just suspend until we're switched to
RDI: get arguments from somewhere
proc[];
RDI: put results somewhere
MonitorEntry[@process.lock];
DO
SELECT process.joinState FROM
none, exiting => {
process.joinState ← exiting;
Wait[@process.joinCondition, @process.lock] };
detached, joined => {
EXIT };
joining => {
process.joinState ← exiting;
Notify[@process.joinCondition];
Wait[@process.joinCondition, @process.lock] };
ENDCASE => ERROR;
ENDLOOP;
process.joinState ← none; -- not really necessary (will be done by CleanProcess)
MonitorExit[@process.lock];
RDI: Free the results
SIM: don't create a suspension here ... this process won't get switched back to!
RDIDirectedYield[free, NIL, 0]; -- RDI: replace by DirectedYield.
SIM: now wake up the suspension we've just switched to ...
{ self: Process ← CurrentProcess[];
SimRestartSuspension[self.sim.lock, self];
};
};
Forkee: PROC [proc: PROC ANY RETURNS ANY, argPtr: WordSeq, nArgs: INT, nRets: INT] RETURNS [--doesn't--] ~ {
retRecord: WordSeq;
FF0: IF nRets > maxStackRets THEN
{ retRecord ← argPtr^;
argPtr^ ← argPtr + nArgs;
};
FF1: Depending on nArgs, either push argPtr or push nArgs from argPtr^;
FF2: Push appropriate descriptor information for proc
FF3: Call proc using an inline SFCI instruction; it eventually will return or not as it sees fit.
JJ1: proc's RETURN record may either be in the FX pointed to by argPtr at offset nArgs or on the stack.
JJ2: wait for JOIN or DETACH by another process.
JJ3: IF JOINed
THEN {
Copy return values in stack or FX to a Nacho;
Give the Nacho to the other process
}
JJ4: FrameAllocator.Free[argPtr];
JJ5: place the process in the free list (state ← free), free any remaining resources (possibly a FX) and quietly disappear
NotYetImplemented[];
};
Join: PUBLIC PROC [secureProcess: SecureProcess, retPtr: WordSeq] = {
... joins the specified process: waits for it to complete, then copies the return values to the address given by retPtr.
In the calling process:
J1: wait for process to be in the done state (wait on condition?)
J2: copy the return values to retPtr^
J3: place the process in the free list (state ← free)
J4: return to the caller
{
-- SIM only, no args or results
process: Process ~ ProcessFromSecureProcess[secureProcess];
MonitorEntry[@process.lock];
DO
SELECT process.joinState FROM
none, joining => {
process.joinState ← joining;
Wait[@process.joinCondition, @process.lock] };
exiting => {
get the results
process.joinState ← joined;
Notify[@process.joinCondition];
EXIT };
detached, joined => {
???? Possibly this should be a different error ????
ERROR InvalidProcess[secureProcess];
};
ENDCASE => ERROR;
ENDLOOP;
MonitorExit[@process.lock];
};
};
Detach: PUBLIC SAFE PROC [secureProcess: SecureProcess] ~ TRUSTED {
process: Process ~ ProcessFromSecureProcess[secureProcess];
MonitorEntry[@process.lock];
SELECT process.joinState FROM
none => {
process.joinState ← detached };
exiting => {
process.joinState ← detached;
Notify[@process.joinCondition] };
joining, detached, joined => {
???? Possibly this should be a different error ????
ERROR InvalidProcess[secureProcess] };
ENDCASE => ERROR;
MonitorExit[@process.lock];
};
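The joinState handshake implemented by SimForkee, Join, and Detach above can be summarized as a small state machine. The following C sketch only restates the transitions already shown; the state names come from this document, the C form is illustrative.

typedef enum { none, joining, exiting, joined, detached } JoinState;

/* Each transition is taken while holding the process's monitor lock:
     forkee finishes:            none    -> exiting   (forkee Waits for a joiner or detacher)
     Join before forkee exits:   none    -> joining   (joiner Waits)
     forkee sees joining:        joining -> exiting   (Notify the joiner, Wait again)
     Join sees exiting:          exiting -> joined    (take the results, Notify the forkee)
     Detach before forkee exits: none    -> detached  (forkee will clean up itself)
     Detach sees exiting:        exiting -> detached  (Notify the forkee to clean up)
   Join or Detach on a joined or detached process raises InvalidProcess. */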
DirectedYield: PUBLIC PROC [nextState: ProcessState, nextProcess: Process, when: Ticks] = TRUSTED {
SIM only ... this is a veneer around RDIDirectedYield, below.
DoDirectedYield: PROC ~ { RDIDirectedYield[nextState, nextProcess, when] };
SimCallAndSuspend[CurrentProcess[].sim.lock, CurrentProcess[], DoDirectedYield, $DirectedYield];
This is where the process will resume when it's rescheduled.
SimCheckForRescheduleTrap[];
};
RDIDirectedYield: PUBLIC PROC [nextState: ProcessState, nextProcess: Process, when: Ticks] = TRUSTED {
... yields the processor to the nextProcess, placing the current process in nextState. If nextProcess = NIL or is not ready, then the executing processor is yielded to the "best" ready process, placing the current process in nextState. DirectedYield inhibits critical traps, and performs a ProcessSwitch. The when word is used for passing through timeout information.
SIM: This routine must be called only when critical traps are enabled.
RDI: change name to DirectedYield!
DisableInterrupts[];
CurrentProcess[].when ← when;
MarkFrame[];
This indicates that the current frame should not be saved.
ProcessSwitch[nextState, nextProcess];
EnableInterrupts[]; -- SIM only. RDI: happens automatically.
};
Wait: PUBLIC PROC [cond: Condition, lock: MonitorLock] = {
... waits for the condition variable cond to be the target of a Notify, Broadcast, or timeout. If lock # NIL, then lock must be released during the possible pause, and lock must be re-acquired after the Wait has finished.
aborted: BOOL ← FALSE;
Place self on the condition variable; we do this before giving up the monitor to avoid missing a Notify, which would be wrong; we cannot get a Notify between now and releasing the monitor lock, since one must possess the monitor lock in order to do a Notify.
Enqueue[@cond.queue];
Release the monitor lock. After this the condition can be Notified by another process.
IF lock # NIL THEN MonitorExit[lock];
IF (CurrentProcess[].abortState # requested) OR (NOT cond.flags.abortEnable) THEN {
Go to sleep, choosing the "best" process to run next. If we get a Notify after exiting the monitor and before going to sleep, the Notify will remove this process from the condition, yet will NOT place this process on the ready queue (because its state is not yet waitingCV). In that case the DirectedYield will return immediately without going to sleep ...
DirectedYield[waitingCV, NIL, cond.timeout];
};
Remove self from the condition queue, if still on it. DequeueSelf has no effect if the current process is not on a queue, so a race with some other process trying to dequeue this process is benign.
IF DequeueSelf[@cond.queue] THEN {
Self was still on the queue. This means the wakeup was not a NOTIFY or BROADCAST, so check for abort. There's a race between timeout and abort here, but that's benign.
aborted ← ((CurrentProcess[].abortState = requested) AND (cond.flags.abortEnable));
};
Reacquire the monitor lock.
IF lock # NIL THEN MonitorEntry[lock];
Final test for abort request while we were waiting on the CV.
IF aborted THEN ERROR ABORTED;
};
WaitInterrupt: PUBLIC PROC [cond: Condition] = {
... waits for cond to be the target of an interrupt request, or timeout. Unlike Wait, there are no monitor locks involved, and we have to proceed immediately if interrupts have occurred since the last WaitInterrupt.
For now, we prohibit aborts on interrupt condition variables; comments suggest how aborts could be implemented if we wanted them.
If multiple processes wait on the same ICV, more than one of them might be awakened by a single interrupt.
IF NOT cond.flags.condRequest THEN {
Enqueue[@cond.queue];
IF NOT cond.flags.condRequest THEN {
We are now on the queue. A DelicateNotify is guaranteed either (i) to remove somebody from the queue, or (ii) to fail to acquire the queue. In case (i), if this process is the one removed from the queue, and it happens before the DirectedYield below, then the DirectedYield will return immediately. Otherwise, things happen normally. (See the comments in Wait.) In case (ii), the queue is held by some process in WaitInterrupt, which is guaranteed to examine cond.flags.condRequest shortly. In no case will an interrupt be lost.
ABORTS: IF we've been aborted THEN skip the DirectedYield ...
DirectedYield[waitingICV, NIL, cond.timeout];
};
[] ← DequeueSelf[@cond.queue];
};
ABORT: IF we should abort THEN raise ABORTED; Also, make sure to change the code below to preserve the value of cond.flags.abortEnable.
Clear cond.flags.condRequest. This is correct even if it's been set since we've examined it, since the current process (doing WaitInterrupt) is about to wake up.
cond.flags ← defaultCondFlags;
};
Notify: PUBLIC PROC [cond: Condition] = {
... notifies the "best" process waiting on cond that the condition has occurred. The protecting monitor lock is presumed to be held, but is not needed for this primitive.
best: Process ~ Dequeue[@cond.queue];
IF best # NIL THEN MakeReady[best];
There really was a process waiting on the CV. Although a process may be made ready, the process switch is not immediately required. The process switch may occur when the protecting monitor lock is released.
};
Broadcast: PUBLIC PROC [cond: Condition] = {
... notifies all processes waiting on cond that the condition has occurred. To eliminate race problems, all of the waiting processes are made ready while holding onto the queue. The protecting monitor lock is presumed to be held, but is not needed for this primitive.
tail: Process ← cond.queue.chain;
IF tail # NIL THEN {
AcquireQueue[@cond.queue];
IF (tail ← cond.queue.chain) # NIL THEN {
Remove all of the processes from the circular queue and make them ready. Although processes may be made ready, the process switch is not immediately required. The process switch may occur when the protecting monitor lock is released.
cond.queue.chain ← NIL;
DO
this: Process ~ tail.next;
next: Process ~ this.next;
tail.next ← next;
this.queue ← NIL;
this.next ← NIL;
MakeReady[this];
IF this = tail THEN EXIT; -- we just removed the last one
ENDLOOP;
};
ReleaseQueue[@cond.queue];
};
};
MonitorEntry: PUBLIC PROC [lock: MonitorLock] = {
... is performed at the entry to any monitored procedure to lock up the monitor for exclusive access. This operation is performed frequently, so it should be fast in the case where the monitor is not already owned.
self: Process = CurrentProcess[];
DO
Try for conditional store of current PSB index with 0 wait chain. If we succeed, then we are in, and can return.
IF CStoreProcess[ptr: @lock.owner] THEN RETURN;
Place self on the monitor queue.
Enqueue[@lock.queue];
If the monitor is still held, then go to sleep, yielding to best process, only to wake up when the monitor queue is OK. Otherwise, some process released the monitor while we were trying to Enqueue, so we should get off the monitor queue and try again.
{
owner: Process = lock.owner;
IF owner # NIL THEN {
THROUGH [0..spinTries) WHILE owner.state = running DO
SimYield[]; -- SIM only
ENDLOOP;
IF lock.owner # NIL THEN DirectedYield[waitingML, lock.owner, 0];
};
};
Remove self from the monitor queue
[] ← DequeueSelf[@lock.queue];
ENDLOOP;
};
MonitorExit: PUBLIC PROC [lock: MonitorLock] = {
... frees up the monitor lock. If there is a process waiting on the monitor, dequeue it from the monitor queue and place it on the ready queue. The current coding does nothing if we do not own the lock. Should we die instead of doing nothing?
IF lock.owner = CurrentProcess[] THEN {
Give up ownership of the lock.
lock.owner ← NIL;
IF lock.queue.chain # NIL THEN {
Make ready the next process waiting for the monitor. There is a slight possibility that some other process will jump in ahead of this one while the monitor is free, but fairness is not guaranteed.
next: Process = Dequeue[@lock.queue];
IF next # NIL THEN MakeReady[next];
};
IF deferredWakeups THEN {
deferredWakeups = TRUE indicates that we deferred a desirable wakeup while processing a Broadcast or Notify for some monitor. Statistically speaking, it was probably for this monitor. So we raise reschedule here to get the best processes running.
RaiseReschedule[];
};
};
};
CheckForAbort: PUBLIC SAFE PROC ~ TRUSTED {
... test for a requested abort.
self: Process = CurrentProcess[];
IF self.abortState = requested THEN { self.abortState ← none; ERROR ABORTED };
};
RequestAbort: PUBLIC PROC [secureProcess: SecureProcess] = {
... requests the given process to abort itself.
process: Process ~ ProcessFromSecureProcess[secureProcess];
MonitorEntry[@process.lock];
SELECT process.abortState FROM
none => {
process.abortState ← requested;
MakeReady[process];
IF deferredWakeups THEN RaiseReschedule[];
};
inhibited => process.abortState ← delayed;
ENDCASE;
MonitorExit[@process.lock];
};
InhibitAborts: PUBLIC SAFE PROC [inhibit: BOOL] RETURNS [wasInhibited: BOOL] ~ TRUSTED {
self: Process ~ CurrentProcess[];
MonitorEntry[@self.lock];
SELECT self.abortState FROM
none => {
wasInhibited ← FALSE;
IF inhibit THEN self.abortState ← inhibited;
};
inhibited => {
wasInhibited ← TRUE;
IF NOT inhibit THEN self.abortState ← none;
};
requested => {
wasInhibited ← FALSE;
IF inhibit THEN self.abortState ← delayed;
};
delayed => {
wasInhibited ← TRUE;
IF NOT inhibit THEN self.abortState ← requested;
};
ENDCASE => ERROR;
MonitorExit[@self.lock];
};
SetPriority: PUBLIC SAFE PROC [p: Priority] ~ TRUSTED {
... changes the priority of the current process. We need to recompute lowestRunningPriority; also, if our new priority is lower than that of some ready process we need to initiate a process switch.
self: Process ~ CurrentProcess[];
oldPriority: Priority;
doYield: BOOL;
DisableInterrupts[];
AcquireMetaLock[];
oldPriority ← self.priority;
self.priority ← p;
Recompute lowestRunningPriority ...
SELECT TRUE FROM
p < lowestRunningPriority => {
lowestRunningPriority ← p };
oldPriority = lowestRunningPriority => {
lowest: Priority ← tailProcessor.running.priority;
FOR proc: Processor ← tailProcessor.next, proc.next WHILE proc # tailProcessor DO
lowest ← MIN[lowest, proc.running.priority];
ENDLOOP;
lowestRunningPriority ← lowest };
ENDCASE => NULL;
doYield ← (p < highestReadyPriority);
ReleaseMetaLock[];
IF doYield THEN {
???? All that's really necessary here is { MarkFrame[]; ProcessSwitch[ready, NIL] }, but that won't work in the simulation environment! ????
EnableInterrupts[]; -- SIM only?
DirectedYield[ready, NIL, 0] }
ELSE {
EnableInterrupts[]; -- SIM only. Automatic in RDI.
};
RDI: the above IF becomes simply IF doYield THEN DirectedYield[ready, NIL, 0]; ????
};
Reschedule Interrupt
intHandlers: ARRAY RequestKind OF IntHandler ← ALL [StrayIntHandler];
StrayIntHandler: IntHandler [requestKind: RequestKind] ~ TRUSTED {
SIM version. Obviously we need to do something better here for RDI.
ERROR };
RegisterIntHandler: PUBLIC SAFE PROC [requestKind: RequestKind, intHandler: IntHandler] RETURNS [old: IntHandler] ~ TRUSTED {
IF intHandler = NIL THEN intHandler ← StrayIntHandler;
DisableInterrupts[];
AcquireMetaLock[];
old ← intHandlers[requestKind];
intHandlers[requestKind] ← intHandler;
ReleaseMetaLock[];
EnableInterrupts[]; -- SIM only. Automatic in RDI.
SimCheckForRescheduleTrap[]; -- SIM only.
};
SimCheckForRescheduleTrap: PROC ~ {
SIM only.
SIM: call this only from user-level code — when there's no suspension in the process block — because it will put one there.
On a real Dragon the current processor can change whenever interrupts are enabled! This code would be bogus on a real machine, but, after all, it's here to simulate a real machine's interrupt mechanism ...
IF NOT CurrentProcessor[].sim.interruptsEnabled THEN RETURN;
WHILE CurrentProcessor[].sim.rescheduleRequested DO
CurrentProcessor[].sim.rescheduleRequested ← FALSE;
DisableInterrupts[]; -- so RescheduleTrap will be called with interrupts disabled
SimCallAndSuspend[CurrentProcess[].sim.lock, CurrentProcess[], RescheduleTrap, $SimCheckForRescheduleTrap];
This is where the current process resumes when it's rescheduled.
IF NOT CurrentProcessor[].sim.interruptsEnabled THEN ERROR; -- ????
ENDLOOP;
};
RescheduleTrap: PROC = {
... is called whenever the reschedule interrupt is enabled and either the reschedule line is asserted OR reschedule traps become enabled and there is a deferred reschedule pending. In either case, we start up with critical traps inhibited, and a special return (hand-coded in the RescheduleTrap vector) enables critical traps on exit. On entry, volatile EU state has been pushed in such a way that when the process is switched to later the volatile EU state will be restored correctly (again, hand-coded in the RescheduleTrap vector).
RescheduleTrap is primarily called in response to two causes: device interrupts and explicit condition variable wakeups. Device interrupts are turned into condition variable wakeups in this routine, so the appearance outside of this routine is that we only have condition variable wakeups. Statistically we only get to enter RescheduleTrap if there is a ready process with priority >= lowestRunningPriority, and we try to give the scheduler to a processor with priority = lowestRunningPriority. The major exception to this is time-slicing, which should be kept to a modest rate (the current system uses about 10 per second).
entryPriority: Priority ← CurrentProcess[].priority;
DO
First, try to grab ownership of the scheduler (if at lowest priority)
IF entryPriority.ORD <= lowestRunningPriority.ORD AND schedulerLock = NIL THEN
This processor is one of the best to try the scheduler, and the scheduler is free.
IF CStoreProcessor[ptr: @schedulerLock] THEN {
At this point we have acquired the scheduler.
currentReq: RequestWord ← ReadAndClearDeviceRequests[];
sample & clear request word (currently protected by the schedulerLock)
IF currentReq # nullRequestWord THEN {
Make maximum possible room on IFU stack for device interrupt handler. This is expensive enough to be worth doing before grabbing the metaLock.
MarkFrame[];
SaveProcessState[CurrentProcess[]];
};
AcquireMetaLock[];
don't allow changes to the ready queue by other processors
IF currentReq # nullRequestWord THEN {
Service requests in the currentReq word by tweaking the appropriate interrupt condition queues. This loop is not the fastest way to do this, but it is not worth improving unless we find out that it is taking a lot of time.
FOR reqx: RequestKind IN RequestKind DO
IF currentReq[reqx] THEN {
(intHandlers[reqx])[reqx];
currentReq[reqx] ← FALSE;
IF currentReq = nullRequestWord THEN EXIT;
};
ENDLOOP;
};
IF highestReadyPriority.ORD > lowestRunningPriority.ORD THEN {
There are ready processes that are better than at least some processor, so go do the bulk reassignment.
AssignProcessors[];
};
Release locks. (in the given order).
???? what does the order have to do with anything? I would believe this if the test of schedulerLock = NIL below were done while holding the metaLock, but it isn't ???? - ajd
SimStoreNilInProcessor[ptr~@schedulerLock]; -- RDI: do the store in line
ReleaseMetaLock[];
};
Normal exit, just follow orders.
DO
SELECT CurrentProcessor[].orders FROM
noChange => EXIT;
If no orders to follow, then exit the inner loop.
stopped => NULL;
We're panic stopped, spin in this loop until ordered to restart.
switchToGiven => {
Switch to a particular process, or to the "best" available process.
First (!) remember the switchTo field, and then reset orders to indicate they've been followed.
nextProcess: Process ← CurrentProcessor[].switchTo;
CurrentProcessor[].orders ← noChange;
Follow the orders ...
MarkFrame[];
Indicates that only frames older than the current frame are to be saved.
ProcessSwitch[ready, nextProcess];
<< IF (nextProcess # NIL AND nextProcess.state = ready)
OR CurrentProcess[].priority <= highestReadyPriority THEN {
It is possible that a switch is necessary. Otherwise keep running the current process.
???? How likely is this test to fail? I claim very unlikely. In any event, it's only an accelerator. I'd like to remove it. - ajd ????
MarkFrame[];
Indicates that only frames older than the current frame are to be saved.
ProcessSwitch[ready, nextProcess];
}; >>
};
panicStop => {
Come to a panic stop.
Save the state so it can be examined/modified.
MarkFrame[];
Indicates that only frames older than the current frame are to be saved.
SaveProcessState[CurrentProcess[]];
CurrentProcessor[].orders ← stopped;
};
restart => {
Restart after panic stop.
Restore the state in case some piece of it was modified while we were stopped.
RestoreProcess[CurrentProcess[]];
CurrentProcessor[].orders ← noChange;
};
ENDCASE => ERROR;
ENDLOOP;
Test for residual unserviced interrupts
If some process already has the scheduler lock then we need not persist.
IF schedulerLock # NIL THEN EXIT;
If no device requests, then there is no point in going back for another try at the scheduler.
IF NOT DeviceRequestsPending[] THEN EXIT;
By this point we have given a lowestRunningPriority processor the first chance to get in and process the interrupt. However, lowestRunningPriority must be treated as a hint, not as the truth. So if there are still device requests pending and no one has the scheduler, we go back to try to service those requests using the current processor. We lower entryPriority to try to force an entry into the scheduler.
entryPriority ← Priority.FIRST; -- ???? was IF entryPriority.ORD > lowestRunningPriority.ORD THEN entryPriority ← entryPriority.PRED; ????
ENDLOOP;
EnableInterrupts[]; -- SIM only. Automatic in RDI.
};
Special external procedures
ShuffleReadyQueues: PUBLIC PROC = {
... is called by the timeslicer service. It directs all undirected processors to switch to the best available process(es).
DisableInterrupts[];
AcquireMetaLock[];
Order each processor to switch to the best available process.
FOR processor: Processor ← tailProcessor.next, processor.next DO
SELECT processor.orders FROM
noChange => {
Force a switch to the best available process
processor.switchTo ← NIL;
processor.orders ← switchToGiven;
};
ENDCASE;
IF processor = tailProcessor THEN EXIT;
ENDLOOP;
Raise reschedule to make all processors choose the best available. Even the executing processor will eventually respond.
RaiseReschedule[];
ReleaseMetaLock[];
EnableInterrupts[]; -- SIM only. Automatic in RDI.
SimCheckForRescheduleTrap[]; -- SIM only.
};
CausePanicStop: PUBLIC PROC = {
... causes all processors except the current one to come to a screeching halt with properly saved state. When we want to perform stack tracing, delicate debugging, or prepare for world-swap debugging, we use this routine to shut down the world. On entry, interrupts should be disabled but the metaLock should not be held. We return with interrupts disabled and the metaLock and panicStopLock held. Interrupts do get enabled during this routine, though, so a process switch might occur.
self: Process = CurrentProcess[];
Acquire panicStopLock, disable interrupts.
DO
Inhibit interrupts while we try to acquire the panicStopLock.
DisableInterrupts[];
If panicStopLock is acquired, exit with interrupts disabled.
IF CStoreProcess[ptr~@panicStopLock] THEN EXIT;
Allow interrupts to occur while we are waiting. This allows the current processor to panic stop if it is requested to do so.
EnableInterrupts[];
SimCheckForRescheduleTrap[]; -- SIM only.
Spin (without using CStore) until panicStopLock is free. No DirectedYield[] is required: livelock is impossible because the holder of the panicStopLock is running with interrupts disabled.
WHILE panicStopLock # NIL DO
SimYield[]; -- SIM only.
SimCheckForRescheduleTrap[]; -- SIM only.
ENDLOOP;
ENDLOOP;
Save the process state to make sure that the calling process is stable, and to make sure that we have enough stack registers to complete processing.
MarkFrame[];
Indicates that only frames older than the current frame are to be saved.
SaveProcessState[self];
DO
current: Processor ← CurrentProcessor[];
done: BOOL ← TRUE;
AcquireMetaLock[];
FOR processor: Processor ← current.next, processor.next WHILE processor # current DO
SELECT processor.orders FROM
noChange => {
processor.switchTo ← NIL;
processor.orders ← panicStop;
done ← FALSE;
};
stopped => {};
ENDCASE => done ← FALSE;
ENDLOOP;
IF done THEN EXIT;
ReleaseMetaLock[];
RaiseReschedule[];
ENDLOOP;
At this point all of the processors except the current processor have entered the stopped state, and are waiting for different orders to be given. We return to the caller with interrupts off and the metaLock and panicStopLock held.
};
ReleasePanicStop: PUBLIC PROC = {
... allows the stopped processors to resume with normal processing. Call this procedure to release all processes from their stopped state. We assume that the metaLock is held and interrupts are off, that the current process holds the panicStopLock, and that all processors but this one are stopped (i.e., the current process has successfully caused a panic stop). On exit, the current process has released the metaLock and the panicStopLock.
self: Processor ~ CurrentProcessor[];
FOR processor: Processor ← self.next, processor.next WHILE processor # self DO
SELECT processor.orders FROM
stopped => processor.orders ← restart;
ENDCASE => ERROR;
ENDLOOP;
At this point all of the stopped processors have been ordered to restart. It's not necessary to wait for them to follow the orders. They are guaranteed to do so eventually.
Release the panicStopLock and the metaLock and return. The order of releasing the locks is important. Init[] tests to see if anyone holds the panicStopLock, while holding the metaLock; so here we must ensure that we can't release the panicStopLock unless we hold the metaLock.
SimStoreNilInProcess[ptr~@panicStopLock]; -- RDI: panicStopLock ← NIL;
ReleaseMetaLock[];
};
SatisfyPageFault: PUBLIC PROC [startPage, endPage: INT] = {
... for each process P in the page fault queue with startPage ≤ P.page ≤ endPage, make the process ready.
???? Why can't we use a trailing pointer and remove processes from pageFaultMetaQueue directly here, instead of calling DequeueMeta (which has to search the queue) ???? After all, we hold the metaLock! ???? - ajd
DisableInterrupts[];
AcquireMetaLock[];
{
tail: Process ← pageFaultMetaQueue;
p: Process ← tail;
WHILE p # NIL DO
next: Process ← p.meta;
SELECT p.state FROM
waitingPage => IF WordToInt[p.page] IN [startPage..endPage] THEN {
DequeueMeta[@pageFaultMetaQueue, p];
EnqueueReady[p];
place the process on the ready queue (also maintains highestReadyPriority)
IF p.priority > lowestRunningPriority THEN deferredWakeups ← TRUE;
The new process might be worth waking, but we don't do it here. We just remember that it was desirable.
};
ENDCASE;
p ← next;
IF p = tail THEN EXIT;
ENDLOOP;
};
ReleaseMetaLock[];
EnableInterrupts[];
SimCheckForRescheduleTrap[]; -- SIM only.
IF deferredWakeups THEN RaiseReschedule[];
};
Internal procedures
ProcessSwitch: PROC [nextState: ProcessState, nextProcess: Process] = {
... saves the state of the current process. If the process is not waiting, then it is placed on the ready queue. If nextProcess # NIL then it is the next process to run. Otherwise, the "best" ready process is chosen. During this sequence, we assume that critical traps are disabled. Note that if nextProcess # NIL, we must switch to it if it is ready, even if the current process is "better" (see comments in AcquireQueue to see why we require this).
self: Process ~ CurrentProcess[];
selfPriority: Priority ~ self.priority;
SaveProcessState[self];
This is expensive enough to do outside of the region that holds the metaLock.
AcquireMetaLock[];
From now on, don't allow changes to the ready queue by other processors.
SELECT nextState FROM
waitingML => {
Check to see if this process is trying to go to sleep on a monitor lock that has already been exited, which could happen if the MonitorExit occurred after this process got on the queue, but before the Yield. We have carefully arranged for this to be OK with Dequeue and friends.
IF self.next = NIL THEN GO TO bailOut;
self.state ← nextState;
};
waitingCV, waitingICV => {
Check to see if this process has already been removed from the condition queue, which could happen if the Notify occurred after this process got on the queue, but before the Yield. We have carefully arranged for this to be OK with Dequeue and friends.
IF self.next = NIL THEN GO TO bailOut;
IF self.when # noTimeout THEN EnqueueMeta[@timeoutMetaQueue, self];
self.state ← nextState;
};
waitingPage => {
A page fault has occurred. Put the offending process to sleep on the page fault metaQueue, then notify the page fault handling process.
EnqueuePageFault[self]; -- self.state ← waitingPage;
DelicateNotify[pageFaultCond];
};
ready => {
IF (nextProcess # NIL) AND (nextProcess.state = waitingPage) THEN {
This is a directed yield to a process that is page faulting, so we should also wait for that page to be ready. It does us no good to proceed until that process is ready.
self.page ← nextProcess.page;
EnqueuePageFault[self]; -- self.state ← waitingPage;
}
ELSE {
EnqueueReady[self]; -- self.state ← ready;
};
};
free => {
Release the process structure here. This process will never be switched to again. We can't touch the process structure hereafter. We assume all resources (other than the Process structure itself) have already been released.
CleanProcess[self]; -- self.state ← free;
};
ENDCASE => ERROR;
IF nextProcess = NIL OR nextProcess.state # ready THEN {
This is an undirected yield OR a directed yield to a process that is not ready for some peculiar (possibly transient) reason. Just choose the best process to run. There are as many idle processes as processors, and the idle processes are always ready to run, so there will always be a process ready to run.
nextProcess ← NIL; -- Part of sanity check, below ???? - ajd
FOR priority: Priority DECREASING IN [Priority.FIRST .. highestReadyPriority] DO
tail: Process = readyQueueArray[priority];
IF tail # NIL THEN { nextProcess ← tail.meta; EXIT };
ENDLOOP;
IF nextProcess = NIL THEN ERROR; -- Sanity check ???? - ajd
};
At this point nextProcess # NIL, and we're committed to switching to it.
DequeueMeta[@readyQueueArray[nextProcess.priority], nextProcess];
Note: this is currently the ONLY place that a process is removed from a ready queue!
Recompute the highestReadyPriority.
DO
pri: Priority ← highestReadyPriority;
IF pri = Priority.FIRST OR readyQueueArray[pri] # NIL THEN EXIT;
highestReadyPriority ← pri.PRED;
ENDLOOP;
Make nextProcess run on this processor.
nextProcess.state ← running;
SetCurrentProcess[nextProcess]; -- RDI: this loads an aux register. Is this the place for it?
CurrentProcessor[].running ← nextProcess;
Recompute the lowestRunningPriority.
IF selfPriority = lowestRunningPriority THEN {
new: Priority ← tailProcessor.running.priority;
FOR each: Processor ← tailProcessor.next, each.next WHILE each # tailProcessor DO
IF new > each.running.priority THEN new ← each.running.priority;
ENDLOOP;
lowestRunningPriority ← new;
};
ReleaseMetaLock[];
IF nextProcess # self THEN RestoreProcess[nextProcess];
Note that if we are the same process no state needs to be restored, since nothing should have changed except volatile EU state, which is restored later.
EXITS
bailOut => ReleaseMetaLock[];
We get here when we have NOT found a better process to run. Therefore there is no change necessary, and things get done more quickly.
};
MakeReady: PROC [process: Process] ~ {
... place process on the ready queue, if it's currently waiting. Also remove process from the timeout queue if it is there.
SIM only: on entry, we require that interrupts NOT be disabled.
DisableInterrupts[];
AcquireMetaLock[];
SELECT process.state FROM
waitingCV, waitingML, waitingICV => {
priority: Priority ~ process.priority;
Remove process from timeout queue if present ...
DequeueMeta[@timeoutMetaQueue, process];
Place the process on the ready queue (also maintains highestReadyPriority) ...
EnqueueReady[process];
Remember if the new process might be worth waking ...
IF priority > lowestRunningPriority THEN deferredWakeups ← TRUE;
};
ENDCASE;
ReleaseMetaLock[];
EnableInterrupts[]; -- SIM only. Automatic in RDI.
};
Enqueue: PROC [q: Queue, p: Process ← CurrentProcess[]] = {
... places the process on the queue in FIFO order. Priority processing is the responsibility of Dequeue.
AcquireQueue[q];
{
tail: Process = q.chain;
IF tail = NIL THEN p.next ← p ELSE {p.next ← tail.next; tail.next ← p};
q.chain ← p;
p.queue ← q;
};
ReleaseQueue[q];
};
Dequeue: PROC [q: Queue] RETURNS [best: Process ← NIL] = {
... performs dequeue by process priority. We perform a linear search of every process in the queue to find the best. The best process in the queue is the first one we find with the maximum priority. Performing the priority testing in Dequeue is more expensive for the case where there are several things on the queue, but it allows us to change process priorities without altering the queue. If the process we find has also timed out, we have to handle the case elsewhere, but it is OK for Dequeue to dequeue such a process. Dequeue does not remove such a process from the timeout queue.
lag: Process ← q.chain;
IF q.chain # NIL THEN {
AcquireQueue[q];
best ← DequeueInner[q];
ReleaseQueue[q];
};
};
DequeueInner: PROC [q: Queue] RETURNS [best: Process ← NIL] = --INLINE-- {
... is the inside part of Dequeue. While we still dequeue by process priority, we assume that we have already grabbed ownership of the queue. This allows us to have common code for this case and the more delicate notification required by interrupt requests & page faults.
lag: Process ← q.chain;
IF lag # NIL THEN {
tail: Process = lag;
best ← lag.next;
IF lag = best THEN q.chain ← NIL
only process in queue, so clear the chain
ELSE {
head: Process ← best; -- loop invariant: lag.next = best
this: Process ← head;
DO
next: Process ← this.next;
IF next = head THEN EXIT; -- we have wrapped
IF next.priority > best.priority THEN {lag ← this; best ← next};
new candidate for the best process
this ← next;
ENDLOOP;
IF tail = best THEN q.chain ← lag;
don't let the queue chain point to the deleted process
lag.next ← best.next;
};
best.queue ← NIL;
best.next ← NIL; -- the last change, due to test in ProcessSwitch
};
};
DequeueSelf: PROC [q: Queue] RETURNS [found: BOOL ← FALSE] ~ {
... dequeues the current process from the given queue. This operation has no effect if the process was not in the queue. Returns TRUE iff self was on the queue.
IF CurrentProcess[].next # NIL THEN {
lag: Process ← q.chain;
IF lag # NIL THEN {
AcquireQueue[q];
IF (lag ← q.chain) # NIL THEN {
tail: Process = lag;
DO
p: Process = lag.next;
IF p = CurrentProcess[] THEN {
IF p = lag THEN q.chain ← NIL
last one in queue, so show no chain
ELSE {
IF p = tail THEN q.chain ← lag;
don't let deleted one be the queue head
lag.next ← p.next;
};
CurrentProcess[].queue ← NIL;
p.next ← NIL; -- the last change, due to test in ProcessSwitch
found ← TRUE;
EXIT;
};
lag ← p;
IF lag = tail THEN EXIT;
ENDLOOP;
};
ReleaseQueue[q];
};
};
};
AcquireQueue: PROC [q: Queue] = {
... grabs ownership of the given queue. Performs Yield[ready, owner] until queue is owned. This directed Yield is necessary to overcome nasty interactions between spin locks and preemptive scheduling.
WHILE NOT CStoreProcess[ptr: @q.busy] DO
owner: Process ← q.busy;
IF owner = NIL THEN LOOP;
IF owner.state # running THEN DirectedYield[ready, owner, 0];
Try to donate our cycles to the owner of the queue. Note that if the owner is running on another processor there is no need to Yield, since the queue should be given up quite soon.
SimYield[]; -- SIM only!
ENDLOOP;
};
ReleaseQueue: PROC [q: Queue] = --INLINE-- {
... releases ownership of the queue. Assumes that the current process owns the queue.
RDI: change this to an inline that just sets q.busy ← NIL.
SimStoreNilInProcess[ptr~@q.busy]; -- SIM only.
};
Internal procedures (interrupts disabled, must not hold metaLock)
AcquireMetaLock: PROC = --INLINE-- {
... grabs ownership of all of the meta queues. On entry, we require that critical traps are disabled, of course. On exit, we hold the meta lock.
WHILE NOT CStoreProcessor[ptr: @metaLock] DO
spin using local cache only to reduce bus contention
WHILE metaLock # NIL DO
SimYield[]; -- SIM only
ENDLOOP;
ENDLOOP;
};
ReleaseMetaLock: PROC = --INLINE-- {
... releases ownership of the meta lock. On entry, we require that critical traps are disabled and that the metaLock is held. On exit, the metaLock is not held, although critical traps are still disabled.
RDI: change this to an INLINE that directly sets metaLock ← NIL.
SimStoreNilInProcessor[ptr~@metaLock]; -- SIM only
};
SaveProcessState: PROC [process: Process] = --INLINE-- {
... saves frames and process info from the current processor for the given process. On entry, we require that critical traps are disabled. On exit, there are no frames in the IFU stack (so a return will provoke a stack underflow). Note that this procedure must be an INLINE to keep from putting another frame on the stack.
save stack frames
SaveStack[]; (RDI only)
save EU regs (RDI: fix this)
FOR which: EUstateIndex IN EUstateIndex DO
process.euState[which] ← EUFetchSpecial[which];
ENDLOOP;
check for local stack overflow (RDI: fix this)
IF WordToInt[process.euState[framesLeft]] <= 0 THEN
cause a stack overflow fault for the caller
NotYetImplemented[];
NULL };
RestoreProcess: PROC [process: Process] = --INLINE-- {
... restores the given process to a running state. Also changes the result of CurrentProcess. On entry, we require that critical traps are disabled, and process.state = running. Note that this procedure must be an INLINE to keep from putting another frame on the stack.
restore EU regs (RDI: fix this)
FOR which: EUstateIndex IN EUstateIndex DO
EUStoreSpecial[which, process.euState[which]];
ENDLOOP;
???? WHO should set the current process aux reg ???? I claim it should not be done here. In fact, I claim RestoreProcess doesn't need an argument; it always restores from CurrentProcess[]. - ajd
};
Internal procedures (interrupts disabled, must hold metaLock)
DelicateNotify: PROC [cond: Condition] = --INLINE-- {
... does an ICV notification, with the understanding that interrupts are disabled and the metaLock is held, so we just can't yield the processor. The condition must be resident, of course.
cond.flags.condRequest ← TRUE;
IF CStoreProcess[ptr: @cond.queue.busy] THEN {
We got the queue, so we can make the best process in it ready. If we did not get the queue, then the owner of it (who's running WaitInterrupt) will examine the flags pretty soon and see the notification.
best: Process ← DequeueInner[@cond.queue];
IF best # NIL THEN {
The following is like MakeReady ... see the comments in Wait and WaitInterrupt.
SELECT best.state FROM
waitingICV => {
remove from timeout queue if present
DequeueMeta[@timeoutMetaQueue, best];
place the process on the ready queue (also maintains highestReadyPriority)
EnqueueReady[best];
The new process might be worth waking, but we don't do it here. We just remember that it was desirable.
IF best.priority > lowestRunningPriority THEN deferredWakeups ← TRUE;
};
running, ready => NULL;
ENDCASE => ERROR; -- ???? I'm not sure of this at all ???? - ajd
};
ReleaseQueue[@cond.queue];
};
};
AssignProcessors: PROC ~ {
... set processor orders based on whether some ready process is "better" than any running process. We run through the ready processes in highest-to-lowest order. At the end of the reassignment, if we had to change the orders of any processor other than our own then we raise reschedule to get the other processors to follow their orders. The search for the best processor is arranged so that in the (most likely) case — we're running on the lowest priority processor and there's only a single ready process that needs to be assigned — the process will be assigned to the current processor, and it won't be necessary to raise reschedule.
On entry we require that interrupts are disabled and the metaLock is held.
self: Processor ~ CurrentProcessor[];
needReschedule: BOOL ← FALSE;
FOR pProcess: Priority DECREASING IN (lowestRunningPriority..highestReadyPriority] DO
(FOR each schedulable ready process priority pProcess, decreasing, DO ...)
tailProcess: Process ~ readyQueueArray[pProcess];
IF tailProcess = NIL THEN LOOP;
FOR process: Process ← tailProcess.meta, process.meta DO
(FOR each ready process at priority pProcess DO ...)
Ensure there's a processor assigned to process ...
BEGIN
processor: Processor ← self;
best: Processor ← NIL;
priToBeat: Priority ← pProcess;
Set best ← lowest priority assignable processor, favoring self ...
DO
SELECT processor.orders FROM
noChange => {
thisPri: Priority ~ processor.running.priority;
IF thisPri < priToBeat THEN { best ← processor; priToBeat ← thisPri };
};
switchToGiven => {
IF processor.switchTo = process THEN GOTO AlreadyAssigned; -- hint only
};
ENDCASE => NULL;
IF (processor ← processor.next) = self THEN EXIT;
ENDLOOP;
If there are no more assignable processors, we're done ...
IF best = NIL THEN GOTO NoMoreProcessors;
Set orders for best to switch to this process, and remember to pull reschedule if necessary ...
best.switchTo ← process;
best.orders ← switchToGiven;
IF best # self THEN needReschedule ← TRUE;
EXITS
AlreadyAssigned => NULL;
END;
IF process = tailProcess THEN EXIT;
ENDLOOP;
REPEAT
NoMoreProcessors => NULL;
ENDLOOP;
IF needReschedule THEN RaiseReschedule[];
};
<<
AssignProcessors: PROC = {
... sets processor orders based on whether a ready process is "better" than any running process; we run through the ready processes in highest to lowest order. At the end of the reassignment, if we had to change the orders of any processor other than our own then we raise reschedule to get the other processors to do their switching. On entry we require that interrupts are disabled and the metaLock is held.
The cost of the algorithm in this procedure has an upper bound of O(N*MIN[N, MAX[K, R]]), where N is the number of processors, K is highestReadyPriority-lowestRunningPriority, and R is the number of ready processes with priority IN (lowestRunningPriority..highestReadyPriority]. In practice, the algorithm has somewhat better average behavior, since in most cases there will have been only one process made ready by a single device interrupt, an idle processor will answer the interrupt first, and immediately assign the new process to itself with no further reschedule activity. In this (anticipated) normal case the cost is O(K)+O(1). This algorithm could be made faster in the high cost cases by keeping the processors sorted by priority, but that would increase costs in ProcessSwitch so it might not be a net performance gain.
needReschedule: BOOL ← FALSE;
current: Processor = CurrentProcessor[];
firstAvail: Processor ← current;
deferredWakeups ← FALSE;
At this point we are about to reassign all of the processors that need it, so nothing is really deferred.
FOR priority: Priority DECREASING IN (lowestRunningPriority..highestReadyPriority] DO
tail: Process = readyQueueArray[priority];
IF tail # NIL THEN {
FOR process: Process ← tail.meta, process.meta DO
best: Processor ← NIL; -- ???? Why "← NIL"? Isn't this dead? ???? - ajd
bestPriority: Priority ← process.priority;
{
DO
The first scan is for some available processor that could run the requested process. If none is found, then we exit all loops.
best ← firstAvail;
IF best.orders = noChange THEN {
np: Priority = best.running.priority;
IF np < bestPriority THEN {bestPriority ← np; EXIT};
};
firstAvail ← best.next;
IF firstAvail = current THEN GO TO done;
IF best.switchTo = process AND best.orders = switchToGiven THEN
GO TO alreadyAssigned;
ENDLOOP;
IF bestPriority # lowestRunningPriority THEN
If the first available processor found is not already at the minimum possible priority then it is worth scanning for another processor with a lower priority.
FOR p: Processor ← best.next, p.next WHILE p # current DO
SELECT p.orders FROM
switchToGiven => IF p.switchTo = process THEN GO TO alreadyAssigned;
noChange => {
np: Priority = p.running.priority;
IF np < bestPriority THEN {
best ← p;
bestPriority ← np;
IF np = lowestRunningPriority THEN EXIT;
};
};
ENDCASE;
ENDLOOP;
We have found the best choice for the given process.
best.switchTo ← process;
best.orders ← switchToGiven;
IF best # current THEN needReschedule ← TRUE;
If we gave orders to another processor, then we remember the fact.
IF best = firstAvail THEN {
The best processor was the first one we found. Therefore we advance firstAvail to the next processor (unless that happens to be the current processor, in which case we are completely done scanning).
firstAvail ← best.next;
IF firstAvail = current THEN GO TO done;
};
EXITS alreadyAssigned => {
In this case the process was already assigned to a processor. So we want to choose another processor (starting at firstAvail) and another process. This should be rare.
};
};
IF process = tail THEN EXIT;
ENDLOOP;
EXITS done => EXIT;
};
ENDLOOP;
IF needReschedule THEN RaiseReschedule[];
};
>>
EnqueueReady: PROC [process: Process] = --INLINE-- {
... places the given process on the ready queue using the meta link. On entry, we require that critical traps are disabled and that the metaLock is held. On exit, process.state = ready.
priority: Priority = process.priority;
EnqueueMeta[@readyQueueArray[priority], process];
IF priority > highestReadyPriority THEN highestReadyPriority ← priority;
process.state ← ready;
};
EnqueuePageFault: PROC [process: Process] = --INLINE-- {
... places the given process on the page fault queue using the meta link. On entry, we require that critical traps are disabled and that the metaLock is held. On exit, process.state = waitingPage.
EnqueueMeta[@pageFaultMetaQueue, process];
process.state ← waitingPage;
};
EnqueueMeta: PROC [metaQueue: MetaQueue, process: Process] = --INLINE-- {
... places the given process on the given meta queue using the meta link. On entry, we require that critical traps are disabled and that the metaLock is held.
IF process.meta = NIL THEN {
tail: Process = metaQueue^;
IF tail = NIL THEN process.meta ← process
ELSE { process.meta ← tail.meta; tail.meta ← process };
metaQueue^ ← process;
};
};
DequeueMeta: PROC [metaQueue: MetaQueue, process: Process] = --INLINE-- {
... removes the given process from the given meta queue (timeout, ready, pageFault). On entry, we require that critical traps must be disabled and that the metaLock is held. The process state is not changed except for the meta link.
IF process.meta # NIL THEN {
remove from meta queue if it is there
tail: Process = metaQueue^;
IF tail # NIL THEN {
lag: Process ← tail;
DO
p: Process = lag.meta;
IF p = process THEN {
IF lag = p THEN metaQueue^ ← NIL
last one in queue, so show no chain
ELSE {
IF p = tail THEN metaQueue^ ← lag;
don't let deleted one be the queue head
lag.meta ← p.meta;
};
p.meta ← NIL;
EXIT;
};
lag ← p;
IF lag = tail THEN EXIT;
ENDLOOP;
};
};
};
Initialization
Init: PROC [indexPtr: LONG POINTER TO CARD] ~ {
... called by each processor when it starts, with interrupts disabled and with enough room on the EU stack that a stack overflow is guaranteed not to occur. The value of indexPtr^ is the index of this processor's ProcessorRep structure in processors, below. Other processors will spin on indexPtr^ until it increases by 1. Thus, only one processor can enter at a time. This is important only for Processor 0, which creates the Nachos, Process and Processor objects. Every other processor just acquires the metaLock to allocate itself Processor and Process objects for the scheduler to manipulate; it then releases the metaLock and idles until the scheduler comes up with something better for it to do.
processor: Processor;
process: Process;
index: CARD ~ indexPtr^;
Create the Nachos, Process and Processor objects, etc.
IF index = 0 THEN {
CreateNachos[];
CreateProcesses[];
CreateProcessors[];
pageFaultCond ← @pageFaultCondRep;
InitializeCondition[pageFaultCond, noTimeout];
};
Now it's okay to let other processors in ...
SimStoreInCard[ptr~indexPtr, new~index+1]; -- RDI: indexPtr^ ← index+1;
Set up my processor object (so I can acquire the metaLock).
processor ← ProcessorFromIndex[index];
SetCurrentProcessor[processor];
AcquireMetaLock[];
Allocate myself a Process object, initialize so the scheduler can manipulate me.
process ← NewProcessInner[];
process.state ← running;
process.priority ← lowestRunningPriority ← Priority.FIRST;
SetCurrentProcess[process];
processor.running ← process;
InsertProcessor[processor];
IF panicStopLock # NIL THEN {
Somebody else has already brought the system to a panic stop, or is about to do so. In either case, this process should come to a panic stop ...
(Do you suppose this code will ever be exercised on a real system? - ajd)
MarkFrame[];
SaveProcessState[process];
CurrentProcessor[].orders ← stopped;
RaiseReschedule[]; -- ensure that this processor will follow its own orders immediately.
};
ReleaseMetaLock[];
EnableInterrupts[];
SimCheckForRescheduleTrap[]; -- SIM only.
IF index = 0 THEN { SetPriority[normal]; SystemStartup[] };
Idle
SetPriority[Priority.FIRST];
DO
SimCheckForRescheduleTrap[]; -- SIM only.
SimYield[]; -- SIM only.
ENDLOOP;
};
SystemStartup: PROC ~ {
sP: SecureProcess;
SetPriority[excited]; sP ← Fork[TimeSliceServer, NIL, 0, 0]; SetPriority[normal];
Detach[sP];
sP ← Fork[CountA, NIL, 0, 0];
Detach[sP];
sP ← Fork[CountB, NIL, 0, 0];
Detach[sP];
};
tickConditionRep: ConditionRep;
ticksPerSlice: CARD ← 2;
TimeSliceServer: PROC ~ {
tickCondition: Condition ~ @tickConditionRep;
InitializeCondition[tickCondition, 0];
[] ← RegisterIntHandler[0, HandleTick];
DO
THROUGH [0 .. ticksPerSlice) DO
WaitInterrupt[tickCondition];
RDI: here we should test that the clock really did interrupt and reset it.
ENDLOOP;
ShuffleReadyQueues[];
ENDLOOP;
};
HandleTick: IntHandler [requestKind: RequestKind] ~ TRUSTED {
SIM only - this is copied from the body of ShuffleReadyQueues.
IF requestKind # 0 THEN ERROR; -- SIM: Sanity check ????
DelicateNotify[@tickConditionRep];
};
Tick: PROC ~ { EntryTick[] };
EntryTick: ENTRY PROC [lock: SimLock ← simIOLock] ~ {
simIORequestWord[0] ← TRUE;
SimRaiseReschedule[] };
counterA: CARD ← 0;
counterB: CARD ← 0;
CountOrders: TYPE ~ { init, go, quit };
countOrders: CountOrders ← init;
CountA: PROC ~ {
target: CARD ← 1;
WHILE countOrders = init DO
SimYield[];
SimCheckForRescheduleTrap[];
ENDLOOP;
WHILE countOrders = go DO
WHILE counterB < target DO
SimYield[];
SimCheckForRescheduleTrap[];
ENDLOOP;
target ← target.SUCC;
counterA ← counterA.SUCC;
ENDLOOP;
};
CountB: PROC ~ {
target: CARD ← 1;
WHILE countOrders = init DO
SimYield[];
SimCheckForRescheduleTrap[];
ENDLOOP;
WHILE countOrders = go DO
WHILE counterA < target DO
SimYield[];
SimCheckForRescheduleTrap[];
ENDLOOP;
target ← target.SUCC;
counterB ← counterB.SUCC;
ENDLOOP;
};
Process Storage
Mostly just dummied-up for simulation purposes.
maxProcesses: CARDINAL = 40;
processes: ARRAY [0..maxProcesses) OF ProcessRep;
InvalidProcess: PUBLIC ERROR [secureProcess: SecureProcess] ~ CODE;
ValidateProcess: PUBLIC SAFE PROC [sP: SecureProcess] ~ TRUSTED {
RDI: Does this have to change?
i: CARD ~ sP.index;
IF (i >= maxProcesses) OR (processes[i].secure.key # sP.key) THEN ERROR InvalidProcess[sP];
};
ProcessFromSecureProcess: PROC [sP: SecureProcess] RETURNS [p: Process] ~ --INLINE-- {
i: CARD ~ sP.index;
IF (i >= maxProcesses) OR ((p ← @processes[i]).secure.key # sP.key) THEN ERROR InvalidProcess[sP];
};
SecureProcessFromProcess: PROC [p: Process] RETURNS [SecureProcess] ~ --INLINE-- {
RETURN [p.secure] };
CreateProcesses: PROC [] ~ {
... create all the process objects. On entry we assume interrupts are disabled and the metaLock is held.
RDI: Does this have to change?
FOR i: CARDINAL IN [0..maxProcesses) DO
p: Process ~ @processes[i];
p.sim ← NEW [ SimProcessRep ← [lock~NEW[SimLockRep ← []], awakened~FALSE, processor~NIL] ]; -- SIM only.
Possibly we should not initialize the secure key fields of the ProcessReps here. Arguably they should be initialized to random values rather than 0.
p.secure ← [key~0, index~i];
CleanProcess[p, FALSE];
ENDLOOP;
};
CleanProcess: PROC [p: Process, check: BOOL ← TRUE] RETURNS [] ~ {
... put a Process object on the free list, restoring its fields to a "clean" state. On entry we require that interrupts are disabled and the metalock is held.
???? What do we do if check is TRUE ????
RDI: remove initialization of sim fields; do something about euState?
p.secure.key ← p.secure.key + 1;
p.queue ← NIL;
p.next ← NIL;
p.when ← noTimeout;
p.page ← wordZero;
p.priority ← Priority.FIRST;
p.state ← free;
RDI: do something about the EUState here? Probably not necessary. - ajd
InitializeMonitorLock[@p.lock];
p.abortState ← none;
p.joinState ← none;
InitializeCondition[@p.joinCondition, noTimeout];
{
-- SIM only
p.sim.awakened ← FALSE;
p.sim.processor ← NIL;
};
IF tailFreeProcess = NIL THEN p.meta ← tailFreeProcess ← p
ELSE { p.meta ← tailFreeProcess.meta; tailFreeProcess.meta ← p };
};
NewProcessInner: PROC RETURNS [p: Process] ~ --INLINE-- {
... get a process object from free process list. On entry we require that interrupts are disabled and the metalock is held.
IF tailFreeProcess = NIL THEN ERROR; -- too horrible to contemplate
p ← tailFreeProcess.meta;
IF tailFreeProcess = p THEN tailFreeProcess ← NIL ELSE tailFreeProcess.meta ← p.meta;
p.meta ← NIL; -- for sanity when debugging
};
NewProcess: PROC RETURNS [p: Process] ~ {
... get a process object from free process list.
SIM: On entry we require that interrupts are enabled.
DisableInterrupts[];
AcquireMetaLock[];
p ← NewProcessInner[];
ReleaseMetaLock[];
EnableInterrupts[]; -- SIM only. Automatic in RDI.
SimCheckForRescheduleTrap[]; -- SIM only.
};
Differences Between PrincOps and DragOps
Note: this section requires knowledge of the PrincOps architecture. I freely admit that my knowledge of said architecture is spotty at best. This section is also incomplete.
Multiple processors
With current PrincOps machines atomicity is achieved through a combination of locks, inhibition of interrupts, priority levels, and non-interruptibility of non-faulting microcode routines. In the DragOps architecture, only locks (based on memory bus arbitration) and inhibition of interrupts for a single processor (based on IFU inhibition of the Reschedule interrupt) can be used as sources of atomicity. This complicates life, you betcha.
Memory vs. time
For PrincOps, memory was assumed to be expensive, and cycles were assumed to be expendable if it saved memory. For example, a process block costs only 8 bytes when no state vector is used, and only costs 36 additional bytes when a state vector is used (the unusual case).
For DragOps, memory is assumed to be relatively cheap, and cycles should be spent only to avoid gross memory costs. For example, a process block costs at least 40 bytes in the normal case, and there is no exceptional case.
The ready queue
For PrincOps, the head of the ready queue is the currently running process. For DragOps, running processes are not on the ready queue at all. Instead, running processes are located through the current ring of processors.
The DragOps ready queue is special in that it is actually an array of metaqueues, one for each priority level. This bounds the search for the best ready process to a time proportional to the number of priority levels (small), yet also allows us to place processes on the ready queue in unit time.
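A rough sketch of that structure in C (the number of priority levels, the priority ordering, and the queue representation are illustrative assumptions):
#include <stddef.h>

enum { N_PRIORITIES = 8 };                  /* illustrative; not the DragOps value */

typedef struct Process Process;
typedef struct { Process *tail; } MetaQueue; /* circular queue, pointer at the tail */

static MetaQueue readyQueue[N_PRIORITIES];   /* one metaqueue per priority level */

/* Enqueueing is unit time: index by priority, then tail-insert.
   Finding the best ready process is bounded by the number of priority
   levels: scan from the highest priority down (assuming a higher index
   means a higher priority) for a non-empty metaqueue. */
static MetaQueue *best_ready_queue(void) {
    for (int pri = N_PRIORITIES - 1; pri >= 0; pri--)
        if (readyQueue[pri].tail != NULL) return &readyQueue[pri];
    return NULL;                             /* no ready process */
}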
Queues and meta queues
For PrincOps, all queues are circular queues of processes linked through a particular word in the PSB (Process State Block). Condition variables, monitor locks, and even the ready queue are all queues where the queue pointer points at the tail of the process queue. For DragOps, there are Queues and MetaQueues. Although both are circular queues, and both point at the tail, the link words are different. A DragOps process can be queued at the same time on both a queue and a metaqueue.
DragOps queues are used by "normal" Mesa monitors and condition variables. MetaQueues are used for more specialized reasons, like the ready queue (waiting for processor), the pageFault queue (waiting for a page to be brought into memory), and the timeout queue (waiting for some time to expire). Queues need not be pinned, while MetaQueues must be pinned. The scheduler is permitted to move processes between MetaQueues, but is not permitted to touch queues. All of this is done in the name of simplicity, you betcha.
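In outline (a C sketch; the field names next and meta follow the link words used elsewhere in this document, the rest is illustrative):
typedef struct Process Process;
struct Process {
    Process *next;   /* link word for Queues (monitor and condition queues) */
    Process *meta;   /* link word for MetaQueues (ready, pageFault, timeout) */
    /* ... rest of the process block ... */
};

/* Both kinds are circular and point at the tail; only the link word differs,
   which is what lets a process sit on one Queue and one MetaQueue at once
   (e.g., waiting on a condition variable while also on the timeout metaqueue). */
typedef struct { Process *tail; } Queue;      /* need not be pinned */
typedef struct { Process *tail; } MetaQueue;  /* must be pinned */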
Monitor locks & condition variables
For PrincOps, monitor locks and condition variables are managed by microcode that has carefully been constructed to not take interrupts at awkward times.
For DragOps, managing monitor lock and condition queues is too complex to be performed in a single instruction, so each monitor lock and condition has a spin lock that must be acquired before the queue can be examined or modified. A potential problem with this scheme is that a higher-priority process may attempt to acquire the spin lock while a lower-priority process holds it. Therefore, each spin lock records the address of the owning process, so a process that fails to acquire the lock can yield control directly to the owning process. Although this is technically a violation of preemptive scheduling, it is necessary to prevent livelock (where no progress can be made because the higher-priority process keeps spinning).
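A sketch of the acquire path in C, using a compare-and-swap as a stand-in for CStore and a hypothetical YieldTo as the directed Yield (an illustration of the idea, not the DragOps instruction sequence):
#include <stdatomic.h>

typedef struct Process Process;

typedef struct {
    _Atomic(Process *) owner;   /* NULL when free; else the owning process */
} SpinLock;

extern Process *CurrentProcess(void);  /* assumed environment hooks */
extern void YieldTo(Process *p);       /* directed Yield; acts undirected if p cannot run */

static void spin_lock_acquire(SpinLock *l) {
    Process *self = CurrentProcess();
    for (;;) {
        Process *expected = NULL;
        /* CStore analogue: install ourselves as owner if the lock is free. */
        if (atomic_compare_exchange_strong(&l->owner, &expected, self)) return;
        /* Lock held: donate our cycles to the owner rather than spinning,
           so a higher-priority waiter cannot livelock a lower-priority owner. */
        YieldTo(expected);
    }
}

static void spin_lock_release(SpinLock *l) {
    atomic_store(&l->owner, NULL);
}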
State vectors
For PrincOps, the eval stack and other cleanup information is contained in a state vector. Minimal stack restrictions on monitor entry and condition wait ensure that the eval stack is empty when such processes go to sleep, which tends to keep the number of state vectors low.
For DragOps, there are no state vectors. When a process is about to go to sleep, it saves the frames in the IFU and EU stacks to memory, saves the appropriate EU registers to the process block, and puts itself on the appropriate metaqueue (if any). The only problem is ensuring that there are sufficient nachos to accommodate the stack of the process. The current thinking is to have an emergency-level process put every other process to sleep when frames are about to run out. When that has been done, a world-swap debug can take place.
An improvement on this scheme can be made if there is an error generated for a particular process when it has used "too many" frames. Since this is typically due to a process executing unbounded recursion, we can tolerate at least a few runaways without having to world-swap.
In both cases, we have to have some mechanism for keeping track of how many frames are in use by a given process, and how many are left in the global pool of frames. Details are yet to be specified.
Costs
For PrincOps, a process switch is on the order of 5 procedure call times. A procedure call (and return) is on the order of 5 microseconds (on the Dorado).
For DragOps, a process switch is on the order of 100 procedure call times (a very rough estimate). This is partly because procedure calls are about 10 times faster, and partly because there is more per-process state to save and restore.
Timeout
For PrincOps, a timeout occurs when the clock ticks and some PSB that is awaiting timeout has its wait count go to 0. All PSBs have to be touched at every tick, although this is done via microcode.
For DragOps, at each tick every process in timeoutMetaQueue has its when field decremented. If that field reaches 0, the process is placed in the ready queue. It then becomes the responsibility of the process to remove itself from the appropriate condition variable. This makes page faults (if any) due to touching the condition variable occur in the "right" process. Only PSBs waiting for timeout have to be touched at every tick.
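A sketch of the per-tick scan in C (the metaqueue is simplified to a non-circular chain, and EnqueueReady is an assumed hook; the when field and the decrement-to-zero rule are as described above):
#include <stddef.h>

typedef struct Process Process;
struct Process {
    Process *meta;          /* metaqueue link word */
    unsigned when;          /* remaining ticks until timeout */
};

extern void EnqueueReady(Process *p);   /* assumed hook into the ready queue */

static Process *timeoutMetaQueue;       /* simplified to a NULL-terminated chain */

/* At each tick, decrement every waiter's when field; a waiter that
   reaches 0 moves to the ready queue and will remove itself from its
   condition variable when it next runs. */
static void timeout_tick(void) {
    Process **link = &timeoutMetaQueue;
    while (*link != NULL) {
        Process *p = *link;
        if (--p->when == 0) {
            *link = p->meta;            /* unlink from the timeout metaqueue */
            p->meta = NULL;
            EnqueueReady(p);
        } else {
            link = &p->meta;
        }
    }
}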
Aborts
For PrincOps (as implemented for Cedar), aborts are enabled/inhibited on a per condition variable basis, and aborts are tested only by CVs & ICVs with aborts enabled and by explicit tests.
For DragOps, aborts are enabled/inhibited on a per-process basis and on a per-CV basis. Abort requests are tested when waiting on CVs, ICVs, and MLs, as well as by the normal explicit tests. Is it worth having the enable/disable on a per-process basis? (It costs rather little.)
Things to do
This section is a list of things that still need some design work.
Fork / join
These need to be coded in greater detail.
Page faults
How will the fault handler work, and how will it put processes to sleep to wait for page faults? For an EU page fault, the offending memory address is in the MAR register in the EU. For an IFU page fault, the offending PC is for the start of the instruction, so the memory address to page in may be on the next page (since this fits well with page clustering anyway, we may want to always page in the following page on an IFU page fault whenever the following page is also valid).
Soft and hard emergency limits on frames
These need support in the scheduler, as well as in stack save/restore. Reaching the soft limit (per process) should raise an error. This is just an attempt to restrain runaway processes. Reaching the hard limit will cause system shutdown and a call to the world-swap debugger. This probably will only happen if a significant number of runaways occur.
Timeout mechanism
In current Cedar relatively few condition variables have timeouts (the default is not to have timeouts). The current design for the timeout queue is based on the assumption that this situation will continue on Dragon. The current design has O(n) insertion and deletion of entries from the queue, O(n) updating for every tick, and O(1) testing, where n is the number of processes waiting for timeout. A heap-based priority queue has O(log n) insertion and deletion, and O(1) testing, but it also has more complicated code, and needs an auxiliary pinned data structure (1 word / process). The current simple design will be used until it becomes a bottleneck.
Should the timeout "process" just be part of the scheduler? How will elapsed time be determined, and to what accuracy? Just for the record, there are three clocks of current interest to Cedar:
1: Time of day clock. This need not have resolution of better than a second, and it need not be locally stable, but it should have very good long-term accuracy.
2: Timeout clock. This need not have great accuracy, but it must generate interrupts, and it should have resolution of 10-100 milliseconds. More resolution is OK provided that timeout interrupts don't occur more often than every 10 milliseconds.
3: Elapsed time clock. This need not have great long-term accuracy, but it should have very good short-term accuracy, have roughly microsecond resolution, and it must be quite cheap to read. This clock can easily synthesize the Timeout clock.
Stack trace
SafeStorage needs a way to trace out the process stacks atomically. That is, all possible references in the process stacks must be discovered without any reference-containing process making headway. Dump frames are already pinned, and the panicStop mechanism can be used to stop all processes except for the current process (we may also want to have a stopRC command that would stop all processes that had not declared themselves to be non-RC). We can allocate frame extensions out of a special pool of memory that allows scanning. Possibly we should also make this pool pinned?
Reschedule & page fault
Setting L at procedure entry essentially transfers registers from the called frame to the calling frame, as far as stack save and restore are concerned. It will work properly only if both the calling frame and the called frame are in the EU (and IFU) registers. If the various compilers (and assembly-language coders) guarantee that no calls are made before setting L, then there are only two sources of having the calling frame migrated into memory: reschedule & page fault. For these traps, if the youngest frame below the trap is P, and the next youngest is Q, and Q has zero registers, then both P and Q have to be restored before returning from the trap.