DIRECTORY
Arpa USING [nullAddress],
Basics USING [bytesPerWord],
BasicTime USING [GetClockPulses, MicrosecondsToPulses, Pulses],
Booting USING [RegisterProcs, RollbackProc, switches],
CommBuffer USING [],
CommBufferExtras USING [gapNoList, gapRecvOne, gapSendOne],
CommDriver USING [AllocBuffer, AddNetwork, Buffer, bytesToRead, FreeBuffer, Network, NetworkObject, NoThankYou, recvPriority, sendPriority, watcherPriority, wordsInIocb],
CommDriverType USING [Encapsulation, ethernetOneEncapsulationOffset, ethernetOneEncapsulationBytes],
DebuggerSwap USING [CallDebugger],
EthernetOneFace USING [AddCleanup, controlBlockSize, GetHostNumber, GetNextDevice, GetStatusAndCollisions, GetStatusAndLength, GetPacketsMissed, Handle, hearSelf, HostArray, IOCB, MarkKilled, nullHandle, QueueInput, QueueOutput, SetInputHosts, Status, TurnOff, TurnOn],
EthernetDriverStats USING [EtherStats, EtherStatsRep, MaxTries],
GermSwap USING [], -- Needed by Booting.switches
PrincOpsUtils USING [AllocateNakedCondition, LongCopy],
Process USING [Detach, DisableTimeout, GetPriority, MsecToTicks, Priority, SetPriority, SetTimeout],
Pup USING [allHosts, Host, nullNet],
SafeStorage USING [PinObject],
XNS USING [unknownNet];
EthernetOneDriver: CEDAR MONITOR
LOCKS data USING data: InstanceData
IMPORTS BasicTime, Booting, CommDriver, DebuggerSwap, EthernetOneFace, PrincOpsUtils, Process, SafeStorage
EXPORTS CommBuffer = {
Buffer: TYPE = CommDriver.Buffer;
Network: TYPE = CommDriver.Network;
Encapsulation: PUBLIC TYPE = CommDriverType.Encapsulation;
Next: PROC [b: Buffer] RETURNS [Buffer] = TRUSTED INLINE {
RETURN[LOOPHOLE[b.ovh.next]]; };
Data: PROC [b: Buffer] RETURNS [LONG POINTER] = TRUSTED INLINE {
RETURN[@b.ovh.encap + CommDriverType.ethernetOneEncapsulationOffset]; };
Bytes: PROC [bytes: NAT] RETURNS [NAT] = TRUSTED INLINE {
RETURN[bytes + CommDriverType.ethernetOneEncapsulationBytes]; };
BytesToRead: PROC RETURNS [NAT] = INLINE {
RETURN[Bytes[CommDriver.bytesToRead]]; };
Iocb: PROC [b: Buffer] RETURNS [EthernetOneFace.IOCB] = TRUSTED INLINE {
RETURN[LOOPHOLE[b.ovh.iocb]]; };
ElapsedPulses: PROC [startTime: BasicTime.Pulses] RETURNS [BasicTime.Pulses] = INLINE {
RETURN[BasicTime.GetClockPulses[] - startTime]; };
InstanceData: TYPE = REF InstanceDataRep;
InstanceDataRep: TYPE = MONITORED RECORD [
ether: EthernetOneFace.Handle,
timer: CONDITION,
inWait, outWait: LONG POINTER TO CONDITION ← NIL,
inHickup, outHickup: CONDITION,
inHickups, inBurps: INT ← 0,
outHickups, outBurps: INT ← 0,
inInterruptMask, outInterruptMask: WORD,
firstInputBuffer, lastInputBuffer: Buffer,
firstOutputBuffer, lastOutputBuffer: Buffer,
timeLastRecv, timeSendStarted: BasicTime.Pulses,
lastMissed: CARDINAL,
numberOfInputBuffers: CARDINAL,
me: Pup.Host,
promiscuous: BOOL,
stats: EthernetDriverStats.EtherStats ];
defaultNumberOfInputBuffers: NAT ← 5;
thirtySecondsOfPulses: BasicTime.Pulses ← BasicTime.MicrosecondsToPulses[30000000];
fiveSecondsOfPulses: BasicTime.Pulses ← BasicTime.MicrosecondsToPulses[5000000];
-- Sending
GetPupEncapsulation: PROC [network: Network, dest: Pup.Host]
RETURNS [encap: Encapsulation] = TRUSTED {
data: InstanceData = NARROW[network.instanceData];
encap ← [ ethernetOne[
etherSpare1: 0, -- fill with something known so 7 word compare will work
etherSpare2: 0,
etherSpare3: 0,
etherSpare4: 0,
etherSpare5: 0,
ethernetOneDest: dest,
ethernetOneSource: data.me,
ethernetOneType: pup ]];
};
Return: PROC [network: Network, buffer: Buffer, bytes: NAT] = {
data: InstanceData = NARROW[network.instanceData];
buffer.ovh.encap.ethernetOneDest ← buffer.ovh.encap.ethernetOneSource;
buffer.ovh.encap.ethernetOneSource ← data.me;
Send[network, buffer, bytes];
};
unknownDest: LONG CARDINAL ← 0;
Send: PROC [network: Network, buffer: Buffer, bytes: NAT] = {
data: InstanceData = NARROW[network.instanceData];
priority: Process.Priority = Process.GetPriority[];
dest: Pup.Host = buffer.ovh.encap.ethernetOneDest;
IF network.dead THEN RETURN;
IF buffer.ovh.encap.ethernetOneType = translationFailed THEN {
unknownDest ← unknownDest.SUCC;
RETURN; };
IF priority # CommDriver.sendPriority THEN Process.SetPriority[CommDriver.sendPriority];
IF ~EthernetOneFace.hearSelf AND (dest = data.me OR dest = Pup.allHosts OR data.promiscuous) THEN {
-- Sending to ourselves: copy the packet over, since we can't hear our own transmissions.
copy: Buffer ← CommDriver.AllocBuffer[];
words: NAT ← (Bytes[bytes]+Basics.bytesPerWord-1) / Basics.bytesPerWord;
copy.ovh.network ← network;
copy.ovh.next ← NIL;
copy.ovh.direction ← none;
TRUSTED {
PrincOpsUtils.LongCopy[from: Data[buffer], nwords: words, to: Data[copy]]; };
SELECT copy.ovh.encap.ethernetOneType FROM
arpa => copy ← network.arpa.recv[network, copy, bytes];
arp => copy ← network.arpa.recvTranslate[network, copy, bytes];
xns => copy ← network.xns.recv[network, copy, bytes];
translation => copy ← network.xns.recvTranslate[network, copy, bytes];
pup => copy ← network.pup.recv[network, copy, bytes];
fromImp, toImp => copy ← network.other.recv[network, copy, bytes];
ENDCASE => copy ← network.other.recv[network, copy, bytes];
IF copy # NIL THEN CommDriver.FreeBuffer[copy]; };
buffer.ovh.network ← network;
buffer.ovh.next ← NIL;
SendInner[data, buffer, bytes];
IF priority # CommDriver.sendPriority THEN Process.SetPriority[priority];
};
-- Unless you are remote debugging, an error or breakpoint in here will probably kill your whole machine. The problem is that the debugger wants to check the time stamp on files.
SendInner: ENTRY PROC [data: InstanceData, b: Buffer, bytes: NAT] = {
stats: EthernetDriverStats.EtherStats = data.stats;
status: EthernetOneFace.Status;
collisions: NAT;
EthernetOneFace.QueueOutput[data.ether, Data[b], Bytes[bytes], Iocb[b]];
IF data.firstOutputBuffer # NIL AND data.lastOutputBuffer.ovh.next # NIL THEN ERROR;
IF b.ovh.gap # CommBufferExtras.gapNoList THEN
DebuggerSwap.CallDebugger["SendInner: buffer already in a list!"];
b.ovh.gap ← CommBufferExtras.gapSendOne; -- DKW: b is now in the output queue
IF data.firstOutputBuffer = NIL THEN data.firstOutputBuffer ← b
ELSE data.lastOutputBuffer.ovh.next ← b;
data.lastOutputBuffer ← b;
data.timeSendStarted ← BasicTime.GetClockPulses[];
DO
TRUSTED { WAIT data.outWait^; };
[status, collisions] ← EthernetOneFace.GetStatusAndCollisions[Iocb[b]];
IF status # pending THEN EXIT;
data.outBurps ← data.outBurps.SUCC;
BROADCAST data.outHickup;
ENDLOOP;
UNTIL b = data.firstOutputBuffer DO
data.outHickups ← data.outHickups.SUCC;
WAIT data.outHickup;
ENDLOOP;
data.firstOutputBuffer ← Next[b];
b.ovh.next ← NIL; -- DKW: just to be careful ...
IF b.ovh.gap # CommBufferExtras.gapSendOne THEN
DebuggerSwap.CallDebugger["SendInner: clobbered buffer in output queue!"];
b.ovh.gap ← CommBufferExtras.gapNoList; -- DKW: b now removed from the queue
SELECT status FROM
ok => {
stats.packetsSent ← stats.packetsSent + 1;
stats.wordsSent ← stats.wordsSent + bytes/Basics.bytesPerWord;
stats.loadTable[collisions] ← stats.loadTable[collisions] + 1;
};
ENDCASE => {
SELECT status FROM
tooManyCollisions => {
tooMany: NAT = EthernetDriverStats.MaxTries;
stats.loadTable[tooMany] ← stats.loadTable[tooMany] + 1; };
underrun => stats.overruns ← stats.overruns + 1;
ENDCASE => stats.badSendStatus ← stats.badSendStatus + 1;
};
BROADCAST data.outHickup;
};
-- Receiving
Recv: PROC [network: Network] = {
data: InstanceData = NARROW[network.instanceData];
b: Buffer ← CommDriver.AllocBuffer[];
Process.SetPriority[CommDriver.recvPriority];
DO
good: BOOL;
bytes: NAT;
b.ovh.network ← network;
b.ovh.next ← NIL;
b.ovh.direction ← none;
[good, bytes] ← RecvInner[data, b];
IF bytes < CommDriverType.ethernetOneEncapsulationBytes THEN good ← FALSE
ELSE bytes ← bytes - CommDriverType.ethernetOneEncapsulationBytes;
IF good THEN
SELECT b.ovh.encap.ethernetOneType FROM
arpa => b ← network.arpa.recv[network, b, bytes];
arp => b ← network.arpa.recvTranslate[network, b, bytes];
xns => b ← network.xns.recv[network, b, bytes];
translation => b ← network.xns.recvTranslate[network, b, bytes];
pup => b ← network.pup.recv[network, b, bytes];
fromImp, toImp => b ← network.other.recv[network, b, bytes];
ENDCASE => b ← network.other.recv[network, b, bytes]
ELSE b ← network.error.recv[network, b, bytes];
IF b = NIL THEN b ← CommDriver.AllocBuffer[];
ENDLOOP;
};
-- Unless you are remote debugging, an error or breakpoint in here will probably kill your whole machine. The problem is that the debugger wants to check the time stamp on files.
RecvInner: ENTRY PROC [data: InstanceData, b: Buffer]
RETURNS [good: BOOL, bytes: NAT] = {
stats: EthernetDriverStats.EtherStats = data.stats;
status: EthernetOneFace.Status;
EthernetOneFace.QueueInput[data.ether, Data[b], BytesToRead[], Iocb[b]];
IF data.firstInputBuffer # NIL AND data.lastInputBuffer.ovh.next # NIL THEN ERROR;
IF b.ovh.gap # CommBufferExtras.gapNoList THEN
DebuggerSwap.CallDebugger["RecvInner: buffer already in a list!"];
b.ovh.gap ← CommBufferExtras.gapRecvOne; -- DKW: b is now in the input queue
IF data.firstInputBuffer = NIL THEN data.firstInputBuffer ← b
ELSE data.lastInputBuffer.ovh.next ← b;
data.lastInputBuffer ← b;
DO
TRUSTED { WAIT data.inWait^; };
[status, bytes] ← EthernetOneFace.GetStatusAndLength[Iocb[b]];
IF status # pending THEN EXIT;
data.inBurps ← data.inBurps.SUCC;
BROADCAST data.inHickup;
ENDLOOP;
UNTIL b = data.firstInputBuffer DO
data.inHickups ← data.inHickups.SUCC;
WAIT data.inHickup;
ENDLOOP;
data.firstInputBuffer ← Next[b];
b.ovh.next ← NIL; -- DKW: just to be careful ...
IF b.ovh.gap # CommBufferExtras.gapRecvOne THEN
DebuggerSwap.CallDebugger["RecvInner: clobbered buffer in input queue!"];
b.ovh.gap ← CommBufferExtras.gapNoList; -- DKW: b now removed from the queue
data.timeLastRecv ← BasicTime.GetClockPulses[];
SELECT status FROM
ok => {
good ← TRUE;
stats.packetsRecv ← stats.packetsRecv + 1;
stats.wordsRecv ← stats.wordsRecv + bytes/Basics.bytesPerWord; };
ENDCASE => {
good ← FALSE;
stats.badRecvStatus ← stats.badRecvStatus + 1;
IF status = overrun THEN stats.overruns ← stats.overruns + 1; };
BROADCAST data.inHickup;
};
-- Watching
Rollback: Booting.RollbackProc = {
-- [clientData: REF ANY]
network: Network = NARROW[clientData];
data: InstanceData = NARROW[network.instanceData];
SmashCSB[data];
};
Watcher: PROC [network: Network] = {
data: InstanceData = NARROW[network.instanceData];
stats: EthernetDriverStats.EtherStats = NARROW[network.stats];
missedIn: NAT ← 0;
missedOut: NAT ← 0;
inputNotifys: INT ← 0;
outputNotifys: INT ← 0;
fixupInputs: INT ← 0;
shootDownOutputs: INT ← 0;
Process.SetPriority[CommDriver.watcherPriority];
DO
missed: CARDINAL ← EthernetOneFace.GetPacketsMissed[data.ether];
newMissed: CARDINAL ← (missed - data.lastMissed);
-- This is the only place where inputOff gets updated, so we don't need the monitor lock.
IF newMissed < 10000 THEN stats.inputOff ← stats.inputOff + newMissed;
data.lastMissed ← missed;
-- Since the interrupt routines run at higher priority than we do, all the interrupts should get processed before we can see them. If we see anything interesting, an interrupt has probably been lost. However, there is a slim chance that it was generated between the time we started decoding the instruction and the time the data was actually fetched. That is why we look several times. Of course, if it has not been processed when we look again, it could be a new interrupt that has just arrived.
FOR i: NAT IN [0..25) DO -- Check for lost input interrupts
IF InputChainOK[data] THEN { missedIn ← 0; EXIT; };
REPEAT
FINISHED => {
missedIn ← missedIn.SUCC;
inputNotifys ← inputNotifys.SUCC;
WatcherNotifyInput[data]; };
ENDLOOP;
FOR i: NAT IN [0..25) DO -- Check for lost output interrupts
IF OutputChainOK[data] THEN { missedOut ← 0; EXIT; };
REPEAT
FINISHED => {
missedOut ← missedOut.SUCC;
outputNotifys ← outputNotifys.SUCC;
WatcherNotifyOutput[data]; };
ENDLOOP;
IF (missedIn > 10) -- Check for input confusion
OR (ElapsedPulses[data.timeLastRecv] > thirtySecondsOfPulses) THEN {
missedIn ← 0;
fixupInputs ← fixupInputs.SUCC;
SmashCSB[data]; };
IF (missedOut > 10) -- Check for output confusion
OR (data.firstOutputBuffer # NIL AND (ElapsedPulses[data.timeSendStarted] > fiveSecondsOfPulses)) THEN {
missedOut ← 0;
shootDownOutputs ← shootDownOutputs.SUCC;
SmashCSB[data]; };
WatcherWait[data];
ENDLOOP;
};
InputChainOK: ENTRY PROC [data: InstanceData] RETURNS [BOOL] = {
status: EthernetOneFace.Status;
IF data.firstInputBuffer = NIL THEN RETURN[TRUE];
status ← EthernetOneFace.GetStatusAndLength[Iocb[data.firstInputBuffer]].status;
IF status = pending THEN RETURN[TRUE];
RETURN[FALSE];
};
OutputChainOK: ENTRY PROC [data: InstanceData] RETURNS [BOOL] = {
status: EthernetOneFace.Status;
IF data.firstOutputBuffer = NIL THEN RETURN[TRUE];
status ← EthernetOneFace.GetStatusAndCollisions[Iocb[data.firstOutputBuffer]].status;
IF status = pending THEN RETURN[TRUE];
RETURN[FALSE];
};
WatcherNotifyInput: ENTRY PROC [data: InstanceData] = TRUSTED {
NOTIFY data.inWait^;
};
WatcherNotifyOutput: ENTRY PROC [data: InstanceData] = TRUSTED {
NOTIFY data.outWait^;
};
SmashCSB: ENTRY PROC [data: InstanceData] = {
EthernetOneFace.TurnOff[data.ether];
FOR b: Buffer ← data.firstInputBuffer, Next[b] UNTIL b = NIL DO
status: EthernetOneFace.Status;
status ← EthernetOneFace.GetStatusAndLength[Iocb[b]].status;
IF status = pending THEN EthernetOneFace.MarkKilled[Iocb[b]];
TRUSTED { NOTIFY data.inWait^; };
ENDLOOP;
FOR b: Buffer ← data.firstOutputBuffer, Next[b] UNTIL b = NIL DO
status: EthernetOneFace.Status;
status ← EthernetOneFace.GetStatusAndCollisions[Iocb[b]].status;
IF status = pending THEN EthernetOneFace.MarkKilled[Iocb[b]];
TRUSTED { NOTIFY data.outWait^; };
ENDLOOP;
BROADCAST data.inHickup;
BROADCAST data.outHickup;
EthernetOneFace.TurnOn[data.ether, data.inInterruptMask, data.outInterruptMask];
data.lastMissed ← EthernetOneFace.GetPacketsMissed[data.ether];
data.timeLastRecv ← BasicTime.GetClockPulses[];
IF data.promiscuous THEN TRUSTED {
hostArray: EthernetOneFace.HostArray ← ALL[TRUE];
EthernetOneFace.SetInputHosts[data.ether, @hostArray]; }
};
WatcherWait: ENTRY PROC [data: InstanceData] = {
WAIT data.timer;
};
SetPromiscuous: PROC [network: Network, promiscuous: BOOL] = {
data: InstanceData = NARROW[network.instanceData];
IF promiscuous THEN TRUSTED {
hostArray: EthernetOneFace.HostArray ← ALL[TRUE];
EthernetOneFace.SetInputHosts[data.ether, @hostArray]; }
ELSE EthernetOneFace.SetInputHosts[data.ether, NIL];
data.promiscuous ← promiscuous;
};
IsThisForMe: PROC [network: Network, buffer: Buffer] RETURNS [yes: BOOL] = {
data: InstanceData = NARROW[network.instanceData];
IF buffer.ovh.encap.ethernetOneDest = data.me THEN RETURN[TRUE];
IF buffer.ovh.encap.ethernetOneDest = Pup.allHosts THEN RETURN[TRUE];
RETURN[FALSE];
};
ToBroadcast: PROC [network: Network, buffer: Buffer] RETURNS [yes: BOOL] = {
IF buffer.ovh.encap.ethernetOneDest = Pup.allHosts THEN RETURN[TRUE];
RETURN[FALSE];
};
MoreBuffers: PROC [network: Network, total: NAT] = {
data: InstanceData = NARROW[network.instanceData];
IF total < data.numberOfInputBuffers THEN RETURN;
FOR i: NAT IN [data.numberOfInputBuffers..total) DO
TRUSTED { Process.Detach[FORK Recv[network]]; };
ENDLOOP;
data.numberOfInputBuffers ← total;
};
}.
In the ideal world, the code for an interrupt routine would look like:
...
Queue up request
WAIT for interrupt
...
Unfortunately, things don't work that simply. One problem is that there is a race between the Queue/WAIT and the NOTIFY that results when the hardware finishes. (In normal WAIT/NOTIFY sequences, the CONDITION is part of the monitored data, so there isn't any race.) The microcode "solves" this problem with a wakeup waiting bit in the condition variable. If you only had one request in progress at any time, the wakeup waiting trick would make the code above work. Things get more complicated if several requests are queued, since interrupts may happen while the interrupt routine is processing some other request. The wakeup waiting bit is only one bit, so it can't remember how many extra wakeups are necessary. Thus the code now looks like:
...
Queue up request
UNTIL status = done
DO
WAIT for interrupt
ENDLOOP
...
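SendInner and RecvInner above follow this pattern directly; abridged from SendInner (the burp counting and BROADCAST are elided):
EthernetOneFace.QueueOutput[data.ether, Data[b], Bytes[bytes], Iocb[b]];
DO
TRUSTED { WAIT data.outWait^; };
[status, collisions] ← EthernetOneFace.GetStatusAndCollisions[Iocb[b]];
IF status # pending THEN EXIT;
ENDLOOP;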
The WAIT will be bypassed if an interrupt happens while a previous event was being processed. In that case, the first real try at WAITing will encounter the wakeup waiting bit and get a wakeup without any work to do.
That picture assumes that there is only one interrupt routine for the microcode to notify. This driver has one routine for each active request. Since there is only one process at a time in the critical region, a single bit of wakeup waiting should be enough. As long as they are all running at the same priority, they should get woken up in the right order. Unfortunately, there is a hairy case. Consider what happens if 1) a request is queued and a process is waiting for it, 2) a second process has queued a request but hasn't waited for it, 3) the first request finishes (and the waiting process is moved from the CV to the ready list), 4) the second request finishes, and 5) the second process tries to wait but hits the wakeup waiting bit. The result is that the second process keeps running, but the first hasn't run yet. (I found this by trial and error. Under heavy load, it would happen about 1 time in 100000. /HGM)
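As a sketch of that interleaving (hypothetical processes P1 and P2 with requests A and B, in the notation above):
P1: Queue up request A; WAIT
P2: Queue up request B
interrupt: A finishes -- P1 moves from the CONDITION to the ready list, but hasn't run
interrupt: B finishes -- only the wakeup waiting bit is left to record this
P2: WAIT -- consumes the wakeup waiting bit and returns immediately
P2 keeps running while P1 has yet to be scheduled, even though A completed first.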