Cache.rose
Last edited by: Barth, May 31, 1984 6:02:04 pm PDT
Last edited by: McCreight, June 12, 1984 5:57:32 pm PDT
Directory IO;
Imports Atom, BitOps, CacheOps, Cucumber, Dragon;
Library CacheMInterface, CachePInterface, CacheEntries;
Cache: CELL [
Signal names obey the following convention: If a signal x is computed during PhA and remains valid throughout the following PhB, it is denoted as xAB. If x is computed during PhA and can change during the following PhB (as, for example, in precharged logic), it is denoted as xA. In this latter case, a client wanting to use x during PhB must receive it in his own latch open during PhA. xBA and xB are defined symmetrically. Positive logic is assumed (asserted = TRUE = 1 = more positive logic voltage); negative-logic signals have an extra "N" at or very near the beginning of the signal name (e.g., PNPError for PBus Negative-TRUE Parity Error).
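A minimal behavioral sketch of this latching discipline, written in C rather than Rose (names and structure are illustrative only, not part of the Cache cell): a client that needs an xA-style signal during PhB captures it in its own latch that is open during PhA, while an xAB-style signal could be used directly.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool out;                       /* follows the input while the latch is open */
} Latch;

/* Transparent latch: open while 'clock' is high, holds its value otherwise. */
static void LatchEval(Latch *l, bool clock, bool in) {
    if (clock) l->out = in;
}

int main(void) {
    Latch client = {false};
    bool xA;                        /* computed during PhA, may change during PhB */

    /* PhA: the producer computes xA; a client that wants it during PhB
       captures it in a latch open during PhA. */
    xA = true;
    LatchEval(&client, /*PhA*/ true, xA);

    /* PhB: xA itself is no longer trustworthy (e.g. precharged logic decays),
       but the latched copy behaves like an xAB signal for the rest of the cycle. */
    xA = false;
    LatchEval(&client, /*PhA*/ false, xA);   /* latch closed: value held */

    printf("during PhB: raw xA=%d, latched copy=%d\n", xA, client.out);
    return 0;
}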
Timing and housekeeping interface
PhA, PhB<BOOL,
Vdd, Gnd<BOOL,
PadVdd, PadGnd<BOOL,
Processor interface
PData=INT[32],
PParityB=BOOL,
PCmdA<EnumType["Dragon.PBusCommands"],
PRejectB>BOOL, -- Tristate
PFaultB>EnumType["Dragon.PBusFaults"], -- Tristate
PNPError>BOOL, -- Tristate
Main memory interface
MDataBA=INT[32],
MCmdBA=EnumType["Dragon.MBusCommands"],
MNShared=BOOL,
MParityBA=BOOL,
MNError>BOOL,
MReadyBA<BOOL,
MRq>BOOL,
MNewRq>BOOL,
MGnt<BOOL,
Serial debugging interface
All the following signals change during PhA and propagate during the remainder of PhA and the following PhB, giving an entire clock cycle for them to propagate throughout the machine. Each user must receive them into a latch open during PhB. The effects of changes are intended to take place throughout the following PhA, PhB pair (a behavioral sketch follows the port list).
ResetAB<BOOL,
DHoldAB<BOOL, -- must be high before testing
DShiftAB<BOOL, -- shift the shift register by 1 bit if ~DNSelect
DExecuteAB<BOOL, -- interpret the content of the shift register if ~DNSelect
DNSelectAB<BOOL, -- if high, hold but don't Execute or Shift
DDataInAB<BOOL, -- sampled during each PhB following a PhB in which DShift is asserted
DDataOutAB>BOOL -- changes during each PhA following a PhB in which DShift is asserted, and continues to be driven through the PhB following the PhA in which it changes
]
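A minimal sketch of the serial debug handshake described above, written in C rather than Rose: when the part is selected (DNSelect low), each asserted DShift moves one bit through a shift register (DDataIn in, DDataOut out), and DExecute interprets the accumulated contents. The register width and the "execute" action here are illustrative assumptions; DHold and the exact PhA/PhB timing are not modeled.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEBUG_REG_BITS 8            /* illustrative width */

typedef struct {
    uint32_t shiftReg;
    bool     dDataOut;              /* bit shifted out, driven the following cycle */
} DebugPort;

/* One PhA/PhB cycle of the debug interface, as seen at the end of PhB. */
static void DebugCycle(DebugPort *p, bool dNSelect, bool dShift,
                       bool dExecute, bool dDataIn) {
    if (dNSelect) return;                           /* hold: neither shift nor execute */
    if (dShift) {
        p->dDataOut = (p->shiftReg >> (DEBUG_REG_BITS - 1)) & 1;
        p->shiftReg = ((p->shiftReg << 1) | (dDataIn ? 1 : 0))
                      & ((1u << DEBUG_REG_BITS) - 1);
    }
    if (dExecute) {
        /* The real cell would interpret the register contents (read or write
           internal state); here we only report them. */
        printf("execute: shift register = 0x%02x\n", (unsigned)p->shiftReg);
    }
}

int main(void) {
    DebugPort port = {0, false};
    uint8_t command = 0xA5;                         /* illustrative bit pattern */

    for (int i = DEBUG_REG_BITS - 1; i >= 0; i--)   /* shift the pattern in, MSB first */
        DebugCycle(&port, false, true, false, (command >> i) & 1);
    DebugCycle(&port, false, false, true, false);   /* then execute it */
    return 0;
}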
State
phALast: BOOL ← FALSE, -- hack to improve simulation performance
cache: CacheOps.Cache,
cycleNo: INT ← 0,
skipRejects: BOOL ← FALSE,
rejectCycles: NAT ← 0,
cmdAB: PBusCommands ← NoOp,
address, fetchData, storeData: Dragon.HexWord ← 0,
cmdType: {noOp, fetch, store} ← noOp,
pageFault, writeFault, storeParity, storeParityErrorBA, storeParityErrorAB, firstBOfCmdBA, resetBA, resettingAB, dHoldBA, holdingAB, rejectedB: BOOL ← FALSE
EvalSimple
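-- Behavioral model of the cache, evaluated each clock phase; phALast guards the once-per-phase work so repeated Evals within a phase are harmless. PhA: finish the previous cycle's store, check store parity, and latch the PBus address and command. PhB: start or continue the cache access, count down reject cycles, and drive PRejectB, PFaultB, PNPError, and (for fetches) PData and PParityB.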
IF PhA THEN
BEGIN
holdingAB ← dHoldBA;
resettingAB ← resetBA;
IF NOT phALast THEN
BEGIN
storeParityErrorAB ← storeParityErrorBA OR (cmdType=store AND storeParity#CacheOps.Parity32[storeData]);
IF NOT dHoldBA AND cmdType=store AND rejectCycles=0 AND NOT (pageFault OR writeFault) THEN
CacheOps.Write[cache, address, storeData];
phALast ← TRUE;
END;
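-- A new PBus address is latched only when no rejected operation is still pending.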
IF rejectCycles=0 THEN
address ← BitOps.ELFD[container: PData, containerWidth: 32, fieldPosition: 0, fieldWidth: 32];
cmdAB ← IF resetBA THEN NoOp ELSE PCmdA;
PRejectB ← FALSE;
PFaultB ← None;
END;
IF PhB THEN
BEGIN
dHoldBA ← DHoldAB;
resetBA ← ResetAB;
IF phALast THEN
BEGIN
IF NOT holdingAB THEN
BEGIN
storeParityErrorBA ← NOT resettingAB AND storeParityErrorAB;
IF resettingAB THEN
BEGIN
rejectCycles ← 0;
cycleNo ← 0;
cache ← CacheOps.NewCache[cache];
END
ELSE cycleNo ← cycleNo+1;
IF rejectCycles=0 THEN
BEGIN
pageFault ← writeFault ← FALSE;
SELECT cmdAB FROM
Fetch, FetchHold =>
BEGIN
cmdType ← fetch;
[data: fetchData, rejectCycles: rejectCycles, pageFault: pageFault] ← CacheOps.Access[cache, address, read, cycleNo];
END;
Store, StoreHold =>
BEGIN
cmdType ← store;
[rejectCycles: rejectCycles, pageFault: pageFault, writeProtect: writeFault] ← CacheOps.Access[cache, address, write, cycleNo];
END;
IOFetch, IOStore, IOFetchHold, IOStoreHold =>
BEGIN
Dragon.Assert[ FALSE, "Cache doesn't yet implement IO operations" ]; -- for now
cmdType ← noOp;
END;
ENDCASE => cmdType ← noOp;
IF skipRejects THEN rejectCycles ← 0;
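-- A faulting access is held for at least one reject cycle so the fault code can be reported on PFaultB while PRejectB is still asserted.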
IF pageFault OR writeFault THEN rejectCycles ← MAX[1, rejectCycles];
firstBOfCmdBA ← TRUE;
END
ELSE
BEGIN -- rejected on previous PhB
Dragon.Assert[ cmdAB = NoOp ];
rejectCycles ← rejectCycles-1;
firstBOfCmdBA ← FALSE;
END;
END;
phALast ← FALSE;
END;
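-- PNPError is negative-true (see the naming convention above), so driving FALSE asserts a store parity error.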
IF storeParityErrorAB THEN PNPError ← FALSE;
IF cmdType # noOp THEN
BEGIN
PFaultB ← (SELECT TRUE FROM
rejectCycles=1 AND pageFault => PageFault,
rejectCycles=1 AND writeFault => WriteProtectFault,
ENDCASE => None);
PRejectB ← rejectCycles>0;
SELECT TRUE FROM
cmdType=store AND firstBOfCmdBA =>
BEGIN
storeData ← BitOps.ELFD[container: PData, containerWidth: 32, fieldPosition: 0, fieldWidth: 32];
storeParity ← PParityB;
END;
cmdType=fetch AND rejectCycles=0 AND NOT pageFault =>
BEGIN
PData ← BitOps.ILID[source: fetchData, container: PData, containerWidth: 32, fieldPosition: 0, fieldWidth: 32];
PParityB ← CacheOps.Parity32[fetchData];
END;
ENDCASE => NULL; -- neither write nor read PBus
END;
END;
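From the processor's point of view, the PhB code above behaves as follows: PRejectB stays asserted while the cache needs more cycles, the fault code is driven during the last rejected cycle, and a successful fetch delivers PData together with its parity. A minimal sketch of that handshake in C (not Rose/Cedar); CacheSide and CachePhB are hypothetical stand-ins for CacheOps.Access and the cell's PhB action, and Parity32 is assumed (not confirmed from CacheOps) to be a plain XOR reduction over the 32 data bits.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { FaultNone, FaultPage, FaultWriteProtect } PBusFault;

/* Parity over the 32 data bits, assumed to be a plain XOR reduction. */
static bool Parity32(uint32_t w) {
    w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
    return w & 1;
}

/* Stand-in for the cache's per-PhB activity on a fetch. */
typedef struct { int rejectCycles; PBusFault fault; uint32_t data; } CacheSide;

static void CachePhB(CacheSide *c, bool *pReject, PBusFault *pFault,
                     uint32_t *pData, bool *pParity) {
    *pReject = c->rejectCycles > 0;
    *pFault  = (c->rejectCycles == 1) ? c->fault : FaultNone;   /* last rejected cycle */
    if (c->rejectCycles == 0 && c->fault == FaultNone) {
        *pData   = c->data;                          /* cache drives the fetched word */
        *pParity = Parity32(c->data);
    }
    if (c->rejectCycles > 0) c->rejectCycles--;
}

int main(void) {
    /* e.g. a fetch that misses and is rejected for three cycles before the data arrives */
    CacheSide cache = { .rejectCycles = 3, .fault = FaultNone, .data = 0x12345678 };
    bool pReject = false, pParity = false;
    PBusFault pFault = FaultNone;
    uint32_t pData = 0;

    for (int cycle = 0; ; cycle++) {
        CachePhB(&cache, &pReject, &pFault, &pData, &pParity);
        if (pFault != FaultNone) { printf("fault code %d\n", (int)pFault); break; }
        if (!pReject) {
            printf("data 0x%08x, parity %d (%s)\n", (unsigned)pData, (int)pParity,
                   pParity == Parity32(pData) ? "parity ok" : "PARITY ERROR");
            break;
        }
        printf("cycle %d: rejected, holding the request\n", cycle);
    }
    return 0;
}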
Initializer
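-- initData may supply a pre-built cache ($Cache) and a flag ($SkipRejects) that suppresses reject cycles, carried on an Atom.PropList.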
WITH initData SELECT FROM
pl: Atom.PropList =>
BEGIN
r: REF;
IF (r ← pl.GetPropFromList[$Cache]) # NIL THEN
cache ← NARROW[r, CacheOps.Cache];
skipRejects ← (pl.GetPropFromList[$SkipRejects] # NIL AND NARROW[pl.GetPropFromList[$SkipRejects], REF BOOL]^);
END;
ENDCASE => ERROR
Expand
Who knows?
PKillRequestB: BOOL;
Temporary until we decide whether to use latches with bias
LatchBias: BOOL;
Buffered timing and housekeeping interface
PhAb, nPhAb, PhBb, nPhBb:BOOL;
PhAh, PhBh:BOOL;
Resetb:BOOL;
CAM interface
VirtualPage, nVirtualPage:INT[24];
VirtualBlock, nVirtualBlock:INT[6];
RealPage, nRealPage:INT[24];
RealBlock, nRealBlock:INT[6];
CAMPageAccess, nCAMPageAccess:SWITCH[24];
CAMBlockAccess, nCAMBlockAccess:SWITCH[6];
RAM access
The left PBits should be sampled by M during PhB.
PBits, nPBits:SWITCH[66];
MBits, nMBits:SWITCH[66];
Cell control
nVirtualMatch, nMatchPageClean, nMatchCellShared:BOOL;
nMapValid, nRealMatch, nVictimClean:BOOL;
nMatchTIP:BOOL;
CellAdr, nCellAdr:INT[8];
VirtualAccess, nVirtualAccess, SelCell, SelVictimAdr, SelMapAdr, SelRealData, SelPageFlag, SelVictimData, SelRealAdr:BOOL;
FinishSharedStore:BOOL;
VPValid, nVPValid, RPValid, nRPValid, RPDirty, nRPDirty, Master, nMaster, Shared, nShared, Victim, nVictim, TIP, nTIP, Broken, nBroken:BIT;
MAdrLow, nMAdrLow:BOOL;
PAdrLow, nPAdrLow:BOOL;
PStore:BOOL;
VictimFeedback, nVictimFeedback, ShiftVictim, nShiftVictim:BOOL;
ForceDataSelect:BOOL;
P control <=> M control, all change during PhA
MDoneAB, MHeldAB:BOOL;
MFaultAB:EnumType["Dragon.PBusFaults"];
PAdrHigh, PAdrLowToM:BOOL;
PCmdToMAB:EnumType["Dragon.PBusCommands"];
Debug interface
DoShiftBA, DoExecuteBA, DoHoldBA, ShiftDataToPCAM:BOOL;
ShiftDataToMCtlPads:BOOL;
pInterface: PInterface[];
mInterface: MInterface[];
cacheEntries: CacheEntries[]
BlackBoxTest
cacheTester[instructions, drive, handle];
ENDCELL;
CEDAR
CacheTester: TYPE = PROC[i: CacheIORef, d: REF CacheDrive, h: CellTestHandle];
cacheTester: CacheTester;
RegisterCacheTester: PUBLIC PROC[ct: CacheTester]={
cacheTester ← ct};
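-- Cucumber checkpointing: the $cache field is excluded from the default transfer and moved by CacheTransferProc instead.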
CacheStateHandler: Cucumber.Handler = NEW[Cucumber.HandlerRep ← [
PrepareWhole: CacheStatePrepareProc,
PartTransfer: CacheTransferProc
]];
CacheStatePrepareProc: PROC [ whole: REF ANY, where: IO.STREAM, direction: Cucumber.Direction, data: REF ANY ] RETURNS [ leaveTheseToMe: Cucumber.SelectorList ] -- Cucumber.Bracket -- =
{leaveTheseToMe ← LIST[$cache]};
CacheTransferProc: PROC [ whole: REF ANY, part: Cucumber.Path, where: IO.STREAM, direction: Cucumber.Direction, data: REF ANY ] -- Cucumber.PartTransferProc -- =
TRUSTED {Cucumber.Transfer[ what: NARROW[whole, REF CacheStateRec].cache, where: where, direction: direction ]};
Cucumber.Register[CacheStateHandler, CODE[CacheStateRec]];