AllocatorImpl.Mesa
Copyright © 1984, 1985, 1986 by Xerox Corporation. All rights reserved.
designed by Paul Rovner
Bob Hagmann, February 19, 1985 5:45:56 pm PST
Russ Atkinson (RRA) April 15, 1986 2:26:17 pm PST
DIRECTORY
Allocator USING [BlockSizeIndex, bsiEscape, BSIToSizeObj, EHeaderP, ExtendedHeader, FNHeaderP, Header, HeaderP, LastAddress, logPagesPerQuantum, maxSmallBlockSize, minLargeBlockSize, NHeaderP, NormalHeader, pagesPerQuantum, PUZone, QMObject, QuantumCount, QuantumIndex, QuantumMap, RefCount, SizeToBSIObj, UZoneObject, wordsPerQuantum, Zone, ZoneObject],
AllocatorOps USING [CreateMediumSizedHeapObject, DFHeaderP, DoubleFreeHeader, FreeMediumSizedHeapObject, nonNullType, permanentPageZone, REFToNHP, Rover],
Basics USING [BITAND, BITOR, BITXOR, CARD, DoubleShiftLeft, DoubleShiftRight, HighHalf, LongNumber, LowHalf],
DebuggerSwap USING [CallDebugger],
PrincOps USING [SD, StateVector, zRET],
PrincOpsUtils USING [LongZero, MyLocalFrame],
RCMicrocodeOps USING [Allocate, CreateRef, FREEPLEASE, Free, InsertQuanta, rcMicrocodeExists, SoftwareAllocate, SoftwareFree],
RTSD USING [sSystemZone],
SafeStorage USING [GetCanonicalType, NarrowRefFault, nullType, Type],
SafeStoragePrivate USING [], -- TEMPORARY export of NewObject
StorageAccounting USING [ConsiderCollection, nObjectsCreated, nObjectsReclaimed, nWordsReclaimed],
StorageTraps USING [],
SystemVersion USING [machineType],
TrapSupport USING [BumpPC, GetTrapParam],
UnsafeStorage USING [],
UnsafeStoragePrivate USING [FreeTransientPageObject, InitializeTransientPageZone, NewTransientPageObject],
VM USING [AddressForPageNumber, Allocate, CantAllocate, Free, Interval, logWordsPerPage, PageNumber, PageNumberForAddress, PagesForWords, Pin, SimpleAllocate, SwapIn, Unpin, WordsForPages, wordsPerPage],
ZCT USING [Enter, ExpandZCT];
AllocatorImpl: MONITOR
the LOCK protects the quantumMap and the countedFreeTable
IMPORTS AllocatorOps, Basics, DebuggerSwap, PrincOpsUtils, RCMicrocodeOps, SafeStorage, StorageAccounting, SystemVersion, TrapSupport, UnsafeStoragePrivate, VM, ZCT
EXPORTS AllocatorOps, SafeStorage, SafeStoragePrivate, StorageTraps, UnsafeStorage
= BEGIN OPEN Allocator, AllocatorOps;
CARD: TYPE = Basics.CARD;
Ptr: TYPE = LONG POINTER;
SmallFreeTable: TYPE = ARRAY Allocator.BlockSizeIndex OF Allocator.FNHeaderP;
Type: TYPE = SafeStorage.Type;
UZone: TYPE = UNCOUNTED ZONE;
ERRORS
InsufficientVM: ERROR = CODE;
InvalidRef: PUBLIC ERROR[ref: REF ANY] = CODE;
Options
checking: BOOL ← TRUE;
We don't trust any new code!
paranoid: BOOL ← SystemVersion.machineType # dorado;
Extra checks when distrusting microcode.
CONSTANTS
maxSmallBlockRealSize: NAT = QuantizedSize[maxSmallBlockSize];
minLargeBlockRealSize: NAT = QuantizedSize[minLargeBlockSize];
wordsPerPage: NAT = VM.wordsPerPage;
logWordsPerPage: NAT = VM.logWordsPerPage;
wordsPerQuantum: NAT = Allocator.wordsPerQuantum;
logWordsPerQuantum: NAT = logWordsPerPage+Allocator.logPagesPerQuantum;
nullType: SafeStorage.Type = SafeStorage.nullType;
NormalHeaderInit: Allocator.NormalHeader = [
type: nullType,
blockSizeIndex: 0];
NormalHeaderEscape: Allocator.NormalHeader = [
type: nullType,
blockSizeIndex: Allocator.bsiEscape];
ExtendedHeaderInit: Allocator.ExtendedHeader = [
sizeTag: pages,
extendedSize: 0,
blockSizeIndex: Allocator.bsiEscape,
normalHeader: NormalHeaderEscape];
VARIABLES and their INITIALIZATION
hackPtr: LONG POINTER ← VM.AddressForPageNumber[VM.SimpleAllocate[1].page];
HackAlloc: PROC [nWords: CARDINAL] RETURNS [ptr: LONG POINTER] = {
ptr ← hackPtr;
PrincOpsUtils.LongZero[ptr, nWords];
hackPtr ← hackPtr + nWords;
};
Inner Statistics
innerStats: LONG POINTER TO InnerStats ← HackAlloc[SIZE[InnerStats]];
InnerStats: TYPE = RECORD [
countedSmallNewAllocs: INT ← 0,
countedBigNewAllocs: INT ← 0,
uncountedSmallNewAllocs: INT ← 0,
uncountedBigNewAllocs: INT ← 0,
countedSmallNewFrees: INT ← 0,
countedBigNewFrees: INT ← 0,
uncountedSmallNewFrees: INT ← 0,
uncountedBigNewFrees: INT ← 0,
quantaAllocated: INT ← 0,
quantaFreed: INT ← 0
];
NOTE: the declaration sequence below is EXTREMELY DELICATE. Order of the declarations and their initialization is important.
Support for the systemUZone
rZnHeapSystem: UZoneObject ← [new: NewHeapObject, free: FreeHeapObject];
znHeapSystem: Ptr ← LONG[@rZnHeapSystem];
zhs: UZone ← LOOPHOLE[LONG[@znHeapSystem], UZone];
Support for the TransientPageUZone
transientPageUZoneObject: UZoneObject ← [
new: UnsafeStoragePrivate.NewTransientPageObject,
free: UnsafeStoragePrivate.FreeTransientPageObject];
transientPageUZonePointer: Ptr ← LONG[@transientPageUZoneObject];
transientPageUZone: UZone ← LOOPHOLE[LONG[@transientPageUZonePointer], UZone];
Support for both uncounted and counted storage (exported to AllocatorOps)
bsiToSize: PUBLIC LONG POINTER TO BSIToSizeObj ← permanentPageZone.NEW[BSIToSizeObj ← ALL[0]];
bsiToSize maps a bsi to a quantized size (including overhead) for small block sizes
sizeToBSI: PUBLIC LONG POINTER TO SizeToBSIObj ← permanentPageZone.NEW[SizeToBSIObj];
sizeToBSI maps a requested size <= maxSmallBlockSize (not including overhead) to a bsi
Support for counted storage only
the "medium-sized" free list (msRover): a doubly-linked ring
msRoot: DoubleFreeHeader ← [
eh: [extendedSize: 0, normalHeader: [blockSizeIndex: bsiEscape]],
nextFree: @msRoot,
prevFree: @msRoot
];
msRover: Rover ← HackAlloc[SIZE[DoubleFreeHeader]];
(Obsolete, but necessary as a place holder)
the "permanent" free list (permRover): a doubly-linked ring
permRoot: DoubleFreeHeader ← [
eh: [extendedSize: 0, normalHeader: [blockSizeIndex: bsiEscape]],
nextFree: @permRoot,
prevFree: @permRoot
];
permRover: Rover ← HackAlloc[SIZE[DoubleFreeHeader]];
(Obsolete, but necessary as a place holder)
quantumMap: PUBLIC QuantumMap ← permanentPageZone.NEW[QMObject];
the quantumMap provides a flag for each page in VM, TRUE iff the page is assigned to a counted ZONE.
systemZone: ZONE ← NIL;
permanentZone: ZONE ← NIL;
Statistics
nQMapEntries: INT ← 0;
PROCEDURES
Support for uncounted storage
Public access to uncounted storage
NewUObject: PUBLIC PROC [size: CARDINAL, zone: UZone] RETURNS [Ptr] = {
RETURN [NewHeapObject[LOOPHOLE[zone, PUZone], size]];
};
GetSystemUZone: PUBLIC PROC RETURNS [UZone] = {
RETURN [zhs];
};
GetTransientPageUZone: PUBLIC PROC RETURNS [UZone] = {
RETURN [transientPageUZone];
};
UNCOUNTED ZONE procs
Unfortunately, these cannot be ENTRY procs here, since the uncounted system zone gets used during TraceAndSweep, and should not have any connection with the counted allocator. Of course, it does, but we will be careful about this. For historical reasons, the "medium" allocator is assumed in the interfaces, even though there is no longer any such thing. The monitor lock in UnsafeAllocatorImpl is used to protect our uncounted data structures.
NewHeapObject: PROC [self: PUZone, size: CARDINAL] RETURNS [p: Ptr] = {
Referenced only via zhs, the "systemUZone"
ENABLE UNWIND => Crash[];
realSize: INT = QuantizedSize[MAX[size, SIZE[ExtendedHeader]]];
nhp: NHeaderP;
IF size > maxSmallBlockRealSize
THEN {
We don't need a monitor lock for large allocation
nhp ← AllocateNewStyle[realSize, FALSE, FALSE];
p ← LOOPHOLE[nhp, NHeaderP] + SIZE[NormalHeader];
}
ELSE {
We need the monitor lock for small allocation (to protect the table)
p ← AllocatorOps.CreateMediumSizedHeapObject[realSize];
(this is a monitored call to DoCreateMediumSizedObject)
nhp ← LOOPHOLE[p, NHeaderP] - SIZE[NormalHeader];
};
nhp.type ← nonNullType;
IF UsableWords[nhp] < size THEN Crash[];
};
DoCreateMediumSizedObject: PUBLIC --INTERNAL-- PROC [size: NAT, type: Type, roverP: LONG POINTER TO --VAR-- Rover, counted: BOOL] RETURNS [p: Ptr ← NIL] = {
Called from UnsafeAllocatorImpl.CreateMediumSizedHeapObject
size includes overhead (for extended header), has been quantized and is >= SIZE[DoubleFreeHeader].
This proc may return an object of a larger size than requested.
nhp: NHeaderP ← AllocateNewStyle[size, FALSE, FALSE];
nhp.type ← type;
p ← nhp+SIZE[NormalHeader]; -- here for the conservative scan
IF counted THEN Crash[];
};
FreeHeapObject: PROC [self: PUZone, object: Ptr] = {
Referenced only via zhs, the "systemUZone"
ENABLE UNWIND => Crash[];
nhp: NHeaderP = LOOPHOLE[object - SIZE[NormalHeader]];
IF checking AND (CheckAfterAlloc[nhp] # 0 OR nhp.type # nonNullType) THEN
Crash[];
IF nhp.blockSizeIndex = Allocator.bsiEscape
THEN {
For large objects we do not need the monitor
FreeNewStyle[object, FALSE];
}
ELSE {
For small objects we need the monitor (to protect the table)
AllocatorOps.FreeMediumSizedHeapObject[object-SIZE[NormalHeader]];
(this is a monitored call to DoFreeMediumSizedObject)
};
};
DoFreeMediumSizedObject: PUBLIC --INTERNAL-- PROC [ptr: DFHeaderP, rover: Rover] = {
Called from UnsafeAllocatorImpl.FreeMediumSizedObject
ptr may refer to a small object or a large one
bsi: Allocator.BlockSizeIndex ← ptr.eh.blockSizeIndex;
counted: BOOL ← rover = permRover;
This is a real crock, but such is life
IF rover = msRover THEN counted ← TRUE;
Can we ever have rover = msRover?
IF bsi # Allocator.bsiEscape THEN {
This is a small-size object allocated by the new allocator
FreeNewStyle[LOOPHOLE[ptr, Ptr] + SIZE[Allocator.NormalHeader], counted];
RETURN;
};
IF ptr.eh.sizeTag = pages THEN {
This is a large-size object that we can handle in the new style.
FreeNewStyle[LOOPHOLE[ptr, Ptr] + SIZE[Allocator.ExtendedHeader], counted];
RETURN;
};
Crash[];
};
Support for counted storage
Public access to counted storage
GetSystemZone: PUBLIC SAFE PROC RETURNS [ZONE] = TRUSTED{
RETURN [systemZone];
};
GetPermanentZone: PUBLIC SAFE PROC RETURNS [ZONE] = TRUSTED{
RETURN [permanentZone];
};
TrimSystemZone: PUBLIC SAFE PROC = TRUSTED {
There is nothing to do here, although there could be some day
NULL;
};
NewObject: PUBLIC PROC [type: Type, size: CARDINAL, zone: ZONE ← NIL] RETURNS [ans: REF ← NIL] = {
SELECT zone FROM
NIL, systemZone => ans ← NewSystemObject[NIL, size, type];
permanentZone => ans ← NewPermanentObject[NIL, size, type];
ENDCASE => Crash[];
};
ValidateRef: PUBLIC ENTRY PROC [ref: REF] = {
ENABLE UNWIND => NULL;
IF NOT IsValidRef[LOOPHOLE[ref, Ptr]] THEN
RETURN WITH ERROR InvalidRef[ref];
};
IsValidRef: PUBLIC --INTERNAL-- PROC [p: Ptr] RETURNS [BOOL] = {
SELECT TRUE FROM
p = NIL => RETURN [TRUE];
LOOPHOLE[p, CARD] >= LastAddress => RETURN [FALSE];
Basics.LowHalf[LOOPHOLE[p, CARD]] MOD 2 = 1 => RETURN [FALSE];
ENDCASE => {
Start parsing at the beginning of the first quantum in this run. Note that we depend on the quantumMap not changing during this operation!
qi: QuantumIndex ← LPToQI[p];
IF quantumMap[qi] THEN {
hp: HeaderP ← NIL;
UNTIL qi = FIRST[QuantumIndex] OR NOT quantumMap[qi-1] DO
qi ← qi-1;
ENDLOOP;
hp ← QIToLP[qi];
WHILE LOOPHOLE[hp, CARD] < LOOPHOLE[p, CARD] DO
bs: Basics.LongNumber;
nhp: NHeaderP ← LOOPHOLE[hp, NHeaderP];
extended: BOOL ← nhp.blockSizeIndex = bsiEscape;
r: Ptr;
IF extended
THEN {
hp points to an extended header
ehp: EHeaderP ← LOOPHOLE[hp];
nhp ← nhp + (SIZE[ExtendedHeader] - SIZE[NormalHeader]);
SELECT ehp.sizeTag FROM
words => bs.lc ← ehp.extendedSize;
pages => bs.lc ← VM.WordsForPages[ehp.extendedSize];
ENDCASE => Crash[];
r ← hp + SIZE[ExtendedHeader];
}
ELSE {
hp points to a normal header
bs.lc ← bsiToSize[nhp.blockSizeIndex];
r ← hp + SIZE[NormalHeader];
};
SELECT TRUE FROM
bs.lc = 0, bs.lc >= LastAddress, bs.lo MOD 2 = 1 =>
The size of the object must never be too small, too large, or odd. If it is then we crash, since it means that the heap has been violated.
Crash[];
r = p => RETURN [nhp.type # nullType];
We have found the REF, so the ref is valid for non-free objects.
ENDCASE;
hp ← hp + bs.lc;
ENDLOOP;
};
RETURN [FALSE];
};
};
Pinning & unpinning objects
PinObject: PUBLIC SAFE PROC [ref: REF] = TRUSTED {
... pins the object & its header in memory
IF ref # NIL THEN {
nhp: HeaderP ← LOOPHOLE[ref, HeaderP] - SIZE[NormalHeader];
hp: HeaderP ← IF IsExtendedBlock[nhp] THEN LOOPHOLE[ref, HeaderP] - SIZE[ExtendedHeader] ELSE nhp;
words: INT ← BlockSize[hp];
low: VM.PageNumber ← VM.PageNumberForAddress[hp];
next: VM.PageNumber ← VM.PageNumberForAddress[hp+(words-1)] + 1;
VM.Pin[[low, next-low]];
};
};
UnpinObject: PUBLIC SAFE PROC [ref: REF] = TRUSTED {
... unpins the object & its header (provided it was already pinned)
IF ref # NIL THEN {
nhp: HeaderP ← LOOPHOLE[ref, HeaderP] - SIZE[NormalHeader];
hp: HeaderP ← IF IsExtendedBlock[nhp] THEN LOOPHOLE[ref, HeaderP] - SIZE[ExtendedHeader] ELSE nhp;
words: INT ← BlockSize[hp];
low: VM.PageNumber ← VM.PageNumberForAddress[hp];
next: VM.PageNumber ← VM.PageNumberForAddress[hp+(words-1)] + 1;
VM.Unpin[[low, next-low]];
};
};
ZONE procs
NewSystemObject: PROC [self: Zone, size: CARDINAL, type: Type] RETURNS [r: REF] = {
Referenced from systemZone and called from NewObject and Initialize
size DOES NOT INCLUDE overhead
nhp: NHeaderP ← NIL;
realSize: INT ← size;
IF type = nullType THEN ERROR;
This should NOT happen!
IF (r ← RCMicrocodeOps.Allocate[size, type]) # NIL
THEN {
The microcode allocation was successful, so just use the object returned. This is the fastest path for allocation.
nhp ← LOOPHOLE[r, NHeaderP] - SIZE[NormalHeader];
IF checking AND CheckAfterNewRef[nhp, type] # 0 THEN Crash[];
realSize ← bsiToSize[nhp.blockSizeIndex];
IF realSize = 0 THEN Crash[];
}
ELSE {
medium-sized or large object
nhp: NHeaderP ← NIL;
realSize ← QuantizedSize[size];
IF size >= minLargeBlockSize THEN realSize ← LONG[size] + SIZE[ExtendedHeader];
nhp ← AllocateNewStyleEntry[realSize, TRUE, FALSE];
nhp.type ← type;
IF checking AND CheckAfterAlloc[nhp] # 0 THEN Crash[];
r ← LOOPHOLE[nhp+SIZE[NormalHeader]];
RCMicrocodeOps.CreateRef[nhp];
IF paranoid THEN {
We are getting pretty suspicious here!
IF CheckAfterNewRef[nhp, type] # 0 THEN Crash[];
IF UsableWords[nhp] < size THEN Crash[];
};
};
StorageAccounting.nObjectsCreated ← StorageAccounting.nObjectsCreated + 1;
StorageAccounting.ConsiderCollection[size, realSize];
};
ExpandNormalFreeList: PUBLIC PROC [bsi: BlockSizeIndex] = {
Called from Allocate and AllocateTrap.
[] ← ExpandAnyFreeList[bsi, TRUE, FALSE];
};
ExpandAnyFreeList: PROC [bsi: BlockSizeIndex, install, permanent: BOOL] RETURNS [first, last, next: FNHeaderP ← NIL] = {
Called from ExpandNormalFreeList (install = TRUE) and AllocateNewStyle (install = FALSE). This proc must not call monitored procedures if NOT install!
qi: QuantumIndex;
qc: QuantumCount;
incr: NAT = bsiToSize[bsi];
druthers: QuantumCount = WordsToQC[Lcm[incr, wordsPerQuantum]];
Using the LCM enables us to not have any leftovers, which helps zone scanning in the future.
[qi, qc] ← GetQuanta[druthers, NOT install];
first ← last ← LOOPHOLE[QIToLP[qi], FNHeaderP];
next ← LOOPHOLE[QIToLP[qi+qc], FNHeaderP];
Clear[first, QCToWords[qc]];
Run through the zone, chaining the free blocks together in increasing order until we hit next, which is the next object after our allocated storage. Every free block is initialized with the blockSizeIndex.
DO
after: FNHeaderP ← last+incr;
last.fnh ← [blockSizeIndex: bsi];
IF after = next THEN EXIT;
last.nextFree ← after;
last ← after;
ENDLOOP;
last.nextFree ← NIL;
IF install THEN
This is for the normal free list, so we install it
PutQuantaOnNormalFreeList[bsi, first, last, qi, qc];
};
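The chaining loop in ExpandAnyFreeList can be sketched in Python (a stand-in for Mesa; addresses are modeled as plain integers, and the `links` dictionary stands in for the `nextFree` fields). Because the region size is an LCM of the block increment and the quantum size, the carve always comes out even with no leftover words:

```python
def chain_free_blocks(base, total_words, incr):
    """Carve a region of total_words (a multiple of incr, thanks to
    the LCM choice) starting at address `base` into free blocks of
    incr words each, linked in increasing address order.
    Returns (first, last, links) where links maps each block to its
    successor (None marks the end of the chain)."""
    assert total_words > 0 and total_words % incr == 0
    first = last = base
    links = {}
    while True:
        after = last + incr
        if after == base + total_words:  # `after` is the first word past our storage
            break
        links[last] = after
        last = after
    links[last] = None                   # last.nextFree <- NIL
    return first, last, links
```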
PutQuantaOnNormalFreeList: ENTRY PROC [bsi: BlockSizeIndex, first, last: FNHeaderP, qi: QuantumIndex, qc: QuantumCount] = {
Called only from ExpandNormalFreeList
atomic: stuffing the quantum map and expansion of the free list. Should be in the quantumMap monitor
ENABLE UNWIND => NULL;
RCMicrocodeOps.InsertQuanta[bsi, first, last];
now stuff the QMap
StuffQMapRange[qi, qc];
};
NewPermanentObject: PROC [self: Zone, size: CARDINAL, type: Type] RETURNS [r: REF ← NIL] = {
Referenced from permanentZone and called from NewObject
size DOES NOT INCLUDE overhead
realSize: CARD ← LONG[size] + SIZE[NormalHeader];
nhp: NHeaderP ← NIL;
IF type = nullType THEN ERROR;
This should NOT happen!
IF size >= minLargeBlockSize THEN realSize ← LONG[size] + SIZE[ExtendedHeader];
nhp ← AllocateNewStyleEntry[realSize, TRUE, TRUE];
nhp.type ← type;
StorageAccounting.nObjectsCreated ← StorageAccounting.nObjectsCreated + 1;
StorageAccounting.ConsiderCollection[size, realSize];
IF checking AND UsableWords[nhp] < size THEN Crash[];
RCMicrocodeOps.CreateRef[nhp];
IF paranoid THEN {
We are getting pretty suspicious here!
IF CheckAfterNewRef[nhp, type] # 0 THEN Crash[];
IF UsableWords[nhp] < size THEN Crash[];
};
RETURN [LOOPHOLE[nhp+SIZE[NormalHeader]]];
};
FreeSystemObject: PROC [self: Zone, object: REF ANY] = {
Referenced from systemZone and permanentZone
We always rely on GC, except from within GC for collectible objects
};
Procs called by the reclaimer and TraceAndSweepImpl
FreeObject: PUBLIC PROC [nhp: NHeaderP] = {
realSize: INT ← RealWords[nhp];
IF checking AND (CheckAfterAlloc[nhp] # 0 OR nhp.type = nullType) THEN
Crash[];
BumpWordsReclaimed[realSize];
Monitored to keep the stats reasonably consistent
IF NOT RCMicrocodeOps.Free[nhp].success THEN {
This is a large block
ehp: EHeaderP = NHPToEHP[nhp];
IF nhp.blockSizeIndex # Allocator.bsiEscape THEN Crash[];
SELECT ehp.sizeTag FROM
pages => {
qi: QuantumIndex ← LPToQI[ehp];
qc: QuantumCount ← WordsToQC[realSize];
IF checking THEN {
ln: Basics.LongNumber ← [li[realSize]];
IF ln.lo MOD wordsPerQuantum # 0 THEN Crash[];
IF Basics.LowHalf[LOOPHOLE[ehp, CARD]] MOD wordsPerQuantum # 0
THEN Crash[];
IF NOT IsFullQMapRange[qi, qc] THEN Crash[];
};
ClearQMapRange[qi, qc];
FreeQuanta[qi, qc];
};
ENDCASE => Crash[];
};
};
TAndSFreeObject: PUBLIC --INTERNAL-- PROC [nhp: NHeaderP] = {
realSize: INT ← RealWords[nhp];
IF checking AND CheckAfterAlloc[nhp] # 0 THEN Crash[];
IF nhp.type # nullType THEN {
It is possible to free an already free object during T&S, since we rebuild the free lists from scratch, but that should NOT count in the statistics.
StorageAccounting.nObjectsReclaimed ← StorageAccounting.nObjectsReclaimed + 1;
StorageAccounting.nWordsReclaimed ← StorageAccounting.nWordsReclaimed + realSize;
};
IF NOT RCMicrocodeOps.FREEPLEASE[nhp].success THEN {
large or medium-sized object or extended header for some other reason
ehp: EHeaderP = NHPToEHP[nhp];
IF nhp.blockSizeIndex # Allocator.bsiEscape THEN Crash[];
SELECT ehp.sizeTag FROM
pages => {
qi: QuantumIndex ← LPToQI[ehp];
qc: QuantumCount ← WordsToQC[realSize];
IF checking THEN {
ln: Basics.LongNumber ← [li[realSize]];
IF ln.lo MOD wordsPerQuantum # 0 THEN Crash[];
IF Basics.LowHalf[LOOPHOLE[ehp, CARD]] MOD wordsPerQuantum # 0
THEN Crash[];
IF NOT IsFullQMapRangeInternal[qi, qc] THEN Crash[];
};
ClearQMapRangeInternal[qi, qc];
FreeQuanta[qi, qc];
};
ENDCASE => Crash[];
};
};
EnterAndCallBack: PUBLIC ENTRY PROC [proc: PROC] = {
Called from TraceAndSweepImpl.
ENABLE UNWIND => NULL;
proc[];
};
BumpWordsReclaimed: ENTRY PROC [words: INT] = {
StorageAccounting.nObjectsReclaimed ← StorageAccounting.nObjectsReclaimed + 1;
StorageAccounting.nWordsReclaimed ← StorageAccounting.nWordsReclaimed + words;
};
Trap handlers
AllocateTrap: PUBLIC PROC [size: CARDINAL, type: Type] RETURNS [r: REF] = {
state: PrincOps.StateVector;
state ← STATE; -- incantation
SELECT (IF RCMicrocodeOps.rcMicrocodeExists THEN TrapSupport.GetTrapParam[] ELSE 0)
FROM
0 =>
no microcode
r ← RCMicrocodeOps.SoftwareAllocate[size, type];
2 => {
uCode is disabled; someone is inside this monitor
p: PROC [size: CARDINAL, type: Type] = MACHINE CODE{PrincOps.zRET};
ZCT.Enter[];
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
p[size, type];
};
4 => {
zctFull
p: PROC [size: CARDINAL, type: Type] = MACHINE CODE{PrincOps.zRET};
ZCT.ExpandZCT[];
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
p[size, type];
};
6 => {
expandNormalFreeList
p: PROC [size: CARDINAL, type: Type] = MACHINE CODE{PrincOps.zRET};
ExpandNormalFreeList[sizeToBSI[size]];
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
p[size, type];
};
ENDCASE => Crash[];
TrapSupport.BumpPC[2]; -- length of opcode
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
};
FreeTrap: PUBLIC PROC [nhp: NHeaderP] RETURNS [success: BOOL] = {
state: PrincOps.StateVector;
state ← STATE; -- incantation
SELECT (IF RCMicrocodeOps.rcMicrocodeExists THEN TrapSupport.GetTrapParam[] ELSE 0)
FROM
0 => success ← RCMicrocodeOps.SoftwareFree[nhp]; -- no microcode
2 => { -- uCode is disabled; someone is inside this monitor
p: PROC [nhp: NHeaderP] = MACHINE CODE{PrincOps.zRET};
ZCT.Enter[];
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
p[nhp];
};
ENDCASE => Crash[];
TrapSupport.BumpPC[2]; -- length of opcode
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state; -- incantation
};
Support for NARROW; referent types
GetReferentType: PUBLIC SAFE PROC [ref: REF] RETURNS [type: Type] = TRUSTED {
RETURN [GetCanonicalReferentType[ref]]
};
GetCanonicalReferentType: PUBLIC SAFE PROC [ref: REF] RETURNS [type: Type] = TRUSTED {
IF ref = NIL
THEN RETURN [nullType]
ELSE RETURN [REFToNHP[ref].type]
};
GetCanonicalReferentTypeTrap: PUBLIC PROC [ref: REF] RETURNS [type: Type] = {
state: PrincOps.StateVector;
kludge: CARD;
state ← STATE; -- incantation
kludge ← 0;
type ← GetCanonicalReferentType[ref];
TrapSupport.BumpPC[2];
state.dest ← LOOPHOLE[PrincOpsUtils.MyLocalFrame[]];
TRANSFER WITH state -- incantation
};
IsReferentType: PUBLIC SAFE PROC [ref: REF, type: Type] RETURNS [BOOL] = TRUSTED {
RETURN [SafeStorage.GetCanonicalType[type] = GetCanonicalReferentType[ref]];
};
NarrowRef: PUBLIC SAFE PROC [ref: REF, type: Type] RETURNS [REF] = TRUSTED {
SELECT TRUE FROM
ref = NIL =>
RETURN [NIL];
SafeStorage.GetCanonicalType[type] = GetCanonicalReferentType[ref] =>
RETURN [ref];
ENDCASE =>
ERROR SafeStorage.NarrowRefFault[ref, type];
};
QuantumMap
StuffQMapRange: --INTERNAL-- PROC [qi: QuantumIndex, qc: QuantumCount] = {
nQMapEntries ← nQMapEntries + qc;
FOR i: QuantumIndex ← qi, i+1 UNTIL i = qi + qc DO
quantumMap[i] ← TRUE;
ENDLOOP;
};
ClearQMapRange: ENTRY PROC [qi: QuantumIndex, qc: QuantumCount] = {
ENABLE UNWIND => NULL;
ClearQMapRangeInternal[qi, qc];
};
ClearQMapRangeInternal: --INTERNAL-- PROC [qi: QuantumIndex, qc: QuantumCount] = {
nQMapEntries ← nQMapEntries - qc;
FOR i: QuantumIndex ← qi, i+1 UNTIL i = qi + qc DO
quantumMap[i] ← FALSE;
ENDLOOP;
};
IsFullQMapRange: ENTRY PROC [qi: QuantumIndex, qc: QuantumCount] RETURNS [ans: BOOL ← TRUE] = {
ENABLE UNWIND => NULL;
RETURN [IsFullQMapRangeInternal[qi, qc]];
};
IsFullQMapRangeInternal: --INTERNAL-- PROC [qi: QuantumIndex, qc: QuantumCount] RETURNS [ans: BOOL ← TRUE] = {
FOR i: QuantumIndex ← qi, i+1 UNTIL i = qi + qc DO
IF NOT quantumMap[i] THEN RETURN [FALSE];
ENDLOOP;
};
New uncounted allocator
AllocateNewStyleEntry: ENTRY PROC [size: CARD, counted, permanent: BOOL] RETURNS [nhp: NHeaderP ← NIL] = {
ENABLE UNWIND => Crash[];
RETURN [AllocateNewStyle[size, counted, permanent]];
};
AllocateNewStyle: PROC [size: CARD, counted, permanent: BOOL] RETURNS [nhp: NHeaderP ← NIL] = {
This must be called under the allocator lock for counted allocations!
words INCLUDES overhead, has been quantized and is >= SIZE[DoubleFreeHeader].
This proc may return an object of a larger size than requested (of course).
This proc can be called only for uncounted objects or for counted permanent zone objects. System zone objects are handled differently (see NewSystemObject).
ehp: EHeaderP ← NIL;
SELECT size FROM
<= SIZE[Allocator.NormalHeader] => Crash[];
<= maxSmallBlockRealSize => {
Allocate from quantized table
words: CARDINAL ← SELECT Basics.LowHalf[size] FROM
RRA: to reduce breakage here, get all small objects up to a multiple of 2^n.
<= 20B => Basics.BITAND[Basics.LowHalf[size]+7, 177777B-7],
8, 16
<= 100B => Basics.BITAND[Basics.LowHalf[size]+17B, 177777B-17B],
32, 48, 64
<= 1000B => Basics.BITAND[Basics.LowHalf[size]+177B, 177777B-177B],
128, 256, 384, 512
ENDCASE => maxSmallBlockRealSize;
576
bsi: Allocator.BlockSizeIndex ← sizeToBSI[words-SIZE[Allocator.NormalHeader]];
objectWords: CARDINAL ← bsiToSize[bsi];
table: LONG POINTER TO SmallFreeTable ← IF counted THEN countedFreeTable ELSE uncountedFreeTable;
fhp: Allocator.FNHeaderP ← table[bsi];
IF counted
THEN
innerStats.countedSmallNewAllocs ← innerStats.countedSmallNewAllocs + 1
ELSE
innerStats.uncountedSmallNewAllocs ← innerStats.uncountedSmallNewAllocs + 1;
IF fhp = NIL THEN {
This slot needs some words added to it
first, last, next: FNHeaderP;
[first, last, next] ← ExpandAnyFreeList[bsi, FALSE, permanent];
Call to expand the free list for the given size. Note that we should never get here for the system zone, only for the permanent zone or the uncounted zone, so it is quite proper to NOT ask for installation.
IF counted THEN
These quanta need to be shown as in the counted heap
StuffQMapRange[qi: LPToQI[first], qc: LPToQI[next]-LPToQI[first]];
fhp ← table[bsi] ← first;
};
table[bsi] ← fhp.nextFree;
nhp ← LOOPHOLE[fhp];
IF nhp.blockSizeIndex # bsi OR nhp.type # nullType THEN
We have a nasty error here, since these are set when we initialize, and are not changed by anything except a bad bug!
Crash[];
Clear[nhp+SIZE[Allocator.NormalHeader], objectWords-SIZE[Allocator.NormalHeader]];
nhp^ ← NormalHeaderInit;
nhp.blockSizeIndex ← bsi;
};
ENDCASE => {
Allocate from VM
qi: QuantumIndex;
qc: QuantumCount ← WordsToQC[size];
np: INT ← QCToPages[qc];
[qi, qc] ← GetQuanta[qc, permanent];
ehp ← QIToLP[qi];
Clear[ehp, VM.WordsForPages[np]];
IF counted
THEN {
These quanta need to be shown as in the counted heap
StuffQMapRange[qi: LPToQI[ehp], qc: qc];
innerStats.countedBigNewAllocs ← innerStats.countedBigNewAllocs + 1;
}
ELSE {
Just count this one
innerStats.uncountedBigNewAllocs ← innerStats.uncountedBigNewAllocs + 1;
};
ehp^ ← ExtendedHeaderInit;
ehp.extendedSize ← np;
nhp ← @ehp.normalHeader;
};
IF checking AND CheckAfterAlloc[nhp] # 0 THEN Crash[];
};
FreeNewStyle: PROC [ptr: Ptr, counted: BOOL] = {
This must be called under the allocator lock (unless for an uncounted large block)!
IF Basics.HighHalf[LOOPHOLE[ptr]] # 0 AND Basics.LowHalf[LOOPHOLE[ptr]] MOD 2 = 0 THEN {
nhp: NHeaderP ← LOOPHOLE[ptr, NHeaderP]-SIZE[Allocator.NormalHeader];
IF nhp.type = nullType THEN Crash[];
SELECT TRUE FROM
nhp.blockSizeIndex = Allocator.bsiEscape => {
If AllocateNewStyle allocated this object, then it has ehp.sizeTag = pages. Otherwise, we did not have anything to do with it.
ehp: EHeaderP ← LOOPHOLE[ptr, EHeaderP]-SIZE[Allocator.ExtendedHeader];
SELECT ehp.sizeTag FROM
pages => {
np: CARDINAL ← ehp.extendedSize;
qi: QuantumIndex ← LPToQI[ehp];
qc: QuantumCount ← PagesToQC[np];
ehp.extendedSize ← 0;
IF np = 0 THEN Crash[];
IF Basics.BITAND[wordsPerQuantum-1, Basics.LowHalf[LOOPHOLE[ehp]] ] # 0 THEN Crash[];
IF counted
THEN {
ClearQMapRangeInternal[qi, qc];
innerStats.countedBigNewFrees ← innerStats.countedBigNewFrees + 1
}
ELSE
innerStats.uncountedBigNewFrees ← innerStats.uncountedBigNewFrees + 1;
FreeQuanta[qi, qc];
RETURN;
};
ENDCASE;
};
ENDCASE => {
We probably did allocate this object, so we can return it to our pool.
bsi: Allocator.BlockSizeIndex ← nhp.blockSizeIndex;
table: LONG POINTER TO SmallFreeTable ← IF counted THEN countedFreeTable ELSE uncountedFreeTable;
fhp: Allocator.FNHeaderP ← LOOPHOLE[nhp, Allocator.FNHeaderP];
IF counted
THEN
innerStats.countedSmallNewFrees ← innerStats.countedSmallNewFrees + 1
ELSE
innerStats.uncountedSmallNewFrees ← innerStats.uncountedSmallNewFrees + 1;
fhp.fnh ← NormalHeaderInit;
fhp.fnh.blockSizeIndex ← bsi;
fhp.nextFree ← table[bsi];
table[bsi] ← fhp;
RETURN;
};
};
IF ptr # NIL THEN Crash[];
};
countedFreeTable: LONG POINTER TO SmallFreeTable ← InitSmallAllocator[];
This is protected (after init) by the monitor in AllocatorImpl (this module)
uncountedFreeTable: LONG POINTER TO SmallFreeTable ← InitSmallAllocator[];
This is protected (after init) by the monitor in UnsafeAllocatorImpl
InitSmallAllocator: PROC RETURNS [ptr: LONG POINTER TO SmallFreeTable] = {
pages: NAT ← VM.PagesForWords[SIZE[SmallFreeTable]];
interval: VM.Interval ← VM.Allocate[count: pages, in64K: TRUE];
VM.SwapIn[interval];
ptr ← LOOPHOLE[VM.AddressForPageNumber[interval.page]];
Clear[ptr, SIZE[SmallFreeTable]];
};
Utilities
QuantizedSize: PROC [size: CARDINAL] RETURNS [rtn: INT] = {
This is called very early in the (delicate) initialization sequence. It calls nothing. arg is a requested block size NOT INCLUDING overhead; result is an even, quantized size that INCLUDES OVERHEAD. Requested sizes > maxSmallBlockSize require extended headers. It is not true that the difference between any two quantized sizes is a valid quantized size.
n: NAT = SIZE[NormalHeader]; -- minimum overhead
e: NAT = SIZE[ExtendedHeader]; -- additional overhead for extended headers
IF checking AND size = 0 THEN Crash[];
IF size <= Allocator.maxSmallBlockSize THEN {
adj: NAT ← size-1+n;
SELECT size FROM
<= 22B => rtn ← (adj/2 + 1)*2; -- 4, 6, 10B, 12B, 14B, 16B, 20B, 22B, 24B
<= 42B => rtn ← (adj/4B + 1)*4B; -- 30B, 34B, 40B, 44B
<= 106B => rtn ← (adj/10B + 1)*10B; -- 50B, 60B, 70B, 100B, 110B
<= 216B => rtn ← (adj/20B + 1)*20B; -- 120B, 140B, 160B, 200B, 220B
<= 436B => rtn ← (adj/40B + 1)*40B; -- 240B, 300B, 340B, 400B, 440B
ENDCASE => rtn ← (adj/100B + 1)*100B; -- 500B, 600B, 700B, 1000B, 1100B
RETURN
};
IF size <= Allocator.maxSmallBlockSize*2 THEN
sizes: 1200B, 1400B, 1600B, 2000B, 2200B
RETURN [((size-1+e)/200B + 1)*200B];
steps of wordsPerPage for requested size
RETURN [VM.WordsForPages[VM.PagesForWords[LONG[size]+e]]];
};
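The rounding rule in QuantizedSize's small-block branch can be sketched in Python (a stand-in for Mesa). The octal bucket limits are taken from the code above; the 2-word normal-header overhead `n` is an assumption consistent with the comment's size sequence (4, 6, 10B, ...), not a value stated in this file:

```python
def quantized_size(size, n=2):
    """Round a requested block size (excluding overhead) up to an even
    quantized size including the n-word normal header. Mirrors the
    small-block branch of QuantizedSize; n=2 is assumed, not taken
    from this source."""
    assert size > 0
    adj = size - 1 + n
    # (limit, grain): requests <= limit round up to a multiple of grain
    for limit, grain in [(0o22, 2), (0o42, 4), (0o106, 8),
                         (0o216, 16), (0o436, 32)]:
        if size <= limit:
            return (adj // grain + 1) * grain
    return (adj // 64 + 1) * 64    # 500B, 600B, 700B, 1000B, 1100B
```

Note that, as the comment warns, the difference of two quantized sizes need not itself be a valid quantized size, since the grain grows with the bucket.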
BlockSize: PUBLIC PROC [hp: HeaderP] RETURNS [INT] = {
nhp: NHeaderP = LOOPHOLE[hp, NHeaderP];
IF nhp.blockSizeIndex = bsiEscape
THEN RETURN [ExtendedBlockSize[LOOPHOLE[hp, EHeaderP]]]
ELSE RETURN [bsiToSize[nhp.blockSizeIndex]];
};
IsExtendedBlock: PROC [hp: HeaderP] RETURNS [BOOL] = INLINE {
RETURN [LOOPHOLE[hp, NHeaderP].blockSizeIndex = bsiEscape];
};
ExtendedBlockSize: PROC [ehp: EHeaderP] RETURNS [words: INT ← 0] = INLINE {
SELECT ehp.sizeTag FROM
words => words ← ehp.extendedSize;
pages => words ← VM.WordsForPages[ehp.extendedSize];
ENDCASE => Crash[];
};
RealWords: PROC [nhp: NHeaderP] RETURNS [words: INT] = INLINE {
IF nhp.blockSizeIndex # bsiEscape
THEN RETURN [bsiToSize[nhp.blockSizeIndex]]
ELSE {
ehp: EHeaderP ← NHPToEHP[nhp];
SELECT ehp.sizeTag FROM
words => words ← ehp.extendedSize;
pages => words ← VM.WordsForPages[ehp.extendedSize];
ENDCASE => Crash[];
RETURN [words];
};
};
UsableWords: PROC [nhp: NHeaderP] RETURNS [INT] = INLINE {
IF nhp.blockSizeIndex # bsiEscape
THEN RETURN [bsiToSize[nhp.blockSizeIndex] - SIZE[NormalHeader]]
ELSE RETURN [ExtendedBlockSize[NHPToEHP[nhp]] - SIZE[ExtendedHeader]];
};
QIToLP: PROC [qi: QuantumIndex] RETURNS [Ptr] = INLINE {
RETURN [Basics.DoubleShiftLeft[[lc[qi]], logWordsPerQuantum].lp];
};
LPToQI: PROC [ptr: Ptr] RETURNS [QuantumIndex] = INLINE {
RETURN [Basics.DoubleShiftRight[[lp[ptr]], logWordsPerQuantum].lo];
};
QCToWords: PROC [qc: QuantumCount] RETURNS [INT] = INLINE {
RETURN [Basics.DoubleShiftLeft[[lc[qc]], logWordsPerQuantum].li];
};
WordsToQC: PROC [words: INT] RETURNS [QuantumCount] = INLINE {
RETURN [Basics.DoubleShiftRight[[li[words+(wordsPerQuantum-1)]], logWordsPerQuantum].lo];
};
PagesToQC: PROC [pages: INT] RETURNS [qc: QuantumCount] = INLINE {
IF Allocator.logPagesPerQuantum = 0 THEN RETURN [pages];
pages ← pages + (pagesPerQuantum-1);
RETURN [Basics.DoubleShiftRight[[li[pages]], Allocator.logPagesPerQuantum].lo];
};
QCToPages: PROC [qc: QuantumCount] RETURNS [pages: INT] = INLINE {
IF Allocator.logPagesPerQuantum = 0
THEN RETURN [qc]
ELSE RETURN [Basics.DoubleShiftLeft[[lc[qc]], Allocator.logPagesPerQuantum].li];
};
EHPToNHP: PROC [ehp: EHeaderP] RETURNS [NHeaderP] = INLINE {
RETURN [@ehp.normalHeader];
};
NHPToEHP: PROC [nhp: NHeaderP] RETURNS [EHeaderP] = INLINE {
RETURN [LOOPHOLE[nhp - (SIZE[ExtendedHeader] - SIZE[NormalHeader]), EHeaderP]];
};
DFHPToNHP: PROC [dfhp: DFHeaderP] RETURNS [NHeaderP] = INLINE {
RETURN [@dfhp.eh.normalHeader];
};
FunnyHeaderPtr: TYPE = LONG POINTER TO FunnyHeader;
FunnyHeader: TYPE = MACHINE DEPENDENT RECORD [
preType (0): WORD,
type (1): WORD];
CheckAfterAlloc: PROC [nhp: NHeaderP] RETURNS [WORD] = INLINE {
RETURN [Basics.BITAND[LOOPHOLE[nhp, FunnyHeaderPtr].preType, maskWord1]];
nhp.inZCT OR nhp.f OR nhp.rcOverflowed OR nhp.refCount # 0
};
maskWord1: WORD = LOOPHOLE[Allocator.NormalHeader[
inZCT: TRUE, f: TRUE, rcOverflowed: TRUE, refCount: LAST[Allocator.RefCount],
blockSizeIndex: 0], FunnyHeader].preType;
CheckAfterNewRef: PROC [nhp: NHeaderP, type: Type] RETURNS [WORD] = INLINE {
RETURN [Basics.BITOR[
Basics.BITAND[LOOPHOLE[nhp, FunnyHeaderPtr].preType, maskWord2],
Basics.BITXOR[LOOPHOLE[nhp, FunnyHeaderPtr].type, type]
]];
};
maskWord2: WORD = LOOPHOLE[Allocator.NormalHeader[
f: TRUE, rcOverflowed: TRUE, refCount: LAST[Allocator.RefCount],
blockSizeIndex: 0], FunnyHeader].preType;
Clear: PROC [ptr: Ptr, size: INT] = {
delta: CARDINAL = 20000B;
WHILE size > delta DO
PrincOpsUtils.LongZero[ptr, delta];
size ← size - delta;
ptr ← ptr + delta;
ENDLOOP;
PrincOpsUtils.LongZero[ptr, Basics.LowHalf[size]];
};
Lcm: PROC [m,n: INT] RETURNS [INT] = {
RETURN [(m*n)/Gcd[m,n]];
};
Gcd: PROC [m,n: INT] RETURNS [INT] = {
DO
r: INT = m MOD n;
IF r = 0 THEN RETURN [n];
m ← n;
n ← r;
ENDLOOP;
};
GetQuanta: PROC [needed: QuantumCount, permanent: BOOL ← FALSE] RETURNS [qi: QuantumIndex, qc: QuantumCount] = {
Called from ExpandNormalFreeList, AllocateNewStyle
np: INT ← QCToPages[qc ← needed];
interval: VM.Interval ← VM.Allocate[
count: np, alignment: logPagesPerQuantum, in64K: permanent
! VM.CantAllocate => GO TO noVM
];
VM.SwapIn[interval];
qi ← PagesToQC[interval.page];
innerStats.quantaAllocated ← innerStats.quantaAllocated + qc;
EXITS noVM => ERROR InsufficientVM;
};
FreeQuanta: PROC [qi: QuantumIndex, qc: QuantumCount] = {
Called from ExpandNormalFreeList, AllocateNewStyle
VM.Free[[page: QCToPages[qi], count: QCToPages[qc]]];
innerStats.quantaFreed ← innerStats.quantaFreed + qc;
};
Initialize: PUBLIC PROC = {
This is called long after this module is started; CODE[mumble] isn't meaningful until then
systemZone ← LOOPHOLE[NewSystemObject[NIL, SIZE[ZoneObject], CODE[ZoneObject]], ZONE];
LOOPHOLE[systemZone, Zone]^ ← [new: NewSystemObject, free: FreeSystemObject];
LOOPHOLE[@PrincOps.SD[RTSD.sSystemZone], POINTER TO Ptr]^ ←
LOOPHOLE[systemZone, Ptr];
permanentZone ←
LOOPHOLE[NEW[ZoneObject ← [new: NewPermanentObject, free: FreeSystemObject]]];
UnsafeStoragePrivate.InitializeTransientPageZone[];
};
Reset: PUBLIC PROC = {
This is called by the TAndS (with the monitor lock held); the intent is to make us drop all of the free lists so they can be reconstructed
countedFreeTable^ ← ALL[NIL];
uncountedFreeTable^ ← ALL[NIL];
};
Crash: PROC = {
DebuggerSwap.CallDebugger["Kosher it's not!"L];
};
START CODE
{
Initialize bsiToSize and sizeToBSI
oldQS: NAT ← 0;
nextBSI: BlockSizeIndex ← 0;
Initialize the quantum map: needed here because ALL has a bug
FOR i: QuantumIndex IN QuantumIndex DO quantumMap[i] ← FALSE ENDLOOP;
FOR size: --requested-- [0..maxSmallBlockSize] IN [0..maxSmallBlockSize] DO
qs: NAT = QuantizedSize[IF size = 0 THEN 1 ELSE size];
IF qs # oldQS THEN {
new size and bsi
IF nextBSI = bsiEscape THEN Crash[];
oldQS ← qs;
bsiToSize[nextBSI] ← qs;
nextBSI ← nextBSI + 1;
};
sizeToBSI[size] ← nextBSI - 1;
ENDLOOP;
The systemUZone (zhs) is now ready for use
};
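The START-CODE loop above builds the two small-block tables together: every time the quantized size steps up, a fresh block-size index (bsi) is allocated, so bsiToSize lists the distinct quantized sizes and sizeToBSI maps each requested size to its bucket. A Python sketch (any monotone rounding function can stand in for QuantizedSize):

```python
def build_bsi_tables(max_small, quantize):
    """Mirror the initialization loop: walk requested sizes
    0..max_small, append to bsi_to_size whenever quantize's result
    changes, and record the current bsi for each requested size.
    Size 0 is treated as a request for 1 word, as in the original."""
    bsi_to_size = []
    size_to_bsi = [0] * (max_small + 1)
    old_qs = 0
    for size in range(max_small + 1):
        qs = quantize(1 if size == 0 else size)
        if qs != old_qs:          # new size and bsi
            old_qs = qs
            bsi_to_size.append(qs)
        size_to_bsi[size] = len(bsi_to_size) - 1
    return bsi_to_size, size_to_bsi
```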
END.
Bob Hagmann February 19, 1985 2:49:15 pm PST
changes to: DIRECTORY, AllocatorImpl, GetSystemUZone, GetTransientPageUZone, Initialize
Bob Hagmann February 19, 1985 5:44:59 pm PST
changes to: NewHeapObject, DoCreateMediumSizedObject
Russ Atkinson (RRA) February 3, 1986 2:13:31 pm PST
Beefed up checking in IsValidRef, made NewPermanentObject call AllocateNewStyleEntry instead of AllocateNewStyle.