The Dragon Map Cache
Design Document
Louis Monier, Pradeep Sindhu, Jean-Marc Frailong
Dragon-84-xx Written November 23, 1984  Revised December 10, 1987

© Copyright 1984 Xerox Corporation. All rights reserved.
Note: This document needs to be heavily revised; don't trust too much of what is in here for the time being. The DynaBus command encoding is roughly correct, but that's about it.
Abstract: This document describes the Dragon Map Processor, the device that maps virtual addresses to real and provides per-page memory protection in Dragon. The Map Processor's main features are fast response to mapping requests; the use of a small, fixed fraction of main memory for mapping; the support of multiple address spaces with sharing; and an organization in which both function and performance can be enhanced relatively easily after initial implementation.
Keywords: Dragon, Multiprocessors, Address Mapping, Virtual Memory, Map Cache
FileName: /Indigo/Dragon/MapProcessor/MapProcessorDoc.tioga, .interpress
XEROX  Xerox Corporation
   Palo Alto Research Center
   3333 Coyote Hill Road
   Palo Alto, California 94304



For Internal Xerox Use Only
Contents
1. Introduction
2. The Addressing Architecture
3. Map Processor Organization
4. Map Processor Functions
5. The Map Cache
6. The Map Cache Controller
7. The Map Table
8. Miscellaneous Issues
Appendix A. Size of Bypass Area
Appendix B. Layout of IO Address Space
Appendix C. Format of Map Entries
Appendix D. Notation
1. Introduction
The Dragon Map Processor is a single logical device that implements the mapping from virtual addresses to real and provides per-page memory protection. Its main features are: fast response to mapping requests on the average (~ 4 DynaBus cycles); the use of a map table that consumes a small (< 1%), fixed fraction of real memory; the support of multiple virtual address spaces that are permitted to share memory; and an organization in which both function and performance can be enhanced relatively easily after hardware implementation. In spite of these features the design is simple.
This document begins with the addressing architecture supported by the Map Processor. It then describes the Map Processor's organization in terms of its three components: the Map Cache, which is a custom VLSI chip; the Map Cache Controller, whose functionality is implemented by ordinary Dragon processors; and the main memory Map Table, which stores the virtual to physical mapping. The following section provides the functional specifications for the Map Processor; it also indicates how each function is invoked and where it is implemented. The next three sections describe each of the components in greater detail, while the last section addresses design issues that do not fit elsewhere. Important information is collected together in the Appendices.
2. The Addressing Architecture
The Map Processor's addressing architecture supports multiple address spaces, with fixed size pages being the unit of mapping and protection. The virtual addresses issued by each Dragon processor are mapped according to one of a number of independent addressing contexts, or address spaces. For a given virtual address the mapping to a real address can be thought of as a two step process: (i) determine the address space in which the virtual address must be interpreted, and (ii) perform the translation in this space.
Before we describe each of these steps, it is convenient to define basic terms and introduce some shorthand notation. A page is a contiguous chunk of memory 2^s 32-bit words long, aligned on a 2^s word boundary; s is fixed at 10. In our description virtual addresses will be denoted by va, real addresses by ra, virtual pages by vp, real pages by rp, and offsets into pages by offset. Both va and ra are 32-bit quantities, both vp and rp are (32-s)-bit quantities, and offset is an s-bit quantity. If vp and offset are the virtual page and offset corresponding to va, we will write va=vp|offset; similarly, we will write ra=rp|offset for real addresses. Six flag bits are kept for each page; these will be denoted by flags. Faults associated with address mapping will be communicated via a four-bit fault field. The sizes of these quantities are constrained by the relation ||rp||+||flags|| ≤ 32, where ||x|| is shorthand for ⌈log2 x⌉, the number of bits needed to represent x. Address spaces will be denoted by their address space id, aid, a quantity constrained by ||aid|| ≤ 32.
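As a concrete illustration of this notation (a minimal sketch; the type and macro names are ours, not part of the architecture), the decomposition va = vp|offset with s = 10 can be written in C as:

    /* Pages are 2^s 32-bit words; s is fixed at 10. */
    #define S           10
    #define OFFSET_MASK ((1u << S) - 1)           /* low s bits of an address */

    typedef unsigned int VA;    /* 32-bit virtual address  */
    typedef unsigned int RA;    /* 32-bit real address     */
    typedef unsigned int VP;    /* (32-s)-bit virtual page */
    typedef unsigned int RP;    /* (32-s)-bit real page    */

    #define VP_OF(va)       ((VP)((va) >> S))     /* vp such that va = vp|offset */
    #define OFFSET_OF(va)   ((va) & OFFSET_MASK)  /* offset within the page      */
    #define CONCAT(p, off)  (((p) << S) | (off))  /* p|off: rebuild an address   */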
Figure 1 illustrates the process of mapping a virtual address for a given processor (the sizes and locations of the various regions shown are for illustration purposes only, and should not be interpreted as being fixed by the architecture). The first step begins with the aid of the processor's address space and the va to be mapped. If va lies in a special shared area of virtual addresses, then the address space used for mapping is space 0; otherwise va lies in the switched area, and the space used is aid. The second step translates va in the context of the aid given by the first step. This translation produces ra=rp|offset, where rp is obtained by looking up the entry corresponding to aid|vp in the Map Table.
Step one provides an economical but restricted form of page sharing across address spaces that we call aliasing. Aliasing is economical in that there is only one mapping entry per real page regardless of whether the page is shared or not. It is restricted in two senses: (i) a page must appear at the same virtual address in all spaces, and (ii) sharing is all-or-nothing; that is, a page is shared by all address spaces if it is shared by any. It is expected that all but a few cases of sharing will be covered by this restricted mechanism. However, the architecture also permits more general sharing, in which the vp's sharing a particular rp are unrestricted and each page may be shared by any subset of the spaces.
Figure 1. Address Mapping in Dragon
The shared and switched areas are the same for all processors, and are defined as follows. An address va is in the shared area if the bits selected in vp by a (32-s)-bit SharedMask match the corresponding bits in a (32-s)-bit SharedPattern; a 1 in SharedMask selects the corresponding bit in vp while a 0 deselects it. The mask and pattern can be modified by software. The switched area is defined to be all addresses that are not in the shared area.
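Using the types sketched above, the shared-area test amounts to a masked comparison (the register names follow the text; the C packaging is ours):

    /* Software-writable (32-s)-bit registers. */
    extern VP SharedMask, SharedPattern;

    /* TRUE iff va = vp|offset lies in the shared area. */
    static int InSharedArea(VP vp)
    {
        return (vp & SharedMask) == (SharedPattern & SharedMask);
    }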
A processor's address space is kept in an aidRegister within each of the processor's small caches. The architecture does not constrain the contents of the aidRegisters of the IFU and EU caches to be the same, so in fact code and data could run in different address spaces!
The architecture provides two necessary modifications of the basic mapping just described. One of these is called bypassing, and the other identity mapping. Bypassing is needed so processors can access the Map Table and map-fault handling code without getting map faults in the process. Bypassing gets activated whenever a processor makes a reference to a portion of virtual address space called the map bypass area (the check for bypass gets applied before either of the two steps described above). This area appears at the same locations in all address spaces, and is defined by three (32-s)-bit quantities BypassMask, BypassPattern, and BypassBase. The mask and pattern determine the bypass area's location and size in virtual memory as follows: a va is defined to be in the bypass area if the bits selected by BypassMask in vp match the corresponding bits in BypassPattern; a 1 in BypassMask selects the corresponding bit in vp while a 0 deselects it. BypassBase determines the starting location of the bypass area in real memory. The real address produced by bypass mapping is ra=rp|offset, where rp = (BypassBase ∧ BypassMask) ∨ (vp ∧ ~BypassMask). The mask, pattern, and base can all be modified by software.
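The bypass check and the bypass mapping itself then look as follows (a sketch in the same style as the previous fragments):

    extern VP BypassMask, BypassPattern, BypassBase;

    /* TRUE iff va = vp|offset lies in the map bypass area. */
    static int InBypassArea(VP vp)
    {
        return (vp & BypassMask) == (BypassPattern & BypassMask);
    }

    /* rp = (BypassBase AND BypassMask) OR (vp AND NOT BypassMask). */
    static RP BypassMap(VP vp)
    {
        return (BypassBase & BypassMask) | (vp & ~BypassMask);
    }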
Identity mapping is useful in initializing the system at startup, and for doing IO. It is implemented by designating one of the address spaces (aid=-1) to correspond to the identity map (rp=vp). The check for identity mapping is applied before the check for bypass mapping, so it takes highest precedence.
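Putting the pieces together, the precedence of the checks is: identity map first, then bypass, then the shared/switched split, and finally the table lookup. A sketch using the helpers above (MapTableLookup is a hypothetical stand-in for the hash-table access of Section 7; flags and fault handling are omitted):

    typedef unsigned int Aid;                  /* address space id          */
    #define IDENTITY_AID ((Aid)-1)             /* the identity-mapped space */

    extern RP MapTableLookup(Aid aid, VP vp);  /* hypothetical; see Section 7 */

    RP MapVirtualPage(Aid aid, VP vp)
    {
        if (aid == IDENTITY_AID)               /* highest precedence          */
            return (RP)vp;                     /* rp = vp                     */
        if (InBypassArea(vp))                  /* checked before steps (i), (ii) */
            return BypassMap(vp);
        if (InSharedArea(vp))                  /* step (i): pick the space    */
            aid = 0;
        return MapTableLookup(aid, vp);        /* step (ii): translate        */
    }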
3. Map Processor Organization
The Map Processor consists of three components (Figure 2): a Map Table kept in main memory, a custom VLSI Map Cache, and a Map Cache Controller. Each is described in turn below, beginning with the Map Cache.
The Map Cache is simply a performance accelerator that sits between processor caches and the Map Table. It contains around 1200 mapping entries and allows mapping requests that hit to be serviced in a small number of bus cycles. A Map Cache entry contains only four of the flags kept in a Map Table entry: Dirty, KWtEnable, UWtEnable, and URdEnable; the remaining two bits are not relevant so they are not cached. Processor caches themselves keep a limited number of entries, so the Map Cache is really a second level cache. When a map miss occurs in a processor cache, the cache fires off a mapping request (containing the aid and the vp) to the Map Cache. The Map Cache returns the entry if it is present, and signals a map fault if it is not. This fault is then handled by the processor attached to the cache that got the map miss. The Map Cache also implements a number of other operations, including operations to read, write, and flush entries, switch address spaces, and control the location and size of the shared and map bypass areas. All communication with the Map Cache occurs over the DynaBus.
 Figure 2. Map Processor Organization
The Map Cache Controller is a fictitious device whose functionality is implemented by ordinary Dragon processors. As explained above, whenever the Map Cache misses, the processor whose cache got the miss fields the map fault. This processor plays the role of the Map Cache Controller in fetching the missing entry from the Map Table and shipping it to the Map Cache. In addition to servicing misses, the Map Cache Controller also implements a complete set of operations for manipulating the Map Table. The code for these operations is available to all processors and can be executed by any one of them. This split in functionality between Map Cache and Map Cache Controller lets us put in hardware only that portion of the design which is necessary for speed. The rest is implemented by Dragon code, so enhancements in both function and performance can be made relatively easily. For example, the structure of the Map Table can be left completely open since the hardware has no knowledge of it.
The final component is the Map Table. It serves as the repository for mapping information for all address spaces, mapping (aid, vp) to rp. For each real page the table also maintains six flag bits used for paging and memory protection; in high to low order these are: Spare, Shared, Dirty, KWtEnable, UWtEnable, and URdEnable. The Map Table stores information only about pages currently in main memory, and is structured as a hash table indexed by (aid, vp). This allows the table to fit in a small, fixed fraction of main memory (< 1%), in contrast with direct map schemes. These schemes consume table space proportional to all of virtual memory or to the fraction being used frequently, depending on how they are implemented. Assuming that the hash function spreads things properly, the average time to access the table will be quite good. The worst case time, however, depends on how collisions are resolved. Initially, linear chaining with "move to front on access" will be used; if this turns out to be a performance problem the scheme will be modified to use a tree structure. Dragon processors access the Map Table directly via the map bypass area.
4. Map Processor Functions
This section describes the functions implemented by the Map Processor. Map Cache functions are taken up first, followed by Map Cache Controller functions. The description for each function includes its specification, the method for invoking it, and miscellaneous information relevant largely to the implementor.
4.1 Map Cache Functions
One of the Map Cache functions is invoked via the dedicated DynaBus transaction Map, while the remainder are invoked via IOReads and IOWrites. Map is implemented using a special transaction to allow mapping requests in a multi-level DynaBus system to be easily confined to the originating bus (i.e., without having to interpret the address part of a transaction). Such confinement is important given the relative frequency of mapping requests.
Map(aid: AddressSpaceId, vp: VirtualPage) Returns(rp: RealPage, flags: Flags)
Map returns the real page rp and flags corresponding to a given virtual page vp; the mapping is performed in the context of address space aid. If there is no entry for (aid, vp) in the Map Cache, a map fault is signalled by setting the error bit in the reply packet. Map is used by processor caches when they get a map miss while servicing a processor reference, so its speed is important for good system performance.
Invocation: MapRqst(header, data); header[17..31] = 0; header[32..53] = vp; header[54..63] = undefined; data[0..31] = undefined; data[32..63] = aid.
Results: returned via MapRply(header, data); header[17..31] = 0; header[32..53] = rp; header[54..59] = undefined; header[60..63] = flags; data = undefined.
If the vp lies in the bypass area or the aid is the identity mapped space then the value returned for flags is as follows: Dirty=TRUE, KWtEnable=TRUE, UWtEnable=FALSE, and URdEnable=FALSE. If the entry is not in the Map Cache then the error bit in the reply packet is set, and data[32..63] gives the fault.
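The field positions above can be made concrete with a small packing helper. This is purely illustrative: we assume bit 0 is the most significant bit of a 64-bit header word, and we leave header[0..16] (command and device information) to the DynaBus interface proper.

    #include <stdint.h>

    /* Insert value v into bits [lo..hi] of w, bit 0 being the MSB (our assumption). */
    static uint64_t SetField(uint64_t w, int lo, int hi, uint64_t v)
    {
        uint64_t mask = (hi - lo == 63) ? ~0ull : ((1ull << (hi - lo + 1)) - 1);
        return (w & ~(mask << (63 - hi))) | ((v & mask) << (63 - hi));
    }

    /* Build the header and data words of a MapRqst. */
    void BuildMapRqst(uint64_t *header, uint64_t *data, uint32_t aid, uint32_t vp)
    {
        *header = SetField(0, 17, 31, 0);         /* header[17..31] = 0  */
        *header = SetField(*header, 32, 53, vp);  /* header[32..53] = vp */
        *data   = SetField(0, 32, 63, aid);       /* data[32..63] = aid  */
    }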
WriteMapCacheEntry(vp: VirtualPage, rp: RealPage, flags: Flags)
WriteMapCacheEntry puts the entry (aidRegister, vp) → (rp, flags) into the Map Cache. This operation is used by a Dragon processor to return an entry to the Map Cache after a miss. The aidRegister must have been set up earlier via a WriteMapCacheRegister.
Invocation: IOWriteRqst(header, data); header[17..31] = 0; header[32..35] = devType; header[36..39] = devNum; header[40..41] = 1; header[42..63] = vp; data[0..31] = 0; data[32..53] = rp; data[54..59] = undefined; data[60..63] = flags.
FlushMapCacheEntry(vp: VirtualPage)
FlushMapCacheEntry causes the entry corresponding to (aidRegister, vp) to be removed from the Map Cache. If the entry was not there to begin with, the operation has no effect. The aidRegister must have been set up earlier via a WriteMapCacheRegister.
Invocation: IOWriteRqst(header, data); header[17..31] = 0; header[32..35] = devType; header[36..39] = devNum; header[40..41] = 0; header[42..63] = vp; data[0..63] = undefined.
ReadMapCacheEntry(vp: VirtualPage) Returns(rp: RealPage, flags: Flags)
ReadMapCacheEntry is identical to Map except that it is invoked via an IORead transaction rather than a Map, and that the aid used is the contents of aidRegister. This operation is useful to allow a processor to verify the contents of a Map Cache; it is not used in normal operation. The aidRegister must have been set up earlier via a WriteMapCacheRegister.
Invocation: IOReadRqst(header, data); header[17..31] = 0; header[32..35] = devType; header[36..39] = devNum; header[40..41] = 0; header[42..63] = vp; data[0..63] = undefined.
Results: returned via IOReadRply(header, data); header=standard; data[0..31] = 0; data[32..53] = rp; data[54..59] = undefined; data[60..63] = flags.
If the vp lies in the bypass area or the aid is the identity mapped space then the value returned for flags is as follows: Dirty=TRUE, KWtEnable=TRUE, UWtEnable=FALSE, and URdEnable=FALSE. If the entry is not in the Map Cache then the error bit in the reply packet is set, and data[32..63] gives the fault.
WriteMapCacheRegister(reg: RegNumber, value: 32BitQuantity)
WriteMapCacheRegister writes value into the internal register reg. The registers include the aidRegister (AID), SharedPattern, SharedMask, BypassPattern, BypassMask, BypassBase, SubSetMask, and SubSetPattern; the numbering is given under ReadMapCacheRegister below.
Invocation: IOWriteRqst(header, data); header[17..31] = 0; header[32..35] = devType; header[36..39] = devNum; header[40..41] = 2; header[42..63] = reg; data[0..31] = 0; data[32..63] = value.
Implementation Notes: Note that only the eight register numbers listed under ReadMapCacheRegister are used. Any additional commands can be implemented by adding more registers.
ReadMapCacheRegister(reg: RegNumber) Returns(value: 32BitQuantity)
ReadMapCacheRegister returns the contents of the internal register reg.
Invocation: IOReadRqst(header, data); header[17..31] = 0; header[32..35] = devType; header[36..39] = devNum; header[40..41] = 2; header[42..63] = reg; data[0..63] = undefined.
Results: returned via IOReadRply(header, data); header=standard; data[0..31] = 0; data[32..63] = value.
Implementation Notes: The registers are numbered as follows: AID(0), SharedPattern(1), SharedMask(2), BypassPattern(3), BypassMask(4), BypassBase(5), SubSetMask(6), SubSetPattern(7).
4.2 Map Cache Controller Functions
The Map Cache Controller provides two interfaces: one to service requests from the Map Cache and the other to service requests from Dragon software to manipulate the Map Table. Functions in the two interfaces have different speed requirements, and are also invoked differently. The Map Cache interface functions must be relatively efficient since they are invoked frequently, while the Map Table functions are not as critical. The former are invoked via traps initiated by the Map Cache, while the latter are invoked via procedure calls.
The first two functions below belong to the Map Cache interface. The remainder belong to the Map Table interface.
HandleMapFault(aid: AddressSpaceId, vp: VirtualPage)
HandleMapFault is called when there is a miss in the Map Cache. It first checks whether the entry for (aid, vp) is in the Map Table. If the entry is there it is sent to the Map Cache via WriteMapCacheEntry, otherwise a page fault is signaled.
Invocation: via map fault.
Implementation Notes:
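A sketch of the handler's flow (MapTableFind and SignalPageFault are hypothetical helpers used for illustration; the entry is returned via WriteMapCacheEntry as described in Section 4.1, and the code runs out of the map bypass area):

    extern int  MapTableFind(Aid aid, VP vp, RP *rp, unsigned *flags);  /* hypothetical */
    extern void WriteMapCacheEntry(VP vp, RP rp, unsigned flags);       /* Section 4.1  */
    extern void SignalPageFault(Aid aid, VP vp);                        /* hypothetical */

    void HandleMapFault(Aid aid, VP vp)
    {
        RP       rp;
        unsigned flags;
        if (MapTableFind(aid, vp, &rp, &flags))  /* probe the hash table          */
            WriteMapCacheEntry(vp, rp, flags);   /* aidRegister already holds aid */
        else
            SignalPageFault(aid, vp);
    }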
HandleWPFault(aid: AddressSpaceId, vp: VirtualPage)
HandleWPFault is called when a small cache finds Dirty∧KWtEnable to be FALSE if the processor is in kernel mode, or finds Dirty∧UWtEnable to be FALSE if the processor is in user mode. It first checks whether the write protect fault is real or simply a result of this being the first write to the entry's page. If it is real, a write protect fault is signaled; otherwise the dirty bit for the entry is set in the Map Table and the updated entry is written to the Map Cache via WriteMapCacheEntry.
Invocation: via write protect fault.
Implementation Notes: Note that the entry is guaranteed to be in the Map Table since it was in the Map Cache, so we will never have to signal a page fault here.
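A sketch of the corresponding logic (the flag bit positions are our reading of the high-to-low ordering given in Section 3; MapTableSetFlags and SignalWriteProtectFault are hypothetical helpers):

    /* Flags, low to high: URdEnable, UWtEnable, KWtEnable, Dirty, Shared, Spare. */
    enum { URdEnable = 1, UWtEnable = 2, KWtEnable = 4, Dirty = 8 };

    extern int  MapTableFind(Aid aid, VP vp, RP *rp, unsigned *flags);  /* hypothetical */
    extern void MapTableSetFlags(Aid aid, VP vp, unsigned flags);       /* hypothetical */
    extern void SignalWriteProtectFault(Aid aid, VP vp);                /* hypothetical */

    void HandleWPFault(Aid aid, VP vp, int kernelMode)
    {
        RP       rp;
        unsigned flags;
        MapTableFind(aid, vp, &rp, &flags);    /* guaranteed to succeed; see note */
        if (!(flags & (kernelMode ? KWtEnable : UWtEnable))) {
            SignalWriteProtectFault(aid, vp);  /* the fault is real            */
        } else {
            flags |= Dirty;                    /* first write to the page      */
            MapTableSetFlags(aid, vp, flags);
            WriteMapCacheEntry(vp, rp, flags); /* return the updated entry     */
        }
    }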
ReadEntry(aid: AddressSpaceId, vp: VirtualPage) Returns(rp: RealPage, flags: Flags)
ReadEntry is like the Map operation implemented by the Map Cache except that it accesses the Map Table directly, returning the flags flags and real page rp for an explicit address space id aid rather than for the space currently loaded in the aidRegister. If there is no entry corresponding to (aid, vp) a map table fault is signaled.
Invocation: via procedure call.
Implementation Notes:
WriteEntry(aid: AddressSpaceId, vp: VirtualPage, flags: Flags, rp: RealPage)
WriteEntry writes the entry (aid, vp) → (rp, flags) into the Map Table. If an entry already existed for (aid, vp) it is overwritten; otherwise a new entry is added.
Invocation: via procedure call.
Implementation Notes:
GetNextEntry() Returns(rp: RealPage, flags: Flags)
GetNextEntry returns the next entry from the map table according to some enumeration order. The enumeration is not perfect: some entries in the table may not appear and one entry may appear more than once. However, the enumeration is good enough to allow most of the entries in the table to be listed. This operation will be used by the page replacement algorithm, amongst others.
Invocation: via procedure call.
Implementation Notes:
DeleteEntry(aid: AddressSpaceId, vp: VirtualPage) Returns(deleted: Bool)
This operation deletes the entry corresponding to aid|vp from the Map Table. An attempt to map the address aid|vp following this operation will result in a page fault unless an intervening operation has placed a new entry for aid|vp into the table.
Invocation: via procedure call.
Implementation Notes:
5. The Map Cache
The Map Cache is an accelerator for speeding up mapping requests issued by processor caches. It has space for 256 of the entries stored in the Map Table, and is able to respond to requests in 300 ns in the case of a hit. It is implemented as a single VLSI chip whose main link with the outside world is via the DynaBus. The chip also connects to the DBus to permit initialization and debugging.
Two overall aspects of the design are worth pointing out here because they contribute to simple implementation and good performance. The first is that the cache is pure: an entry is never modified within the cache once it has been read in; if modifications need to be made, the entry is flushed from the cache, modified in the Map Table, and read in again. The most important consequence of this is that entries never need to be written through or written back, making the control portions of the chip simpler. The second aspect of the design is that the cache functions as a slave to other devices.
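The update discipline that purity implies can be stated in a few lines, using the operations defined in Section 4 (the C packaging is ours; WriteEntry is the Map Table operation of Section 4.2):

    extern void FlushMapCacheEntry(VP vp);                          /* Section 4.1 */
    extern void WriteEntry(Aid aid, VP vp, unsigned flags, RP rp);  /* Section 4.2 */

    /* An entry is never modified inside the cache; to change one: */
    void ModifyEntry(Aid aid, VP vp, unsigned flags, RP rp)
    {
        FlushMapCacheEntry(vp);          /* 1. flush it from the Map Cache */
        WriteEntry(aid, vp, flags, rp);  /* 2. modify it in the Map Table  */
        /* 3. it is read in again on the next mapping miss */
    }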
5.1 DynaBus Interface
During normal operation the Map Cache communicates with the outside world exclusively over the DynaBus. The only transactions recognized are Map, IOReads and IOWrites directed to the appropriate portion of IO address space.
5.2 Structure
The Map Cache chip consists of the following parts: DynaBus interface logic with a 16-entry input FIFO, control registers, aid registers, a 256-entry cache array, victim select logic, and DBus interface logic.
5.2.1 DynaBus Interface Logic
This interface logic contains: pad drivers and receivers; circuitry to do address matching; latches to hold MCmd, IOAddress, IOData, and IODone response; array bypass logic that implements identity map mode as well as map bypass area addressing; and a control unit that steers the various operations.
5.2.2 Control Registers
The control registers store global mapping information, which includes identity map mode and the shared and bypass area registers (SharedMask, SharedPattern, BypassMask, BypassPattern, BypassBase).
5.2.3 Aid Registers
This is a set of 2^||pn|| registers of ||aid|| bits each that give the mapping between processors and the address spaces currently loaded on them. During normal operation, these registers are accessed using pn, and the aid read out is used along with vp to look up (rp, flags).
5.2.4 Cache Array
This array holds the mapping between (aid, vp) and (rp, flags). It consists of a number of lines, where each line is made up of corresponding rows from four separate arrays: an ||aid||-bit associative memory array AIDArray; a (||vp||-2)-bit associative memory array VPArray; a 1-bit memory array MPar which stores the parity for the two associative arrays; and a 4*(||rp||+||flags||+1)-bit memory array RPFArray that contains four (rp, flags) pairs for each line (the remaining bit in this array is rpfPar). The top (||vp||-2) bits of the incoming vp and the aid bits from AidReg[pn] are used to do an associative lookup to select a line, while the bottom 2 bits of vp are used to select between the four (rp, flags) pairs potentially stored within that line.
In addition, each line has a number of control bits: ref is set whenever its line is used in mapping; lValid indicates whether the (aid, vp) pair in this line is valid; lBroken is used to turn off bad lines completely; and rpf0Valid..rpf3Valid indicate whether each of the four (rp, flags) pairs in this line is valid. All parts of the Cache Array except for these control bits are implemented using dynamic cells.
The final section is the address decoder, which permits individual lines in the cache to be selected for victimization as well as for reading and writing during testing.
5.2.5 Victim Select Logic
When a miss occurs in the Map Cache, the line selected as victim is the one pointed to by the victim pointer vPtr. For each cycle during which a victim is not needed, the ref bit for the line pointed to by vPtr is cleared and vPtr is incremented if ref was set. Thus vPtr moves from one line to the next until it lands on a line that has ref clear, so vPtr will tend to rest on lines that have ref clear when there is a request to find a victim. The ref bit is set whenever its line is used in mapping, so the lines used most recently in mapping will be the least likely to be picked as victims; this procedure approximates least-recently-used victim selection.
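In software terms the mechanism is the classical clock scheme. A minimal sketch of the hardware's behavior (NLINES and the array representation are ours):

    #define NLINES 256

    static int refBit[NLINES];   /* set by the match logic on every mapping hit */
    static int vPtr;             /* the victim pointer */

    /* Executed each cycle in which no victim is needed. */
    void VictimIdleStep(void)
    {
        if (refBit[vPtr]) {
            refBit[vPtr] = 0;                /* clear ref ...          */
            vPtr = (vPtr + 1) % NLINES;      /* ... and move on        */
        }                                    /* else rest on this line */
    }

    /* On a miss, the line under vPtr is the victim. */
    int SelectVictim(void) { return vPtr; }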
At the moment there is no logic to handle broken lines. A clever way to implement this in the future would be as follows. When lBroken is set, its line does not participate in matches and does not permit its ref bit to be cleared (when the victim pointer passes over it). The broken line will appear as though it is in use, so on the average it will not be picked as the victim. However, when it does get picked the effects will not be harmful. The entry will get written into the bogus line, but this line will never match, so it will appear as though the write was never done. The next reference to this entry will miss, and the entry will get written to the cache one more time. By this time the victim pointer will in all likelihood have moved on to a legitimate line, so the entry will be written correctly! Isn't caching wonderful?
5.2.6 DBus Interface Logic
-- to be written
5.3 Operation
-- to be written
5.4 DBus Interface
-- to be written
5.5 Miscellaneous Information
Maintenance of the dirty bit: when a processor writes into a page for the first time after it has been brought into main memory, the page's dirty bit must be set. Since the Map Cache has a copy of this bit, that copy must get set also. This is done by flushing the entry from the Map Cache and signalling a write protect fault in response to a MapForWrite from a processor cache. This operation is issued by a cache when it tries to do a write but finds the relevant write-enable bit clear. It is then up to the Map Cache Controller software to figure out whether the write protect fault is real, and if it is not, to update the dirty bit in the Map Table and send the updated entry to the Map Cache.
6. The Map Cache Controller
As stated earlier, the Map Cache Controller is implemented by software running on Dragon processors. This software provides two interfaces, one to service requests from the Map Cache and the other to perform more general manipulations on the Map Table.
Dragon processors access the Map Table through the special map bypass area provided in low virtual memory. This area is also meant to contain any other system code (such as some of the fault handlers) that cannot tolerate map faults during its execution.
7. The Map Table
The Map Table is stored in main memory, and maps pairs (aid, vp) to pairs (rp, flags). It is implemented as a hash table, where the hash index is derived by folding and XORing the bits of (aid, vp) into 32 bits. Each row of the table is 4 words long and is quadword aligned. A row contains two mapping entries and two pointers used in collision resolution.
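A sketch of the row layout and index computation (the fold shown is only one plausible reading of "folding and XORing"; the exact packing of entries and pointers is left to Appendix C):

    #include <stdint.h>

    /* One Map Table row: 4 words, quadword aligned, holding two mapping
       entries and two chain pointers for collision resolution. */
    typedef struct MapTableRow {
        uint32_t word[4];
    } MapTableRow;

    /* Fold (aid, vp) into a 32-bit quantity and reduce it to a row index. */
    static uint32_t HashIndex(uint32_t aid, uint32_t vp, uint32_t nRows)
    {
        uint32_t h = aid ^ vp;   /* XOR the two 32-bit inputs together */
        h ^= h >> 16;            /* fold the high half into the low    */
        return h % nRows;
    }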
Initially, the method for resolving collisions will be linear chaining. If this causes performance problems the method will be changed to tree chaining. This change will be straightforward since the access algorithms are in Dragon code.
Since there may be more than one mapping entry per physical page, it is possible for the Map Table to overflow. The number of entries in the table will be 1.5 times the number of real pages in main memory, making overflow extremely unlikely. However, overflow is still possible, and it is a problem because a miss in the Map Table does not necessarily mean page fault—it could also mean that there was not enough space in the table. To fix this problem, the Map Table will be maintained using the all-or-nothing rule, which states that either all of the vp's for a given rp are in the table or none are. With this rule, table miss once again means page fault, freeing the software from having to discriminate between page faults and table misses.
8. Miscellaneous Issues
-- Variable page size
-- Multiple Map Caches
-- Multiple DynaBus systems
-- In aliasing all vp's must have identical protection info
Appendix A. Size of Bypass Area
The map bypass area must be large enough to hold the Map Table as well as the code for handling traps. Since the Map Table will be larger by far, we need only consider its size to calculate how large to make the bypass area. Map Table size in turn depends directly on the amount of real memory. Assuming that 4 Mbit chips will be available within the lifetime of Dragon, that we can surface mount 1024 memory chips per board, and that we will put at most 8 memory boards on the largest Dragons, the maximum amount of real memory comes out to 1 GWord.
The size of the Map Table is 1.5*(#pages in real memory)*4 words. With 1 GWord (2^30 words) of real memory and pages of 2^10 words, there are 2^20 real pages; rounding the 1.5 up to 2, the Map Table occupies 2*2^20*4 = 2^23 words. Note that in rounding up 1.5 to 2 we've left plenty of space for code and any other data that needs to be kept in the bypass area.
Appendix B. Layout of IO Address Space
Appendix C. Format of Map Entries
Appendix D. Notation
||x|| ⌈log2 x⌉, or the number of bits needed to represent x
a|b a*2^||b|| + b
vp virtual page
rp real page
s log2 of the page size in words (s = 10)
aid address space id
pn processor number
ChangeLog
MapProcessorDoc.tioga
Written by: Sindhu, November 23, 1984 2:18:11 pm PST
Last Edited by: Sindhu, June 22, 1985 5:24:24 pm PDT
Pradeep Sindhu October 12, 1986 7:58:37 pm PDT