The Memory Controller
Jim Gasbarro and Lissy Bland
Dragon-88-02 January 1988
© Copyright 1988 Xerox Corporation. All rights reserved.
Keywords: Bank, Controller, DRAM, DynaBus, Error Correction, Memory, Zone
Maintained by: Gasbarro.pa, Bland.pa
XEROX   Xerox Corporation
    Palo Alto Research Center
    3333 Coyote Hill Road
    Palo Alto, California 94304

For Internal Xerox Use Only
1.0 Brief Description
The Memory Controller chip provides the interface between the Dynabus and main memory. It requires a minimal amount of external circuitry and is designed to work with a wide variety of dynamic random-access memory (DRAM) chips, taking into account differences in timing specifications as well as the prospects for future generations of high-density ram technology. To accommodate the high bandwidth requirements of the Dynabus, several Memory Controllers can work together, effectively multiplying the available throughput of the DRAMs. The Memory Controller uses a Hamming code to perform single-bit error correction and double-bit error detection (SECDED) on data stored in the ram.
In addition to providing read/write access to main memory, the Memory Controller also responds to several other types of Dynabus transactions. The Cache Consistency Protocol requires that the replies for certain of the Dynabus request operations be generated from a single source in the system. The Memory Controller acts as this common reflection point.
2.0 Pin-Out
Figure 1: The Logical Pin-Out of the Memory Controller.
3.0 Block Diagram
Figure 2: Memory Controller Block Diagram
4.0 Detailed Description of Each of the Functional Blocks
The Memory Controller has four major functional blocks: the Memory/IO Address Match, the Command/Data Fifo, the Ram Interface, and the Output Pipeline. The Memory/IO Address Match unit watches the command and data words on the Dynabus Input bus. Words determined to be needed by the Memory Controller are entered into the Data Fifo. An autonomous process in the Memory Controller takes commands that appear at the output of the fifo and interprets them. If a write operation is required by the command, the write data is taken from the fifo and placed in the Ram Interface write buffer, and the DRAM write cycle is initiated. If the operation is a read or some miscellaneous operation, the appropriate action is taken, with the result in all cases being that the data for the reply packet of the operation is entered into one of the two "ping-pong" reply buffers. These buffers are labeled Ram Buffer A and B in Figure 2. At this point a third autonomous process takes over, which signals a Dynabus Request and stages the data in the Output Pipeline, where any necessary correction takes place, so that the first word of the reply can be placed on the output bus in the first cycle after Grant arrives. These multiple cooperating hardware processes allow maximum utilization of the DRAM bandwidth to be achieved.
4.1 Memory Address Space Partitioning
In the fully general configuration of a Dynabus memory system there are three hierarchical levels of busses and memories. All of main memory resides at the top level of this tree (Figure 3).
Figure 3: The hierarchical memory system.
Each main memory box in the diagram of Figure 3 typically consists of a Memory Controller chip, 72 DRAMs (64 for data and 8 for error correction), and a small amount of external logic for address buffering and timing generation. In order to achieve maximum data I/O bandwidth to the rams, the Memory Controller takes advantage of an access mode available in most DRAMs called nibble mode. In this mode, the Memory Controller sends one address to the DRAM and retrieves four successive data words, thus allowing greater throughput. The Dynabus also reads and writes data in groups of four successive cycles of 64-bit words called blocks. Thus, the transfer of blocks into and out of the DRAMs blends nicely with the nibble mode access mechanism.
Main memory can be partitioned in several ways depending on system requirements for storage size and bandwidth. Memory can be interleaved, creating sequential blocks of memory which are accessed by different controllers. When memory is interleaved, all the memory accessed by a single controller is called a bank. The number of banks must always be a power of 2. A zone is the range of memory controlled by a set of interleaved banks; all the memory within a single zone should be in the same technology. Note that in the degenerate case where the number of interleaved banks in the zone is 2^0 = 1, the memory is not interleaved; the bank and the zone become equivalent (Figure 4).
Figure 4: The configuration of memory into zones composed of interleaved banks. Interleaved memory allows controllers to work in parallel when sequential words are accessed. If memory is not interleaved, sequential words of memory are controlled by the same device.
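The mapping from address to bank can be sketched in a few lines of C (an illustration, not the chip logic; bit positions follow the 47-bit address format of Figure 5, in which bit 0 is the most significant bit):

#include <stdint.h>

/* Sketch: which interleaved bank serves a given memory address. Document
   bits [44..46] (the word address) are the three least significant bits
   of the integer, and the bank bits [40..43] sit directly above them, so
   sequential blocks rotate through all of the banks. */
static unsigned bankOf(uint64_t address, unsigned numBanks)  /* numBanks a power of 2 */
{
    uint64_t block = address >> 3;              /* strip the word-address bits [44..46] */
    return (unsigned)(block & (numBanks - 1));  /* low bits of the block address */
}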
Memory addressing in the Memory Controller is necessarily complex due to the wide range of possibilities for ram size, number of banks, and number of zones in the system. There are two components to address selection: Memory Address Match and Ram Address Selection. Programming the two registers associated with the Memory Address Match tells the Memory Controller which read and write addresses to respond to from the Dynabus. Programming the register associated with Ram Address Selection tells the Memory Controller which sub-fields of the Dynabus address to transmit to the rams as the row and column address.
The two registers that are programmed for use in the Memory Address Match and Ram Address Selection are called the Zone Address Selection Register and the Memory/Bank Address Selection Register. These registers are described fully in Sections 5.4.2 and 5.4.3, respectively. Here is a summary of those sections:
The Zone Address Selection Register
This register contains a Zone Mask field that indicates how many bits of the potentially 14-bit Zone Address field are to be used in the Memory Address Match.
Memory/Bank Address Selection Register
The Memory/Bank Address Selection Register contains a Bank Mask field that indicates how many bits of the potentially 4-bit Bank Address field are to be used in the Memory Address Match and Ram Address Selection.
The Memory/Bank Address Selection Register also contains a Column Address Select field and a Row Address Select field that select the subfields of the memory address that will be sent to the DRAM as the column and row address, respectively. Note that the number of bits that can be sent to the DRAM for the column and row addresses is variable and depends upon the number of bits that have been used to specify the zone and bank addresses. The total number of bits available for the zone, bank, row and column addresses is 30, so the number of bits in the row and column addresses can be expressed by the following equation:
    zone bits + bank bits + row bits + column bits = 30
4.1.1 Memory Address Match
Figure 5 shows the 47-bit Memory Address field. It is broken down into five sub-fields. The number of bits available for the zone, bank, and word address fields is indicated. A 0 or 1 in the Zone Adr, Bank Adr and Word Adr fields indicates that a particular bit in that field can be programmed to match a zero or a one. An X indicates that the bit does not participate in the match.
Figure 5: Memory address sub-fields. Memory Address matching uses the Zone, Bank, and Word address bits. The high-order 14 bits are essentially unused, but must be set to zero. The remaining bits of the Memory Address Format are used for Ram Address Selection.
The configuration of the Zone Address and Bank Address sub-fields depends on a number of factors (a sketch of the resulting match follows the list):
 - the address of the zone in which the controller resides
 - the number of banks in the zone
 - the address of the bank
 - the capacity of the DRAMs
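Putting these factors together, the match itself reduces to a masked comparison. The following C sketch illustrates the assumed semantics (per the register descriptions in Sections 5.4.2 and 5.4.3); it is not the actual match logic:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t zoneMask, zoneAddr;   /* 14-bit fields mapping to address bits [14..27] */
    uint8_t  bankMask, bankAddr;   /*  4-bit fields mapping to address bits [40..43] */
} MatchRegs;

/* The address is held with document bit 46 as integer bit 0 (the document
   numbers bit 0 as the MSB). A controller responds only when every masked
   zone and bank bit equals the programmed value. */
static bool addressMatches(const MatchRegs *r, uint64_t addr)
{
    uint16_t zone = (uint16_t)((addr >> 19) & 0x3FFF);  /* document bits [14..27] */
    uint8_t  bank = (uint8_t)((addr >> 3) & 0xF);       /* document bits [40..43] */
    return ((zone ^ r->zoneAddr) & r->zoneMask) == 0
        && ((bank ^ r->bankAddr) & r->bankMask) == 0;
}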
4.2 Ram Interface
4.2.2 Ram Address Selection
The Ram Address Selection logic must also take into account differences in the configuration of memory. Here, the problem is somewhat simplified in that the configuration depends only upon the number of banks in the zone and the capacity of the DRAMs. The address selectors and address path to the DRAMs are both 14 bits wide, allowing for future generations of DRAM with capacities up to 256 Mbits. The Column Address Selector specifies the low order address bits which will be delivered to the rams as column address (Figure 6).
Figure 6: The Column Address Selector. The number ranges that label the Field Selected indicate the bit ranges that would be part of the column address for 1 Mbit and 256 Mbit DRAMs.
The Column Selector specifies one of five sub-fields of the memory address, depending on the number of banks in the zone. Note that not all of the upper bits of the Low Address sub-field are necessarily used by the DRAM. For example, if 1 Mbit DRAMs were being controlled, then only the low-order 10 bits would be connected to the ram. The sub-field of the memory address extracted by the Row Address Selector, shown in Figure 7, overlaps part of the Column Address field so that it uses the remaining portion of the column address as the low order bits of the row address.
Figure 7: Row Address Selector. The number ranges that label the Field Selected indicate the bit ranges that would be part of the row address for 1 Mbit and 256 Mbit DRAMs.
The Row Address Selector specifies one of ten sub-fields of the memory address, depending on the number of banks in the zone and the capacity of the DRAM. The least significant bit of the Word Address field addresses a 32-bit word within a block and is used by the Memory Controller only for IO operations. The two most significant bits of the Word Address field, which index one of the four 64-bit words within a block, are used for memory as well as IO operations. During a memory read, if these bits are non-zero then the Memory Controller must deliver the 64-bit word corresponding to this address in the first cycle of the reply packet. In the succeeding cycles, the remaining three 64-bit words of the block are delivered in increasing cyclic order. In order to fetch the data from the DRAMs in this order using the nibble-mode access mechanism, the low order two bits of the row and column addresses must be the two most significant bits of the Word Address field. For this reason the LSB of both the High and Low address selectors remains fixed relative to the selected memory address sub-field, as shown in Figures 6 and 7 above.
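The phrase "increasing cyclic order" simply means word (start + i) MOD 4 on cycle i; a trivial C sketch:

#include <stdio.h>

int main(void)
{
    for (unsigned start = 0; start < 4; start++) {
        printf("requested word %u -> delivery order:", start);
        for (unsigned i = 0; i < 4; i++)
            printf(" %u", (start + i) % 4);  /* increasing cyclic order */
        printf("\n");
    }
    return 0;
}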
4.2.3 Memory Address Selection: A Practical Example
Figure 8 shows an example of how the Zone, Bank and Address fields would look given the following parameters:
   - four banks
   - bank address 2
   - 1 Mbit DRAMs
   - Zone address 3
Figure 8: Memory configuration example.
Since there are four banks, the Bank Mask field would be 0011₂, indicating that only the two low bits of the Bank Address field participate in the match (1 => participate). The Bank Address field would be 0010₂ to indicate a bank address of 2. The Column Address Select would also be 0011₂, selecting bits [29..41] and bit [45] to appear on the Ram Address lines of the Memory Controller. The 1 Mbit rams have only 10 address wires, so the top four bits of the selected column field are unused. The actual Column Address delivered to the DRAMs is therefore [33..41] and [45]. Next, the Row Address Select is chosen to be 0011₂, corresponding to a Row Address of [20..32] and [44]. Again, the top four bits are unused, so the Row Address consists of bits [24..32] and [44]. Thus, the entire DRAM address consists of bits [24..41] and [44..45], or 20 bits in all. Bits [42..43] are used for the bank match, and [44..45] select which 64-bit word is delivered in the first Dynabus data cycle. The Zone Mask field is set to 11111111110000₂, signifying that the low four bits of the field do not participate in the match; they are being used to specify the row address. The Zone Address field is set to 00000000110000₂ to specify a zone address of 3.
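As a quick consistency check, the field values of this example can be reproduced in a few lines of C (the shift expression comes from the IOWrite1 sample in Section 5.4.2; the variable names are illustrative):

#include <assert.h>

int main(void)
{
    const unsigned log2Banks = 2;     /* four banks   */
    const unsigned ramAddWires = 10;  /* 1 Mbit DRAMs */

    unsigned bankMask = ~(0xFu << log2Banks) & 0xFu;    /* 0011: low two bits participate */
    unsigned bankAddr = 2;                              /* 0010 */
    unsigned shift = 2 * ramAddWires + log2Banks - 18;  /* 4 zone bits consumed by the row address */
    unsigned zoneMask = (0x3FFFu << shift) & 0x3FFFu;   /* 11111111110000 */
    unsigned zoneAddr = 3u << shift;                    /* 00000000110000 */

    assert(bankMask == 0x3 && bankAddr == 2);
    assert(zoneMask == 0x3FF0 && zoneAddr == 0x30);
    return 0;
}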
4.3 Dynabus Commands
The Memory Controller responds to nine of the Dynabus commands. Commands sent to the Memory Controller are stored in an input queue so that multiple requests to the memory can be outstanding simultaneously. If the number of pending requests exceeds a certain threshold, grants for request packets on the Dynabus are suspended until the Memory Controller can reduce the size of its input queue. For cache consistency purposes, all commands received by a single Memory Controller are processed in the order in which they were received.
4.3.1 ReadBlockRequest - two cycle request, five cycle reply
The Dynabus delivers the address of the word to be read in the first cycle of the packet. The second word is unused. The Memory Controller fetches the requested block from ram, then checks the state of the OwnerIn bit associated with the RBRqst packet. If OwnerIn is not asserted, the Memory Controller issues a request to the Arbiter for a reply packet, and when the Grant arrives the data is delivered. If OwnerIn was asserted, however, the reply is aborted and the request to the Arbiter is never issued. The state of the SharedIn bit is reflected in the header of the reply packet.
4.3.2 WriteBlockRequest - five cycle request, two cycle reply
The Dynabus delivers the address of the word to be written in the first cycle of the packet, and the block of data in the four succeeding cycles. The reply is issued as soon as the actual write operation is initiated.
4.3.3 FlushBlockRequest - five cycle request, two cycle reply
Same as WriteBlockRequest.
4.3.4 WriteSingleRequest - two cycle request, two cycle reply
This command is reflected by the Memory Controller. The SharedIn bit of the request is reflected in the header of the reply packet.
4.3.5 IOReadRequest - two cycle request, two cycle reply
This command causes the Memory Controller to deliver the state of one of its internal registers. The first cycle of the request contains the 32-bit address of the register. The second cycle is unused.
4.3.6 IOWriteRequest - two cycle request, two cycle reply
This command causes the Memory Controller to write one of its internal registers. The first cycle of the request contains the 32-bit address of the register. The second cycle contains the 32-bit data.
4.3.7 ConditionalWriteSingleRequest - two cycle request, five cycle reply
This command is reflected by the Memory Controller. The header of the reply packet reflects the state of the SharedIn bit. The four data cycles are identical and contain a copy of the two data words delivered in the second cycle of the CWSRequest.
4.3.8 BroadcastIOWrite - two cycle request, two cycle reply
Reflected by the Memory Controller.
4.3.9 DeMapRequest - two cycle request, two cycle reply
Reflected by the Memory Controller.
4.4 Performance Considerations
4.4.1 Read Data Delay
On the average, the majority of the operations that the Memory Controller performs are reads. Therefore, one of the most important timing specifications for the Memory Controller is the minimum delay from a ReadBlockRequest on the Dynabus to the corresponding reply packet. This delay determines the maximum read data bandwidth that a single Memory Controller can provide. The read delay is determined by three factors: the data pipeline delay, the OwnerIn/SharedIn bit pipeline delay, and the Request-Grant delay for the Arbiter. The data pipeline delay is the sum of two parts: the time from RBRequest to the start of the ram read timing cycle, or input delay, and the time from the start of ram read timing to the last word of data fetched from the ram, or access delay. The pipeline delay for the OwnerIn/SharedIn bits is called the owner delay, and the Request-Grant delay for the Arbiter will be referred to as the grant delay. Before computing the minimum delay, a simplifying assumption is needed:
The nibble mode cycle time, or the rate at which read data is delivered from the Rams (typically 60 nanoseconds/nibble), is longer than the Dynabus clock cycle time (typically 25 nanoseconds).
Since the output pipeline is three stages long, this assumption implies that by the time the last word is delivered from the rams, the first word has already propagated to the end of the pipe and is ready to be delivered onto the bus. If this assumption were not true, the additional time necessary to get the read data to the front of the pipeline (one, two, or three cycles) would have to be added to the read access delay. It is expected, though, that as nibble-mode access times improve the bus clock speed will increase as well, making the assumption valid in the long term.
There are two parallel paths which can limit the minimum read access time. The first, obvious path consists of the input delay of getting the read command and address to the rams and the access delay of fetching the data from the rams. This is the sum of the input and access delays. The second path involves the owner and grant delays. The Memory Controller attempts to hide the grant delay by issuing the Arbiter request before the data is available from the rams; however the request cannot actually be issued until the owner information is available. This leads to a dependency between the owner delay and the grant delay. The minimum read data time is therefore:
    read delay = MAX[input delay + access delay, owner delay + grant delay]
The input delay for the Memory Controller is 5 cycles. Typical values for the access delay, owner delay, and grant delay are 320 ns, 11 cycles, and 5 cycles, respectively. Thus the minimum delay is:
    delay = MAX[5 cycles + 320 ns, 11 cycles + 5 cycles]
Assuming a cycle time of 25 ns, the 320 ns access delay occupies 13 full cycles, so delay = MAX[5 + 13, 16] cycles = 18 cycles = 450 ns. It is interesting to note that this is only 50 ns worse than the "theoretical" limit of 400 ns imposed by the bus pipelining.
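The same arithmetic, as a small C sketch (typical values from the text; the access delay is rounded up to whole bus cycles):

#include <stdio.h>

int main(void)
{
    const double tCycle = 25.0;                 /* ns per Dynabus cycle        */
    const int inputDelay = 5;                   /* cycles, RBRqst to ram start */
    const int ownerDelay = 11, grantDelay = 5;  /* cycles                      */
    const double accessDelay = 320.0;           /* ns, ram start to last word  */

    int accessCycles = (int)((accessDelay + tCycle - 1.0) / tCycle);  /* 13 */
    int dataPath  = inputDelay + accessCycles;                        /* 18 */
    int ownerPath = ownerDelay + grantDelay;                          /* 16 */
    int delay = (dataPath > ownerPath) ? dataPath : ownerPath;
    printf("minimum read delay = %d cycles = %.0f ns\n", delay, delay * tCycle);
    return 0;
}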
4.4.2 Bus Bandwidth
Assuming minimum timing for the bus and ram, the bandwidth available from a single bank of memory is limited by the rate at which the Memory Controller can cycle the ram. Each complete ram read cycle produces eight 32-bit words (one block, 32 bytes) of read data. Within a complete cycle there are two clock cycles of overhead in the state machine, in addition to the ram access time and ram precharge time. The bandwidth available from one controller is then:
    bandwidth = 32 bytes / (2 cycles + access delay + precharge delay)
Precharge times for DRAMs are typically 100ns. The bandwidth available from a single Memory Controller then is:
    bandwidth = 32 bytes / (50 ns + 320 ns + 100 ns) = 32 bytes / 470 ns ≈ 68 Mbytes/sec
4.4.3 Bus Throughput
To achieve maximum throughput on the Dynabus, Memory Controllers can be arranged in banks such that a sequential access pattern will cause each of the banks to operate in parallel. If the assumption is made that there is no contention between banks for the bus and that the available bandwidth scales with the number of banks, then the number of banks required to fully saturate the bus is:
    banks = ram cycle time / ReadBlock bus time
The Dynabus time for one ReadBlock operation is the sum of the number of cycles in the RBRequest and RBReply packets, which is 7 cycles. Using the typical values given above this yields:
    banks = 470 ns / (7 × 25 ns) = 470 ns / 175 ns ≈ 2.7
Thus, with two controllers about 75% of the Dynabus throughput can be achieved. A four bank system will fully saturate the bus.
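A C sketch of the bandwidth and saturation arithmetic, using the same typical values:

#include <stdio.h>

int main(void)
{
    const double tCycle = 25.0;                            /* ns                                      */
    const double ramCycle = 2.0 * tCycle + 320.0 + 100.0;  /* overhead + access + precharge = 470 ns  */
    const double blockBytes = 32.0;                        /* eight 32-bit words per ram read cycle   */
    const double busTime = 7.0 * tCycle;                   /* RBRqst (2) + RBRply (5) cycles = 175 ns */

    printf("single-bank bandwidth = %.0f Mbytes/sec\n", blockBytes / ramCycle * 1000.0);
    printf("banks to saturate bus = %.1f\n", ramCycle / busTime);
    return 0;
}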
4.5 Ram Timing Interface
In order to achieve maximum performance from a DRAM, the clock edges must be controllable to a resolution of 5ns, with a worst case skew between signals of about 1ns. The details of the exact edge placement are different for nearly every manufacturer's part, making it fairly difficult to build an on-chip timing generator, especially with the 2µ CMOS technology in which the Memory Controller is fabricated. Therefore, the task of the timing generation has been moved off-chip to a circuit which can be customized for a particular type of DRAM to take full advantage of the device's speed. The Memory Controller provides an interface to this timing generator that is general enough to allow for a wide variety of approaches to building it.
Figure 9: Ram timing interface
Two approaches have been considered, one using a registered PAL configured as a finite state machine, and another using a programmable event generator such as the AMD 2971. There are many tradeoffs to consider in either design.
4.5.1 Timing Interface Operations
The timing controller needs to know about only two different timing sequences: access and refresh. The access timing sequence is used for both read and write operations. The Memory Controller distinguishes read accesses from write accesses by the nRamWrite signal which controls both the direction of the RamData lines as well as nWE, the DRAM write enable pin. The access timing sequence consists of the five control signals shown in Figure 10.
Figure 10: RAM access timing
The refresh timing sequence employs the CAS-before-RAS refresh mode found in most DRAMs. This mode utilizes a counter internal to the DRAM to provide the refresh row address. The refresh timing sequence is shown in Figure 11.
Figure 11: RAM refresh timing
4.5.2 Timing Interface Pins
StartA
short for StartAsynchronous, is the trigger signal to indicate that a memory timing sequence should begin.
Refresh
when asserted indicates that a refresh timing sequence should be performed. Otherwise an access sequence is started.
SelColumnAdd
asynchronously selects either the row or column address bits.
WordAddress[0..1]
during writes, asynchronously selects one of four data words to be written to the rams. During reads, asynchronously selects one of four buffer words to store the read data.
RamBufWrite
latches read data in the buffer selected by WordAddress.
RamReset
a buffered version of DReset. Useful if the timing generator is implemented as a synchronous device.
RamClock
a buffered version of Dynabus clock. Useful if the timing generator is implemented as a synchronous device.
4.5.3 Synchronization With The Memory Controller
In order to minimize the length of a ram cycle, the timing generator and Memory Controller run asynchronously. This implies that there must be a synchronization mechanism to notify the Memory Controller when a ram cycle has completed. Synchronization can be achieved with a handshake signal that is synchronized through a number of flip-flop stages; however, this approach introduces a delay caused by the flip-flops each time the timing generator and the Memory Controller must synchronize. The synchronization method used instead involves an internal counter and a priori knowledge about the duration of each operation. When the Memory Controller starts a ram operation it loads the counter, and then waits for it to expire before proceeding. The synchronization delay using this method is one to two cycles better than a two-stage synchronized handshake. The risk with this method is that the Dynabus clock or timing generator could drift far enough to cause the synchronization to fail. Proper care should be taken to ensure that this does not happen.
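A C sketch of this counter scheme (illustrative only; the per-operation cycle counts correspond to the delay registers of Section 5.4.1):

/* Sketch: counter-based synchronization. The Memory Controller knows a
   priori how many clock cycles each ram operation takes, loads a
   down-counter when it triggers the operation, and waits for expiration
   rather than synchronizing a handshake. */
typedef struct { unsigned count; } SyncCounter;

static void startOperation(SyncCounter *c, unsigned opCycles)
{
    c->count = opCycles;   /* loaded when the operation is started */
}

static int tick(SyncCounter *c)   /* called once per Dynabus clock */
{
    if (c->count > 0) c->count--;
    return c->count == 0;  /* ram operation assumed complete when expired */
}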
5.0 Programming the Memory Controller
This section describes the internal control and status registers of the Memory Controller. Most of these registers are accessed by IOWrite operations performed through the Dynabus. The rest are accessed from the diagnostic DBus. A few are available through both the Dynabus and DBus.
5.1 DBus Operation
The DBus is a serial data path into the Memory Controller chip used for initialization and diagnostic purposes. The Memory Controller has several different internal registers which can be accessed via the DBus. A register to be read or written over the DBus is first selected by asserting the DBus DAddress line and shifting in the three-bit register address using DSerialIn and DShiftCK. To read a register, DSelect is asserted along with DExecute, and DShiftCK is cycled once, loading the value to be read into the DBus shift register and causing the first bit (MSB) of data to appear on the DSerialOut wire. Successive bits are obtained by deasserting DExecute and cycling DShiftCK once for each bit to be read. Values are written by first selecting the desired register, then clocking the value in using DSerialIn and DShiftCK. This is a very brief description of DBus operation; for full details see DBusDoc.tioga.
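The read sequence can be summarized as a C sketch of a hypothetical bit-banging driver; setPin, getPin, and pulseClock are assumed host-side primitives, and registers wider than 64 bits would need a wider accumulator:

#include <stdint.h>

extern void setPin(const char *name, int level);  /* hypothetical host primitive */
extern int  getPin(const char *name);             /* hypothetical host primitive */

static void pulseClock(void) { setPin("DShiftCK", 1); setPin("DShiftCK", 0); }

/* Sketch: read a DBus register of up to 64 bits, MSB first. */
static uint64_t dbusRead(unsigned regAddr, unsigned width)  /* regAddr is 3 bits */
{
    uint64_t value = 0;

    /* Select the register: assert DAddress and shift in the 3-bit address. */
    setPin("DAddress", 1);
    for (int i = 2; i >= 0; i--) {
        setPin("DSerialIn", (int)((regAddr >> i) & 1));
        pulseClock();
    }
    setPin("DAddress", 0);

    /* Execute: one DShiftCK with DSelect and DExecute asserted loads the
       register into the shift chain; the MSB appears on DSerialOut. */
    setPin("DSelect", 1);
    setPin("DExecute", 1);
    pulseClock();
    setPin("DExecute", 0);

    /* Collect the bits, cycling DShiftCK once per additional bit. */
    for (unsigned i = 0; i < width; i++) {
        value = (value << 1) | (uint64_t)getPin("DSerialOut");
        if (i + 1 < width) pulseClock();
    }
    setPin("DSelect", 0);
    return value;
}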
5.2 DBus Register Descriptions
5.2.1 Device Type and Version Number Register, DBus Address 0, 16 bits, Read Only
Header - Bits [0..3]
These bits always read as 5.
DBus Device Type - Bits [4..9]
Memory Controller = 6. This number should not be confused with the IO device type.
Version Number - Bits [10..15]
This number, which is zero for the first version of the Memory Controller, will be incremented once each time a new mask set for the Memory Controller is generated.
5.2.2 DynaBus DeviceID Register, DBus Address 1, 10 bits, Write Only (Bogus, easy to make R/W)
DynaBus DeviceID - Bits [0..9]
These bits specify the device number used in decoding IO addresses. This number gives each instance of the Memory Controller in the system a unique identification for IO operations.
5.2.3 Syndrome, Error Status, and Address Register, DBus Address 2, 72 bits, Read Only
Header - Bits [0..27]
These bits always read as 0.
Syndrome - Bits [28..34]
If a single bit error occurs, these bits provide an index to the failing bit. The values 0, 2, 4, 8, 16, 32, and 64 point to failing check bits. Value 63 and all values in the range [65..127] point to a failed data bit. Values other than these are indicative of a double error.
Another minor bogosity - the endian-ness of these values is reversed from normal. 127 here indicates MSB.
Data Fifo overflow - Bit [35]
When set, indicates that the fifo that buffers commands and data from the Dynabus ran out of space. This should never happen; if it does, it indicates something seriously wrong in either the Memory Controller's bus hold/request logic or the Arbiter's grant logic.
Double Bit Error - Bit [36]
Indicates that an uncorrectable two bit error occurred while reading the DRAMs.
Output Buffer Sync Error - Bit [37]
Indicates that the Memory Controller reached an idle state, but the read and write pointers to the ping-pong buffers in the output pipe were out of sync. This should never happen; it indicates a serious hardware problem.
One Bit Error - Bit [38]
Indicates that a correctable one bit error occurred while reading the DRAMs.
Multiple Memory Errors - Bit [39]
Indicates that additional memory errors occurred since the last time that the status register was cleared.
Error Address - Bits [40..71]
Contains the address of the first failing memory operation.
5.3 IO Operation
The Memory Controller uses IO commands for the majority of its internal initialization. The IO Device Number must be initialized via the DBus before any IO command can be performed.
Each Memory Controller in the memory system is given a unique identification for decoding IO addresses via the DBus. The 32-bit Dynabus IO address space is partitioned into three fields: DevType (12 bits), DevNum (10 bits), and DevOffset (10 bits). The IO device type for the Memory Controller (not to be confused with the DBus device type) is 3. The device number, which uniquely identifies a single instance of a Memory Controller, is programmed via the DBus. The DevOffset selects a particular register address within the Memory Controller to be read or written. For a more thorough description of IO addressing, see Appendix B of DynaBusLogicalSpecifications.tioga.
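A one-line C sketch of this partition (the helper name is illustrative):

#include <stdint.h>

/* Sketch: compose a 32-bit Dynabus IO address from DevType (12 bits),
   DevNum (10 bits), and DevOffset (10 bits); the Memory Controller's IO
   device type is 3. */
static uint32_t mcIOAddress(uint32_t devNum, uint32_t devOffset)
{
    const uint32_t devType = 3;
    return (devType << 20) | ((devNum & 0x3FFu) << 10) | (devOffset & 0x3FFu);
}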
5.4 IO Register Descriptions
5.4.1 RAM Timing, Miscellaneous Functions, Address 0, Write
Grant Delay - Bits [0..4]
See Read Delay below. Delay = 32 - n.
Owner Fifo Delay - Bits [5..7]
Allows a tolerance in the number of cycles between the header cycle of a request packet on the Dynabus and the appearance of OwnerIn and SharedIn on the bus. This allows a variable number of pipeline stages between the Memory Controller and the Cache. [0..7] => [16..9] cycles between HeaderIn and OwnerIn/SharedIn.
Enable Correction - Bit [8]
When asserted enables the error correction/detection logic.
Select Refresh Clock - Bits [9..10]
Selects the frequency with which DRAM refresh cycles will be initiated.
[0..3] => CKIn / [64, 128, 256, 512].
Example: 1 Mb rams require 1000 rows to be refreshed in 8 ms. Assume CK = 25ns.
8 ms / (1000 rows * 25 ns/cycle) = 320 cycles/row. So select the refresh clock to be 2 (256 cycles/row), which is about 20% faster than the required minimum refresh rate.
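The selection rule amounts to picking the largest divisor that still meets the required refresh rate; a C sketch of the arithmetic above:

#include <stdio.h>

int main(void)
{
    const double refreshPeriod = 8e6;  /* ns: all rows refreshed within 8 ms */
    const int    rows = 1000;          /* approximate rows in a 1 Mbit DRAM  */
    const double tCycle = 25.0;        /* ns                                 */
    const int    divisors[4] = { 64, 128, 256, 512 };

    double maxCyclesPerRow = refreshPeriod / (rows * tCycle);   /* 320 */
    int select = 0;
    for (int i = 1; i < 4; i++)
        if ((double)divisors[i] <= maxCyclesPerRow) select = i;
    printf("Select Refresh Clock = %d (CKIn/%d)\n", select, divisors[select]);
    return 0;
}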
Enable Operation Reflect - Bit [11]
When asserted, allows the Memory Controller to reflect the Dynabus operations BIOW, WS, CWS, and DeMap. For a particular Memory Controller to respond to one of these operations, the address of the operation must match the Bank address of the Memory Controller. This allows the load of reflecting operations to be divided among several Memory Controllers. If memory is configured as multiple zones, the Bank Address becomes ambiguous. This ambiguity is avoided by using the Enable Operation Reflect bit to select one bank and consequently one Memory Controller.
Precharge Delay - Bits [12..16]
Number of clock cycles that the Memory Controller allows the DRAMs to precharge after a read or write operation. Delay = 32 - n.
Refresh Delay - Bits [17..21]
Number of clock cycles required for a Refresh operation to complete. Delay = 32 - n.
Write Delay - Bits [22..26]
Number of clock cycles required for a Write operation to complete. Delay = 32 - n.
Read Delay - Bits [27..31]
Read Delay and Grant Delay are used specifically for the ReadBlock operation. In order to minimize latency for read operations, the request to the Arbiter for a Reply packet must be made at a point in the read cycle of the rams such that if the Grant for the packet is returned in minimum time, the last word of data for the reply packet is fetched from the ram just before it is needed to be placed in the output pipeline for transmission onto the bus. Read Delay is the number of clock cycles from the start of the read operation until the request is issued. Grant Delay is the number of clock cycles from issuance of request until Grant arrives (and the ram precharge starts). Delay = 32 - n.
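All four delay fields share the Delay = 32 - n encoding, so the register value for a desired delay follows the same rule in each case; a trivial C sketch:

#include <stdint.h>

/* Sketch: value to program into a 5-bit delay field for a desired delay
   of 1..32 clock cycles. Delay = 32 - n, hence n = 32 - Delay; a delay
   of 32 cycles encodes as 0. */
static uint32_t encodeDelay(unsigned delayCycles)
{
    return 32u - delayCycles;
}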
5.4.2 Zone Address Selection Register, Address 1, Write
Unused - Bits [0..3]
Zone Mask - Bits [4..17]
Maps to bits [14..27] of the 47 bit memory address. Zero indicates that the bit does not participate in the memory address match.
Zone Address - Bits [18..31]
Maps to bits [14..27] of the 47 bit memory address. For each bit position, if the corresponding Zone Mask bit is a one then the memory address bit must match the Zone Address bit for the controller to respond to the address.
Sample code for computing Zone Mask and Zone Address:
IOWrite1: PROC [banks: [1..16], ramAddWires: [9..14], zoneAdd: CARDINAL]
RETURNS [data: LONG CARDINAL ← 0] =
{
match, address, shift: CARDINAL;
shift ← ramAddWires*2 + Log2[banks] - 18; -- low zone-field bits overlapped by the row address
match ← BITAND[ShiftLeft[0FFFFH, shift], 03FFFH]; -- high bits of the 14-bit field participate
address ← BITAND[ShiftLeft[zoneAdd, shift], 03FFFH]; -- align the zone address with the masked bits
data ← BITOR[ShiftLeft[match, 14], address];
};
5.4.3 Memory/Bank Address Selection Register, Address 2, Write
Unused - Bits [0..16]
Column Address Select - Bits [17..19]
Selects the subfield of the memory address that will be sent to the DRAMs as column address.
Row Address Select - Bits [20..23]
Selects the subfield of the memory address that will be sent to the DRAMs as row address.
Bank Mask - Bits [24..27]
Maps to bits [40..43] of the 47 bit memory address. Zero indicates that the bit does not participate in the bank address match.
Bank Address - Bits [28..31]
Maps to bits [40..43] of the 47 bit memory address. For each bit position, if the corresponding Bank Mask bit is a one then the memory address bit must match the Bank Address bit for the controller to respond to the address.
Sample code for computing Column Address Select, Row Address Select, Bank Mask, and Bank Address:
IOWrite2: PROC [banks: [1..16], ramAddWires: [9..14], bankAdd: [0..15]]
RETURNS [data: LONG CARDINAL ← 0] =
{
columnAdd: CARDINAL ← Log2[banks];
rowAdd: CARDINAL ← ramAddWires - 9 + Log2[banks];
bankMask: CARDINAL ← BITAND[BITNOT[ShiftLeft[0FH, Log2[banks]]], 0FH]; -- low Log2[banks] bits participate
data ← BITOR[data, ShiftLeft[columnAdd, 12]];
data ← BITOR[data, ShiftLeft[rowAdd, 8]];
data ← BITOR[data, ShiftLeft[bankMask, 4]];
data ← BITOR[data, ShiftLeft[bankAdd, 0]];
};
5.4.4 Clear Error Status Register, Address 3, Write
Unused - Bits [0..31]
The side effect of writing this register is that the Error Status Register is cleared and the Error Address and Syndrome registers are reset so that they will capture the address and syndrome of the next failing memory operation.
5.4.5 Error Address Register, Address 0, Read
Error Address - Bits [0..31]
Contains the address of the first failing memory operation. This value is also available through the DBus.
5.4.6 Error Status, Syndrome Register, Address 1, Read
Unused - Bits [0..19].
Syndrome - Bits [20..26]
Same as DBus.
Data Fifo overflow - Bit [27]
Same as DBus.
Double Bit Error - Bit [28]
Same as DBus.
Output Buffer Sync Error - Bit [29]
Same as DBus.
One Bit Error - Bit [30]
Same as DBus.
Multiple Memory Errors - Bit [31]
Same as DBus.
6.0 Detailed Description of Each Pin
Pin Name I/O Pin Description
HeaderCycleIn I This bit indicates that the data currently on the Dynabus contains header information.
HeaderCycleOut O Asserted by the Memory Controller to indicate that it is currently driving header information onto the Dynabus.
OwnerIn I If OwnerIn is asserted by a Cache during a ReadBlock operation the Memory Controller will not reply to the operation. Instead, the Cache provides the reply.
OwnerOut O Unused by the Memory Controller.
SharedIn I SharedIn is reflected in the Shared bit of reply packets for ReadBlock, WriteSingle and ConditionalWriteSingle.
  
SharedOut O Unused by the Memory Controller.
  
SStopIn I Unused by the Memory Controller.
SStopOut  O Asserted by the Memory Controller whenever a fatal error occurs. This could be caused by a double-bit error when correction is enabled, a buffer sync error, or a fifo overflow error. See Section 5.2.3.
Grant  I Asserted by the Arbiter to indicate that the Memory Controller should transmit data in the following cycle.
HiPGrant I Unused by the Memory Controller.
LongGrant I Unused by the Memory Controller.
RequestOut[0..1] O These signals are used to signal the Arbiter for service.
   00 => Idle, Release Hold
   01 => Bus Hold
   10 => Two cycle request
   11 => Five cycle request
  Hold causes the Arbiter to stop granting request packets. Hold is asserted when the Memory Controller input fifo is nearly full. Reply packets are still granted, allowing the output buffers to be emptied. Returning to Idle allows the bus to return to normal operation.
DShiftCK I DShiftCK is the Shift clock for the currently selected scan path. Data is transferred on the positive edge of this signal. If the DAddress line is active, the DShiftCK is used to transfer component address bits instead of data bits.
DAddress I DAddress indicates that the next DShiftCK cycle is transferring address bits.
DExecute I DExecute asks the Memory Controller to perform an execute cycle instead of a data/address transfer on the next positive edge of DShiftCK.
DFreeze I Unused by the Memory Controller.
DReset I DReset resets all internal state of the Memory Controller. It is internally synchronized to the Memory Controller clock.
DSerialIn I The Memory Controller's internal registers receive input from the DBus via the DSerialIn pin.
DSerialOut O When DSelect is asserted, DSerialOut sends information from a specified register in the Memory Controller to the DBus. (This is a Tri-state wire.)
DSelect I Enables the Memory Controller to respond to DBus serial in, serial out, and execute commands. Addressing operations are independent of DSelect.
Clock I Dynabus clock.
CkOut O This is the Dynabus clock after it has been internally buffered. It is transmitted off-chip so that its phase can be compared to a reference and the result used to adjust the phase of the input clock.
RamCheck[0..7] I/O These eight lines are used to transfer the check bits for error correction to and from ram storage.
RamAddress[0..13] O Specifies the word in the rams to be accessed. These bits are multiplexed reflecting the architecture of most DRAMs. Selection of the row and column address words is done with SelColumnAdd below.
nRamWrite O When asserted, indicates that the external timing generator should perform a ram write operation.
WordAddress[0..1] I For nibble-mode writes, selects which of the four words should be driven on the RamData lines. For nibble-mode reads, selects which of the four internal buffers the value on the RamData lines should be written into.
RamBufWrite I When asserted, enables latching of the selected internal data buffer.
StartA O Indicates that the external timing generator should initiate a DRAM timing cycle.
Refresh O Indicates that the external timing generator should perform a refresh operation.
RamReset O Resets the ram timing generator.
RamClock O Provides a synchronized clock for the timing generator.
SelColumnAdd I Selects whether the row or column address is driven on the RamAddress lines.
SpareOut[0..1] O Unused by the Memory Controller (Gnd).
SpareIn[0..1] I Unused by the Memory Controller.
DynabusIn[0..63] I These are the 64 Dynabus data lines into the chip.
DynabusOut[0..63] O These are the 64 Dynabus data lines out of the chip.
TestIn I This signal is used to reduce the number of pins that need to be contacted for testing purposes. It relates only to the operation of the Dynabus Data lines. When de-asserted the DynabusIn and DynabusOut pins are uni-directional. When asserted, the DynabusIn pins become active outputs and drive the level currently on the DynabusOut pins.
RamData[0..63] I/O These are the 64 bi-directional ram data lines.
ParityIn I Unused by the Memory Controller.
ParityOut O Unused by the Memory Controller (Gnd).
7.0 DC Characteristics
Pin Type    Signal Type             Level   Voltage   Current
Group A     5V input                L       1.0 V     0
                                    H       4.0 V     0
Group B     5V output               L       0.5 V     25 mA
                                    H       4.0 V     25 mA
Group C     5V Tri-state output     L       0.5 V     25 mA
                                    H       4.5 V     25 mA
Group D     5V Tri-state I/O        L       0.5 V     25 mA
                                    H       4.5 V     25 mA
Group A
HeaderCycleIn OwnerIn 
SharedIn SStopIn
Grant HiPGrant
LongGrant DShiftCK
DAddress DExecute
DFreeze DReset
DSerialIn DSelect
Clock WordAddress[0..1]
RamBufWrite SelColumnAdd
SpareIn[0..1] DynabusIn[0..63]
TestIn ParityIn
Group B
HeaderCycleOut OwnerOut
SharedOut SStopOut
RequestOut[0..1] nRamWrite
CkOut RamAddress[0..13]
StartA Refresh
RamReset RamClock
Group C
DSerialOut SpareOut[0..1]
DynabusOut[0..63]
Group D
RamCheck[0..7] RamData[0..63]
8.0 AC Characteristics
8.1 Definitions
Figure 12: Input Signal Characteristics
Ts (setup time) = the minimum time a signal must be stable before the rising edge of the clock.
Th (hold time) = the minimum time a signal must be stable after the rising edge of the clock.
Figure 13: Output Signal Characteristics
Tcycle = the time interval between successive rising edges of the clock
Tpd (propagation delay) = the delay from the rising edge of the clock until an output becomes valid.
Tm (maintenance of old data) = the time after the rising edge of the next clock cycle that old data remains valid.
8.2 Values
Qualified Pin Name                   Tmin     Ttypical   Tmax
Tcycle                               20 ns    25 ns      35 ns
Ts.DynabusIn (setup, DynabusIn)      3 ns
Th.DynabusIn (hold, DynabusIn)       1 ns
Tpd.DynabusOut (propagation delay)                       5 ns
Tm.DynabusOut (maintain, DynabusOut) 2 ns
9.0 Application Schematic of the Circuit
10.0 Physical Pin-out For Each Package