DRAGON PROJECT — FOR INTERNAL XEROX USE ONLY
The Dragon Arbiter
Ed McCreight
Dragon-86-xx Written December 2, 1986 Revised December 22, 1986
© Copyright 1986 Xerox Corporation. All rights reserved.
Abstract: The Dragon arbiter is one of a set of identical chips that collectively decide how to grant time-multiplexed control of the Dragon's Dynabus to requesting devices. This document is its interface specification.
Keywords: arbiter, arbitration, scheduling
FileName: /Indigo/Dragon/Documentation/Arbiter.tioga, .interpress
XEROX  Xerox Corporation
   Palo Alto Research Center
   3333 Coyote Hill Road
   Palo Alto, California 94304



Contents
1. General Description
2. Pin Description
3. Protocols
4. Arbitration Pipeline Stages
5. Unresolved Issues
ChangeLog
1. General Description
The Dragon arbiter chip is part of a Dragon computer system. There is one arbiter chip on each Dragon board, located physically near the board-to-backpanel interface. The main function of this set of identical chips is to control time-multiplexed access to the DynaBus, Dragon's main data bus.
The arbiter implements seven discrete major priority levels. DynaBus access is always granted to a requester at the highest major priority currently being requested. Within each major priority level, minor priority is assigned on a round-robin basis: the most recent grantee at a given major priority level has the lowest minor priority for the next arbitration cycle. To simplify implementation, all requesting devices attached to a single arbiter are grouped adjacent to each other as a block in each major priority level's round-robin sequence.
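The selection rule can be summarized in the following C sketch. It models the eight request ports of a single arbiter (described below) rather than the grouped blocks of a full backpanel; the names pick_winner, Request, and rover are illustrative and not part of the chip interface. As on the backpanel, a numerically lower level is a better priority.

#include <stdbool.h>

#define NUM_PORTS   8
#define NUM_LEVELS  7                /* major priority levels [0..7) */

typedef struct {
    bool requesting;                 /* port has at least one pending request */
    int  priority;                   /* major priority level, 0 (best) .. 6   */
} Request;

/* rover[p] is the port index of the most recent grantee at level p;
 * it therefore has lowest minor priority for the next arbitration. */
static int rover[NUM_LEVELS];

int pick_winner(const Request req[NUM_PORTS])
{
    /* First level: find the best (numerically lowest) requested priority. */
    int best = NUM_LEVELS;           /* 7 means "nobody requesting"         */
    for (int i = 0; i < NUM_PORTS; i++)
        if (req[i].requesting && req[i].priority < best)
            best = req[i].priority;
    if (best == NUM_LEVELS)
        return -1;                   /* no request pending                  */

    /* Second level: round-robin among ports at that priority, starting
     * just after the most recent grantee. */
    for (int k = 1; k <= NUM_PORTS; k++) {
        int i = (rover[best] + k) % NUM_PORTS;
        if (req[i].requesting && req[i].priority == best) {
            rover[best] = i;         /* winner becomes lowest minor priority */
            return i;
        }
    }
    return -1;                       /* not reached */
}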
Each arbiter has eight device request ports. Each of these ports can have up to three requests pending, of two kinds. Associated with each kind is a priority level [0..7) and a packet length (2 or 5 cycles). These sixteen parameter pairs (8 requesters x 2 kinds) are loaded into the arbiter during system initialization via the DBus scan path.
There are up to eight arbiters in a Dragon system. These arbiters exchange priority information across the system backpanel. Each arbiter is assigned a fixed position (within the group of eight arbiters) during system initialization as above.
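Taken together, the state loaded into each arbiter at initialization might be modeled by the following C record. The field and type names, and any implied scan-path bit ordering, are assumptions of this sketch, not part of the specification.

#include <stdint.h>

typedef struct {
    uint8_t priority;            /* major priority level, [0..7)      */
    uint8_t packet_length;       /* 2 or 5 cycles                     */
} KindParams;

typedef struct {
    KindParams low;              /* parameters for low-kind requests  */
    KindParams high;             /* parameters for high-kind requests */
} PortParams;

typedef struct {
    uint8_t    board_position;   /* this arbiter's fixed slot among the eight */
    PortParams port[8];          /* the sixteen parameter pairs (8 x 2 kinds) */
} ArbiterConfig;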
Since there is exactly one arbiter chip on every board, the arbiter also computes the system-wide Shared and Owner signals and serves as the board's DBus controller.
2. Pin Description
2.1 Request ports
Each of the eight independent request ports consists of three wires:
Req[i] - Input - Asserted low by the requesting device, one cycle for every separate request. A port can accumulate up to three pending requests in this way. This wire is passively pulled high by the arbiter chip, so unconnected request ports do not request.
Raise[i] - Input - Asserted high by the requesting device, one cycle for each separate request that is to be promoted from low to high kind. Has no effect if all pending requests are already of high kind.
DGrant[i] - Output - Asserted high to inform the requester that his request is of highest priority on his board, and that if system grant is given to this board, system grant will go to that device.
2.2 Common grant bus
The following wires go in common to every requester:
BGrant - Output - Asserted high to notify the device that received DGrant[i] in the previous cycle that he is the system grantee, and that he should put his first data cycle on his local board Dynabus segment in the following cycle.
DKind - Output - Timed with DGrant, this wire is low if the arbiter is responding to a low-kind request from the grantee device, and high if to a high-kind request.
DLength - Output - Timed with DGrant, this wire is low if the arbiter is responding to a short-packet request (2 cycles) and high if responding to a long-packet request (5 cycles).
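The following C sketch illustrates this handshake from the requester's side, assuming the timing given above: DGrant in one cycle, BGrant the next, and the first data cycle the cycle after that, with DKind and DLength latched alongside DGrant. The type and function names are illustrative.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool    dgrant;     /* DGrant[i] sampled this cycle                 */
    bool    bgrant;     /* BGrant sampled this cycle                    */
    bool    dkind;      /* meaningful only in the DGrant cycle          */
    bool    dlength;    /* meaningful only in the DGrant cycle          */
} GrantPins;

typedef struct {
    bool    saw_dgrant;     /* DGrant observed in the previous cycle    */
    bool    drive_next;     /* we won; drive first data word next cycle */
    uint8_t packet_cycles;  /* 2 (short) or 5 (long), latched with DGrant */
    bool    high_kind;      /* latched with DGrant                      */
} RequesterState;

/* Called once per clock cycle by the requesting-device model. */
void grant_step(RequesterState *s, GrantPins pins)
{
    /* BGrant one cycle after DGrant means this device is the system
     * grantee and must put its first data cycle on the local board
     * DynaBus segment in the following cycle. */
    s->drive_next = s->saw_dgrant && pins.bgrant;

    /* DKind and DLength are timed with DGrant, so latch them together. */
    if (pins.dgrant) {
        s->high_kind     = pins.dkind;
        s->packet_cycles = pins.dlength ? 5 : 2;
    }
    s->saw_dgrant = pins.dgrant;
}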
2.3 Arbiter negotiation bus
There are eight four-bit buses on the system backpanel across which the arbiters exchange priority information. These wires are passively terminated high on the backpanel:
MinP[j] - Input/Output - A three-bit field that carries the minimum (best) priority level at which arbiter j has any device requesting. If no device is requesting service from arbiter j, the value of this field is 7.
Ahead[j] - Input/Output - This one-bit field has meaning only if the round-robin "rover" for the priority level being requested by arbiter j currently resides in the block of devices owned by arbiter j. It then indicates whether j's request is being made "ahead" of the rover (and therefore of highest round-robin sub-priority within priority level MinP[j]) or "behind" the rover (and therefore of lowest round-robin sub-priority within priority level MinP[j]).
There is also a single wire to signal the last cycle of a transaction.
LastCyc - Input/Output - Asserted high by the arbiter that is currently holding the system grant to indicate that his transaction has received as many cycles as it needs.
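The following C sketch shows one way every arbiter can compute the winning board identically from these wires. The handling of Ahead is one reading of the rule above, and all names are illustrative; board_rover is assumed to hold, for each priority level, the board whose block currently contains that level's device rover.

#include <stdbool.h>

#define NUM_BOARDS  8
#define NUM_LEVELS  7
#define IDLE        7       /* MinP value when arbiter j has no request */

int winning_board(const int  minp[NUM_BOARDS],
                  const bool ahead[NUM_BOARDS],
                  const int  board_rover[NUM_LEVELS])
{
    /* 1. Best (numerically lowest) priority requested anywhere. */
    int best = IDLE;
    for (int j = 0; j < NUM_BOARDS; j++)
        if (minp[j] < best)
            best = minp[j];
    if (best == IDLE)
        return -1;                       /* nothing to grant */

    int rov = board_rover[best];         /* board whose block holds the rover */

    /* 2. The rover board wins outright if it requests at the best priority
     *    from a device ahead of the rover. */
    if (minp[rov] == best && ahead[rov])
        return rov;

    /* 3. Otherwise the other boards follow in round-robin order after the
     *    rover board ... */
    for (int k = 1; k < NUM_BOARDS; k++) {
        int j = (rov + k) % NUM_BOARDS;
        if (minp[j] == best)
            return j;
    }

    /* 4. ... and the rover board, requesting only from behind the rover,
     *    comes last. */
    return rov;
}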
2.4 DBus control
The following signal pairs are buffered and pipeline-delayed from the backpanel to the board's devices:
DBClock  -> DDClock
DBDataIn -> DDDataIn
DBExecute -> DDExecute
DBFreeze -> DDFreeze
DBnReset -> DDnReset
One signal goes the other way if the arbiter is "selected":
DBDataOut <- DDDataOut
The arbiter receives but does not pass on one extra backpanel signal:
DBAddr - When this signal is asserted, all arbiters "deselect", and DBDataIn and DBClock are used to shift an address into a 16-bit device address register. When DBAddr is de-asserted, the arbiter compares the high-order 6 bits of this address register against six bits read from the backpanel. If there is an exact match, then a decoder on the next 2 bits is enabled, and three of these four select wires are made available on the board:
DDSel1
DDSel2
DDSel3
The remaining eight bits of the address register are also wired to the board:
DDAddr[8-15]
The arbiter behaves as DBus devices 0-255 on its board, in that it is enabled only by the invisible (on-chip) DDSel0 and pays no attention to DDAddr[8-15]. As part of its scan path, it reads three arbiter pins:
DDMemo[0-2]
each of which can be connected to any of the eight DDAddr[8-15] pins, so simple PC-board interconnection can communicate up to 3*log2(8) = 9 bits of information to the system DBus controller.
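A C sketch of the address-match logic described above follows. The mapping of the document's bit numbering onto a 16-bit word, and the struct and function names, are assumptions of this sketch.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool    ddsel[4];       /* ddsel[0] stays on-chip; ddsel[1..3] go to the board */
    uint8_t ddaddr;         /* DDAddr[8-15]                                        */
} DBusDecode;

DBusDecode decode_dbus_address(uint16_t addr_reg, uint8_t backpanel_bits6)
{
    DBusDecode out = {0};

    uint8_t high6 = (addr_reg >> 10) & 0x3F;   /* high-order 6 bits          */
    uint8_t sel2  = (addr_reg >> 8)  & 0x03;   /* next 2 bits                */
    out.ddaddr    =  addr_reg        & 0xFF;   /* low 8 bits, always exported */

    /* The decoder on the 2 select bits is enabled only on an exact match of
     * the high-order 6 bits against the backpanel-strapped value. */
    if (high6 == (backpanel_bits6 & 0x3F))
        out.ddsel[sel2] = true;

    return out;
}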
2.5 Shared/Owner Merging
The following signals come separately from each requester ( i IN [0..4) ):
DShared[i] - Input - Indicates whether requester i holds a copy of the contents of a memory location.
The following signals are exchanged on the system backpanel ( j IN [0..8) ):
BOwner - Input/Output (Wire-OR) - Indicates whether any requester in the system claims ownership of a memory location.
BShared[j] - Input/Output - Indicates whether any requester attached to arbiter j holds a copy of the contents of a memory location.
The following signals go in common to all requesters:
DOwner - Input - Indicates whether any requester attached to this arbiter claims ownership of a memory location.
SysShared - Output - A signal computed identically by all arbiters, indicating whether any requester attached to any arbiter holds a copy of the contents of a memory location.
SysOwner - Output - Indicates whether any requester in the system claims ownership of a memory location.
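Ignoring pipeline delays, the merging reduces to a few ORs, as in the C sketch below. The array sizes and signal roles follow the pin lists above; the struct and function names are illustrative.

#include <stdbool.h>

#define LOCAL_REQUESTERS 4      /* DShared[i], i IN [0..4)  */
#define NUM_BOARDS       8      /* BShared[j], j IN [0..8)  */

typedef struct {
    bool sys_shared;            /* SysShared, identical on every arbiter    */
    bool sys_owner;             /* SysOwner                                 */
    bool bshared_out;           /* value this arbiter drives on its BShared */
    bool bowner_out;            /* value this arbiter contributes to BOwner */
} MergeResult;

MergeResult merge_shared_owner(const bool dshared[LOCAL_REQUESTERS],
                               bool downer,
                               const bool bshared[NUM_BOARDS],
                               bool bowner)
{
    MergeResult r = {0};

    /* This board is "shared" if any local requester holds a copy. */
    for (int i = 0; i < LOCAL_REQUESTERS; i++)
        r.bshared_out |= dshared[i];

    /* Any local ownership claim goes onto the wire-OR BOwner line. */
    r.bowner_out = downer;

    /* System-wide results, computed identically by all arbiters. */
    for (int j = 0; j < NUM_BOARDS; j++)
        r.sys_shared |= bshared[j];
    r.sys_owner = bowner;       /* the resolved wire-OR of all BOwner drives */

    return r;
}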
2.6 Overhead
Vdd
Gnd
Clock
3. Protocols
During system initialization, the system DBus controller must load into each arbiter its (unique) board position within the round-robin arbitration scheme, as well as the major priority level and packet length associated with both kinds of request for each of its eight request ports.
4. Arbitration Pipeline Stages
A modest convention: A flipflop belongs to the pipeline stage of its output, which is one larger than that of its input. (This convention doesn't handle feedback very well.)
0: Requesting devices i on board j send requests on the Req[i] wires and priority changes on the Raise[i] wires to arbiter j.
1: This stage is implemented separately for each requester i. Its inputs are Req[i] and Raise[i] from stage 0, PreGnt and PreGntH[i] from stage 3, and Ack[i] and AckH[i] from stage 5, as well as the request counters C (total count, IN [0..3]) and H (high-priority count, IN [0..3], H <= C). Its outputs are new values for C and H, and a best priority level for each requester; a sketch of this bookkeeping follows the stage list.
2: At this stage the separate requests i in arbiter j are combined to form a common request. Its inputs are the best priority level for each requester and a pipeline bypass from stage 4 that masks a request about to be granted. Its outputs are a three-bit major priority level, MinP, and a one-bit round-robin flag, Ahead.
3: All arbiters exchange priority information over the backpanel using MinP[j], Ahead[j], and LastCyc (piped back from stage 5 or 8 of some earlier request).
Each arbiter computes the identity of its own best requester, taking both major and round-robin minor priority into account.
4: Each arbiter decides whether it made the winning request. The major priority of the winning request is computed identically on all arbiters. The device rover pointer for the winning major priority level is reset in all arbiters except the winner.
Each arbiter sends DGrant to at most one device on its board.
5: The winning arbiter communicates BGrant to all devices on its board. It asserts LastCyc true if this is a two-cycle packet. The board rover pointer for the winning priority level is recomputed identically in all arbiters. The device rover pointer for the winning priority level is recomputed in the winning arbiter.
6: The device that saw DGrant two cycles ago and BGrant one cycle ago becomes the system grantee device. It drives its first data word onto its board's local outgoing DynaBus.
7: The first data word appears on the backpanel DynaBus.
8: The first data word appears on the incoming local DynaBus of every board. The arbiter holding the system grant asserts LastCyc true if this is a five-cycle packet.
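The stage-1 bookkeeping for one request port might look like the following C sketch. The interplay of PreGnt/PreGntH with Ack/AckH is simplified here to a single acknowledge per granted request; that simplification, the per-cycle ordering, and all names are assumptions of this sketch.

#include <stdbool.h>

typedef struct {
    int c;                          /* pending requests, IN [0..3]      */
    int h;                          /* pending high-kind requests, <= c */
} PortCounters;

void stage1_step(PortCounters *p,
                 bool req, bool raise,        /* from stage 0                   */
                 bool ack, bool ack_high,     /* a granted request retires here */
                 int low_priority, int high_priority,   /* from the scan path   */
                 int *best_priority)          /* 7 = port not requesting        */
{
    /* One Req cycle adds one pending (initially low-kind) request. */
    if (req && p->c < 3)
        p->c++;

    /* One Raise cycle promotes one pending request from low to high kind;
     * it has no effect if every pending request is already high. */
    if (raise && p->h < p->c)
        p->h++;

    /* A granted request is retired; a high-kind grant also retires one
     * high-kind credit. */
    if (ack && p->c > 0) {
        p->c--;
        if (ack_high && p->h > 0)
            p->h--;
    }

    /* Best (numerically lowest) priority among the kinds still pending. */
    int best = 7;
    if (p->h > 0 && high_priority < best)
        best = high_priority;
    if (p->c > p->h && low_priority < best)
        best = low_priority;
    *best_priority = best;
}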
5. Unresolved Issues
The DBus is still a mess.
ChangeLog
December 2, 1986 4:22:02 pm PST, EMM
First revision.
McCreight December 22, 1986 10:42:16 am PST
Added owner, shared.