The Dragon Arbiter
Ed McCreight
Dragon-87-xx Written December 2, 1986 Revised March 5, 1987
© Copyright 1987 Xerox Corporation. All rights reserved.
Abstract: The Dragon arbiter is one of a set of identical chips that collectively decide how to grant time-multiplexed control of Dragon's DynaBus to requesting devices. This document is its interface specification.
Keywords: arbiter, arbitration, scheduling
FileName: /Indigo/Dragon/Documentation/Arbiter25.tioga, .interpress
XEROX  Xerox Corporation
   Palo Alto Research Center
   3333 Coyote Hill Road
   Palo Alto, California 94304



Dragon Project - For Internal Xerox Use Only
Contents
1. General Description
2. Pin Description
3. Protocols
4. Arbitration Pipeline Stages
5. Unresolved Issues
ChangeLog
1. General Description
The Dragon arbiter chip is part of a Dragon computer system. There is one arbiter chip on each Dragon board, located physically near the board-to-backpanel interface. The main function of this set of identical chips is to control time-multiplexed access to the DynaBus, Dragon's main data bus.
The arbiter implements six discrete priority levels. DynaBus access is always granted to a requester requesting at the highest (most urgent) pending priority. Within each major priority level there is the concept of precedence, assigned on a nearly round-robin basis: within each arbiter, and separately at each priority level, the last grantee device has lowest precedence. Likewise, at each priority level, the last grantee arbiter has lowest precedence.
Each arbiter has eight device request ports. Each of these ports can have up to three requests pending, of either of two types. Associated with each type is a priority level [0..7) and a packet length (2 or 5 cycles). These sixteen parameter pairs (8 requesters x 2 types) are loaded into the arbiter during system initialization via the DBus scan path.
Smaller priority numbers denote more urgent priority. This convention assigns the priority value 7, corresponding to an unconnected, passively pulled-up port, the least urgency. In fact, 7 means NoRequest.
The two types of request at a single port are called L and H. The priority number assigned to type L must be no smaller than that assigned to type H, because within a single request port a type H request is preferred over a type L request independent of the priority numbers assigned to them.
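As a rough illustration of this parameterization, here is a minimal C sketch of the per-port configuration and the H-over-L preference rule. All names are hypothetical and do not come from ArbiterImpl.mesa.

/* Hypothetical C model of one request port's scan-path configuration.
   Priorities are in [0..7); 7 would mean NoRequest. */
#include <stdbool.h>
#include <stddef.h>

enum { NUM_PORTS = 8, PRIORITY_NOREQ = 7 };

typedef struct {
    unsigned priority;  /* [0..7): smaller is more urgent */
    unsigned length;    /* packet length in cycles: 2 or 5 */
} ReqType;

typedef struct {
    ReqType l;  /* type L; l.priority >= h.priority by convention */
    ReqType h;  /* type H */
} PortConfig;

/* Within a single port, a pending type H request is preferred over a
   pending type L request, independent of their priority numbers. */
static const ReqType *bestTypeAtPort(const PortConfig *cfg,
                                     unsigned pendingL, unsigned pendingH)
{
    if (pendingH > 0) return &cfg->h;
    if (pendingL > 0) return &cfg->l;
    return NULL;  /* no request pending at this port */
}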
There can be at most eight arbiters in a Dragon system. These arbiters exchange priority information across the system backpanel. Each arbiter drives one three-bit port and receives the other seven. A fixed pinout for output and input ports is implemented by rotating the inter-arbiter wiring, a nice trick suggested by J. Gastinel.
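The effect of the rotated wiring can be summarized in one line of arithmetic. The sketch below assumes a rotation by backpanel slot number; the actual rotation scheme is not specified in this document.

/* Hypothetical model of the rotated inter-arbiter wiring: with a fixed
   chip pinout, input port p of the arbiter in slot j is assumed to carry
   the ArbReq value driven by the arbiter in slot (j + p) mod 8. */
enum { NUM_ARBITERS = 8 };

static unsigned sourceSlot(unsigned mySlot, unsigned inputPort)
{
    return (mySlot + inputPort) % NUM_ARBITERS;
}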
The arbiter also computes the system-wide Shared and Owner signals, and serves as the board's DBus address decoder.
2. Pin Description
2.1 Request ports
Each of the eight independent DRQ ports consists of three wires: two input wires and a single output wire:
DRQ[i].DReq - Input, 2 wires - These wires are encoded as follows:
0 => seize, meaning assert system-wide Hold. This prevents any new grant to a requester whose request priority number is greater than 1.
1 => reqL, meaning increase the number of L-type requests for this port by one.
2 => reqH, meaning increase the number of H-type requests for this port by one.
3 => release, meaning release system-wide Hold (if this port was Holding; otherwise it has no effect).
These wires are passively pulled high by the arbiter chip, so an unconnected request port asserts "release".
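A minimal C sketch of this encoding and its effect on a port's pending state follows, continuing the hypothetical names from the sketch in section 1; the counter limit of three reflects the three pending requests mentioned there.

/* Hypothetical decode of the two DRQ[i].DReq wires. Because the wires are
   pulled high, an unconnected port continuously presents code 3, release. */
typedef enum { SEIZE = 0, REQ_L = 1, REQ_H = 2, RELEASE = 3 } DReqCode;

typedef struct {
    unsigned pendingL;  /* pending type L requests */
    unsigned pendingH;  /* pending type H requests */
    bool     holding;   /* this port is asserting system-wide Hold */
} PortState;

static void decodeDReq(PortState *p, DReqCode code)
{
    switch (code) {
    case SEIZE:   p->holding = true;  break;  /* assert system-wide Hold */
    case REQ_L:   if (p->pendingL < 3) p->pendingL++; break;
    case REQ_H:   if (p->pendingH < 3) p->pendingH++; break;
    case RELEASE: p->holding = false; break;  /* no effect unless Holding */
    }
}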
DRQ[i].DGrant - Output - Asserted high by the arbiter to inform a requester that it is receiving the system grant. This wire remains high for the duration of the bus grant. The arbiter can make back-to-back grants to the same requester.
2.2 Common grant bus
The following wires are available in common to every requester. Not every requester needs both, or even one.
DHiPGrant - Output - One cycle before the first cycle of a grant, this wire is high if the arbiter is responding to a type H request from the grantee device, and low otherwise. Its state at other times is undefined.
DLongGrant - Output - One cycle before the first cycle of a grant, this wire is high if the arbiter is responding to a long-packet (5-cycle) request from the grantee device, and low otherwise (2-cycle). Its state at other times is undefined.
2.3 Inter-Arbiter negotiation bus
Requesters don't need to know about this. There are eight three-bit buses on the system backpanel across which the arbiters exchange priority information. These wires are passively pulled high on the backpanel:
ArbReq[j] - Input/Output - A three-bit field that carries the minimum (best) priority level at which arbiter j has any device requesting. If no device is requesting service from arbiter j, the value of this field is 7, NoRequest. If any of the arbiter's requesters is asserting Hold, it is as if that requester is also making a request at priority value 2, Hold.
There is also a single wire to signal the last cycle of a transaction.
LastCyc - Input/Output - Asserted low by the arbiter (if any) that is currently holding the system grant, to indicate that its transaction has not yet received as many cycles as it needs.
2.4 DBus control
The following signal pairs are buffered and pipeline-delayed from the backpanel to the board's devices:
DBClock  -> DDClock
DBDataIn -> DDDataIn
DBExecute -> DDExecute
DBFreeze -> DDFreeze
DBnReset -> DDnReset
One signal goes the other way if the arbiter is "selected":
DBDataOut <- DDDataOut
The arbiter receives but does not pass on one extra backpanel signal:
DBAddr - When this signal is asserted, all arbiters "deselect", and DBDataIn and DBClock are used to shift an address into a 16-bit device address register. When DBAddr is de-asserted, the arbiter compares the high-order 6 bits of this address register against six bits read from the backpanel. If there is an exact match, then a decoder on the next 2 bits is enabled, and three of these four select wires are made available on the board:
DDSel1
DDSel2
DDSel3
The remaining eight bits of the address register are also wired to the board:
DDAddr[8-15]
The arbiter behaves as DBus devices 0-255 on its board, in that it is enabled only by the invisible DDSel0 and pays no attention to DDAddr[8-15]. As part of its scan path, it reads three arbiter pins:
DDMemo[0-2]
each of which can be connected to any of the eight DDAddr[8-15] pins, so simple PC-board interconnection can communicate up to 3*log2(8) = 9 bits of information to the system DBus controller.
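As a sketch of the decode just described (hypothetical names; the bit numbering is assumed, with bit 15 most significant):

/* Hypothetical model of the DBus address decode. On a 6-bit board-address
   match, the next 2 bits select one of DDSel0..DDSel3 (DDSel0 stays inside
   the arbiter), and the low 8 bits appear on the board as DDAddr[8-15]. */
typedef struct {
    bool     selected;  /* high-order 6 bits matched the backpanel bits */
    unsigned sel;       /* 0..3; 0 enables the arbiter itself (DDSel0) */
    unsigned ddAddr;    /* DDAddr[8-15]; the arbiter itself ignores these */
} DBusDecode;

static DBusDecode decodeDBusAddr(unsigned addrReg16, unsigned boardAddr6)
{
    DBusDecode d;
    d.selected = ((addrReg16 >> 10) & 0x3F) == (boardAddr6 & 0x3F);
    d.sel      = (addrReg16 >> 8) & 0x3;
    d.ddAddr   = addrReg16 & 0xFF;
    return d;
}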
2.5 Shared/Owner Merging
The following signals come separately from each requester ( i IN [0..4) ):
DShared[i] - Input - Indicates whether requester i holds a copy of the contents of a memory location.
The following signals are exchanged on the system backpanel ( j IN [0..8) ):
BOwner - Input/Output (Wire-OR) - Indicates whether any requester in the system claims ownership of a memory location.
BShared[j] - Input/Output - Indicates whether any requester attached to arbiter j holds a copy of the contents of a memory location.
The following signals go in common to all requesters:
DOwner - Input - Indicates whether any requester attached to this arbiter claims ownership of a memory location.
SysShared - Output - A signal computed identically by all arbiters, indicating whether any requester attached to any arbiter holds a copy of the contents of a memory location.
SysOwner - Output - Indicates whether any requester in the system claims ownership of a memory location.
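The merging itself is simple OR logic. A minimal sketch (hypothetical names, reusing NUM_ARBITERS from the earlier sketch):

/* Hypothetical model of shared/owner merging. Each arbiter ORs its local
   DShared inputs onto BShared[j]; SysShared is the OR across all arbiters
   and is computed identically everywhere. Ownership merges analogously
   through the wire-OR BOwner line. */
enum { NUM_SHARED_PORTS = 4 };  /* i IN [0..4), per the pin list above */

static bool boardShared(const bool dShared[NUM_SHARED_PORTS])
{
    bool any = false;
    for (unsigned i = 0; i < NUM_SHARED_PORTS; i++) any |= dShared[i];
    return any;  /* driven onto BShared[j] */
}

static bool sysShared(const bool bShared[NUM_ARBITERS])
{
    bool any = false;
    for (unsigned j = 0; j < NUM_ARBITERS; j++) any |= bShared[j];
    return any;  /* identical in all arbiters */
}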
2.6 Overhead
ArbIndex (3 bits), from the backpanel, I suppose.
Vdd
Gnd
Clock
3. Protocols
During system initialization, the system DBus controller must load into each arbiter the priority level and packet length associated with both types of request for each of its eight request ports.
4. Arbitration Pipeline Stages
It is helpful here to consult ArbiterImpl.mesa, the Rosemary RTL simulation of the Arbiter. This can be found in [Indigo]<Dragon>Top>Arbiter24.df. Italics are used to indicate pipeline registers.
0: drqIn[i] in arbiter j is loaded from the DRQ[j][i].DReq wire pair, which carries a request from arbiter j's requester i.
1: At this stage, each requester i is handled independently. The request counters, one for each type t, drqCtrs[i][t], are updated with new counts that reflect the contents of drqIn[i] and grants from stage 4. The hold flipflop, drqHold[i], is updated. The best request for each port, together with its type and length, is registered in drqPrior1[i] and drqInfo1[i].
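A sketch of this update for one port, reusing the hypothetical PortState and decodeDReq from section 2.1 (the grant inputs stand in for the stage-4 feedback):

/* Hypothetical stage-1 update for port i: merge in the newly arrived
   request code, then retire one request of the granted type if stage 4
   granted this port. */
static void updatePortState(PortState *p, DReqCode in,
                            bool granted, bool grantWasH)
{
    decodeDReq(p, in);
    if (granted) {
        if (grantWasH && p->pendingH > 0)       p->pendingH--;
        else if (!grantWasH && p->pendingL > 0) p->pendingL--;
    }
}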
2: At this stage the separate requests i of arbiter j are combined to form a common request, arbReqOut[j].
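A sketch of this combination (hypothetical names; Hold is folded in at priority value 2, as described in section 2.3):

/* Hypothetical stage-2 combination: the outgoing ArbReq value is the
   minimum (most urgent) priority over all pending port requests, treating
   a Holding port as also requesting at priority 2 (Hold); 7 = NoRequest. */
enum { PRIORITY_HOLD = 2 };

static unsigned arbReqOut(const PortConfig cfg[NUM_PORTS],
                          const PortState st[NUM_PORTS])
{
    unsigned best = PRIORITY_NOREQ;
    for (unsigned i = 0; i < NUM_PORTS; i++) {
        const ReqType *t = bestTypeAtPort(&cfg[i], st[i].pendingL, st[i].pendingH);
        if (t != NULL && t->priority < best) best = t->priority;
        if (st[i].holding && PRIORITY_HOLD < best) best = PRIORITY_HOLD;
    }
    return best;
}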
3: All arbiters exchange priority information over the backpanel using the ArbReq and LastCyc wires, and these are recorded in arbReqIn and lastCycIn, which are identical in all arbiters.
Each arbiter computes the identity of its own best requester, taking both priority and precedence into account. This requester is coded in unary form in bestDev, and its type and length are recorded in bdInfo3[j].
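A sketch of the precedence computation (hypothetical; devRover[p] is assumed to record the last grantee device at priority p, so that device scans last):

/* Hypothetical stage-3 selection: among ports pending at this arbiter's
   most urgent priority, pick the first port after the per-priority rover,
   giving the last grantee lowest precedence. Returns -1 if nothing pends. */
static int bestDevice(const PortConfig cfg[NUM_PORTS],
                      const PortState st[NUM_PORTS],
                      const unsigned devRover[8])
{
    unsigned best = PRIORITY_NOREQ;
    for (unsigned i = 0; i < NUM_PORTS; i++) {
        const ReqType *t = bestTypeAtPort(&cfg[i], st[i].pendingL, st[i].pendingH);
        if (t != NULL && t->priority < best) best = t->priority;
    }
    if (best == PRIORITY_NOREQ) return -1;

    for (unsigned k = 1; k <= NUM_PORTS; k++) {
        unsigned i = (devRover[best] + k) % NUM_PORTS;
        const ReqType *t = bestTypeAtPort(&cfg[i], st[i].pendingL, st[i].pendingH);
        if (t != NULL && t->priority == best) return (int)i;
    }
    return -1;  /* not reached when best < PRIORITY_NOREQ */
}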
4: The information in bdInfo3[j] is sent to all requesters attached to arbiter j on DHiPGrant[j] and DLongGrant[j]. Requesters can use this information to set up multiplexers in anticipation of receiving grant in the next cycle.
Each arbiter decides first whether a grant will be made at all, globalGrantStart, and second whether the grant will be made to itself, localGrantStart. The priority of the winning request is computed in all arbiters, and the winning arbiter computes dGrant, a unary encoding of the grantee.
The unary-coded representation of the best local device from the previous stage, bestDev, is re-coded in binary and stored in bestDevCoded.
The number of cycles remaining in the current local grant, localBusCyclesLeft, is computed, and LastCyc is asserted FALSE if necessary.
5: The winning arbiter communicates dGrant as DRQ[j][i].DGrant to all devices on its board.
The board rover pointer, arbRovers[p], for the winning priority level is recomputed identically in all arbiters.
The device rover pointer, devRovers[p], for the winning priority level is recomputed only in the winning arbiter.
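A sketch of the two rover updates (hypothetical names):

/* Hypothetical stage-5 rover updates after a grant at priority p. The
   board rover is recomputed identically in all arbiters; the device rover
   only in the winning arbiter, so the last grantee loses precedence. */
static void updateRovers(unsigned p, unsigned winningArb, unsigned myIndex,
                         int grantedDev,
                         unsigned arbRover[8], unsigned devRover[8])
{
    arbRover[p] = winningArb;
    if (winningArb == myIndex && grantedDev >= 0)
        devRover[p] = (unsigned)grantedDev;
}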
5. Unresolved Issues
The DBus isn't right yet.
ChangeLog
McCreight December 2, 1986 4:22:02 pm PST
First version.
McCreight December 22, 1986 10:42:16 am PST
Added owner, shared.
McCreight March 5, 1987 6:43:53 pm PST
Revised to reflect RTL simulation.