EFFICIENT STRUCTURAL DESIGN FOR INTEGRATED CIRCUITS
Richard Barth
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, United States of America
ABSTRACT
Integrated circuit design methodologies have been described either extremely abstractly or as case histories. Abstract descriptions have focused on the top-down or bottom-up methodologies. In this paper we present a methodology which compromises between the top-down and bottom-up methodologies and analyze a case history.
INTRODUCTION
An integrated circuit design methodology transforms vague design requirements into a collection of patterned layers which comprise the fabrication tooling, a set of test patterns which verify the design, and documentation which describes the operation of the circuit (Fig. 1).
Fig. 1 Methodology Input and Output
Integrated circuit design requirements can be defined as a set of constraints. Some are technological constraints, which limit what is possible; others are application constraints, which specify what is wanted. One definition of a design methodology, then, is the process of finding a design which simultaneously satisfies all of the constraints. This is a very fuzzy process; most often the designer is initially presented with an overconstrained problem, or at least one which has no obvious solution, and must decide which constraints to relax until a solution is found.
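For illustration only (the paper treats this search informally, and the propose, priority, and relaxed interfaces below are hypothetical), the relaxation loop can be sketched as:

    # Illustrative sketch of design as constraint relaxation. The
    # propose() function and the priority/relaxed() fields are
    # hypothetical, not part of any tool described in this paper.
    def find_design(constraints, propose, satisfies):
        while True:
            candidate = propose(constraints)
            if candidate is not None and all(satisfies(candidate, c)
                                             for c in constraints):
                return candidate, constraints
            # Overconstrained: relax the least essential constraint.
            weakest = min(constraints, key=lambda c: c.priority)
            constraints = [c.relaxed() if c is weakest else c
                           for c in constraints]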
In general, existing integrated circuit design methodology practice focuses upon the top-down or bottom-up approaches. Actual leading-edge design practice does not conform to either of these methodologies.
The existing integrated circuit design methodology literature tends towards two extremes. One extreme is the very abstract description, in which no details are presented and none of the realities of integrated circuit design show through. The other extreme is the case history, in which the details of a specific design dominate the discussion of methodology.
This paper fills the gap in the methodology literature and describes a methodology which avoids the deficiencies of both the top-down and bottom-up methodologies.
PREVIOUS METHODOLOGIES
1. Bottom-Up
The bottom-up methodology requires a minimal set of tools. Schematic capture, a leaf cell library, and a structural simulator are all that is required. This minimal set of tools is all that is supplied by many ASIC vendors.
The bottom-up methodology consists of starting with the leaf cells, then forming a hierarchy by composing the leaf cells into larger cells, then, recursively, forming still larger cells out of the more primitive compositions. For very simple designs one need not even "go up"; a flat description of the interconnection of all of the leaf cells is sufficient. The entire design has no formal description until the top is reached, so no tool can be applied to the entire design until the whole structural hierarchy has been entered. Since no formal verification has been performed along the way, it is likely that a structure of any size will contain a significant number of errors. Unfortunately, some of these errors are likely to be mistakes in the original specifications. This can have a disastrous effect on the design, possibly requiring that the whole thing be thrown away and the design started anew.
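For illustration, a minimal sketch (in Python, standing in for no particular capture system) of bottom-up composition as a hierarchy of cell definitions:

    from dataclasses import dataclass, field

    # A cell is either a leaf or a composition of previously defined
    # cells; instances map each sub-cell's ports onto local nets.
    @dataclass
    class Cell:
        name: str
        ports: list                                    # external connections
        instances: list = field(default_factory=list)  # (sub-cell, port->net)

    # Leaf cells come from the library.
    nand2 = Cell("nand2", ["a", "b", "out"])
    inv   = Cell("inv",   ["in", "out"])

    # Compose leaves into a larger cell; still larger cells compose
    # such cells recursively, and only the top describes the design.
    and2 = Cell("and2", ["a", "b", "out"],
                [(nand2, {"a": "a", "b": "b", "out": "n1"}),
                 (inv,   {"in": "n1", "out": "out"})])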
2. Top-Down
In an effort to avoid the disaster of completing a bottom-up design, only to discover it has a fundamental flaw, the top-down methodology was invented. Classically, this methodology consists of producing a formal description, evaluating that description against the requirements, and then refining the description, a portion at a time, until primitive elements are reached.
There are several problems with this methodology. Since the designer is attempting to avoid regressing to a previous stage of abstraction, there is a tendency to overoptimize the design. Furthermore, the description of the design at the upper levels, while being more compact than the lower levels, is nonetheless redundant information. During refinement, the designer is reexpressing the same information, albeit with more detail. Both of these drawbacks lead to a larger source.
The redundancy has tool implications as well. The description at one level generally does not use the same semantic base as the description at another level. One would like to have tools for checking consistency between levels, but the lack of a consistent semantic base makes the implementation of such tools difficult.
Another problem is that the designer is awash in a sea of possibilities due to the lack of technological constraints. Many possibilities, which could have been discarded immediately because they are technologically infeasible or simply more difficult, must be considered by the designer.
PREVIOUS LITERATURE
In this section we review a representative sample of the existing literature and indicate the contributions of the current work.
Weste [1] includes a flow graph of current and ideal design approaches and an enumeration of automatic and manual synthesis mechanisms. Isolated from this abstract discussion is a series of CMOS system case studies. One such case study uses top-down in the sense of hierarchical divide and conquer rather than our definition of top-down as successive refinement.
Many people have realized that design is always a mixture of top-down and bottom-up methods. Rubin [2] gives an abstract description of "refinement" which proceeds "from the abstract to the specific": "Although design rarely proceeds strictly from the abstract to the specific, the refinement notion is useful in describing any design process. In actuality, design proceeds at many levels, moving back and forth to refine details and concepts, and their interfaces." There is a good discussion of design as the process of constraint satisfaction, which further describes top-down vs. bottom-up and indicates they are almost always mixed in a real design.
In this paper we bring together the abstract and concrete discussions of Weste and add a case history to the abstract discussion of Rubin.
Trimberger et al. [3] describe a design methodology focused upon custom integrated circuits. Surprisingly, timing analysis tools are ignored; the focus is upon functionality and the space aspects of layout. They make the assumption that the physical and logical hierarchies match, just as we do. We explore the implications of this restriction.
Trimberger [4] suggests that a complete description of the design in a behavioral form is required, although he tempers this with the suggestion that large variances from this design flow are normal.
We claim that behavioral descriptions of complicated implementations are not worth the effort; with sufficiently abstract structural descriptions, and sufficiently efficient means of simulating them, it is better to go straight to the structural description.
Mead and Conway [5] discuss top-down design with the caveat that the architect must have a full understanding of the lower levels of the hierarchy. However, their discussion hardly mentions tools to support a methodology. In the decade since this work was published, tools have become a central part of any methodology, while the particular implementation technology has left center stage. Our example emphasizes this central role of tools.
CRITERIA
Niessen [6] describes criteria for a VLSI design methodology. These include:
A. It must provide complexity control such that reasonable confidence in the correctness of designed circuits is possible.
B. The method must comprise the whole design trajectory.
C. An efficient utilization of technological possibilities should be provided.
D. A considerable increase in design productivity should result.
E. It must enable the creation of efficient CAD tools.
He defines the primary concepts of a design methodology to be abstraction, repetition, and the use of past experience. Here we describe much more detailed criteria for a good VLSI design methodology.
The definition of a good design methodology depends upon a large number of contextual factors. The complexity of the design, the background of the design engineer, the history of the design group in which he is working, the cost and performance goals of the design, and the available computing resources are just a few of the factors which must be considered. Describing a good methodology is further complicated by the fact that every design has a different detailed design methodology. Each individual design poses different problems which need to be solved in a unique manner. Even though composing a methodology for any particular design is so context dependent, a number of criteria can be stated that are reasonably universal.
Minimal source leads to fast design times. If the designer has to reenter some design information several times then the potential for error grows and the cost of changing the design, even in the absence of errors, also grows. The source is minimal when a design is entered at the highest level of abstraction consistent with available automatic synthesis techniques, and no higher. The level of abstraction should be high because more abstract descriptions require less source. It should be no higher than automated synthesis techniques can handle, because the designer would then have to manually translate to the level that can be handled by automatic synthesis, leading to redundant source.
Technology constraints affect a design at least as much as the behavior specified by the design requirements. Frequently the intended behavior is mostly defined by what is possible, rather than what is required. A good methodology relies on the intuitive abilities of the designer to find a simultaneous solution to the technological and application constraints. Humans perform such tasks well; to the greatest extent possible the design aid system should take care of everything else, especially keeping track of all of the details, which is a task that computers perform well. Early evaluation of design feasibility, by determining the fit of the solution to the set of constraints, is important.
Fast turnaround for small changes is just as important to hardware designers as it is to programmers. Since it is impossible to enter a design completely correctly the first time, it is important to make it easy to go through the change and evaluate cycle.
During the foreseeable future every significant design will have some amount of structural description before reaching elements that can be automatically transformed into artifacts. Thus pure behavioral descriptions are not useful, and a good set of tools must have a good structural description mechanism. More strongly, most design is currently done by structural decomposition. As logic synthesis improves this may become less true, but it is certainly true now.
Even though most design is structural, it is still useful to describe some portions of a design behaviorally. Finite state machine synthesis is sufficiently mature to be considered a primitive in the description. Some portions of a design, such as memories, are so simple to describe behaviorally, relative to their structural descriptions, that behavioral description is clearly best during initial design. Simplified behavioral models are also useful during debugging to check consistency between abstract behavior and concrete implementation.
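For example, a behavioral RAM model is a few lines of code, while the corresponding structural description is enormous. A minimal sketch, assuming a simulator that simply calls read and write:

    # Behavioral RAM model: a sparse store in which unwritten words
    # read as zero. The words/width parameters bound address and data.
    class BehavioralRAM:
        def __init__(self, words, width):
            self.words = words
            self.mask = (1 << width) - 1
            self.store = {}
        def write(self, addr, data):
            assert 0 <= addr < self.words
            self.store[addr] = data & self.mask
        def read(self, addr):
            assert 0 <= addr < self.words
            return self.store.get(addr, 0)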
Many portions of a design are only slight variations on a basic theme. Counters are a classic example in which the carry propagate network is slightly different depending upon the number of bits. It should be possible to capture this design knowledge in a manner which allows its reuse in many contexts.
We have described criteria for evaluating the efficacy of a design methodology.
A. minimal source
B. early design evaluation
C. fast turnaround for small changes
D. good structural description capability
E. behavioral description capabilities
F. reusable design descriptions
Each of these criteria implies desires for the tool set which supports it. To have minimal source one would like the best automated synthesis tools possible. It must also be possible to extend the description mechanisms in ways specific to the current design, i.e. the tool set must be easily extended. Early design evaluation implies tools which can analyze a partial design. Fast turnaround implies that the amount of work done by the tool set, in response to a change, should be proportional to the size of the change, rather than the size of the design. Reusable descriptions imply a method for parameterizing the description mechanisms.
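To illustrate the incrementality implication, the following sketch (hypothetical structures, not our tool set) computes which cells a tool must re-analyze after a change; the work is proportional to the affected region of the hierarchy rather than the whole design:

    # changed: the cells the designer edited. parents: a map from each
    # cell to the cells that instantiate it. A change invalidates a
    # cell and everything above it in the hierarchy, and nothing else.
    def cells_to_reanalyze(changed, parents):
        work, dirty = list(changed), set()
        while work:
            cell = work.pop()
            if cell not in dirty:
                dirty.add(cell)
                work.extend(parents.get(cell, ()))
        return dirty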
A METHODOLOGY
In this section we describe the methodology of a completed design. The example is a memory controller which connects a high speed bus to a RAM bank. None of the details of the requirements for the design nor the details of the implementation are of interest here. We examine the methodology, illustrating the tools used, and the order in which they were used.
Prior to writing the requirements document for this chip, an overall system architectural specification was written. This upper level design included the structural decomposition of the system and so defined the basic timing and signaling requirements for the memory controller.
Fig. 2 Memory Controller Methodology
Fig. 2 shows the methodology. The first step is writing the chip specification as illustrated in detail in Fig. 3. This stage of design simply used a text editor.
Fig. 3 Specification
The next step in Fig. 2 is the RTL design of the chip. The RTL design process is further decomposed in Fig. 4. During this design period the technological constraints begin to be factored into the design by the designer's intuition of the space-time cost of each of the primitives.
Fig. 4 Register Transfer Level
The RTL design time is dominated by the preparation of schematics and debugging them with a simulator. The behavioral models and test programs are written in the local standard algorithmic programming language. With the completion of the RTL schematic, it is possible to verify the integrated circuit design as part of the overall system schematic, as illustrated in Fig. 5.
Fig. 5 System Simulation
This step uses the same schematic and simulation system as was used for the RTL design. The invariant checkers are written in the same manner as the RTL behavioral models.
Fig. 6 Layout
In parallel with the system simulation, the physical details of the chip implementation are filled in as shown in Fig. 6. The design is incrementally enhanced until it meets the space-time constraints of the implementation technology. The memory controller consists of a data path and a collection of random control logic. The data path requires more detailed physical design, as illustrated in Fig. 7. The data path design consists of creating a set of tiles which fit together tightly in an arrangement specified by the designer. Each of these tiles has a transistor level schematic and a corresponding layout. Critical paths through the data path are simulated with Spice to keep control of the performance constraints given in the design requirements document. Some low-level error checking is performed, in the form of structural comparison of layout and schematics as well as geometric design rule checking.
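The tile style of layout can be sketched as follows (a minimal illustration, not our layout system): the designer specifies the arrangement, and placement follows by abutment.

    # rows: the designer-specified arrangement, a list of rows, each a
    # list of tile names. sizes: tile name -> (width, height).
    def place_tiles(rows, sizes):
        placements, y = [], 0
        for row in rows:
            x = 0
            for tile in row:
                placements.append((tile, x, y))   # lower-left corner
                x += sizes[tile][0]               # abut the next tile
            y += max(sizes[t][1] for t in row)    # stack the next row
        return placements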
Once the low level details of the data path are completed, the rest of the schematic is modified to include the information necessary for the layout synthesis tools. This consists of dividing the chip into the standard cell and data path areas, defining the routing blocks which connect them together, and specifying the pad assembly and routing. After these modifications are made, the design is simulated again to ensure that the rearrangement of the schematic to include the physical synthesis information has not perturbed the functionality of the design.
Fig. 7 Data Path Layout
Layout generation is finished by comparing the layout to the schematics, as a check of the physical synthesis tools, and by a geometric design rule check, as a check of both the manually and automatically synthesized layout.
In parallel with correcting the details of the layout, the chip's timing performance is checked by a critical path timing analyzer. All feedback paths due to errors discovered by any of the checkers result in modifications to the schematics, in the case of design errors, or modifications to the synthesis tools in the case of design aid bugs. The final connectivity check shown in Fig. 6 is solely to catch bugs in the design aids. It checks that all of the rectangles emitted by the router which are supposed to intersect each other, in fact do so.
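That check reduces to axis-aligned rectangle intersection. A minimal sketch, assuming rectangles are (x1, y1, x2, y2) tuples and the router supplies the list of pairs it claims to connect:

    # Shared edges count as connected, hence the closed comparisons.
    def overlaps(a, b):
        return (a[0] <= b[2] and b[0] <= a[2] and
                a[1] <= b[3] and b[1] <= a[3])

    # intended_pairs: (rect_a, rect_b) pairs the router meant to touch.
    # Any pair returned here indicates a bug in the design aids.
    def connectivity_errors(intended_pairs):
        return [(a, b) for a, b in intended_pairs if not overlaps(a, b)]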
Finally we return to Fig. 2 where we see the design exit the portion of the design flow that we are discussing in this paper, and go on to fabrication and test. In parallel with fabrication and testing, the logical operation of the chip is documented. When the chip returns from test, the final performance numbers, and D.C. parameters, are inserted into this documentation.
EVALUATION
In this section we evaluate the methodology and tools of the example according to the criteria established earlier.
1. Minimal Source
A single source captures the entire logical and physical description. This is good because it reduces the amount of source; for example, the data path is described by a single diagram which defines the logic, the floor plan, and the layout generation. It is bad because the schematics are distorted by the requirement that they implement the physical as well as the logical hierarchy.
The design makes good use of available synthesis systems, such as finite state machine and data path generators, so it is captured at the highest level of available automated synthesis. It also relies heavily upon the parameterized logic library, so that there is no repetition of common low-level component descriptions.
There is no separate behavioral model of the chip. This reduces the amount of source but eliminates the ability to catch errors by checking for inconsistencies between the representations.
2. Early Design Evaluation
This criterion is satisfied to only a limited extent. Our framework allows better evaluation tools, but our available manpower does not permit their development. Currently there are no early warning tools; rather, we rely on the analysis tools, which expect a complete description. Since the description is reasonably abstract, the delay to evaluation is somewhat mitigated.
We would really like to extend our ability to evaluate correctness to the extent of embedding the design in its intended system prior to fabrication. This would allow us to examine the state evolution of the circuit in great detail. Systems such as ASIC emulation boxes [7] hold out hope in this area.
3. Fast Turnaround for Small Changes
The major stumbling block to satisfying this criterion is usually the need to manually propagate changes through several different design representations. This stumbling block has been circumvented by relying on a single irredundant source. Unfortunately, the underlying tool set is only partially incremental, so some of the evaluation stages, such as simulation, take time proportional to the size of the design rather than the size of the change. However, the existing tool set minimizes the amount of time the designer must spend, even at the expense of machine cycles, which are relatively cheap in our environment.
All tools attempt to communicate results to the designer in terms of the original source. This reduces the amount of time spent by the designer trying to understand the results rather than creating solutions.
4. Structural Description
The design uses a mixture of text and graphics to describe structure [8]. This extensible ability to describe structure is the most important contributor to allowing minimal source.
Our schematic capture system follows the paradigm of drawing pictures in a stylized manner, and then analyzing the picture to produce a net list. This allows new graphical conventions to be introduced without changing the interactive editor. A general mechanism for intermixing code written in a standard programming language with pictures that look much like standard schematics, but which can be parameterized arbitrarily, considerably enhances the power of our schematic capture package.
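To illustrate the flavor of such parameterized pictures (in Python here; the actual package uses the local programming language [8]), a generator plays the role of a schematic whose extent depends on a parameter:

    # Emit the netlist an n-stage inverter chain drawing would denote:
    # (cell, port->net) instances over nets n0 .. n<n>.
    def inverter_chain(n):
        return [("inv", {"in": f"n{i}", "out": f"n{i+1}"})
                for i in range(n)]

A three-stage and a thirty-stage chain share one source; changing the parameter regenerates the netlist.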
The extensible graphical conventions allow new tools to be easily introduced into the system. For example, the memory controller uses classical finite state machines for its primary control. Rather than drawing the machines and hand-translating them into logic, a tool is available that directly parses the machines drawn as a collection of states and transitions.
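A sketch of what such a tool consumes once the drawing is parsed (the state and condition names are illustrative): a set of states with guarded transitions, and the next-state function they denote.

    # (condition, next-state) pairs for each state.
    fsm = {
        "IDLE":  [("request", "GRANT")],
        "GRANT": [("done", "IDLE"), ("error", "ABORT")],
        "ABORT": [("reset", "IDLE")],
    }

    # asserted: the set of condition names true this cycle.
    def next_state(state, asserted):
        for condition, target in fsm[state]:
            if condition in asserted:
                return target
        return state        # no transition taken: hold the current state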
5. Behavioral Description
Utilizing a standard programming language for behavioral modeling has both good points and bad points. It is good because another language does not have to be introduced into the environment. It is bad because the behavior is described with a sequential programming model, whereas hardware is inherently parallel.
Since no large behavioral models were written for our example, there is little point in providing a sophisticated behavioral modeling capability. We do not want large behavioral models for the design, since they would be redundant description which has to be prepared, debugged, and maintained.
6. Reusable Design
Reusing designs is not a new idea. ASIC manufacturers do this through the macro and megacell concepts, and a few have started to bring out datapath and FSM generators. However, macros and megacells both embody fixed netlists, in contrast to the flexible netlist generators in our approach.
The primitives which designers use in our system are not just the fixed 2-, 3-, or 4-input gates typical of most logic libraries. Such simple primitives are available, but the power of the schematic capture system is used to create a highly parameterized logic library that contains everything from a simple two-input nand gate to an up/down counter parameterized by the number of bits. The underlying implementation of the counter builds a minimal implementation regardless of the number of bits specified, by constructing an optimal carry lookahead network and inserting buffers wherever they are necessary.
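The flavor of such a generator can be sketched as follows (a minimal illustration, not our library's implementation; a real generator shares subterms of the lookahead network and inserts the necessary buffers):

    # Bit i of an up-counter toggles when bits 0..i-1 are all 1, so the
    # toggle-enable (lookahead) network is sized to the bit count.
    def up_counter(n):
        instances = []
        for i in range(n):
            if i == 0:
                enable = "one"                      # constant true
            else:
                enable = f"en{i}"
                instances.append(("and_tree",
                                  {"ins": [f"q{j}" for j in range(i)],
                                   "out": enable}))
            instances.append(("t_ff", {"t": enable, "q": f"q{i}"}))
        return instances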
The parameterization allows a design to be reused in multiple contexts. For example, the memory controller has several pins devoted to a serial initialization and debugging bus. The interface logic for this bus was designed once for an entire family of chips, of which the memory controller is only one member, even though details, such as the device id, are different for each member of the family.
CONCLUSION
The top-down methodology doesn't work well because it requires more description than is necessary to fabricate manufacturing tooling, and it delays technology constraint satisfaction too long. The bottom-up methodology doesn't work well because it requires manual preparation of many low level details and delays application constraint satisfaction too long.
In this paper we have described a methodology that compromises between the top-down and bottom-up methodologies so that both the application and technology constraints can be satisfied while the designer is preparing a minimal source description. This leads to minimal design time and manpower.
REFERENCES
1. N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Addison-Wesley Publishing Company, 1985.
2. S. Rubin, Computer Aids for VLSI Design, Addison-Wesley Publishing Company, 1987.
3. S. Trimberger, J. Rowson, J. Gray, and C. Lang, "A Structured Design Methodology and Associated Software Tools", IEEE Transactions on Circuits and Systems, vol. CAS-28, no. 7, July 1981.
4. S. Trimberger, An Introduction to CAD for VLSI, Kluwer Academic Publishers, 1987.
5. C. Mead and L. Conway, Introduction to VLSI Systems, Addison-Wesley Publishing Company, 1980.
6. C. Niessen, "Hierarchical Design Methodologies and Tools for VLSI Chips", Proceedings of the IEEE, vol. 71, January 1983.
7. Quickturn Systems Inc., product bulletin, 1988.
8. R. Barth, B. Serlet, and P. Sindhu, "Parameterized Schematics", 25th ACM/IEEE Design Automation Conference, June 1988.