Efficient Structural Design for Integrated Circuits

Richard Barth

Abstract: A design methodology is the process of transforming design ideas into artifacts. Two different design methodologies have been described for the design of systems with custom or semi-custom integrated circuits. One is the top down methodology, in which designers evaluate very abstract specifications of behavior with the aid of computer-based tools, and then proceed towards the specifications which are actually used to build the system, such as the masks for fabricating circuits and printed circuit boards. This approach provides a very structured progression from the idea to the implementation, and therefore is likely to yield a working system. However, the time and resources it takes to complete all the necessary steps are large. This methodology requires good planning and significant manpower. Moreover, the approach requires many different representations of designs and therefore an extensive collection of sophisticated tools to handle them. Currently, a complete suite of such sophisticated tools is not available. Some transformations between these different representations must be maintained manually, adding further to the manpower requirements. Another approach, the ASIC methodology, is espoused by the semi-custom integrated circuit foundries. It consists of describing the circuits structurally in terms of low level primitives, such as gates, instead of behaviorally. This methodology is very restricted, since its intention is to guarantee that the foundry gets paid, not to guarantee that the semi-custom circuits will actually work in the system for which they were designed. This methodology does not really deal with the integration of circuits into a system at all.
That is left to a higher level methodology which is not the responsibility of the foundry, although the foundry's methodology requirements can have a serious, detrimental impact upon the overall system design methodology. In this paper we define the top down and ASIC methodologies more rigorously, describe their problems in more detail, and present an alternative methodology that skips the behavioral specification stage of the top down methodology and also avoids the low level approach of the ASIC methodology. We describe the tools that are necessary to support this methodology and give an example of a large design which used it.

1.0 Introduction

This paper describes integrated circuit design methodologies. It is based on 10 years of performing and observing integrated circuit design, as well as building and then using computer-based integrated circuit design aids. This is a view from the trenches, in that it is mostly based on local observation of design groups, rather than a survey across the industry. Much of the work and observations were performed in a research laboratory setting, although some of it was performed in a development environment subjected to the time pressure of product design. This paper proselytizes a particular methodology, which has been found to be very efficient, and describes the design aids required to support it.

1.1 Design Methodology Defined

The process of transforming ideas into the tooling required to manufacture artifacts is a design methodology. In this paper we concern ourselves specifically with integrated circuit design methodologies. An integrated circuit design methodology transforms vague design requirements into a collection of patterned layers which comprise the fabrication tooling, a set of test patterns which verify the design, and documentation which describes the operation of the circuit (Figure 1).
The patterned layers may be used to fabricate photolithography masks or to directly control an electron beam exposure system; we do not distinguish between these techniques here. The test patterns perform a first order check that a fabricated circuit meets its specifications; they are not full manufacturing test patterns. In this context we exclude manufacturing and manufacturing test considerations, although the general remarks we make certainly apply to them as well. These considerations are extremely important, but they introduce unnecessary complexity into the discussion. We include consideration of the systems into which the integrated circuits are to be embedded, to the extent of ensuring that the gross timing and signalling conventions are compatible between the integrated circuits in the system. We do not consider other high level architectural problems such as statistical analysis of bus bandwidth. Such problems are part of forming the integrated circuit design requirements rather than part of the integrated circuit design methodology itself. One way of defining integrated circuit design requirements is as a set of constraints. Some are technological constraints, which limit what is possible; others are application constraints, which specify what is wanted. A design methodology, then, is the process of finding a design which simultaneously satisfies all of the constraints. This is a very fuzzy process; most often the designer is initially presented with an overconstrained problem, or at least one which has no obvious solution, and must decide which constraints to relax until a solution is found.

1.2 Good Integrated Circuit Methodologies

The definition of a good design methodology is considerably less crisp because it depends upon a large number of contextual factors.
The complexity of the design, the background of the design engineer, the history of the design group in which he is working, the cost and performance goals, and the available computing resources are just a few of the factors which must be considered. Describing a good methodology is further complicated by the fact that every design has a different detailed design methodology. Each individual design poses different problems which ought to be solved in a manner unique to the design. The tools available constrain any design's methodology. Conversely, the desired design methodology impacts the requirements for tool development. Good design cannot be done without good tools, and good tools cannot be developed without good designs to influence their development.

Even though composing a methodology for any particular design is so context dependent, a number of claims can be made that are reasonably universal. Minimal irredundant source leads to fast design times. If the designer has to reenter some design information several times, then the potential for error grows and the cost of changing the design, even in the absence of errors, also grows. The source is minimal when a design is entered at the highest level of abstraction consistent with available automatic synthesis techniques, and no higher. The level of abstraction should be high because more abstract descriptions require less source. It should not be any higher than automated synthesis techniques can handle, because the designer would then have to manually translate to the level that can be handled by automatic synthesis, leading to redundant source. Technology constraints affect a design at least as much as the behavior specified by the design requirements. Frequently the intended behavior is mostly defined by what is possible, rather than what is required.
A good methodology relies on the intuitive abilities of the designer to find a simultaneous solution to the technological and application constraints. This is what humans are good at; to the greatest extent possible, the design aid system should take care of everything else, especially keeping track of all of the details, which is what computers are good at. Early evaluation of design feasibility, by determining the fit of the solution to the set of constraints, is important. Fast turnaround for small changes is just as important to hardware designers as it is to programmers. Since it is impossible to enter a design completely correctly the first time, it is important to make it easy to go through the change-and-evaluate cycle. During the foreseeable future every significant design will have some amount of structural description before reaching elements that can be automatically transformed into artifacts. Thus pure behavioral descriptions are not sufficient, and a good set of tools must have a good structural description mechanism. More strongly, most design is currently done by structural decomposition. As logic synthesis improves this may become less true, but it is certainly true now. Even though most design is structural, it is still useful to describe some portions of a design behaviorally. Finite state machine synthesis is sufficiently mature to be considered a primitive in the description. Some portions of a design, such as memories, are so simple to describe behaviorally, relative to their structural descriptions, that behavioral description is clearly best during initial design. Simplified behavioral models are also useful during debugging, to check consistency between abstract behavior and concrete implementation. Many portions of a design are only slight variations on a basic theme. Counters are a classic example, in which the carry propagate network is slightly different depending upon the number of bits.
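The counter example can be made concrete with a minimal sketch of a parameterized netlist generator. The netlist representation, the gate names, and the use of Python are all illustrative assumptions; this is not the parameterized logic library of the actual tool suite.

```python
# A sketch of a parameterized netlist generator.  Gates are represented
# as (type, input_nets, output_net) tuples -- a hypothetical format
# chosen only for illustration.

def counter_netlist(n_bits):
    """Generate a counter whose bit i toggles when all lower bits are 1.

    The carry propagate network differs with the bit count, so the same
    short source yields a structurally different netlist for each width.
    """
    gates = []
    carry = "en"  # carry into bit 0 is the count-enable net
    for i in range(n_bits):
        q, d = f"q{i}", f"d{i}"
        gates.append(("XOR", [q, carry], d))        # next state of bit i
        gates.append(("DFF", [d], q))               # state register
        if i < n_bits - 1:
            nxt = f"c{i + 1}"
            gates.append(("AND", [carry, q], nxt))  # propagate the carry
            carry = nxt
    return gates

# One source, many netlists: wider counters get a larger carry network.
assert len(counter_netlist(4)) > len(counter_netlist(2))
```

The point is not the particular representation but that the design knowledge, here the shape of the carry network, is captured once as a procedure and reused at any width.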
It should be possible to capture this design knowledge in a manner which allows its reuse in many contexts. We have described criteria for evaluating the efficacy of a design methodology:

1) minimal irredundant source
2) early evaluation of design feasibility and correctness
3) fast turnaround for small changes
4) good structural description capability
5) behavioral description capabilities
6) reusable design descriptions

In some sense the minimal irredundant source criterion implies all of the rest. However, the others provide more detail. Each of these criteria implies requirements for the tool set which supports it. To have minimal source one would like the best automated synthesis tools possible. It must also be possible to extend the description mechanisms in ways specific to the current design, i.e. the tool set must be easily extended. Early design feasibility evaluation implies tools which can analyze a partial design. Fast turnaround implies that the amount of work done by the tool set in response to a change should be proportional to the size of the change rather than the size of the design. Reusable descriptions imply a method for parameterizing the description mechanisms.

1.3 Organization of This Paper

The remainder of the paper proceeds as follows. In section 2.0, an example design using a methodology supported by our current tool suite is presented as a pedagogical aid for the following section, 3.0, in which we describe the tool suite which enables this methodology. Then section 4.0 evaluates the methodology utilized by the example against the criteria for a good methodology. In section 5.0 we contrast this methodology with the top down and bottom up methodologies. Finally, in sections 6.0 and 7.0 we place this paper in relationship to previously published work and draw some conclusions.

2.0 An Example

In this section we set the context of discussion more concretely by describing the methodology of a completed design.
We use this example in following sections to illustrate specific points. The example is a memory controller which connects a high speed, split transaction (address and data) bus to a RAM bank. None of the details of the requirements for the design nor the details of the implementation are of interest here. We examine the methodology, illustrating the tools used, and the order in which they were used. Prior to writing the requirement document for this chip an overall system architectural specification was written. This upper level design included the structural decomposition of the system and so defined the basic timing and signalling requirements for the memory controller. Figure 2 shows the methodology. The first step is writing the chip requirement specification as illustrated in detail in figure 3. A 4 page specification describes the general requirements of the chip. It indicates the amount of RAM which each memory controller can handle, the number of banks which can be placed on a bus, and the amount of board area required for the entire memory controller subsystem. It also specifies which of the bus transactions must be dealt with by the memory controller and gives a very abstract decomposition of the implementation of the chip. The next step in figure 2 is the RTL design of the chip. The RTL design process is further decomposed in figure 4. The primitives used to build up the RTL description are illustrated in figure 8. During this design period the technological constraints were factored into the design by the designer's intuition of the space-time cost of each of the primitives. With the completion of the RTL schematic, it is possible to verify the integrated circuit design as part of the overall system schematic. This process is illustrated in figure 5. In parallel with the system simulation, the physical details of the chip implementation are filled in as shown in figure 6. 
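During system simulation of this kind, simple behavioral models typically stand in for regular structures such as the RAM bank attached to the memory controller. The following is a minimal sketch of such a model; the interface and the use of Python are illustrative assumptions, not the actual modeling conventions of the tool suite.

```python
# A hypothetical behavioral model of a RAM bank, of the sort that is far
# shorter to write behaviorally than structurally during initial design.

class BehavioralRAM:
    def __init__(self, n_words):
        self.n_words = n_words
        self.store = {}  # sparse: unwritten words are left undefined

    def write(self, addr, data):
        assert 0 <= addr < self.n_words, "address out of range"
        self.store[addr] = data

    def read(self, addr):
        assert 0 <= addr < self.n_words, "address out of range"
        return self.store.get(addr)  # None models an uninitialized word

ram = BehavioralRAM(1024)
ram.write(5, 0xABCD)
assert ram.read(5) == 0xABCD
assert ram.read(6) is None  # reading an unwritten word is undefined
```

A few lines like these replace a full structural memory description during debugging, and can later be checked for consistency against the concrete implementation.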
The memory controller consists of a data path and a collection of random control logic. The data path requires more detailed physical design as illustrated in figure 7. The data path design consists of creating a set of tiles which fit together tightly in an arrangement specified by the designer. Each of these tiles has a transistor level schematic and a corresponding layout. Critical paths through the data path are simulated with Spice to keep control of the performance constraints given in the design requirements document. Some low level error checking in the form of structural comparison of layout and schematics as well as geometric design rule checking is performed. Once the low level details of the data path are completed the rest of the schematic can be modified to include the information necessary for the layout synthesis tools. This consists of dividing the chip into the standard cell and data path areas, defining the routing blocks which connect them together, and specifying the pad assembly and routing. After these modifications are made the design is simulated with the design verification test vectors to ensure that the rearrangement of the schematic to include the physical synthesis information has not perturbed the functionality of the design. Layout generation is followed by comparing the layout to the schematics, as a check of the physical synthesis tools, and geometric design rule check, as a check of both the manually and automatically synthesized layout. In parallel with getting the details of the layout correct the chip's timing performance is checked by a critical path timing analyzer. All feedback paths due to errors discovered by any of the checkers result in modifications to the schematics, in the case of design errors, or modifications to the synthesis tools in the case of design aid bugs. The final connectivity check shown in figure 6 is solely to catch bugs in the design aids. 
It checks that all of the rectangles emitted by the router which are supposed to intersect each other, in fact do so. Finally we return to figure 2, where we see the design exit the portion of the design flow that we are discussing in this paper and go on to fabrication and test. In parallel with fabrication and testing, the logical operation of the chip is documented. When the chip returns from test, the final performance numbers and D.C. parameters are inserted into this documentation.

3.0 Tools Required

In this section we derive the tools required to follow the methodology presented in the example. Of course we did not perform the design first and then build the tool suite. The tool development was driven by this design: sometimes the tools were developed before the design assumed them, and sometimes the design assumed new tools before they were actually implemented. The tools consist of:

1) Schematic Capture
2) RTL and System Level Simulation
3) Layout Generation (standard cells, data path, pads)
4) Documentation
5) Behavioral Modeling (integrated with 2)
6) Timing Analysis
7) Layout vs. Schematic Comparison
8) Geometric Design Rule Check
9) Connectivity Check
10) Circuit Simulation (Spice)
11) Layout Editor
12) Static Electrical Rules Check
13) FSM Capture and Logic Synthesis
14) Parameterized Logic Library

And, of course, all of these tools have to be tied together and driven from a single design source. Many of these tools are standard, some are available, at least in experimental versions, from several sources, and some are unique to our tool suite.
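The final connectivity check described above reduces to a rectangle intersection test over the router's output. A minimal sketch follows; the rectangle format and the list of intended connections are assumptions made for illustration, not the actual router interface.

```python
# A sketch of the router-output connectivity check: rectangles that the
# router intended to connect must actually intersect on the layout.

def intersects(a, b):
    """Axis-aligned rectangles as (x1, y1, x2, y2); touching edges count."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def connectivity_check(rects, intended_pairs):
    """Return the pairs that should intersect but do not (design aid bugs)."""
    return [(i, j) for (i, j) in intended_pairs
            if not intersects(rects[i], rects[j])]

rects = {0: (0, 0, 10, 2), 1: (8, 0, 12, 2), 2: (20, 20, 22, 22)}
assert connectivity_check(rects, [(0, 1)]) == []        # overlapping: passes
assert connectivity_check(rects, [(0, 2)]) == [(0, 2)]  # disjoint: flagged
```

Any pair returned indicates a bug in the design aids rather than in the design, since the schematics have already been verified by the other checkers.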
The key enabling features which we believe to be unique to our system and that increase designer productivity include:

1) Integration around a minimal, irredundant design source
2) Integrated layout synthesis framework
3) Parameterized Logic Library

4.0 Evaluation

In this section we evaluate the methodology and tools described in the previous two sections according to the criteria established in the introduction. We deal with each of the criteria in turn, describing how the existing methodology and tools succeeded or failed in satisfying them.

4.1 Minimal Irredundant Source

This criterion is satisfied to a very large extent. There is virtually no redundant design information. A single source captures the entire logical and physical description. The source is also minimal. The design makes good use of available synthesis systems such as finite state machine and data path generators, so it is captured at the highest level of available automated synthesis. It also relies heavily upon the parameterized logic library, so that there is no repetition of common low-level component descriptions.

4.2 Early Feasibility and Correctness Evaluation

This criterion is satisfied to only a limited extent. Simulation cannot really proceed until a working subset of the design and all behavioral models have been entered. Since the description is reasonably abstract, the delay to evaluation is somewhat mitigated. We would like to extend our ability to evaluate correctness to the extent of embedding the design in its intended system prior to fabrication. This would allow us to examine the state evolution of the circuit in great detail. More highly optimized simulation systems than our own may improve our capabilities by a few orders of magnitude, but we would like to run the systems in real time.
Actually attaining this goal for systems which are pressing the state of the art does not seem possible; all we can really hope for is to get close to real time performance at the cost of a great deal of space and, consequently, power. Systems such as ASIC emulation boxes [QuickTurn] hold out hope in this area.

4.3 Fast Turnaround for Small Changes

The major stumbling block to satisfying this criterion is usually the need to manually propagate changes through several different design representations. This stumbling block has been circumvented by relying on a single irredundant source. Unfortunately, the underlying tool set is only partially incremental, so that some of the evaluation stages, such as simulation, take time proportional to the size of the design, rather than the size of the change.

4.4 Structural Description

The design uses a mixture of text and graphics to describe structure.

4.5 Behavioral Description

Utilizing a standard programming language for behavioral modeling has both good points and bad points. It is good because another language does not have to be introduced into the environment and, consequently, the implementation can be very small. It is bad because the available primitives all follow the sequential programming model, whereas hardware is inherently parallel. Since no large behavioral models were written for our example, there was little point in providing a sophisticated behavioral modeling capability. Nor do we really want large behavioral models, since they would be redundant description which has to be prepared, debugged, and maintained; for this design, no large behavioral models were needed to establish an environment for debugging.

4.6 Reusable Design

Of course reusing previous designs is not a new idea. ASIC manufacturers do this through the macro and megacell concepts.
A few have started to bring out datapath and FSM generators [VTI]. However, macros and megacells both embody fixed netlists, instead of the flexible netlist generators of our approach. Another approach to reusing design knowledge is demonstrated by systems such as [Mayo], which operate at the layout level. In the era of standard cell and sea-of-gates layout synthesis, the utility of such low level approaches is restricted to high performance tilings, which do not constitute the bulk of logic design.

5.0 In Contrast

5.1 ASIC

The ASIC methodology starts with very basic cells, perhaps augmented with fixed macros. These macros frequently are not exactly what is wanted, so the designer descends into the tar pit of building up from very low level primitives, which leads to large source.

5.2 Top Down

Designers who try the top down methodology become awash in a sea of possibilities created by the lack of technology constraints; constraints help define the design. At the same time, the large source produced by the ASIC methodology is not easily modified at the more abstract levels. The resulting desire to capture the source correctly in a single pass led to the invention of top down behavioral modeling, and to overoptimization: highly polished source becomes larger source, which is still harder to modify, in a positive feedback loop.

6.0 Relationship to Previous Work

This section is placed at the back of the paper because there is little literature discussing methodology per se. This is probably because it is difficult to make definitive statements; methodologies tend to evolve in response to needs and enabling technologies rather than being a focus in their own right. [Weste] discusses design approaches (p. 236) as if behavior, structure, and physical design could be done independently; there are no feedback arrows in his "current" diagram. This does not match design practice.
On page 480 they discuss top down design in one breath and, in the next, doing all levels of design in parallel. Clearly there is some confusion about what top down means; they seem to mean hierarchical divide and conquer rather than successive refinement, which is the more common meaning.

7.0 Conclusion

The top down methodology doesn't work well because it is cumbersome, in the sense of having to prepare far more description than is required to fabricate manufacturing tooling, and it delays technology constraint satisfaction too long. The ASIC, or bottom up, methodology doesn't work well because it is cumbersome, in the sense of having to build many low level details manually, and it delays application constraint satisfaction too long. In this paper we have described a methodology that compromises between the top down and bottom up methodologies so that both the application and technology constraints can be satisfied while the designer is preparing a minimal source description. This source description is minimal in that it can be automatically translated into the tooling specifications for fabricating the design, and it makes use of design knowledge encapsulated as procedural descriptions. The combination of simultaneous satisfaction of technological and application constraints, and minimal irredundant source capture, leads to minimal design time and manpower. To support this conclusion we have described the methodology of a completed design and the set of tools which enabled this methodology, and then compared the amount of work required for this methodology against the work required by other methodologies. Unfortunately, in the costly and mathematics-free realm of system design, no stronger evidence can be offered.

References

[QuickTurn] Product bulletin.
[VTI] Trimberger's book or VTI product literature.