Efficient Structural Design for Integrated Circuits
Richard Barth
Xerox Palo Alto Research Center
Abstract: A design methodology is the process of transforming design ideas into artifacts. Two different design methodologies have been described for the design of systems with custom or semi-custom integrated circuits.
One is the top down methodology, in which designers evaluate very abstract specifications of behavior with the aid of computer-based tools, and then proceed towards the specifications which are actually used to build the system, such as the masks for fabricating circuits and printed circuit boards.
This approach provides a very structured progression from idea to implementation, and therefore is likely to yield a working system. However, the time and resources required to complete all the necessary steps are large. This methodology requires good planning and significant manpower. Moreover, the approach requires many different representations of designs and therefore an extensive collection of sophisticated tools to handle them. Currently, a complete suite of such sophisticated tools is not available. Some transformations between these different representations must be maintained manually, adding further to the manpower requirements.
Another approach, the ASIC methodology, is espoused by the semi-custom integrated circuit foundries. It consists of describing the circuits structurally in terms of low level primitives, such as gates, instead of behaviorally.
This methodology is very restricted since its intention is to guarantee that the foundry gets paid, not to guarantee that the semi-custom circuits will actually work in the system for which they were designed. This methodology does not really deal with the integration of circuits into a system at all. That is left to a higher level methodology which is not the responsibility of the foundry, although the foundry's methodology requirements can have a serious, detrimental impact upon the overall system design methodology.
In this paper we define the top down and ASIC methodologies more rigorously, describe the problems with them in more detail, and present an alternative methodology that skips the behavioral specification stage of the top down methodology, and also avoids the low level approach of the ASIC methodology. We describe the tools that are necessary to support this methodology and give an example of a large design which used it.
CR Categories and Subject Descriptors: B.6.0 [Logic Design]: General; B.6.1 [Logic Design]: Design Styles; B.6.3 [Logic Design]: Design Aids; B.7.1 [Integrated Circuits]: Types and Design Styles; B.7.2 [Integrated Circuits]: Design Aids; B.m [Hardware]: Design Management; D.2.2 [Software Engineering]: Tools and Techniques; D.2.10 [Software Engineering]: Design
1.0 Introduction
This paper describes integrated circuit design methodologies. It is based on 10 years of performing and observing integrated circuit design, as well as building and then using computer-based integrated circuit design aids. This is a view from the trenches, in that it is mostly based on local observation of design groups rather than a survey across the industry. Much of the work and observation was performed in a research laboratory setting, although some of it was performed in a development environment subject to the time pressure of product design. This paper proselytizes a particular methodology, which has been found to be very efficient, and describes the design aids required to support it.
1.1 Design Methodology Defined
The process of transforming ideas into the tooling required to manufacture artifacts is a design methodology. In this paper we concern ourselves specifically with integrated circuit design methodologies. An integrated circuit design methodology transforms vague design requirements into a collection of patterned layers which comprise the fabrication tooling, a set of test patterns which verify the design, and documentation which describes the operation of the circuit (Figure 1). The patterned layers may be used to fabricate photolithography masks or to directly control an electron beam exposure system; we do not distinguish between these techniques here. The test patterns perform a first order check that a fabricated circuit meets its specifications; they are not full manufacturing test patterns.
[Figure 1. Methodology Input and Output]
In this context we exclude manufacturing and manufacturing test considerations, although the general remarks we make certainly apply to them as well. These considerations are extremely important but they introduce unnecessary complexity into the discussion.
We include consideration of the systems into which the integrated circuits are to be embedded, to the extent of ensuring that the gross timing and signalling conventions are compatible between the integrated circuits in the system. We do not consider other high level architectural problems such as statistical analysis of bus bandwidth. Such problems are part of forming the integrated circuit design requirements rather than part of the integrated circuit design methodology itself.
One way of defining integrated circuit design requirements is as a set of constraints. Some are technological constraints, which limit what is possible; others are application constraints, which specify what is wanted. A design methodology, then, is the process of finding a design which simultaneously satisfies all of the constraints. This is a very fuzzy process; most often the designer is initially presented with an overconstrained problem, or at least one which has no obvious solution, and must decide which constraints to relax until a solution is found.
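To caricature this in executable form (a toy sketch in Python; the design point, numbers, and constraint names are invented for illustration and correspond to no real process or package):

    # Design as constraint satisfaction: each constraint maps a candidate
    # design to True (satisfied) or False (violated).
    def violated(design, constraints):
        return [name for name, check in constraints.items() if not check(design)]

    design = {"area_mm2": 80.0, "cycle_ns": 40.0, "pins": 144}

    constraints = {
        # Technological constraints: what the process and package allow.
        "die_fits_process": lambda d: d["area_mm2"] <= 100.0,
        "package_has_pins": lambda d: d["pins"] <= 132,
        # Application constraints: what the system requires.
        "meets_cycle_time": lambda d: d["cycle_ns"] <= 50.0,
    }

    failures = violated(design, constraints)
    if failures:
        # Overconstrained: the designer must relax something, e.g. choose
        # a larger package or multiplex the pins.
        print("violated:", failures)

In practice, of course, the constraints are informal and live largely in the designer's head; the point is only that design iterates between discovering violated constraints and deciding which of them to relax.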
1.2 Good Integrated Circuit Methodologies
The definition of a good design methodology is considerably less crisp because it depends upon a large number of contextual factors. The complexity of the design, the background of the design engineer, the history of the design group in which he is working, the cost and performance goals, and the available computing resources are just a few of the factors which must be considered. Describing a good methodology is further complicated by the fact that every design has a different detailed design methodology. Each individual design poses different problems which need to be solved in a unique manner.
The tools available constrain any design's methodology. Conversely, the desired design methodology impacts the requirements for tool development. Good design cannot be done without good tools, and good tools cannot be developed without good designs to influence their development.
Even though composing a methodology for any particular design is so context dependent, a number of criteria can be stated that are reasonably universal.
Minimal irredundant source leads to fast design times. If the designer has to reenter some design information several times then the potential for error grows and the cost of changing the design, even in the absence of errors, also grows.
The source is minimal when a design is entered at the highest level of abstraction possible that is consistent with available automatic synthesis techniques, and no higher. The level of abstraction should be high, because more abstract descriptions require less source. It should not be any higher than automated synthesis techniques can handle, because the designer would then have to manually translate to the level that can be handled by automatic synthesis, leading to redundant source.
Technology constraints affect a design at least as much as the behavior specified by the design requirements. Frequently the intended behavior is defined mostly by what is possible rather than by what is required. A good methodology relies on the intuitive abilities of the designer to find a simultaneous solution to the technological and application constraints; this is what humans are good at. To the greatest extent possible, the design aid system should take care of everything else, especially keeping track of all of the details, which is what computers are good at. Early evaluation of design feasibility, by determining the fit of the solution to the set of constraints, is important.
Fast turnaround for small changes is just as important to hardware designers as it is to programmers. Since it is impossible to enter a design completely correctly the first time, it is important to make it easy to go through the change and evaluate cycle.
During the foreseeable future every significant design will have some amount of structural description before reaching elements that can be automatically transformed into artifacts. Thus, pure behavioral descriptions are not useful, and a good set of tools must have a good structural description mechanism. More strongly, most design is currently done by structural decomposition. As logic synthesis improves this may become less true, but it is certainly true now.
Even though most design is structural, it is still useful to describe some portions of a design behaviorally. Finite state machine synthesis is sufficiently mature to be considered a primitive in the description. Some portions of a design, such as memories, are so simple to describe behaviorally, relative to their structural descriptions, that behavioral description is clearly best during initial design. Simplified behavioral models are also useful during debugging to check consistency between abstract behavior and concrete implementation.
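As an illustration (a minimal sketch; Python here stands in for the standard programming language our models are actually written in), the behavioral description of a RAM amounts to a lookup table, while the corresponding structural description would have to enumerate every cell, decoder, and sense amplifier:

    # A behavioral model of a RAM: a map from address to data.
    class BehavioralRAM:
        def __init__(self, words, width):
            self.words, self.width = words, width
            self.store = {}                    # unwritten locations are undefined

        def write(self, addr, data):
            assert 0 <= addr < self.words
            self.store[addr] = data & ((1 << self.width) - 1)

        def read(self, addr):
            assert 0 <= addr < self.words
            return self.store.get(addr, 0)     # model undefined reads as zero

    ram = BehavioralRAM(words=4096, width=32)
    ram.write(7, 0xDEADBEEF)
    assert ram.read(7) == 0xDEADBEEF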
Many portions of a design are only slight variations on a basic theme. Counters are a classic example, in which the carry propagate network differs slightly depending upon the number of bits. It should be possible to capture this design knowledge in a manner which allows its reuse in many contexts. Currently this means capturing a recipe for each type of network, as sketched below. Of course, as tools powerful enough to synthesize such networks from basic principles are developed, the library of synthesis procedures can be eliminated.
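To make the counter example concrete, here is a toy recipe (sketched in Python; our tools capture such recipes as parameterized schematics [Barth et al.], and the cell and net names below are invented):

    # A netlist-generator recipe for an n-bit synchronous counter. The
    # network varies with the parameter n: each bit toggles when all lower
    # bits are 1, so the carry-propagate chain grows with the width.
    def counter_netlist(n):
        nets, gates = [], []
        carry = "one"                          # carry into bit 0 is constant 1
        for i in range(n):
            q = "q%d" % i
            nets.append(q)
            # Toggle flip-flop: bit i flips whenever its carry is asserted.
            gates.append(("TFF", {"t": carry, "q": q}))
            if i < n - 1:
                # carry[i+1] = carry[i] AND q[i]: the part that varies with n.
                nxt = "carry%d" % (i + 1)
                gates.append(("AND2", {"a": carry, "b": q, "out": nxt}))
                nets.append(nxt)
                carry = nxt
        return nets, gates

    nets, gates = counter_netlist(4)           # the same recipe serves any width
    print(len(gates), "gates for a 4-bit counter")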
We have described criteria for evaluating the efficacy of a design methodology:
1) minimal irredundant source
2) early evaluation of design feasibility and correctness
3) fast turnaround for small changes
4) good structural description capability
5) behavioral description capabilities
6) reusable design descriptions
In some sense, the minimal irredundant source criterion implies all of the rest. However, the others provide more detail.
Each of these criteria implies requirements for the tool set which supports it. To have minimal source one would like the best automated synthesis tools possible. It must also be possible to extend the description mechanisms in ways specific to the current design, i.e. the tool set must be easily extended. Early design feasibility evaluation implies tools which can analyze a partial design. Fast turnaround implies that the amount of work done by the tool set, in response to a change, should be proportional to the size of the change, rather than the size of the design. Reusable descriptions imply a method for parameterizing the description mechanisms.
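The turnaround requirement, in particular, suggests an incremental tool structure. A toy sketch (in Python; the cells and the "analysis" are invented stand-ins): results are cached per cell, and an edit dirties only the cells that must be re-analyzed, so the work done tracks the size of the change.

    # Incremental re-evaluation: cache analysis results per cell and
    # recompute only what an edit has dirtied.
    class IncrementalChecker:
        def __init__(self, cells):
            self.cells = cells       # name -> description (e.g. netlist fragment)
            self.cache = {}          # name -> analysis result
            self.dirty = set(cells)  # everything needs analysis initially

        def edit(self, name, new_description):
            self.cells[name] = new_description
            self.dirty.add(name)     # a real tool would also dirty dependent cells

        def check_all(self):
            for name in list(self.dirty):
                self.cache[name] = self.analyze(self.cells[name])
            self.dirty.clear()
            return self.cache

        def analyze(self, description):
            return len(description)  # stand-in for a real, expensive analysis

    checker = IncrementalChecker({"alu": "...", "regfile": "...."})
    checker.check_all()
    checker.edit("alu", ".....")     # only "alu" is re-analyzed on the next check
    checker.check_all()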
1.3 Organization of This Paper
The remainder of the paper proceeds as follows. In section 2.0, an example design using a methodology supported by our current tool suite is presented as a pedagogical aid for the following section, 3.0, in which we describe the tool suite which enables this methodology. Then section 4.0 evaluates the methodology utilized by the example against the criteria for a good methodology. In section 5.0 we contrast this methodology against the bottom up and top down methodologies. Finally in sections 6.0 and 7.0 we place this paper in relationship to previously published work and draw some conclusions.
2.0 An Example
In this section we set the context of discussion more concretely by describing the methodology of a completed design. We use this example in following sections to illustrate specific points. The example is a memory controller which connects a high speed, split transaction (address and data) bus to a RAM bank. Neither the details of the requirements for the design nor the details of the implementation are of interest here. We examine the methodology, illustrating the tools used, and the order in which they were used.
Prior to writing the requirement document for this chip an overall system architectural specification was written. This upper level design included the structural decomposition of the system and so defined the basic timing and signalling requirements for the memory controller.
[Figure 2. Memory Controller Methodology]
Figure 2 shows the methodology. The first step is writing the chip requirement specification, as illustrated in detail in figure 3. A four-page specification describes the general requirements of the chip. It indicates the amount of RAM which each memory controller can handle, the number of banks which can be placed on a bus, and the amount of board area required for the entire memory controller subsystem. It also specifies which of the bus transactions must be dealt with by the memory controller and gives a very abstract decomposition of the implementation of the chip.
[Figure 3. Specification]
The next step in figure 2 is the RTL design of the chip. The RTL design process is further decomposed in figure 4. During this design period the technological constraints were factored into the design by the designer's intuition of the space-time cost of each of the primitives.
[Figure 4. Register Transfer Level]
[Figure 5. System Simulation]
With the completion of the RTL schematic, it is possible to verify the integrated circuit design as part of the overall system schematic. This process is illustrated in figure 5.
[Figure 6. Physical Design]
In parallel with the system simulation, the physical details of the chip implementation are filled in as shown in figure 6. The memory controller consists of a data path and a collection of random control logic. The data path requires more detailed physical design as illustrated in figure 7. The data path design consists of creating a set of tiles which fit together tightly in an arrangement specified by the designer. Each of these tiles has a transistor level schematic and a corresponding layout. Critical paths through the data path are simulated with Spice to keep control of the performance constraints given in the design requirements document. Some low level error checking in the form of structural comparison of layout and schematics as well as geometric design rule checking is performed.
[Figure 7. Data Path Physical Design]
Once the low level details of the data path are completed, the rest of the schematic can be modified to include the information necessary for the layout synthesis tools. This consists of dividing the chip into the standard cell and data path areas, defining the routing blocks which connect them together, and specifying the pad assembly and routing. After these modifications are made, the design is simulated with the design verification test vectors to ensure that the rearrangement of the schematic to include the physical synthesis information has not perturbed the functionality of the design. Layout generation is followed by comparing the layout to the schematics, as a check of the physical synthesis tools, and geometric design rule check, as a check of both the manually and automatically synthesized layout.
In parallel with getting the details of the layout correct, the chip's timing performance is checked by a critical path timing analyzer. All feedback paths due to errors discovered by any of the checkers result in modifications to the schematics, in the case of design errors, or modifications to the synthesis tools in the case of design aid bugs. The final connectivity check shown in figure 6 is solely to catch bugs in the design aids. It checks that all of the rectangles emitted by the router which are supposed to intersect each other, in fact do so.
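The essence of that final check is simple. A minimal sketch in Python (the coordinates are invented; the real checker runs over all of the geometry the router emits, layer by layer):

    # Two rectangles the router claims are connected must actually overlap.
    def intersects(r1, r2):
        # Rectangles as (xmin, ymin, xmax, ymax); touching edges count.
        x1min, y1min, x1max, y1max = r1
        x2min, y2min, x2max, y2max = r2
        return (x1min <= x2max and x2min <= x1max and
                y1min <= y2max and y2min <= y1max)

    # Each claimed connection is a pair of rectangles that should intersect.
    claimed = [((0, 0, 10, 2), (8, 0, 12, 2)),     # genuinely connected
               ((0, 0, 10, 2), (20, 0, 24, 2))]    # a router bug
    for a, b in claimed:
        if not intersects(a, b):
            print("router bug: disconnected rectangles", a, b)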
Finally we return to figure 2, where we see the design exit the portion of the design flow that we are discussing in this paper, and go on to fabrication and test. In parallel with fabrication and testing, the logical operation of the chip is documented. When the chip returns from test, the final performance numbers and D.C. parameters are inserted into this documentation.
3.0 Tool Requirements
In this section we derive the tools required to allow the methodology presented in the example to be followed. Of course we did not perform the design first and then build the tool suite. In fact, tool development was driven by this design: sometimes a tool was built before the design relied upon it, and sometimes the design assumed a new tool before it was actually implemented.
The required tools consist of:
1) Schematic Capture
2) RTL and System Level Simulation
3) Layout Generation (standard cells, data path, pads)
4) Documentation
5) Behavioral Modeling (integrated with 2)
6) Timing Analysis
7) Layout vs. Schematic Comparison
8) Geometric Design Rule Check
9) Connectivity Check
10) Circuit Simulation (Spice)
11) Layout Editor
12) Static Electrical Rules Check
13) FSM capture and logic synthesis
14) Parameterized Logic Library
Of course, all of these tools have to be tied together and driven from a single design source. Many of these tools are standard, some are available, at least in experimental versions, from several sources, and some are unique to our tool suite.
The key enabling features which we believe to be unique to our system and that increase designer productivity include:
1) Integration around a minimal, irredundant design source
2) Integrated layout synthesis framework
3) Parameterized Logic Library
4.0 Evaluation
In this section we evaluate the methodology and tools described in the previous two sections according to the criteria established in the introduction. We will deal with each of the criteria in turn, describing how the existing methodology and tools succeeded or failed in satisfying them.
4.1 Minimal Irredundant Source
This criterion is satisfied to a very large extent. There is virtually no redundant design information. A single source captures the entire logical and physical description. The source is also minimal. The design makes good use of available synthesis systems, such as finite state machine and data path generators, so it is captured at the highest level of available automated synthesis. It also relies heavily upon the parameterized logic library, so that there is no repetition of common low-level component descriptions.
4.2 Early Feasibility and Correctness Evaluation
This criterion is satisfied to only a limited extent. Simulation really cannot proceed until a working subset of the design and all behavioral models have been entered. We would like to have better evaluation tools; our framework permits this but our available manpower does not. Currently there are no early warning tools; rather, we rely on the analysis tools, which expect a complete description. Since the description is reasonably abstract, the delay to evaluation is somewhat mitigated.
We really would like to extend our ability to evaluate correctness to the extent of embedding the design in its intended system prior to fabrication. This would allow us to examine the state evolution of the circuit in great detail. More highly optimized simulation systems than our own may improve our capabilities by a few orders of magnitude, but we would like to run the systems in real time. Actually attaining this goal for systems which are pressing the state of the art does not seem possible; all we can really hope for is to get close to real time performance, at the cost of a great deal of space and, consequently, power. Systems such as ASIC emulation boxes [QuickTurn] hold out hope in this area.
4.3 Fast Turnaround for Small Changes
The major stumbling block to satisfaction of this criterion is usually the need to manually propagate changes through several different design representations. This stumbling block has been circumvented by relying on a single irredundant source. Unfortunately, the underlying tool set is only partially incremental, so that some of the evaluation stages, such as simulation, take time proportional to the size of the design rather than the size of the change. However, the existing tool set minimizes the amount of time the designer must spend, even at the expense of machine cycles, which are relatively cheap in our environment.
4.4 Structural Description
The design uses a mixture of text and graphics to describe structure. This is described more fully in [Barth et al.]. This extensible ability to describe structure is the most important contributor to allowing minimal irredundant source.
4.5 Behavioral Description
Utilizing a standard programming language for behavioral modeling has both good points and bad points. It is good because another language does not have to be introduced into the environment and, consequently, the implementation can be very small. It is bad because the primitives available all follow the sequential programming model, whereas hardware is inherently parallel.
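The usual workaround for this mismatch is a two-phase evaluation discipline: every model computes its next state from the pre-edge state, and all updates are committed together at the clock edge. A minimal sketch (the counter model is invented for illustration, and Python stands in for the standard programming language the text refers to):

    # Emulating hardware parallelism in a sequential language: evaluate all
    # models against the current state, then commit all updates at once.
    class CounterModel:
        def __init__(self):
            self.q = 0
            self._next = 0

        def evaluate(self, enable):  # phase 1: compute, no visible change
            self._next = (self.q + 1) % 16 if enable else self.q

        def clock(self):             # phase 2: commit, simulating the clock edge
            self.q = self._next

    models = [CounterModel(), CounterModel()]
    for cycle in range(5):
        for m in models:
            m.evaluate(enable=True)  # every model sees only pre-edge state
        for m in models:
            m.clock()
    print([m.q for m in models])     # both counters advance in lockstep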
Since no large behavioral models were written for our example, there is little point in providing a sophisticated behavioral modeling capability. We do not really want large behavioral models for the design, since they would be redundant description which has to be prepared, debugged, and maintained. For this design we did not need any large behavioral models to establish an environment for debugging.
4.6 Reusable Design
Of course reusing previous designs is not a new idea. ASIC manufacturers do this through the macro and megacell concepts. A few have started to bring out datapath and FSM generators. However, macros and megacells both embody fixed netlists instead of the flexible netlist generators in our approach. Another approach to reusing design knowledge is demonstrated by systems such as [Mayo] which operate at the layout level. In the era of standard cell and sea-of-gates layout synthesis, the utility of such low level approaches is restricted to high performance tilings, which do not constitute the bulk of the design effort.
5.0 In Contrast
5.1 Bottom Up
The bottom up methodology requires a minimal set of tools: structural description, a leaf cell library, and a simulator which produces the response of the structure to a set of inputs. This minimal set of tools is all that is supplied by many ASIC vendors, hence our labeling of the bottom up methodology as the ASIC methodology. For very simple designs one need not even "go up"; a flat description of the interconnection of all of the gates is sufficient. Such a flat description is also the manner in which most printed circuit boards are designed.
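To make the contrast with the sketches above concrete, a flat description is nothing more than an exhaustive list of gate instances and their interconnecting nets. The fragment below (a hypothetical two-gate circuit, again in Python) has no hierarchy at all, and even simple questions become scans over the whole list:

    # A flat, gate-level description: every instance listed explicitly.
    # Workable for a handful of gates, unmanageable for tens of thousands.
    netlist = [
        # (instance name, cell type, {pin: net})
        ("g1", "NAND2", {"a": "req", "b": "ready", "out": "n1"}),
        ("g2", "INV",   {"in": "n1", "out": "grant"}),
    ]

    # Example query: how many instances attach to net n1?
    attached = sum(1 for _, _, pins in netlist if "n1" in pins.values())
    print("instances attached to n1:", attached)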
The ASIC methodology consists of starting with very basic cells, then forming a hierarchy by composing the basic cells into larger cells, then, recursively, forming still larger cells out of more primitive compositions. The entire design has no formal description until the top is reached. Thus no computer based tool can be applied to the entire design until the whole structural hierarchy has been entered. Since no formal verification has been performed it is likely that a structure of any size will contain a significant number of errors. Unfortunately, some of these errors are likely to be mistaken assumptions in the translation of the original specifications. This can have a disastrous effect on the design, possibly requiring that the whole thing be thrown away and the design started anew.
The ASIC vendors generally provide some relief in the form of macrocells, which are commonly used compositions of the primitives. Unfortunately these are fixed compositions that frequently are not quite right for the task at hand, thus leading to redesign of the macros from the basic primitives.
5.2 Top Down
In an effort to avoid the disaster of completing a bottom up design only to discover it has a fundamental flaw, the top down methodology was invented. Classically, this methodology consists of producing a formal description, evaluating that description against the requirements, and then refining the description, a portion at a time, until the basic elements are reached. [Rubin] has a good description of both the top down and bottom up methodologies.
There are several problems with this methodology. Since the designer is attempting to avoid regressing to a previous stage of abstraction, there is a tendency to overoptimize the design. Furthermore, the description of the design at the upper levels, while being more compact than the lower levels, is nonetheless redundant information. During refinement, the designer is reexpressing the same information, albeit with more detail. Both of these drawbacks lead to a larger source.
The redundancy has tool implications as well. The description at one level generally does not use the same semantic base as the description at another level. One would like to have tools for checking consistency between levels, but the lack of a consistent semantic base makes the implementation of such tools pragmatically difficult and, frequently, theoretically impossible.
Another problem is that the designer is awash in a sea of possibilities due to the lack of technological constraints. Many possibilities, which could have been discarded immediately because they are technologically infeasible or simply more difficult, must be considered by the designer.
6.0 Previous Work
This is at the back of the paper because there is little literature discussing methodology per se. This is probably because it is difficult to make definitive statements. Most descriptions of methodologies are in the context of specific design examples with little generalization of the lessons learned in a specific example. Methodologies tend to evolve in response to needs and enabling technologies rather than being a focus in their own right.
[Weste et al.] talk about design approaches as if behavior, structure, and physical design can be done independently. Their description of current design approaches does not include any feedback paths from lower levels to higher levels of abstraction. This work contains a number of case examples.
In the dynamic time warp processor example, the "functional" description contains many schematics that look like structure. However, they say they wrote a functional simulator. Why did they do this? Was their simulation technology so inferior they had to hand recode for speed?
The single chip 32-bit CPU example talks about top down in one breath and in the next about doing all levels of design in parallel. Clearly there is some confusion about the meaning of top down. They seem to mean hierarchical divide and conquer, rather than successive refinement, which is the definition assumed in this paper.
[Rubin] talks about "refinement" which proceeds "from the abstract to the specific"; also "Although design rarely proceeds strictly from the abstract to the specific, the refinement notion is useful in describing any design process. In actuality, design proceeds at many levels, moving back and forth to refine details and concepts, and their interfaces." This is design by attacking the greatest uncertainty. Although some people may be working in parallel, collectively they should be working on the most uncertain aspects of the design, i.e. where the greatest risk is. When the uncertainty, or risk, is reduced to zero, then the design is completed.
There is also a good discussion of design as the process of constraint satisfaction. It further describes top-down vs. bottom-up, and indicates they are almost always mixed in a real design. Many people have realized that design is always a mixture. In this paper we are making the point that behavioral descriptions of complicated implementations are not worth the effort; with sufficiently abstract structural descriptions, and sufficiently efficient means of simulating them, it is better to go straight to the structural description.
[Sequin] talks about design proceeding both top-down and bottom-up, and then meeting in the middle; "Yo-Yo" iteration until "the optimal path linking architecture to technology has been found." He also talks about the CAD system virtuoso: someone who can move with great expertise through the maze of design options, given a set of tools. He also mentions the designer as tool builder and draws a development spiral. Both of these apply to what we have done here. You must be a virtuoso to use our tools effectively, and the tools were mostly built by designers, except for portions which were well understood at the beginning.
[Trimberger] suggests that the complete description of the design in a behavioral form is required, although he tempers his statement with suggestions that large variances from this design flow are normal.
[Mead et al.] talk about "top down" design with the caveat that the architect must have a full understanding of lower levels of the hierarchy. However, their discussion hardly mentions tools to support a methodology. In the intervening decade since this work was published, tools have become a central part of any methodology, while the particular implementation technology has left center stage.
[Glasser et al.] expand on the notion that a holistic view of design is necessary for significant invention, while abstractions which compartmentalize our thinking are the basis of any methodology. Unfortunately, they treat the description of methodology purely by describing an example. The central role of CAD tools is missing from their presentation.
This complements Sequin's view of the CAD virtuoso. As in music, the virtuoso plays his artistic notions within a style of music known to the performers. Within the capabilities of the instruments available, the CAD performer plays with his architectural notions within the constraints of the style his group is used to, and the tools available to him. Sometimes a new notation, or a new instrument, is needed, but in general, one plays out a design in the context of existing styles and tools.
[Niessen] describes criteria for VLSI design methodologies. These include:
a) It must provide complexity control, such that reasonable confidence in the correctness of designed circuits is possible.
b) The method must cover the whole design trajectory.
c) An efficient utilization of technological possibilities should be provided.
d) A considerable increase in design productivity should result.
e) It must enable the creation of efficient CAD tools.
He also goes on to say that the primary concepts of design methodology are abstraction, repetition, and the use of past experience. His article is a prescription for CAD tool needs, while this paper is a report on results once the CAD tools have been built.
[Trimberger et al.] talk about a design methodology focused upon custom integrated circuits. Surprisingly, timing analysis tools are ignored; the focus is upon functionality and the space aspects of layout. They make statements such as "The functional and geometrical hierarchies must match because the logical connection, the interface between functional units, is precisely along the geometrical interfaces." This may be true in their methodology, and it is certainly (mostly) true in ours, but it is not true as a universal statement.
The problem with most of these design flows is that they assume some real split between behavioral description and structural description during design. There is none.
[Newton et al.] refer to parameterized circuit modules. These do not capture design at the same level of abstraction as the parameterized logic library does; they are directed towards array structures rather than arbitrary logic networks.
7.0 Conclusion
The top down methodology doesn't work well because it is cumbersome, in the sense of having to prepare far more description than is required to fabricate manufacturing tooling, and it delays technology constraint satisfaction too long. The ASIC, or bottom up, methodology doesn't work well because it is cumbersome, in the sense of having to build many low level details manually, and delays application constraint satisfaction too long.
In this paper we have described a methodology that compromises between the top down and bottom up methodologies so that both the application, and technology constraints, can be satisfied while the designer is preparing a minimal source description. This source description is minimal in that it can be automatically translated into the tooling specifications for fabricating the design, and it makes use of design knowledge encapsulated as procedural descriptions. The combination of simultaneous satisfaction of technological and application constraints, and minimal irredundant source capture leads to minimal design time and manpower.
To support this conclusion we have described the methodology of a completed design, and the set of tools which enabled this methodology, then compared the amount of work required for this methodology against the work required by other methodologies. Unfortunately, in the costly and mathematics-free realm of system design, no stronger evidence can be offered.
References
[Barth et al.]
R. Barth, B. Serlet, and P. Sindhu, ``Parameterized Schematics,'' 25th ACM/IEEE Design Automation Conference, June 1988.
[Glasser et al.]
L. Glasser and D. Dobberpuhl, The Design and Analysis of VLSI Circuits, Addison-Wesley Publishing Company, 1985.
[Mayo]
R. Mayo, ``Mocha Chip: A System for the Graphical Design of VLSI Module Generators,'' ICCAD 86, November 1986.
[Mead et al.]
C. Mead and L. Conway, Introduction to VLSI Systems, Addison-Wesley Publishing Company, 1980.
[Newton et al.]
A. Newton, D. Pederson, A. Sangiovanni-Vincentelli, and C. Sequin, ``Design Aids for VLSI: The Berkeley Perspective,'' IEEE Transactions on Circuits and Systems, vol. CAS-28, no. 7, July 1981.
[Niessen]
C. Niessen, ``Hierarchical Design Methodologies and Tools for VLSI Chips,'' Proceedings of the IEEE, vol. 71, January 1983.
[QuickTurn]
Quickturn Systems Inc., product bulletin, 1988.
[Rubin]
S. Rubin, Computer Aids for VLSI Design, Addison-Wesley Publishing Company, 1987.
[Sequin]
C. Sequin, ``VLSI Design Strategies,'' in VLSI CAD Tools and Applications, W. Fichtner and M. Morf, eds., Kluwer Academic Publishers, 1987.
[Trimberger et al.]
S. Trimberger, J. Rowson, J. Gray, and C. Lang, ``A Structured Design Methodology and Associated Software Tools,'' IEEE Transactions on Circuits and Systems, vol. CAS-28, no. 7, July 1981.
[Trimberger]
S. Trimberger, An Introduction to CAD for VLSI, Kluwer Academic Publishers, 1987.
[Weste et al.]
N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Addison-Wesley Publishing Company, 1985.