ALPS: A Generator of Static CMOS Layout from Boolean Expressions

Bertrand Serlet
Xerox Palo Alto Research Center
Palo Alto, CA

Abstract: Alps is a layout generator which accepts a set of boolean equations, much as PLA generators do. It makes use of an original tree-structured representation of arbitrary boolean expressions. This representation is usually more compact than the classic disjunctive form, is suitable for fast symbolic manipulation, and maps naturally into silicon. We present here an implementation of Alps which produces static CMOS layout in the cascode switch style. Alps has already been used in custom VLSI chips developed at Xerox PARC. VLSI layout generated by Alps compares favorably in many cases to PLAs and standard cell designs.

Keywords: Boolean algebra, boolean expressions, disjunctive normal form, VLSI, PLA, CMOS, DCVS, cascode, static, automatic layout generation.

1. Introduction

Recent papers [Chen84, Yoffa85] describing the use of Differential Cascode Voltage Switch (DCVS) for generating CMOS layout initiated a series of investigations by the author into possible layout styles for CMOS. In particular, no satisfactory solution was known for large and irregular control structures, usually implemented with PLAs: one would like a simple and compact structure with a reasonable power consumption, compatible with CMOS. Among the styles in widespread use we find precharged or self-timed structures, which are complex and hard to test, and NMOS-like static designs, which use too much power. This paper presents a layout generator which produces cascode-based circuits from a set of boolean expressions. We first summarize different representations used for arbitrary boolean expressions, introduce an original one, compare it with the others, and then discuss some of its properties.
This representation maps easily into two-dimensional layout, in a way similar to the mapping of the disjunctive form into PLAs. We subsequently describe how this mapping is done, along with some of its peculiarities, for example the static character of the layout obtained. We then give a quick overview of the implementation and show some results.

2. Representation of Boolean Expressions

2.1. Conventional representations

Throughout this article, we represent the AND operator (product) by ".", the OR operator (sum) by "+", and the NOT operator (complement, negation) by "~". The exclusive OR is denoted XOR. The constants are "0" (false) and "1" (true). An expression is made of operators, sub-expressions, constants and variables. We will often use the IF operator (multiplexor), which selects one of two sub-expressions. In terms of AND and OR, IF can be defined by the relation:

IF [a, b, c] = a . b + ~a . c

A conventional way to represent expressions is the disjunctive form. Any expression can be viewed as a sum of products, each factor of a product being either an input variable or its complement. The decomposition is unique if each product contains all variables (possibly complemented). Without this condition, the decomposition is far from unique, and many papers describe various techniques for minimizing the number of product terms. This minimized representation is usually compact enough for a large variety of problems, especially if XORs are avoided. It is the representation used for generating PLAs and, to some extent, Weinberger arrays.

Another simple way to represent a function of N variables is to decompose it into 2 partial functions of N-1 variables, with a boolean choice operator between those partial functions. This gives:

F [x1, x2, ..., xn] = x1 . F1 [x2, ..., xn] + ~x1 . 
F0 [x2, ..., xn]

We see that this is exactly the IF operator introduced previously:

F [x1, x2, ..., xn] = IF x1 THEN F1 [x2, ..., xn] ELSE F0 [x2, ..., xn]
F [x1, x2, ..., xn] = IF [x1, F1 [x2, ..., xn], F0 [x2, ..., xn]]

Assuming that an order of the variables has been chosen, we will abbreviate this as F = x . F1 + ~x . F0. This simple representation is called in the literature a Binary Decision Tree (see for example [Akers78]). For that representation, the composition laws are, when F = IF [x, F1, F0] and G = IF [x, G1, G0]:

~F = IF [x, ~F1, ~F0]
F + G = IF [x, F1 + G1, F0 + G0]
F . G = IF [x, F1 . G1, F0 . G0]
XOR [F, G] = IF [x, XOR [F1, G1], XOR [F0, G0]]
[ F = G ] <=> [ ( F1 = G1 ) AND ( F0 = G0 ) ] (uniqueness of the representation)

The operations on binary decision trees are simple. However the representation has a major drawback: its size. The size of any expression of N inputs is 2^N. And size is the most important parameter of a representation, since layout area and computation times usually increase with the size of the representation.

2.2. Alps normal form

The major originality of the Alps representation is to distinguish special cases of the partial sub-expressions F1 and F0. These two sub-expressions can be 0, 1, identical or opposite. That leads us to 9 cases (conditions are to be evaluated sequentially):

F1=F0=1 OR F1=F0=0   [Constant]     => F = 1 OR F = 0
F1=~F0=1 OR F1=~F0=0 [Identity]     => F = x OR F = ~x
F1=1                 [OR]           => F = x + F0
F0=0                 [AND]          => F = x . F1
F0=1                 [ORNOT]        => F = F1 + ~x
F1=0                 [ANDNOT]       => F = ~x . F0
F1=F0                [NoDependence] => F = F1
F1=~F0               [XORNOT]       => F = x . F1 + ~x . ~F1
ELSE                 [IF]           => F = x . F1 + ~x . F0

When this distinction between all the different cases is made, the decomposition is still unique (given an ordering of the inputs), and the composition laws are unaffected. We thus have a normal form (the Alps normal form).
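The Shannon decomposition and the level-wise composition laws above can be sketched in a few lines of Python. This is a toy model with full, unshared trees; all names are ours, not those of the Cedar implementation:

```python
def tree(f, names):
    """Shannon-decompose a boolean predicate f over the listed variable
    names, producing a full binary decision tree: either a bool leaf or
    ('IF', x, F1, F0), meaning x . F1 + ~x . F0."""
    if not names:
        return f({})
    x, rest = names[0], names[1:]
    fix = lambda v: (lambda env: f({**env, x: v}))
    return ('IF', x, tree(fix(True), rest), tree(fix(False), rest))

def OR(F, G):
    """Composition law: F + G = IF [x, F1 + G1, F0 + G0]."""
    if isinstance(F, bool):
        return F or G
    _, x, F1, F0 = F
    _, _, G1, G0 = G
    return ('IF', x, OR(F1, G1), OR(F0, G0))

def evaluate(t, env):
    if isinstance(t, bool):
        return t
    _, x, F1, F0 = t
    return evaluate(F1 if env[x] else F0, env)

# Check the OR computed by the composition law against pointwise OR:
names = ['x1', 'x2']
F = tree(lambda e: e['x1'] and e['x2'], names)   # x1 . x2
G = tree(lambda e: e['x1'] != e['x2'], names)    # XOR [x1, x2]
H = OR(F, G)
for a in (False, True):
    for b in (False, True):
        env = {'x1': a, 'x2': b}
        assert evaluate(H, env) == (evaluate(F, env) or evaluate(G, env))
```

The key point the sketch makes concrete is that the two trees stay synchronized level by level, which is why equality testing reduces to the structural comparison stated in the uniqueness law.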
The size of the representation is no longer 2^N, as in pure binary decision trees, because in all cases apart from [IF], at most one sub-expression is needed. For example, NOR, NAND, AND, OR of N inputs all take a size N, as in the sum-of-products form. However, the sum-of-products form of an N-input XOR is of size 2^N, versus size N for the Alps representation.

The Alps normal form depends on the ordering of the inputs. For example (see figure Permutations): F = IF [x2, XORNOT [x1, G], x1], where G is another expression, can be rewritten when permuting x1 and x2 as: F = IF [x1, OR [x2, G], ANDNOT [x2, G]]. The second expression duplicates the G expression. If no sharing occurs, that is if we do not allow the introduction of an auxiliary variable, the size of F in the second case is the size of F in the first case plus the size of G. This might happen at all levels of the expression tree, and may cause rapid growth of the representation. In order to have an efficient representation, we therefore have to find a good permutation of the inputs. Exhaustive search of all possible permutations is a theoretical answer to that minimization problem. In practice, a local search for better permutations, done by permuting only 2 inputs at a time, seems to be sufficient.

The introduction of auxiliary variables may also bring a reduction in the size of an expression, whatever the representation. Introducing auxiliary variables is a difficult problem: one must find the best auxiliary variable and the best way to use it in factorizing other expressions, all of which assumes that a best function can be found. Some heuristics have been developed specifically for Alps, but their importance in real-life design seems secondary, because in Alps permuting is a more efficient way to minimize the representation. Moreover, auxiliary variables give rise to layout and timing problems. We will not discuss the algorithms used for choosing good auxiliary variables in this paper.
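The nine-way case analysis above can be illustrated on truth tables. In this sketch (our representation, not the Alps data structure) an n-input function is a tuple of 2^n booleans whose first half is F1 (top variable true) and second half is F0, so equality and complement tests are direct comparisons:

```python
def NOT(t):
    """Pointwise complement of a truth table."""
    return tuple(not b for b in t)

def alps_case(table):
    """Classify the decomposition F = x . F1 + ~x . F0 into the nine
    Alps cases, testing the conditions sequentially as in the text."""
    half = len(table) // 2
    F1, F0 = table[:half], table[half:]
    ONE, ZERO = (True,) * half, (False,) * half
    if F1 == F0 and F1 in (ONE, ZERO):      return 'Constant'
    if F1 == NOT(F0) and F1 in (ONE, ZERO): return 'Identity'
    if F1 == ONE:                           return 'OR'       # F = x + F0
    if F0 == ZERO:                          return 'AND'      # F = x . F1
    if F0 == ONE:                           return 'ORNOT'    # F = F1 + ~x
    if F1 == ZERO:                          return 'ANDNOT'   # F = ~x . F0
    if F1 == F0:                            return 'NoDependence'
    if F1 == NOT(F0):                       return 'XORNOT'
    return 'IF'

# Tables are ordered (x1=1,x2=1), (1,0), (0,1), (0,0):
assert alps_case((True, True, True, False)) == 'OR'       # x1 + x2
assert alps_case((True, False, False, False)) == 'AND'    # x1 . x2
assert alps_case((False, True, True, False)) == 'XORNOT'  # XOR [x1, x2]
assert alps_case((True, True, False, False)) == 'Identity'  # x1
```

In all cases but [IF], the classification keeps only one of the two half-tables, which is exactly why the representation escapes the 2^N blow-up of plain binary decision trees.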
Figure Representations shows a comparison between optimized sum-of-products and permuted Alps representations for some examples.

2.3. Generalization

The separation between all special cases is what makes Alps a good representation, and it can be used more than once. The IF gate is a conceptual multiplexor, which may be extended to a whole vector of variables. For example, with two inputs, we can introduce the 2-multiplexor, IF2, defined by:

IF2 [x1, x2, F11, F10, F01, F00] = IF [x1, IF [x2, F11, F10], IF [x2, F01, F00]]

When at least one of the four partial functions is 1 or 0, and/or when at least two of the partial functions are equal or complements of each other, one can distinguish special cases for compacting trees containing IF2. By grouping two inputs at a time, the number of sub-expressions needed is smaller, because duplicates are avoided, as in figure Permutations. The size of the representation of the high bit of an adder, as given in figure Representations, would also drop from 2^(n/2) to n. The same game can be played more than once, and in the extreme case where all P variables are multiplexed, the result is a giant multiplexor isomorphic to the truth table, with size 2^P. This shows the limitations of this generalization: the size of a P-multiplexor grows like 2^P, and the number of special cases to distinguish grows like P!.

Another possible way to generalize the Alps form would be to allow sharing of sub-expressions, that is, to manipulate a DAG instead of a tree. This is the representation proposed by Bryant [Bryant85]. It is slightly more compact than Alps, but the mapping to layout is then more complex, because it is the mapping of a graph into two dimensions instead of the mapping of a tree.

3. Mapping into silicon

The mapping of an Alps expression tree into a two-dimensional plane is easily done by fixing all inputs to run in the same direction (say vertical), and all sub-expressions or expressions in the opposite direction (horizontal).
Conceptually, at each intersection of an expression with an input there is a gate implementing one of the 9 possible cases of node. Figure Mapping gives a general schematic of the mapping. Even though every gate realizes a simple function, a direct implementation in MOS technologies would be prohibitive in area without further enhancement. We are now going to concentrate on CMOS technology, and show that using Differential Cascode Voltage Switch (Cascode) leads to very simple gates (usually 2 transistors per gate) and a fully static design.

3.1. Overview of different CMOS styles

There is not an infinite range of styles in CMOS for realizing combinatorial circuits. Figure CMOS Styles assembles the four basic styles: true CMOS, pseudo-NMOS, precharged and cascode. A precise discussion of those techniques can be found in many books, for example in [Hodges83].

True CMOS gates use as many p-transistors as n-transistors, wasting a lot of area in well separation and in transistor sizes, since for the same resistance p-transistors must be wider than n-transistors. Moreover, the rich connection network between these two necessarily separated tubs increases the area further.

Pseudo-NMOS is a variation which uses a p-transistor to mimic an NMOS pull-up. There is a definite gain in area, but the low-to-high transition is slower and the static power consumption cancels the main advantage of CMOS technology.

Precharged logic requires a clocking scheme, and that means some global control and communication beyond the gate itself. The designer must also be careful about possible races. The inputs must remain steady once the precharge phase is finished. That means that you cannot use the output of a precharged gate as the input of another precharged gate without paying extra attention to timing problems; this leads to the domino style. But precharge cannot be used with non-monotonic functions, such as the XOR function.
Last but not least, precharged gates require a minimum clock rate and must be tested at speed.

Cascode, if used in a regular manner, offers the static quality of true CMOS, but with a smaller area, because there are only 2 p-devices per gate. It has additional nice features such as fast switching times and the free supply of both the output and its complement. But it is only valuable for large or medium-size gates (at least 3 or 4 inputs), and so we describe in the next paragraph a way to group elementary Alps gates into larger units.

3.2. Cascode mapping

In the Cascode style all signals are carried on dual rails, i.e., both the signal and its complement are needed and produced. The mapping of logical values to voltage levels is chosen so as to minimize the number of n-transistors per IF gate: 1 means pulled down to Gnd via an n-transistor, 0 means not pulled down to Gnd. Figure Gates Representation shows the schematic for the IF gate. All gates can be obtained by using the Constant gate (the one generating 1 and 0) and the IF gate, but one can save a transistor in the cases [ORNOT], [AND], [OR] and [ANDNOT], using the fact that:

F = x . F1 + ~x can be rewritten F = F1 + ~x

and similarly for the other cases. The schematic for the ORNOT gate is shown in figure Gates Representation. Layout for this 2-transistor gate can be made very compact, and this is important because most of the gates used in the control part of a chip belong to the {[ORNOT], [AND], [OR], [ANDNOT]} set.

In addition to the 2 pull-down chains, cross-coupled pull-ups are needed to compute the output. Very long pull-down chains made of transistors in series give a delay quadratic in their length. To avoid this, cross-coupled pull-ups are inserted at regular intervals (every 5 or 6 stages is our current choice). The schematic is given in figure Amplification. These cross-coupled devices make the delay truly linear in the number of inputs.
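The quadratic-versus-linear behavior can be illustrated with a toy Elmore-style RC model, assuming unit resistance and capacitance per series transistor (the numbers below are illustrative, not measurements from the paper):

```python
def chain_delay(n):
    """Elmore delay of n identical series RC stages, in units of RC:
    the i-th capacitor is charged through i resistors, so the total
    is 1 + 2 + ... + n = n(n+1)/2, i.e. quadratic in n."""
    return n * (n + 1) // 2

def buffered_delay(n, k):
    """Same chain with a restoring cross-coupled stage every k
    transistors: the quadratic term is bounded by the segment length k,
    so the total grows linearly in n."""
    full, rem = divmod(n, k)
    segments = [k] * full + ([rem] if rem else [])
    return sum(chain_delay(s) for s in segments)

# A 30-transistor chain: quadratic without repeaters, linear with them.
assert chain_delay(30) == 465
assert buffered_delay(30, 5) == 6 * chain_delay(5)  # 6 segments of 15 = 90
```

Doubling n roughly quadruples `chain_delay` but only doubles `buffered_delay`, which is the effect the inserted cross-coupled pull-ups exploit.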
The algorithm used to choose where to put repeaters is slightly better than just dropping them at regular intervals: the interval is a fixed number of used inputs.

3.3. Cascode layout details

Many strategies have been tried to find the most compact way to lay out Alps expressions. This discussion is very technology-dependent, but seems reasonable for any double-metal CMOS technology. Because n-transistors have gates perpendicular to source and drain, it seems best to have inputs run in one direction in poly (stitched with metal2), and outputs run in the other direction in metal (to minimize delays) and diffusion (where transistors are needed). N-wells are inserted at regular intervals for making the cascode amplifiers, which means some extra Vdd and Gnd lines running parallel to the inputs. Two choices may be made for Gnd:
- in n-diffusion alongside the inputs (with the appropriate doubling in metal2);
- in metal alongside the outputs.
The first choice seems to save some area compared to the second one (10%), but requires connecting the diffusion to metal2 from time to time, which is tedious. The second approach was finally taken, and every expression is made of 3 wires: the 2 pull-down chains with Gnd sandwiched in between. This solution has nice features, like the ability to increase the size of transistors without changing any other geometry. It is also very local, in the sense that Gnd is readily available without having to route it. Finally, this solution allows Grid-Power-Routing, i.e., the connection of Gnd and Vdd in a grid instead of a comb, to reduce the effect of power surges when all inputs switch at the same time.

4. Implementation

Alps is not only a theoretical study of feasibility. It is a program, running at PARC on Dorados [Pier83] in Cedar [Teitelman84]. It generates CMOS layouts for use in custom chips developed by the Dragon group. It is being used as an alternative to control PLAs in several Dragon [Monier85] chips.
We now describe some tricky parts of the implementation.

4.1. Fast manipulation of Alps expressions

In order to distinguish between the possible cases, the tests for equality of two expressions and for relative complementarity must be performed efficiently. Because of the latter test, and because NOT operations happen frequently, the best way to implement the NOT operation is lazily, i.e., the negation is coded explicitly into the structure, and all operators use that information.

4.2. Permutations

The aim of this minimization technique is to minimize an evaluation function, usually the number of IF gates, by trying permutations of the order of the inputs. Permuting is done by a function which takes as arguments the set of boolean expressions and the permutation, and which returns the new set of expressions. But only a restriction of this general function is used, since only 2 variables are permuted at a time. For sets of expressions with a dozen inputs, the real time used on a Dorado for permuting 2 variables is less than a second. Three heuristics are currently available, and the user may try them interactively.

The first heuristic only tries permutations between neighbors. It is somewhat analogous to bubble sort. It operates in time linear in the number of inputs. This heuristic is not very efficient, even for finding a local optimum, but allows a first reduction of the size. Typically, when run on a 31-input truth table, it finds 2 permutations to apply out of the set of 30 tried, which reduces the number of IF gates from 6000 to 5000.

The second heuristic tries to find the best input for each position. It starts with the rightmost positions (outputs being assumed to emerge from the right side). It operates in quadratic time, and converges after a few iterations to a good local optimum.
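The first heuristic amounts to a bubble-sort-like pass over the input order. Here is a generic sketch, with the IF-gate count abstracted into a cost function (our stand-in for the Alps evaluation function):

```python
def neighbor_minimize(order, cost):
    """Repeatedly try swapping adjacent inputs, keeping any swap that
    lowers the cost, until no adjacent swap helps (a local optimum)."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            trial = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
            if cost(trial) < cost(order):
                order, improved = trial, True
    return order

# Toy cost: number of out-of-order pairs.  With this cost the heuristic
# behaves exactly like bubble sort and reaches the sorted order.
def inversions(seq):
    return sum(a > b for i, a in enumerate(seq) for b in seq[i + 1:])

assert neighbor_minimize([3, 1, 2], inversions) == [1, 2, 3]
```

As the text notes, with a real IF-gate cost such a local search can stop far from the global optimum, which is why the second and third heuristics exist.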
When run on the same 31-input truth table, it reduces the number of IF gates from 6000 to 200 after half a dozen iterations (each iteration tries 31 * 30 permutations).

The third algorithm is an exhaustive search, with two heuristics for going faster. The first one is to try to put into the rightmost slots all inputs which do not generate IF gates; if such an input is found, trying other permutations for this input cannot yield any reduction in the IF count. The second speed-up consists in trying first the permutations which seem best in terms of the number of IF gates, and pruning the branches of the search tree which cannot give a minimum. It operates fast on small sets of expressions, but it is an unreasonable algorithm for a large number of inputs: it was aborted after a dozen hours of Dorado CPU time on the same 31-input problem.

4.3. Layout generation

The Alps program generates a ChipNDale [Jacobi85] data structure, which can be used by layout-assembly programs or directly manipulated on the screen. All basic mechanisms for generating layout are found in ChipNDale, and so the layout generation part of the program is small (ten pages), although some nice features are built in, such as automatic insertion of poly-metal2 contacts at regular intervals (specified by the user). It is easy to generate layout for other technologies, because ChipNDale is technology-independent, and most of the layout generation program is also technology-independent. With the layout considerations discussed in 3.3, layout for an expression is straightforward, because it is just the two-dimensional layout of a tree. Two optimizations make that layout trickier:
- metal lines are shared between different expressions;
- while laying out, an attempt is made to find neighbors which are common subtrees and share them, to avoid the duplication of identical subtrees.
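The second speed-up is a classic branch-and-bound over the permutation tree. A generic sketch, assuming the cost of a partial ordering never decreases as the ordering is extended (our abstraction of the IF-gate count):

```python
def best_order(inputs, partial_cost):
    """Exhaustive search over input orderings, pruning any branch whose
    partial cost already reaches the best complete cost found so far.
    partial_cost must be monotone: extending a prefix cannot lower it."""
    best_cost, best = float('inf'), None

    def extend(prefix, remaining):
        nonlocal best_cost, best
        c = partial_cost(prefix)
        if prefix and c >= best_cost:
            return                      # prune: this branch cannot win
        if not remaining:
            best_cost, best = c, prefix
            return
        for i, x in enumerate(remaining):
            extend(prefix + [x], remaining[:i] + remaining[i + 1:])

    extend([], list(inputs))
    return best

# Toy monotone cost: inversions of the prefix; the best order is sorted.
def inversions(seq):
    return sum(a > b for i, a in enumerate(seq) for b in seq[i + 1:])

assert best_order([2, 3, 1], inversions) == [1, 2, 3]
```

The pruning preserves exactness, but the search space is still N! in the worst case, consistent with the run that was aborted after a dozen hours on the 31-input problem.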
This generalizes the sharing done for XORNOT, but avoids the problem of mapping a graph into a plane, required with Bryant's representation [Bryant85].

The geometry generated by Alps is especially convenient for feeding outputs back as inputs after some processing. This allows an easy implementation of auxiliary variables and latched outputs: finite-state machines can be implemented directly as Alps structures. Figures Amplification and BackFeeding show the schematics for auxiliary variables and latches. When using latches, one encounters the problem of not losing state at low clock rates. Let us assume that we have two non-overlapping clocks, PhiA and PhiB. If both clocks are low for a long time, as in slow testing, charge leakage will pull both complementary outputs to Vdd. A trickle charge provided by two n-transistors, as in figure AntiBias Generation, compensates for this leakage and preserves the state.

5. Results

5.1. Performance of the layout

It is always difficult to give figures about program-generated layouts, since they are both technology-dependent and problem-dependent. But we believe our CMOS technology is a relatively standard double-metal technology, and all the following numbers refer to real-life circuits. On toy circuits such as counters, decoders or multipliers, results are much less consistent than on our set of Dragon parts: a controller for a map cache, a truth table for a memory controller, and several truth tables from the control of a 32-bit microprocessor.

The area comparison with PLAs is in favor of Alps for small or medium-size problems (between 10 and 20 inputs and between 10 and 20 outputs). For large problems, Alps-generated layouts seem to be around 50% larger. Three different factors compete here. First, the Alps representation is better factored, which tends to make the layout smaller. Second, it is more general and uses different cells, while PLAs are made of the same two cells, which allows some layout optimizations.
Third, cascode-style layout is a priori twice as large as a similar precharged layout, due to the presence of both complemented signals.

For speed, it is harder to make a comparison between precharged CMOS PLAs and Alps, but if the precharge time is included, Alps is much faster. Our Thyme simulations (Thyme is a Spice-like circuit simulator developed at PARC) indicate a 0.6 ns delay per input, if repeating of signals is done every 4 or 5 inputs.

5.2. Performance of the generator

The program is composed of two parts: the boolean manipulator and the layout generator. Currently half of the code is for debugging, gathering statistics, or trying different heuristics for choosing the right auxiliary variables. An interactive tool for playing with those heuristics has also been built. The current version comprises 5 interfaces and their implementations, roughly 50 pages of code. For medium-size PLAs (10-20 inputs, 10-20 outputs), the elapsed time is a few seconds on a Dorado, excluding the time spent finding a good permutation of the inputs. The speed of the program was not a show-stopper for interactive use, so the system has not been optimized and could run much faster.

6. Directions for future work

All described features have been implemented. Future avenues of research include:
- Generation of other styles of CMOS layout (precharged, true CMOS) using the same representation. In some cases they might be better adapted to the problem the chip designer is trying to solve.
For example, one might want the automatic generation of a small cell in true CMOS, or a precharged version of a decoder;
- Generation of layout for other technologies;
- Improvement of the heuristics for introducing auxiliary variables, and for permuting;
- Acceptance of more general topological constraints, such as given positions for the outputs, or a partial ordering on the inputs;
- Layout optimizations, such as the sharing of partially used columns.

About 6 months were spent from the original idea to a complete system with serious users. Alps was developed by the author alone, and half of that time was devoted to the development of PatchWork (our silicon assembler) and other ChipNDale [Jacobi85] tools.

7. Summary

We have presented a new layout generator which uses a novel representation of boolean expressions. This representation is in general more compact than the sum-of-products form, and facilitates fast symbolic manipulation. Our experience is that the Alps representation is a data structure universal enough to be used whenever one needs symbolic manipulation of boolean expressions. The mapping of Alps structures into layout is straightforward, particularly in the context of CMOS Cascode. The generated cells are comparable in area and speed to PLAs, but offer the advantage of being completely static. Alps is currently used to generate the control parts of several large chips for a custom multiprocessor project, the Dragon.

Alps is not only an alternative to PLAs. Alps layout flexibility can be used by the designer for generating small combinatorial cells, and the combination of Alps and routing might be an alternative to standard cell systems.

8. Acknowledgements

I owe special thanks to Louis Monier, for his encouragement during the realization of Alps, and for his help in writing this paper. I also want to thank Pradeep Sindhu, who courageously used the system as an alpha user.

9.
Bibliography

[Akers78]: Sheldon Akers, "Binary Decision Diagrams", IEEE Transactions on Computers, Vol. C-27, No. 6, June 1978, pp. 509-516.
[Bryant85]: Randal Bryant, "Symbolic Manipulation of Boolean Functions Using a Graphical Representation", Proceedings of the 22nd DAC, June 1985, pp. 688-694.
[Chen84]: Chih-Liang Chen & Ralph Otten, "Considerations for Implementing CMOS Processors", Proceedings of the IEEE International Conference on Computer Design, October 1984, pp. 48-53.
[Hodges83]: David Hodges & Horace Jackson, Analysis and Design of Digital Integrated Circuits, McGraw-Hill Series in Electrical Engineering, ISBN 0-07-029153-5.
[Jacobi85]: Christian Jacobi & Kim Rachmeler, "ChipNDale: An Interactive Editor for VLSI Designs", Internal Xerox Documentation, July 1985.
[Monier85]: Louis Monier & Pradeep Sindhu, "The Architecture of the Dragon", Proceedings of the IEEE 1985 COMPCON Spring, pp. 118-121.
[Pier83]: Ken Pier, "A Retrospective on the Dorado, A High-Performance Personal Computer", 10th Annual International Symposium on Computer Architecture, December 1983, pp. 252-269.
[Teitelman84]: Warren Teitelman, "A Tour Through Cedar", IEEE Software, Vol. 1, No. 2, April 1984, pp. 44-73.
[Yoffa85]: Ellen Yoffa & Peter Hauge, "ACORN: A Local Customization Approach to DCVS Physical Design", Proceedings of the 22nd DAC, June 1985, pp. 32-38.