Symbolic Analysis for Automatic Speech Recognition

A strategic shift in the formulation of the continuous speech recognition problem

We are developing a model for the recognition of ordinary, continuous speech. The two main obstacles to unrestricted speech recognition have been the variability and the productivity of human speech. Current systems treat variability in the shape of the speech signal as distortion, regardless of the source. There are four main sources of variability which we attempt to factor out in our model. Dialect and individual differences give rise to diverse pronunciations of the same utterance, and create a problem of speaker variation. Words spoken in isolation bear only limited resemblance to words in fluent sentences, and may be hard to segment; similarly, the determination of allophone "boundaries" is rarely the same in two pronunciations of the same word. This problem can be referred to as one of segmentation. Metrical factors, along with rate, create a serious problem of timing variation. A final source of variation in the speech signal is the presence of both environmental and speaker-produced noise. These are the sources of variability in the signal that various technologies attempt to handle by means of normalization, searches for invariant acoustic cues, or template matching.

The model we are developing treats productivity, as well as variability in human speech, as the norm. Current speech systems are not designed for language productivity. It is impractical or impossible to build a system based on the usual design of simple lexical look-up on a finite word list for languages with rich morphologies (e.g. Finnish), where a single verb may exhibit several thousand variants created by processes of inflection.

Our shift in perspective on the problems of variability and productivity carries ramifications for both signal processing and linguistic processing. Speech inputs are categorized qualitatively with labels that are based solely on acoustic events and that vary whenever the speech signal varies, even if a human listener would perceive the same word being spoken. This language-independent, symbolic representation of the signal is then translated into phonetic terms and processed linguistically. The representations of the physical signal allow us to filter out language-like noise and, together with an active chart parser designed to cope with ambiguity, to compute with language-specific constraints from a language-independent base.

Thus our model cuts up the recognition problem in a novel way. We label the sampled signal in a descriptive, language-independent fashion, a strategic shift that provides a new mechanism for handling variability. This shift of course demands added power from other parts of the model, which is provided by phonetic and phonological grammars and processors. Figure 1 lays out the overall topology of the model. We will describe below the deterministic analysis performed at the lowest level using a finite label set, which enables us to ignore noise; the language-specific constraints used to build various alternative interpretations of the symbolic string; and the way in which ambiguities are resolved by taking various phonological constraints into account. But first we will sketch the broad outlines of six classes of speech recognition systems we have found useful to consider in the process of designing our model.
1. Overview of previous recognition efforts

Previous speech recognition systems have varied in the richness of their signal processing and linguistic processing capabilities. Representative points on this engineering/linguistic scale are:

- Engineering and statistical approaches:
  DTW: Dynamic time warping of stored speech [1].
  HMM: Probabilistic hidden Markov models [2], [3].
- Mixed phonetic/engineering approaches:
  NBM: Network-based models [4], [5].
  FBM: Feature-based models [6].
- Linguistic approaches:
  DEM: Dictionary encoding models [7], [8].
  LPM: Linguistic parsing models [9].

Engineering and statistically-based models have used dynamic time warping and probabilistic hidden Markov algorithms. In DTW paradigms, whole-word prototypes are stored and inputs are matched using a prechosen distortion measure to resolve questions of scaling. When the unknown input is time-aligned with a reference prototype, the ensuing discrimination may be thought of as recognition by least distortion. A finite-state HMM consists of a state transition probability matrix, a set of output probability distributions, and an initial state probability vector; HMMs usually do not involve time-alignment. HMMs are primarily constructed out of training data, although their states are usually initially specified by linguists and may consist of allophones, diphones, or triphones (one-, two-, or three-segment clusters). A system might have settable parameters which depend on the left and right phonetic context for each allophone. Each path through a graph representation of such a system has an associated probability, and the result is basically recognition by maximum probability.

Dynamic time warping systems and Markov models operate only on restricted inputs. These are limited either in the number of isolated words recognized or in the specified sequence of words permitted in continuous speech. Based on the evidence of the best current results, unrestricted speech recognition with these methods does not seem to be computationally feasible. These approaches have never been used to recognize unrestricted, continuous speech, which would involve dynamic time warping against minimally hundreds of thousands of word prototypes, or massive training of Markov models. The difficulty is due to all the sources of variability in the speech signal, from noise and rate to dialect differences.
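As a concrete reference point for the DTW paradigm sketched above, the following is a minimal dynamic-programming alignment yielding recognition by least distortion. The one-dimensional "frames", the toy prototypes, and the absolute-difference distortion measure are placeholders for illustration only, and are not drawn from any of the cited systems.

```python
# Minimal sketch of recognition by least distortion via dynamic time warping.
# The toy prototypes and the frame-level distance below are placeholders.

def dtw_distance(input_frames, prototype_frames, dist):
    """Accumulated distortion of the best time alignment of two frame sequences."""
    n, m = len(input_frames), len(prototype_frames)
    INF = float("inf")
    # cost[i][j]: best accumulated distortion aligning the first i input frames
    # with the first j prototype frames.
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(input_frames[i - 1], prototype_frames[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],       # stretch the prototype
                                 cost[i][j - 1],       # compress the prototype
                                 cost[i - 1][j - 1])   # advance both
    return cost[n][m]

def recognize(input_frames, prototypes, dist):
    """Return the stored word whose prototype aligns with least distortion."""
    return min(prototypes, key=lambda w: dtw_distance(input_frames, prototypes[w], dist))

# Toy usage with one-dimensional "frames" and absolute difference as distortion.
protos = {"yes": [1, 5, 5, 2], "no": [1, 2, 7, 7, 3]}
print(recognize([1, 5, 5, 5, 2], protos, lambda a, b: abs(a - b)))   # -> yes
```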
Network models are based on finding a path through a finite-state pronunciation network whose arcs may model some kind of acoustic-phonetic segment. A set of acoustic pattern matchers, which can be based on vector-quantized LPC spectra, operates at the level of individual spectral analysis frames, or over groups of these. Various distance metrics have been proposed to rate the fit of the match. A dynamic programming search algorithm finds templates to match against in one or more codebooks. Recognition may be viewed as minimum-cost traversal through the network and, unlike HMM recognition, is not probabilistic. Again, these systems perform well only on restricted inputs. Variability is minimized by limiting vocabulary size (often to lexicons of fewer than thirty items), by requiring speakers to pause between each word, or by limiting recognition to the same speakers the system is trained on.

Dictionary encoding models exploit the useful concept of underspecification. In one implementation [7], the lexicon is encoded using broad phonetic categories. The idea is basically one of delayed binding, and stems from the observation that there is no reason to demand precise identification of a sound if partial identification of a string of sounds yields a word candidate. DEM imposes a linguistic categorization upon the signal in finding possible segment boundaries, and uses constraints from the dictionary to achieve complete identification. This strategy reduces the search space in the recognition problem, but is admittedly tailored to the problem of isolated word recognition, since the theory does not incorporate phonological knowledge sources for disambiguating possible word boundaries.

In contrast, linguistic parsing models cut down the search space and are designed to handle continuous speech. While in most recognition systems the explicit source of constraint is at the level of the word, in LPMs generalizations stated over syllables or other units may augment lexical constraints. These additional constraints help in finding word boundaries, and are not limited to finite word lists. One example of the use of additional regularities is work on syllable-based constraints in an implementation which parses an already-labeled allophonic string into words [9]. In contrast to recent LPMs, early network models explored syntactic constraints [4]. Such constraints over-determine ordinary, casual speech, which is characterized by the presence of semi-grammatical sentences (with false starts, unfinished phrases, etc.).

Current DEM and LPM models are hard to evaluate in that they abstract away from the analysis of the actual waveform, and operate either on partial or complete allophones taken from sentences transcribed by human listeners. To capture this simulated performance in a fully automatic system, they await better signal processing at the front end, and a solution to the segmentation problem. While those who work on the signal-processing/statistical systems described above publish recognition results of 90% or better for recognition of under fifteen words in continuous speech, or for a large set of words pronounced in isolation, DEM and LPM researchers generally do not report recognition statistics. An exception is certain work on feature-based models, for which valuable statistics are published on two- or three-way distinctions (distinguishing p/b pairs, for example). The search for invariant acoustic cues is interesting, and although FBMs cannot handle distorted or noisy speech in principle, research on these models continues to provide new information about spoken language.

In summary, the more linguistically-based models await a better signal processing component, and the template/statistical models await methods that permit them to incorporate more knowledge regarding the possible utterances of a language. Nonetheless, we employ certain aspects of the systems just described in the model we are developing. Diagrams 2a-c show an evolution of the formulation of the speech recognition problem, starting with mainstream work in the seventies, continuing through recent work from MIT, and finally illustrating several shifts of perspective in our model.
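Returning to the delayed-binding idea behind dictionary encoding models, the sketch below indexes a lexicon by broad phonetic categories so that partial identification of a sound string already retrieves word candidates. The class inventory and the tiny lexicon are illustrative assumptions, not taken from [7].

```python
# Sketch of delayed binding: a lexicon encoded in broad phonetic categories.
# The class inventory and lexicon below are illustrative, not from [7].

from collections import defaultdict

BROAD_CLASS = {
    "p": "STOP", "t": "STOP", "k": "STOP", "b": "STOP", "d": "STOP", "g": "STOP",
    "s": "FRIC", "z": "FRIC", "f": "FRIC", "v": "FRIC",
    "m": "NASAL", "n": "NASAL",
    "a": "VOWEL", "e": "VOWEL", "i": "VOWEL", "o": "VOWEL", "u": "VOWEL",
}

def broad_encode(phones):
    """Reduce a phone string to its broad-class shape."""
    return tuple(BROAD_CLASS[p] for p in phones)

lexicon = {"pat": ["p", "a", "t"], "bad": ["b", "a", "d"], "sat": ["s", "a", "t"]}
index = defaultdict(list)
for word, phones in lexicon.items():
    index[broad_encode(phones)].append(word)

# Partial identification (broad classes only) already narrows the candidates;
# precise identification of each sound is deferred.
print(index[("STOP", "VOWEL", "STOP")])   # -> ['pat', 'bad']
print(index[("FRIC", "VOWEL", "STOP")])   # -> ['sat']
```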
What we believe to be the way around the problem of variability, and what is unique in our model, is its reformulation of the recognition task, its incorporation of constraints from phonological constituents (from the foot to the sub-segment), its particular use of symbolic representations of physical signals, and the way in which we have compartmentalized knowledge into language-independent and language-specific modules.

2. Qualitative speech analysis and phonological constraints

In our approach, the sampled signal is labeled in a qualitative, language-independent way. The labels depend on the discrimination capacity of the front end. Figure 3 shows representative outputs of a Huberman-Hogg qualitative labeling procedure based on a digitized representation of the word "of." The spectrogram is modified to a scale similar to the Bark scale, and the dimensions are time vs. frequency, with intensity indicated by the gray scale. We will refer to the outputs of the front end as acoustic labels.

Our approach handles variability in human speech by both descriptive acoustic labeling and phonological parsing methods. The speech sequence is not warped to match a stored prototype, or submitted to a best-fit procedure using language-specific templates. It is labeled in a qualitative way, and then analyzed by an active chart parser using extremely low-level but language-specific constraints, as well as higher-level constraints. This decomposition of the problem permits a representation that is stable over utterance situations, and provides constraints that handle some of the difficulties associated with partially obscured or "incomplete" information. The organization of the model is designed to account for the various sources of variability:

- Presence of "noise"

There are at least three types of noise a realistic system must handle. We omit discussion of the first variety, ambient or channel noise, which can be filtered out with standard techniques. Assuming a fairly clean signal, our model is designed to cope with language-like noise as well as extraneous sounds by means of a mapping to a phonetic lexicon and by the particular design of the front end. In one mode, the front end imposes labels on arbitrary time slices from a non-linguistic, finite set. In the simplest version, configurations of spectral energy that do not fall into the fixed bin set can simply be discarded. In more interesting modes, they are categorized as unknown noise and stored until the system learns to partition them into further categories. This set of descriptive, acoustic labels is mapped to a smaller set of linguistic labels. Some labels carry linguistic significance in some languages but are ignored as extra-linguistic in others. The language-specific information regarding the inventory of meaningful labels is stored in a phonetic lexicon. Lip smacks and clicks are good examples of acoustic events which have no match in the phonetic lexicon of English, but which must be processed for certain African languages (e.g. Hottentot, Zulu, Xhosa). This modular analysis of the presence of noise vs. possible language sounds is interesting from a theoretical perspective, particularly since the complete inventory of allophonic speech sounds for any given language has never been documented.
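The mapping from descriptive acoustic labels to the smaller, language-specific set can be pictured as below. All label names and the two miniature phonetic lexica are hypothetical placeholders, used only to show how non-matching events are binned as unknown noise.

```python
# Schematic mapping from acoustic labels to language-specific phonetic labels.
# All label names and both miniature lexica are hypothetical placeholders.

PHONETIC_LEXICON = {
    "english": {"BURST": "stop-release", "FRICATION": "fricative",
                "VOICEBAR": "voicing", "NASAL-MURMUR": "nasal"},
    "zulu":    {"BURST": "stop-release", "FRICATION": "fricative",
                "VOICEBAR": "voicing", "NASAL-MURMUR": "nasal",
                "CLICK": "click"},   # clicks are linguistically meaningful here
}

def map_labels(acoustic_labels, language, unknown_bin):
    """Translate acoustic labels into phonetic ones; bin the rest as noise."""
    phonetic = []
    for label in acoustic_labels:
        entry = PHONETIC_LEXICON[language].get(label)
        if entry is not None:
            phonetic.append(entry)
        else:
            unknown_bin.append(label)   # stored until finer categories are learned
    return phonetic

noise = []
print(map_labels(["CLICK", "BURST", "LIP-SMACK"], "english", noise))  # ['stop-release']
print(noise)                                                          # ['CLICK', 'LIP-SMACK']
print(map_labels(["CLICK", "BURST"], "zulu", []))                     # ['click', 'stop-release']
```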
Moreover, the modular treatment is also interesting when only a crude division of language-particular vs. general language sounds has been developed, since the ability to label the latter provides a ready-made foundation when moving on to the recognition of new languages.

- Timing variation

As we mentioned, some outputs of the front end may be discarded as meaningless in a given language. We wish to augment the machine at this level so that it can perform not only a mapping to the phonetic lexicon but also a rudimentary kind of counting, in order to use duration cues under variable speech rate (exploiting, for instance, the fact that speakers do not usually change rate in the middle of a short utterance). It is difficult to recover temporal information in dynamic time warping models, and it is hard to capture duration generalizations in systems that use hidden Markov models. It is also well known that duration is helpful in segment identification, both for distinguishing similar segments and for the formidable problem of determining how many allophones an utterance contains. In designing a way to keep track of the number of labels the front end provides, and exploiting this source of constraint, we wish to explore the labeller-parser interface as a two-tape Kaplan-Kay/Koskenniemi [12] machine. This is one possible way to use timing information, which is governed by environmental factors, including speech rate, and by metrical constraints (like stress).

- Location of boundaries

The acoustic labels are parsed into linguistic objects using allophonic and higher-level grammatical constraints. Unlike the acoustic labeling, the parsing is language-specific. The speech sequence is not subjected to an explicit segmentation procedure; segments emerge in the well-formed parse. Syllabic grammars also impose language-specific constraints. However, these are not as stringent as one might expect, due to the various permissible lenition ("weakening") processes such as vowel deletion in unstressed syllables. On the other hand, additional phonological constraints allow us to hypothesize words, in particular language-specific sequencing constraints on stress units consisting of strings of syllables (foot grammars). Constraints of this type have not been reported outside the theoretical phonological literature, and to our knowledge have never been implemented outside our system. Although the sets of grammars are hierarchical in nature, mixed categories appear by design on the same level. A given level might include, for example, a fully specified allophone followed by a vague description like "nasal." Phonological constraints allow us to hypothesize words from underspecified tree structures. If we find, for instance, that some kind of nasal precedes a [g] in certain positions, foot-structure constraints tell us that the nasal must be velar, because it is the only possible foot-medial nasal that precedes a [g] (as in the word finger) [10], [11]. This is the concept of underspecification brought into the grammar. We wish to underscore the importance of allophonic constraints provided by metrical constituents, rather than relying solely on mapping to the word to provide information regarding segments obscured by noise.
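As a toy rendering of the underspecification example above, the function below fills in the place of articulation of a nasal that is known only to be "some kind of nasal" when it stands foot-medially before [g]. The ASCII segment symbols and the list encoding are illustrative assumptions, not the system's actual representation.

```python
# Toy constraint: an underspecified nasal ("N") that is foot-medial and
# precedes [g] must be velar ("ng"), as in "finger".  The segment symbols
# and the list encoding are illustrative assumptions.

def resolve_nasal(segments, foot_medial_positions):
    """Fill in place of articulation where foot structure leaves one choice."""
    resolved = list(segments)
    for i, seg in enumerate(segments):
        if seg == "N" and i in foot_medial_positions and i + 1 < len(segments):
            if segments[i + 1] == "g":
                resolved[i] = "ng"   # only a velar nasal precedes [g] foot-medially
    return resolved

# "fi N g er" with the nasal underspecified and foot-medial:
print(resolve_nasal(["f", "i", "N", "g", "er"], {2}))   # -> ['f', 'i', 'ng', 'g', 'er']
```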
There are three reasons for relying on phonological constituents as an augmentation to the information available in the dictionary component. The first is the problem of segmentation; the second is that phonological categories provide interesting sequencing constraints; the third is that the appearance of an acoustic event depends on where a segment falls in a phonological constituent. For example, a [p] will have a burst or not depending on its position in a foot. Phonological constraints help in overcoming the problem of obscured information in ordinary, continuous speech. Parsing with phonetic and phonological grammars also permits segments, metrical constituents, and words to emerge in the well-formed parse, and bypasses the search for segment boundaries.

- Language productivity

Another reason for the inclusion of phonological grammars is apparent in cross-linguistic studies. We must expect novel words; the inventory of lexical items is not the "bin of parts" we find in robotic vision tasks. In morphologically rich languages like Finnish we would not directly match our phonological constituents against a finite lexicon; rather, we would include a morphological parser as part of the model [12].

- Speaker variation

We employ a radically low level of symbolic representation in this model, a more traditional one that is essentially syllabic, and a higher level consisting of strings of syllables. The incorporation of the last reflects our belief that listeners attend to the rhythmic structure of speech in finding word boundaries, and reflects our finding that this level provides some interesting constraints for decoding obscured information. One further practical consideration is that although whole syllables can drop out in ordinary, casual speech in some dialects and speech styles, entire feet do not, unless a complete word is omitted. The parts of the model described so far will handle some aspects of the problem of speaker variation, including some segmental differences and changes in speaking rate, but other aspects involve setting parameters at the front end (e.g. whether the speaker is an adult male, an adult female, or a child). The signal parser can be spliced onto any number of good signal-processing front ends. However, our experience in analyzing DFT spectrograms and other processed waveforms leads us to conclude that a rethinking in this domain will lead to interesting results as well. At present, human beings are better at reading spectrograms than machines are [13], [14]. Front ends have not been designed to extract the information that is most valuable from the point of view of acoustic parsing, and should be augmented to handle hard problems of speaker variation.

3. Current system and findings

Our wish to couple the model to a front end that is better tailored to the qualitative labeling task, and which exhibits plasticity, fault-tolerance, and the possibility for learning, has led us to an exploration of experimental architectures. Huberman and Hogg's machines, based on a regular array of locally connected elements, exhibit interesting and germane properties. For instance, these machines exhibit plasticity, the ability to classify changed inputs. A certain amount of plasticity is evident in more traditional systems. However, a key difference currently under investigation is that plasticity can be built into the targets, which are dynamic attractors. There is no distinction in principle between the training data and the recognition data: the targets can shift over time in response to the actual input. This extra power has possibilities for domains where inputs vary systematically (as is the case when moving from speaker to speaker or dialect to dialect).
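A schematic illustration of such shifting targets (not the Huberman-Hogg machines themselves) is given below: each class prototype drifts toward the inputs it classifies, so the same data serve for training and for recognition. The toy formant targets and the adaptation rate are invented for the example.

```python
# Schematic illustration of targets that shift in response to the actual
# input (not the Huberman-Hogg architecture itself): each prototype drifts
# toward the inputs it classifies.  The toy formant targets are invented.

def classify_and_adapt(x, prototypes, rate=0.1):
    """Label x with the nearest prototype, then move that prototype toward x."""
    label = min(prototypes,
                key=lambda k: sum((a - p) ** 2 for a, p in zip(x, prototypes[k])))
    prototypes[label] = [p + rate * (a - p) for p, a in zip(prototypes[label], x)]
    return label

# As a new speaker's vowels drift, the targets follow them.
targets = {"i": [300.0, 2300.0], "a": [700.0, 1200.0]}   # toy (F1, F2) targets in Hz
for frame in [[320, 2250], [350, 2200], [380, 2150]]:
    print(classify_and_adapt(frame, targets), targets["i"])
```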
We wish not only to build a robust recognizer but also to understand, first, how to extract appropriate representations of physical signals and, second, how to compute with those representations to extract linguistic meaning using task-particular constraints and on-line learning from previous experience. The modular design of the system is practical in that it permits us to experiment with a number of front ends, and to test the phonetic component on languages other than English.

At this moment, we believe we are uniquely positioned to engage in this exploration. We are investigating optimal parsing design with researchers at the Center for the Study of Language and Information at Stanford, as well as general phonological and speech recognition issues with members of the local research community. In addition, we are able to benefit from Kaplan's and Kay's current work on lexical access and encoding (for a description see [12]). Finally, our computational environment includes access to simulations of the Huberman-Hogg machines, and to the LFG system, in which both preliminary phonetic and high-level phonological components have been successfully implemented.

Specifically, the concept of attractive fixed points, drawn from the field of dynamical systems theory in physics, has been applied to discrete processes to produce computing architectures capable of fault-tolerant pattern classification. These architectures currently consist of arrays of simple local units that operate on integer data received locally from their neighbors. Input to the machine takes place only along the edges, and the computation advances systolically from row to row in step with a global clock. Each processor has an internal state, represented by an integer, which can take only a small set of values depending on a given adaptive rule. The unit determines its local output based on its inputs and its internal state. At each time step, every element receives data values from the adjoining units in one direction and computes its output, which is then passed along to its next neighbors. Huberman and Hogg have recently shown that these architectures can be made to compute in a distributed, deterministic, self-repairing fashion [15]. These machines achieve a certain amount of segmentation of the speech waveform, and perform a discrete recoding of the continuous input. The causal relation between language-independent representations and the physical signal is established in this procedure, which amounts to a qualitative labeling of the inputs.

The qualitative labels are then linguistically processed using a generalized LFG system. The technology that forms the basis for the constraint-based parse was originally implemented for natural language syntax, although the theory behind its development was not restricted to this domain [16]. The design of the Kaplan system reflects the idea of a collection of asynchronous communicating processes that handle various aspects of the recognition problem. Abandoning the familiar linear control strategy, the system effectively reduces the computation space by keeping track of a collection of constraints which minimizes dead ends in the parse. The phonological grammar is a set of transition networks in which labels on directed arcs correspond to sound constituents of various sizes; it derives from several studies of metrical and segmental phonological theory.
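A minimal sketch of a transition network of the kind just described is given below: arcs are labeled with sound constituents, and a sequence is well formed if it traces a path from the start state to a final state. The toy syllable network (optional onset, vowel nucleus, optional coda) is an illustrative assumption, not the system's grammar.

```python
# Minimal transition network over sound constituents.  The toy syllable
# grammar below (optional onset, nucleus, optional coda) is illustrative only.

SYLLABLE_NET = {
    # state -> {arc label: next state}
    "START":   {"CONSONANT": "ONSET", "VOWEL": "NUCLEUS"},
    "ONSET":   {"VOWEL": "NUCLEUS"},
    "NUCLEUS": {"CONSONANT": "CODA"},
    "CODA":    {},
}
FINAL_STATES = {"NUCLEUS", "CODA"}

def accepts(network, finals, labels, state="START"):
    """True if the label sequence traces a path from the start to a final state."""
    for label in labels:
        if label not in network[state]:
            return False
        state = network[state][label]
    return state in finals

print(accepts(SYLLABLE_NET, FINAL_STATES, ["CONSONANT", "VOWEL", "CONSONANT"]))  # True
print(accepts(SYLLABLE_NET, FINAL_STATES, ["CONSONANT", "CONSONANT"]))           # False
```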
The system applies a grammar network to a data structure called a chart, building up new edges and vertices, where each edge corresponds to a well-formed substring and each vertex forms a juncture. The current parser operates both bottom-up and top-down, with the multi-processor framework eliminating redundant computation. The system is both fast and flexible, and should scale up in a straightforward way as more constraints are incorporated.

4. Proposal

We propose a three-year effort in order to make substantial progress on these issues. While at present we have designs and simulations of components that provide solutions to some of the difficulties associated with variability and productivity, we wish to investigate systematically how to improve them, both from the point of view of the scientific model and from an efficiency perspective. In addition, we want to implement our software in a special-purpose chip, which would enable far greater processing speed. The support we seek is for the investigation of scientific questions, including how to develop better adaptive structures for time-varying inputs, whether a two-tape finite-state automaton will suffice to map the array outputs to the parser, how to develop the metrical constraints already in place and add further phonological constraints, and finally how best to exploit the machines we have developed for lexical look-up.

REFERENCES

[1] Rabiner, L. and S. Levinson. Isolated and connected word recognition: theory and selected applications. IEEE Trans. on Communications, COM-29(5), May 1981.
[2] Rabiner, L., S. Levinson, and M. Sondhi. On the application of vector quantization and hidden Markov models to speaker-independent, isolated word recognition. BSTJ 62(4), April 1983.
[3] Makhoul, J., R. Schwartz, Y.-L. Chow, O. Kimball, S. Roucos, and M. Krasner (BBN). Continuous phonetic speech recognition. Presentation at ASA, October 1984.
[4] Lowerre, B. The HARPY Speech Recognition System. Ph.D. dissertation, CMU, 1976.
[5] Bush, M., G. Kopec, and M. Hamilton (Fairchild). Network-based isolated digit recognition using vector quantization. Presentation at ASA, October 1984.
[6] Stevens, K. N. Toward a feature-based model of speech perception. IPO Annual Progress Report 17, 36-37, 1982.
[7] Shipman, D. and V. Zue. Properties of large lexicons: implications for advanced word recognition systems. 1983 IEEE International Conference on Acoustics, Speech and Signal Processing, Paris, France.
[8] Huttenlocher, D. and V. Zue. ICASSP 84.
[9] Church, K. Phrase-Structure Parsing: A Method for Taking Advantage of Allophonic Constraints. Ph.D. dissertation, MIT, 1983 (available from RLE Publications, MIT, Cambridge, MA).
[10] Withgott, M. Segmental Evidence for Phonological Constituents. Ph.D. dissertation, University of Texas at Austin, 1982 (available from University Microfilms, Ann Arbor).
[11] Kiparsky, P. Metrical structure assignment is cyclic. Linguistic Inquiry 10, 421-441.
[12] Koskenniemi, K. Two-level morphology: A general computational model for word-form recognition and production. University of Helsinki Publications, No. 11, 1983.
[13] Greene, B. G., D. B. Pisoni, and T. D. Carrell. Recognition of speech spectrograms. JASA 76(1), 1984.
[14] Shockey, L. and R. Reddy. Quantitative analysis of speech perception: results from transcription of connected speech from unfamiliar languages. Paper presented at the Speech Communication Seminar, Stockholm, Sweden, 1974.
[15] Huberman, B. A. and T. Hogg. Adaptation and Self-Repair in Parallel Computing Structures. Phys. Rev. Lett. 52, 1048-1051, 1984; Hogg, T. and B. A. Huberman. Understanding Biological Computation: Reliable Learning and Recognition. Proc. Natl. Acad. Sci. USA 81, 6871-6875, 1984.
[16] Kaplan, R. A multi-processing approach to natural language. Proc. of the National Computer Conference, pp. 435-440, 1973.