Managing Stored Voice in the Etherphone System

Douglas B. Terry
Daniel C. Swinehart
Computer Science Laboratory
Xerox Palo Alto Research Center

Abstract: The Etherphone™ system is a prototype developed at the Xerox Palo Alto Research Center to explore the integration of voice into a distributed personal computing environment. An important component of this voice system is support for the recording, editing, and playing of stored voice. To facilitate sharing, voice is stored on a special voice file server that is accessible via the local internet. Etherphones transfer encrypted voice to and from the voice file server over an Ethernet. A voice manager provides operations for editing a voice passage once it has been recorded. Rather than rearranging the contents of voice files when they are edited, the voice manager simply builds a data structure to represent the edited voice. This data structure, called a voice rope, consists of a list of [VFID, interval] pairs stored in a server database. Clients refer to voice ropes solely by reference. Such references may be embedded in multimedia documents, for example. A modified style of reference counts, called interests, enables unwanted voice ropes to be garbage collected. These interests are grouped into classes and can be invalidated according to a class-specific algorithm that runs periodically.

Outline

Introduction
- why voice?
- goals for voice storage
- voice vs. text
- Etherphone system environment
Voice File Server
- Jukebox vs. voice streams
- why different from conventional file server
Voice Object Manager
- editing operations
- structure of voice objects
- storage of voice objects
Garbage Collection
- goals - outline the problem
- interests
- classes
Experience and Evaluation of Design Decisions
- good choices vs. bad choices
- experience with a voice editing interface
- measurements of voice usage
- performance evaluations
Conclusions
- review of results and basic assumptions
- applicability of results to other media, e.g. video and images
Acknowledgements
References

Notes

Emphasize that the paper presents a real system, not just a paper design.

New ideas:
* storing voice in a server - not really new, how does it differ from text?
* storing voice in encrypted form - maybe new
* editing voice by building data structures - new, do some text editors do this?
* garbage collection using reference counts - not new
* garbage collection based on classes of interests - new
* including voice in documents by reference - not really new, at least hypertext systems do this
* registration of interests by various software - maybe new
* measurements of voice usage patterns - probably new
We must carefully look for published papers to verify that the ideas above are really new. (I don't think that there are too many papers on the subject.) We must contrast voice with text to show that the management issues are different.

Problem being solved: integrating voice into a system/programming environment
- in particular, supporting multimedia documents containing voice
- existing solutions for text do not work since voice has different characteristics
- the problem is definitely related to the Etherphone environment (should consider whether the solution applies to other environments such as time-sharing systems or workstations with vocoders)
The paper must include comparisons with previous and related work.

Experience and usage results:
- how many people are using the system?
- what ideas work and which didn't? What did we learn?
- the design is implementable
- the system really works
- garbage collection in an open system is a hard problem
- database and file server access is fast enough
- a complex editing interface can be built above our basic (simple) operations
- for increased performance, we allowed voice rope intervals to be specified in editing and on playback
- a special Jukebox may no longer be necessary (can use Hagmann's FBFS?)

What should the reader learn? How generally applicable are our lessons?
- go back and look at assumptions
- these results apply to other media such as images or video

Alternative design choices?
- storing voice directly in the document
- storing the voice object directly in the document
- storing voice files in decrypted form - would this have been done if Dorados had encryption hardware?
- making voice files mutable
- making voice ropes mutable
- other choices for garbage collection: distributed trace and sweep, pure reference counts
- other choices for security: capabilities?
- hierarchically structured voice objects vs. flat structures => more database accesses on playback
- using a conventional file server for voice
- analog vs. digital storage for voice
Did the choices we made turn out right?

Measured usage of voice files:
- distribution of file lengths
- # of times played before garbage collected, and when played
- # of times edited (and when)
- distribution of file intervals within voice ropes, i.e. what is the granularity of edits?
- how much are voice ropes shared? what % have more than one interest?
Use measurements to justify (or otherwise) the major design decisions. Also include performance measurements.

Assumptions:
- a distributed environment with powerful servers and workstations
- Etherphones
- sharing desirable
- editing desirable
- diverse clients
- local networks
- existing communication protocols
How sensitive is the work to these assumptions?
- what if we didn't have Etherphones?
- what if we had vocoders attached directly to workstations?
- what if we didn't have encryption?
- what if we (Cedar/Tioga) were the only clients? what if mail were the only client?
- what if we had a large timesharing system? what if we had a PBX?

Additional topics of interest:
- the LoganBerry database facilities?
- operating system support required for voice
- security issues - encryption, key distribution, conjecture about access control
- error conditions (failures) and error handling, e.g. what happens if the voice file server fails, is too busy, etc.? what happens if the interest database is unavailable?
- non-local (cross-domain) operations, i.e. scalability

Need to highlight our two major contributions:
1) using a database to edit voice (an implementation decision)
2) using interests to aid in garbage collection (a functionality decision)

To be done before submission:
- complete draft of implementation section
- complete draft of experience section
- related work
- usage/performance analysis
- citations and references
- review abstract
- number subsections in experience section
- double check section numbers and references to them
- number figures and references to them
- make sure all placeholder marks and ?'s have been removed
- look at Jeremy Dion's thesis for a discussion of garbage collection in the Cambridge File Server; Needham and Herbert may also have some info on this
- look at the end-to-end paper as well; does it have a discussion of simplicity vs. performance? What about Lampson's paper on hints for system design?
- look at Mary-Claire's notes
- remove author's names and references to Xerox PARC
- do we need a footnote stating "Etherphone™ is a trademark of the Xerox Corporation"?

Draft text (NOT THE MOST RECENT VERSION!)

1. Introduction

Voice is an important and widely used medium for interpersonal communication. Computers facilitate interpersonal communication through electronic mail and shared documents. Yet, our computer systems have traditionally forced us to communicate textually. A major goal of the Etherphone system developed at Xerox PARC was to allow voice to be incorporated into computing environments and used in much the same way as text. This paper addresses the problems associated with managing stored voice in a distributed computing environment.

The Etherphone system is intended for use in a locally distributed computing environment containing multiple workstations and programming environments, multiple networks and communication protocols, and perhaps even multiple telephone transmission and switching choices. The system is intended to be extensible in that it is possible to introduce new applications, network services, workstations, networks, and so on. As with text, we wanted to be able to easily record voice¹ and incorporate it into electronic mail messages, voice-annotated documents, user interfaces, and other interactive applications. Nicholson gives a good discussion of many office applications that are made possible by treating voice as data [Nicholson 83]. Clients should be able to combine previously recorded voice in various ways and insert fresh voice into existing voice passages. Clients should be able to share voice as freely as they share files. Also, the system should permit programmer control over all of these functions.

¹ We are interested in capturing individual voices, conversations, music, and other sounds with reasonable fidelity. This affects the choice of encoding and, as a result, the volume of storage required, but not the management methods described here. The remainder of this discussion thus deals only with recorded voice, without precluding any of the other uses.

The characteristics of voice, however, differ greatly from those of text. Standard telephone-quality uncompacted voice occupies 64 Kbits of storage per second of recorded voice. This is several orders of magnitude greater than the equivalent typed text. Voice also requires special devices for recording and playback; that is, a user cannot simply type in a voice passage. More importantly, voice transmission has stringent realtime requirements. These differences dictate special methods for manipulating and sharing voice.
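To make the volume comparison concrete, the following back-of-the-envelope sketch (written in Python purely for illustration) compares a minute of telephone-quality voice with a minute of the equivalent typed text; the assumed speaking rate and average word length are illustrative assumptions, not measurements from our system.

VOICE_BITS_PER_SEC = 64_000                           # telephone-quality, uncompacted voice
voice_bytes_per_sec = VOICE_BITS_PER_SEC // 8         # 8,000 bytes per second
voice_bytes_per_min = voice_bytes_per_sec * 60        # 480,000 bytes per minute

WORDS_PER_MIN = 150          # assumed speaking rate (not a system measurement)
BYTES_PER_WORD = 6           # assumed average characters per word, including a space
text_bytes_per_min = WORDS_PER_MIN * BYTES_PER_WORD   # 900 bytes per minute

print(f"voice: {voice_bytes_per_min:,} bytes/min   text: {text_bytes_per_min:,} bytes/min")
print(f"voice/text ratio: about {voice_bytes_per_min // text_bytes_per_min}x")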
Section 2 presents the operations on stored voice available to application programs in the Etherphone system. Section 3 then discusses the design and implementation of these operations and the rationale governing various design choices. Our two major contributions stem from (1) using a database to store information about edited voice in such a way that existing voice passages need not be moved, copied, decrypted, or destroyed during editing, and (2) employing a modified style of reference counting to aid in the automatic reclamation of obsolete voice. Section 4 relates our experiences to date with incorporating voice into workstation applications, section 5 surveys related work, and section 6 reviews our design principles and how they were met. First, the following subsection gives a quick overview of the Etherphone system architecture.

1.1 The Etherphone System

Figure 1 depicts the basic components of the Etherphone system [Swinehart et al. 87]. Each personal workstation is associated with, but not directly attached to, a microcomputer-based telephone instrument called an Etherphone. Etherphones digitize, packetize, and encrypt telephone-quality voice and transmit it directly over an Ethernet. A voice control server provides control functions similar to a conventional PABX and manages the interactions between all the other components. In particular, it allows conversations to be established between two or more Etherphones, workstations, or servers. A voice file server, described in more detail in the rest of this paper, provides storage for recorded voice. The system can also include other specialized sources or sinks of voice, such as a text-to-speech server that receives text strings and returns the equivalent spoken text to the user's Etherphone.

Figure 1. The Etherphone System Environment

Workstations are the key to providing enhanced user interfaces and control over the voice capabilities. We rely on the extensibility of the local programming environment (be it Cedar, Interlisp, Unix, or whatever) to facilitate the integration of voice into workstation-based applications. Workstation program libraries implement the client programmer interface to the voice system.

All of the communication required for control in the voice system, such as conversation establishment, is accomplished via a remote procedure call (RPC) protocol [Birrell and Nelson 84]. Multiple implementations of the RPC mechanisms permit the integration of workstation programs and voice applications programmed in different environments. During the course of a conversation, RPC calls emanating from the Voice Control Server inform participants about various activities concerning the conversation. Active parties in a conversation exchange voice using a Voice Transmission Protocol [Swinehart et al. 83]. More information on the equipment and protocols used in the Etherphone system, as well as the applications built to date, can be found in related papers [Swinehart et al. 83] [Swinehart et al. 87].

2. Operational Overview

In the Etherphone system, sequences of stored voice samples are referred to as voice ropes. The name comes from an analogy with Cedar's heavy-weight character strings called ropes [Swinehart et al. 86]. Each voice rope has a unique identifier, a VRID. To aid in sharing and to facilitate the use of voice by heterogeneous workstations, both the storage for voice ropes and the operations on them are provided by a network service, the voice manager. Clients refer to voice ropes solely by reference, that is, by their unique identifiers (VRIDs).

The voice manager places no restrictions on clients' use of voice ropes. For instance, voice ropes could be used by an interactive interface to provide audio feedback. Most uses involve embedding speech in some type of document, be it an annotated manuscript, program documentation, or electronic mail. A system that uses such embedded references to refer to voice, video, and other diverse types of information has been termed a hypermedia system [Yankelovich et al. 85]. From a client's perspective, a voice-annotated document should behave as though the voice were stored directly in the document's file rather than remotely on a server. For example, once a voice message is sent using electronic mail, it should not be possible for the author or another user to change the message's contents.
For this reason, voice ropes are immutable. The recording and editing operations create new voice ropes; they do not modify existing ones.

2.1 Recording and playback

To record or play back a voice rope, a conversation is set up between the voice manager and an Etherphone. The main operations supported by the voice manager are as follows:

RECORD[conversation] -> VRID, requestID
    Voice received by the server over the given conversation is stored and assigned a unique VRID; recording continues until a subsequent STOP operation. The requestID identifies this operation in subsequent reports (see below).

PLAYBACK[conversation, VRID, interval] -> requestID
    The specified interval of the voice rope is transmitted over the given conversation. The interval specifies the sample at which playback should start in the voice rope and the number of samples that should be played (the interval's length). If a negative value is given for the interval's length, then playback continues until the end of the voice rope.

STOP[conversation]
    Any recording or playback operations that are in progress or queued for the given conversation are immediately halted.

These operations are invoked on the voice manager using the Cedar RPC facility. The RECORD and PLAYBACK operations are performed asynchronously. That is, the remote procedure call returns after the operation has been queued by the server. Queued operations are performed in order. The voice manager generates event reports upon the start and completion of a queued operation. The requestID returned by each invocation is used to associate reports with particular operations. In particular, the voice manager makes the following call to all participants in a conversation to inform them of the status of various requested operations concerning that conversation:

REPORT[requestID, {started | finished | flushed}]
    Indicates that the requested operation has been started, successfully completed, or has been halted by a STOP operation.

This use of callback procedures for event reporting is in the style of upcalls [Clark 85].

2.2 Editing support

Once recorded, voice ropes can be "edited" to produce new, immutable voice ropes. In determining the set of operations required to edit voice, we studied the set of operations typically available in programming languages or program libraries for manipulating text strings. These often include operations such as taking the substring of an existing string or concatenating two strings together. Considering a voice file to be a string of voice samples analogous to a string of characters, many of the operations available in the Cedar Rope package seemed applicable for editing voice:

CONCATENATE[VRID1, VRID2, ...] -> VRID
    Returns a new voice rope that is the concatenation of the given voice ropes.

SUBSTRING[VRID1, interval] -> VRID
    Returns a new voice rope consisting of the particular interval of the given voice rope.

REPLACE[VRID1, interval, VRID2] -> VRID
    Returns a new voice rope that is obtained by replacing the particular interval of VRID1 with VRID2.

LENGTH[VRID] -> length
    Returns the length of the given voice rope in units of 8-bit samples.

One additional operation not needed for text strings was provided to aid in editing:

DESCRIBE[VRID] -> intervals
    Returns a list of intervals that denote the non-silent talkspurts of the given voice rope. A talkspurt is defined to be any sequence of voice samples separated by some minimum amount of silence.

These operations, available via RPC calls to the voice manager, are intended for use by programmers. Applications that want to employ voice should build user interfaces that hide these operations from casual users.
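To illustrate how a client program might compose these operations, here is a hypothetical sketch in Python that splices a re-recorded correction over a flawed interval of an existing passage. The voice_manager stub, the conversation handle, and the way the bad interval was obtained (for example, from DESCRIBE) are assumptions for the example; the real client interface is provided by workstation program libraries, not by this code.

def fix_recording(voice_manager, conversation, original_vrid, bad_interval):
    """Splice a freshly recorded correction over a flawed interval.

    bad_interval is a (start_sample, length) pair, e.g. one of the
    talkspurts returned by DESCRIBE."""
    # Record the corrected phrase as a brand new (immutable) voice rope.
    correction_vrid, request_id = voice_manager.RECORD(conversation)
    # ... the speaker re-reads the flawed phrase here ...
    voice_manager.STOP(conversation)

    # Build a new voice rope; the original is untouched because ropes are immutable.
    fixed_vrid = voice_manager.REPLACE(original_vrid, bad_interval, correction_vrid)

    # Audition the result over the same conversation; a length of -1 plays to the end.
    voice_manager.PLAYBACK(conversation, fixed_vrid, (0, -1))
    return fixed_vrid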
2.3 Interests

As with any storage system, it is desirable to reclaim storage space that is no longer needed. With voice, or other media such as video, the need is more acute because of the volume of data involved. In the Etherphone system, clients must explicitly express an interest in retaining a voice rope in order to prevent it from being garbage collected. A voice rope is considered garbage, and hence can be collected, when no interests exist for it. Specifically, interests are similar to reference counts except that they are idempotent and grouped into classes. The operations on interests are as follows:

RETAIN[VRID, class, cite]
    Registers an interest of the particular class in the given voice rope. The cite identifies a reference to the voice rope within the class. This operation is idempotent in that repeated calls with the same arguments register at most one interest in the given voice rope.

FORGET[VRID, class, cite]
    Deregisters the particular interest.

In general, the class identifies the way in which the voice rope is being used by a particular application. For example, we use a class "TiogaVoice" to indicate that a Cedar Tioga document is annotated by the given voice; the cite field is the name of the file containing the document. The class "Message" indicates that the reference is part of an electronic mail message that includes recorded voice, and the cite is the unique postmark supplied by the message system.

It is crucial that users never have to worry about interests. A combination of automatic collection and client workstation software must hide interests from actual users. For annotated documents in Cedar, the workstation software detects when a file is copied from the local disk to a public file server; it then automatically registers the appropriate "TiogaVoice" interests for the public file. The Cedar mail system automatically registers and deregisters interests of class "WalnutMsg" as voice messages are saved and deleted by users.
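As an illustration of how workstation software might use these operations, the following sketch (hypothetical Python; the voice_manager stub, the argument order, and the retry policy are assumptions for the example) registers "TiogaVoice" interests when a voice-annotated file is copied to a public file server. Because RETAIN is idempotent, blind retries after a failure or an ambiguous outcome are harmless.

import time

def publish_annotations(voice_manager, public_file_name, embedded_vrids,
                        retries=3, delay_secs=5):
    """Register a "TiogaVoice" interest for every voice rope cited by the file."""
    for vrid in embedded_vrids:
        for _ in range(retries):
            try:
                # RETAIN[VRID, class, cite]: idempotent, so a blind retry after a
                # failure or an uncertain outcome registers at most one interest.
                voice_manager.RETAIN(vrid, "TiogaVoice", public_file_name)
                break
            except ConnectionError:
                time.sleep(delay_secs)        # server unreachable; try again

def retract_annotations(voice_manager, public_file_name, embedded_vrids):
    """Deregister the corresponding interests, e.g. when the file is deleted."""
    for vrid in embedded_vrids:
        voice_manager.FORGET(vrid, "TiogaVoice", public_file_name)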
3. Detailed Design Decisions

The voice manager uses a voice file server to store voice data. This server provides RECORD, PLAYBACK, and STOP operations that are semantically similar to those described in section 2.1, but operate on voice files. Voice ropes are actually made up of pieces of one or more voice files. Although the operations for editing voice ropes have been patterned after the Cedar Rope package, a different underlying implementation was necessitated by the different characteristics of voice and text. Specifically, editing voice by actually copying the bytes, as is sometimes done for Cedar's ropes, is expensive since voice is voluminous. Thus, rather than rearranging the contents of voice files to edit them, the voice manager simply builds a data structure representing the result of an editing operation. A database system maintains information about the structure of each voice rope. A garbage collector uses interests to reclaim voice storage that is no longer needed. Devising techniques for automatically collecting garbage in an "open" environment was one of the most difficult problems faced in the design of the voice manager. These components of the voice manager are logically layered as in Figure ?. Each is discussed in more detail in the following sections.

Figure ?. Voice Storage Components

3.1 Voice file server

A voice file server differs from normal file servers [Svobodova 84] in that it must support the real-time requirements of voice. In particular, it must be able to maintain a sustained transfer rate of 64 Kbits/sec, and it should be able to support several such transfers simultaneously. In order to keep the number of seeks small in retrieving and storing voice files, our server reads and writes data in contiguous 16-page (8 Kbyte) segments. Each segment is equivalent to one second of recorded voice. Thus, for any given voice connection, the server need perform no more than one seek per second.

The file server must also be able to quickly allocate new files. It does this by maintaining a fixed space of voice file identifiers (VFIDs) and preallocated file descriptors. A bitmap is used as a hint of the available free file descriptors. (The truth is actually maintained in the descriptors themselves; a scavenge operation rebuilds the bitmap from the truth when necessary.)

The voice transmission protocol used during a conversation packetizes the voice stream into 20-millisecond packets. On recording, a process on the voice file server receives packets directly from the network and stores them into 8 Kbyte buffers. These buffers are then written to disk by a separate process as they fill up. On playback, one process reads segments of a voice file into in-memory buffers, and another process breaks a buffer into packets and transmits them over the network.

The voice transmission protocol does not actually transmit packets that contain only silence. Nor does the voice file server explicitly store silence. So, in fact, a segment stored on disk has two components. The non-silent voice samples are stored in the first part of the segment; the second part contains a data structure describing the silent and non-silent intervals. A segment that is completely silent takes up no storage. In general, the voice file server fabricates a segment of all silence when asked to return a voice file's segment that is not explicitly stored. These fabricated segments could occur anywhere in the logical sequence of segments composing a voice file.

Etherphones encrypt the voice using DES electronic-codebook (ECB) encryption as it is transmitted. Upon receipt, the voice file server simply stores the voice in its encrypted form. The stored voice is never decrypted except by an Etherphone when being played back. In fact, the computer being used as a voice file server does not have encryption hardware; it would not be able to keep up with the voice stream if it attempted to decrypt incoming packets in software.
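The sketch below suggests one way the silence-suppressed segment representation described above might look; the field names, the in-memory layout, and the helper function are assumptions for illustration, not the server's actual on-disk format.

SEGMENT_SAMPLES = 8000   # one second of 8-bit samples; 16 pages = 8 Kbytes

# Assumed representation of one stored segment:
#   None  -> the segment was entirely silent and occupies no storage
#   dict  -> {"samples": the non-silent samples, packed at the front,
#             "intervals": [(offset, length, is_silent), ...] covering all 8000 samples}

def nonsilent_runs(stored_segment):
    """Yield (offset_in_segment, samples) for each non-silent run of a segment.

    Silent intervals are simply skipped, mirroring the transmission protocol,
    which never sends packets of pure silence; a fabricated all-silent segment
    (stored_segment is None) therefore yields nothing at all."""
    if stored_segment is None:
        return
    cursor = 0
    for offset, length, is_silent in stored_segment["intervals"]:
        if not is_silent:
            yield offset, stored_segment["samples"][cursor:cursor + length]
            cursor += length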
3.2 Database facilities

Information about voice ropes is stored in a database. We realized that the database system need not provide outstanding performance or sophistication. The objects representing voice ropes are immutable, multi-object atomic updates are not required, and the usage of the system is expected to generate relatively infrequent changes to small numbers of voice segments, each several seconds or even minutes in duration. As a result, we developed a simple, robust database representation that is particularly well-suited to this application.

The database system internally stores each object in a write-ahead log. A B-tree index is built to map VRIDs to the object's location in the log. We also build an index on VFIDs, which is useful in garbage collection. Unlike most database systems, in which the data is logged only until it can be committed and written to a more permanent location, the log is itself the permanent source of data. Once logged, the data is never moved. If a crash occurs while writing a log entry, the incomplete entry will be discovered upon recovery; if a crash occurs while updating the B-tree index, the log can be replayed to build the index from scratch. In the latter case, recovery is an expensive operation. However, we are willing to pay this cost in the rare event of an untimely hardware crash instead of having to build a transaction system to update the log and index atomically.

Interests are also stored in a database that is managed in the same way as the voice rope database. B-tree indices allow interests to be queried by class, cite, creator, or VRID.
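A minimal sketch of this log-as-database organization appears below (Python, for illustration only): records are appended to a log that is never rewritten, an index maps identifiers to log offsets, and recovery simply replays the log, discarding a torn final entry. The record format, the use of a plain dictionary in place of a B-tree, and the helper that extracts the VRID are all assumptions, not the actual implementation.

import struct

class VoiceRopeStore:
    """Append-only log that is itself the permanent home of the data, plus a
    rebuildable index from VRID to log offset (a dict here, a B-tree in the paper)."""

    def __init__(self, log_path):
        self.log = open(log_path, "a+b")
        self.index = {}
        self._rebuild_index()          # cheap on a healthy log, replays after a crash

    def put(self, vrid, record_bytes):
        """Append a voice rope record; once written, it is never moved or rewritten."""
        self.log.seek(0, 2)
        offset = self.log.tell()
        self.log.write(struct.pack(">I", len(record_bytes)) + record_bytes)
        self.log.flush()
        self.index[vrid] = offset
        return offset

    def get(self, vrid):
        offset = self.index[vrid]
        self.log.seek(offset)
        (length,) = struct.unpack(">I", self.log.read(4))
        return self.log.read(length)

    def _rebuild_index(self):
        """Recovery: replay the log; a torn final entry from a crash is discarded."""
        self.log.seek(0)
        while True:
            offset = self.log.tell()
            header = self.log.read(4)
            if len(header) < 4:
                break                  # end of log (or torn length field)
            (length,) = struct.unpack(">I", header)
            body = self.log.read(length)
            if len(body) < length:
                break                  # torn record body; ignore it
            self.index[self._vrid_of(body)] = offset

    @staticmethod
    def _vrid_of(record_bytes):
        return record_bytes[:8]        # assumed record layout: VRID in the first 8 bytes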
3.3 Voice rope structures

The data structure representing a voice rope consists of a list of [VFID, interval] pairs. Simple voice ropes consist of a single interval within a single voice file, often the whole file. More complex voice ropes can be constructed using the editing operations presented in section 2.2. For example, suppose two simple voice ropes, VRID1 and VRID2, exist with the following structures, where each covers an entire voice file of len1 and len2 samples, respectively:

VRID1 = {[VF1, [start: 0, len: len1]]}
VRID2 = {[VF2, [start: 0, len: len2]]}

Then the operation

REPLACE[base: VRID1, interval: [start: 1000, len: 1000], with: VRID2]

produces a new voice rope, VRID3, with the structure

VRID3 = {[VF1, [start: 0, len: 1000]], [VF2, [start: 0, len: len2]], [VF1, [start: 2000, len: len1 - 2000]]}

as depicted in Figure ?.

Figure ?. Structure of VRID3 after the REPLACE operation

To record a new voice rope, the voice manager calls on the voice file server to allocate and then record a voice file. Once recording completes, a simple voice rope is created that represents the complete voice file just recorded and is added to the voice rope database. When playing a voice rope, the voice manager retrieves the voice rope's structure from the database, distributes the encryption keys of the various intervals to the parties participating in the conversation, and calls upon the voice file server to play the intervals of the voice rope in the appropriate order. Requesting the playback of several intervals in succession relies on the asynchronous nature of the voice file server operations to ensure that gaps do not get inserted between the intervals. Secure RPC [Birrell 85] is used to safely distribute encryption keys.

The structure of voice ropes is kept "flat" to optimize playback. By having each voice rope refer directly to voice files, only a single database access is required to completely determine the voice rope's structure when it is played. An alternative design, more closely modeled on the Cedar Rope abstraction, would be to store complex voice ropes as intervals of other voice ropes. In such a design, a voice rope would conceptually be the root of a tree of other voice ropes with intervals of voice files at the leaves of the tree. This design would reduce the work associated with each editing operation, but would increase the number of database accesses required to play a voice rope. The flat design was chosen because it improves playback behavior, and, in practice, playback is much more frequent than editing. Moreover, it yields simpler and more compact data structures when used, as intended, to represent relatively small numbers of coarse-grained edits to voice.

Note that the actual voice is never moved or copied once recorded in a voice file, even when being edited. The voice also is never decrypted by the voice file server. The encryption keys for the various intervals that compose a voice rope are stored in the database along with the object's structure. The fact that we use the ECB scheme of independently enciphering each block of 8 samples, instead of using cipher block chaining (CBC), ensures that voice can be edited on 64-bit boundaries while remaining encrypted.
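The sketch below (illustrative Python; the tuple representation and the assumed file lengths are not the actual database encoding) shows how an editing operation such as REPLACE can be carried out purely by list manipulation on the flat [VFID, interval] structure, reproducing the example above without ever touching the stored, still-encrypted voice.

def substring(pieces, start, length):
    """Pieces of an existing rope covering [start, start+length)."""
    out, pos = [], 0
    for vfid, p_start, p_len in pieces:
        lo, hi = max(start, pos), min(start + length, pos + p_len)
        if lo < hi:                               # this piece overlaps the requested interval
            out.append((vfid, p_start + (lo - pos), hi - lo))
        pos += p_len
    return out

def replace(base_pieces, start, length, with_pieces):
    """REPLACE: substitute with_pieces for the given interval of the base rope."""
    total = sum(p_len for _, _, p_len in base_pieces)
    return (substring(base_pieces, 0, start)
            + list(with_pieces)
            + substring(base_pieces, start + length, total - (start + length)))

# Reproducing the example above, with assumed file lengths of 5000 and 3000 samples:
vrid1 = [("VF1", 0, 5000)]
vrid2 = [("VF2", 0, 3000)]
vrid3 = replace(vrid1, 1000, 1000, vrid2)
# vrid3 == [("VF1", 0, 1000), ("VF2", 0, 3000), ("VF1", 2000, 3000)]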
3.4 Storage reclamation

In the Etherphone system, if a voice file is not referenced by any voice rope, then it can never again be referenced by a voice rope, since new voice ropes are built only from newly recorded voice files or from existing voice ropes. Thus, a voice file can be deleted when the last voice rope that includes it is deleted. This is easily determined by a database query. The more difficult problem is deciding when voice ropes can be reclaimed.

Perhaps the least troublesome method of collecting garbage, as far as clients are concerned, is via a program that periodically examines clients' storage systems to determine if anything references particular voice ropes. In our system, a distributed garbage collector of this sort is impossible to implement since we are providing a service in an open, relatively heterogeneous environment. That is, the voice manager is not aware of all of its clients, their uses for voice ropes, or how and where they store VRIDs.

A common alternative is to provide a reference counting scheme in which counters are used to determine the number of clients interested in a particular object. When an object's counter goes to zero, the object's storage can be reclaimed. The burden is placed on clients to explicitly increment and decrement the counts for the objects that they are using. The use of standard reference counting presents formidable problems in a distributed environment like the Etherphone system's. Reference counts cannot be reliably managed unless an atomic transaction spans the use of the reference and the reference count operation. Without transactions, if the server or client fails in the process of incrementing or decrementing a reference count, the client may be left in an uncertain state regarding the outcome of the operation. Moreover, reference counts are anonymous and give no help in locating erroneous references.

Interests were designed to remedy these shortcomings associated with simple reference counts. For one thing, the operations on interests described in section 2.3 can be safely retried in case of failures or uncertainty. Also, the information stored in the interest database is sufficient to determine whether or not an interest is still valid, either by human administrators or by client applications. In fact, grouping interests by class and providing automatic class-specific reclamation algorithms can, in many instances, relieve clients from the burden of remembering to explicitly unregister interests when the associated voice rope is no longer used.

Consider the following scenario. A user records a voice rope and embeds a reference to it in a document; an interest in the voice rope is then registered for the document. The user then copies this document from his workstation to a public file server and announces its existence in a message to interested parties. Several months later, he deletes the file, without remembering that it had voice annotations. Unless further actions are taken, the interest, and hence the referenced voice rope, will never be reclaimed. Clearly, users should not be relied upon to perform operations in order to reclaim storage. However, expecting an arbitrary file server to delete the associated interests is also not reasonable.

In the above scenario, one could argue that the file server itself should have deleted the associated voice interests when the file was deleted. However, this implies that the software running on the file server could be modified to recognize the existence of voice in files and take appropriate actions. Many of our file servers at Xerox PARC cannot be so modified, either because they are old and written in some obscure programming language or because they were purchased from outside vendors and the source code is not available. We suspect that other complex, heterogeneous computing environments exhibit similar properties.

To alleviate the problem of reclaiming outdated interests, each class of interests registers a procedure of the following type with the voice manager:

GARBAGE[VRID, cite] -> Yes/No
    Determines in a class-specific way whether or not the given cite has abandoned its interest in the particular voice rope.

For example, for the class "TiogaVoice", this procedure checks whether or not the given file (identified by the cite) exists. If the file has been deleted, then it clearly has no more interest in the voice rope that it contained. In our environment, because of the autonomy of individual workstations, this check is only possible for files residing on public file servers. An interest verifier periodically enumerates the database of interests and calls the class-specific GARBAGE procedure for each interest. If this procedure returns No, then no further action is taken (some classes require their clients to take specific action, by calling FORGET, to abandon a reference). If the procedure returns Yes, then the interest is automatically deleted from the database.

A garbage collector for voice ropes also runs periodically. For each voice rope in the database it (1) deletes the voice rope if no interests exist that reference it, and (2) deletes each voice file used by the voice rope if that file is no longer part of any voice rope. This process refuses to collect voice ropes that are too young, so that a newly created voice rope is not collected before a client has had the opportunity to express an interest in it. Note that, unlike a mark-and-sweep style garbage collector, these algorithms can be safely executed while the system is running and need not complete a full pass through the database in order to perform useful work.

In summary, garbage collection takes place on three levels. The voice manager deletes voice files when they are no longer referenced by voice ropes. Voice ropes are deleted if no interests exist for them. Interests are either explicitly unregistered by client applications or automatically deleted based on a class-specific test for validity.
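A condensed sketch of these three levels of reclamation is shown below (Python, illustrative only; the database accessors, the table of class-specific GARBAGE procedures, and the minimum-age threshold are assumptions, not the actual implementation).

import time

MIN_AGE_SECS = 24 * 3600      # assumed "too young to collect" threshold

def verify_interests(db, garbage_proc_by_class):
    """Interest verifier: ask each class whether an interest has become obsolete."""
    for interest in db.all_interests():               # each has .vrid, .cls, .cite
        is_garbage = garbage_proc_by_class[interest.cls]
        if is_garbage(interest.vrid, interest.cite):  # e.g. for TiogaVoice: has the cited file been deleted?
            db.delete_interest(interest)              # Yes: drop it; No: leave it alone

def collect_voice_ropes(db):
    """Voice rope collector: reclaim ropes with no interests, then orphaned voice files."""
    now = time.time()
    for rope in db.all_voice_ropes():                 # each has .vrid, .created, .voice_files()
        if now - rope.created < MIN_AGE_SECS:
            continue                                  # a RETAIN may still be on its way
        if db.interests_for(rope.vrid):
            continue                                  # still referenced by some interest
        db.delete_voice_rope(rope.vrid)
        for vfid in rope.voice_files():
            if not db.voice_ropes_using(vfid):        # the VFID index makes this query cheap
                db.delete_voice_file(vfid)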
4. Experience and Evaluation

Approximately 50 Etherphones are in daily use in the Computer Science Laboratory. Our current voice file server runs on a Dorado with a 300 Mbyte local disk. Thus, it has the capacity to store over 7 hours of recorded voice; the actual storage capacity depends on the amount of suppressed silence. Most of our user-level applications to date have been created in the Cedar environment [Swinehart et al. 86], although limited functions have been provided for Interlisp and for stand-alone Etherphones. We have had a voice mail system running for over two years and a prototype voice editor for about 8 months.

Voice-annotated Documents

Manipulating stored voice solely by text-string references, besides allowing efficient sharing and resource management, has made it easy to integrate voice into documents. For example, we were able to build a local voice mail system without changing the mail transport protocols or servers. Also, annotated Tioga documents can be stored on conventional file servers that are not aware that the documents logically contain voice. There are also significant performance benefits accrued by having documents refer to voice that is stored remotely. Although most requests to record or play back voice ropes are initiated from a workstation, the voice data is never received by the workstation; instead, it is transmitted directly to the associated Etherphone.

Editing Voice

We have gained considerable experience with the voice manager by building a voice editing system in Cedar. Figure ? displays a document with voice annotations and a passage of voice that is being edited. Our voice editor uses "capillary tubes" to graphically display voice, where grey regions indicate talkspurts and white regions indicate silence [Ades and Swinehart 86]. The DESCRIBE operation is sufficient for generating such a display. Other voice editors have chosen to display voice as energy profiles. The voice file server maintains enough information about the energy levels in stored voice files to produce such a display and provides operations for easily accessing this data.

Figure ?. Voice annotation and editing.

The small set of editing operations provided by the voice manager has proven to be a sufficient base on which to build a complex voice editor and a dictation machine. However, to reduce traffic to the voice manager, the Cedar voice editor maintains its own data structures to temporarily represent the edited voice. That is, the voice editor ended up replicating much of the functionality of the voice manager, something we were trying to avoid. Only when a user elects to save the edited voice passage does the voice manager get called to perform the necessary operations. Given this arrangement, it would have been better to let clients simply pass the voice manager a complex voice rope that it could store in its database. We are torn between the simplicity of our existing interface and the performance improvements obtainable with a lower level interface.

We have also observed that editing a voice passage invariably produces a set of "temporary" voice ropes that are used in the construction of the finished result. These objects are eventually collected by the garbage collector, so they do not present much of a problem except that they require seemingly unnecessary work of the voice manager. To alleviate the problem somewhat, we changed the voice manager's interface slightly so that an interval could be given for any voice rope in any operation. This substantially reduced the voice editor's use of the SUBSTRING operation.

Event reporting is important in allowing the voice editor to coordinate its visual feedback with the activities of the voice file server. In particular, the voice editor moves a cursor along the screen as a voice rope is being played (the grey marker in the voice displayed in Figure ?). A report indicating that the playback of a particular object has started or finished is essential to synchronize the movement of the cursor with the transmission of voice data.
Although the voice file server writes files on disk so that 1-second segments can be continuously transferred, it is possible for clients to edit voice ropes on 1-millisecond boundaries. The file server could not possibly play back a voice rope in realtime if it had to perform a disk seek every millisecond. Fortunately, users of a voice editor insert, delete, and rearrange voice passages at the granularity of a phrase rather than a phoneme [Ades and Swinehart 86]. Thus, in practice, one rarely sees segments of a voice rope that are less than a second or so in length. [Note: I should be able to get numbers to validate this claim by analyzing the voice rope database log.]

When using a dictation machine, one often records until making a mistake. The user then stops recording, goes back to the beginning of the last sentence, and resumes recording. Much of the use of the voice editor in our system follows this paradigm. Since each recording creates a new file in our system, and we do not delete a voice file until no voice ropes refer to it, the garbage collector does not reclaim the space at the end of a voice file that has been logically recorded over. We could avoid this by allowing the garbage collector to delete parts of voice files, but have not felt that it was worth the extra complication.

Interests

The notion of grouping interests into classes and providing class-specific garbage collection algorithms is a useful and workable concept. However, we are still grappling with the details of how best to use these mechanisms. We have found several interest classes to be useful in Cedar. In addition to the "TiogaVoice" interest class previously discussed, a "Timeout" class has been used to automatically retract an interest after a certain amount of time. For instance, when sending a voice message, a timeout can be set by the sender that is long enough to give the recipients a chance to receive the message and register their own interests if so desired. Of course, problems can arise if a recipient is on vacation for a period of time longer than the timeout. For this reason, we have a means of archiving voice files before deleting them from the server.

We have defined the "TiogaVoice" interest class such that its cite represents a publicly stored file name including the version number. With this scheme, interests must be reregistered whenever a file is written to a public server, that is, for each new version of the file. Unfortunately, the times that people want to annotate documents are precisely those times when the document is being updated often, so lots of interests get registered over and over again. We rely on the garbage collector to get rid of old interests. An alternative would be to register a file without a version number, but that causes problems if voice is deleted from the file but the file itself remains in existence.

Having workstation software automatically register "TiogaVoice" interests as a file is copied to a file server works remarkably well. However, there is an important case that is not covered by this approach: renaming a file on a file server or copying files between file servers. We see no way to detect such operations except by modifying the file server's software.

Security

Using encryption as the basis for security has resulted in a storage system that is potentially much more secure than most existing file servers. There are two security-related problems for which questions remain: key distribution and access control.
Each conversation in the Etherphone system uses a randomly generated encryption key. Since a new conversation is typically established to record a voice rope, each voice file generally has a different encryption key. Theoretically, it is possible to construct a voice rope that contains intervals of voice files encrypted with many distinct keys. Not only must the voice manager securely distribute all of these keys, but it is also possible to overflow the field of each voice packet that indicates which key to use for decryption. One solution to this problem is to have the voice file server decrypt a voice file as it is being packetized for transmission and reencrypt each packet using a single conversation key as the packet is sent. Thus, Etherphones would need only a single key for decrypting packets at any one time. This is a costly proposal, and it is impossible in our current system since the voice file server does not have encryption hardware. Given fast encryption devices on the server, we might have designed the system differently.

Having voice files encrypted on the server is only useful if some sort of access control governs who is allowed to play particular voice ropes. It would be easy for the voice manager to store access control lists in the database along with voice ropes, or to maintain a separate database of permissions. The voice manager can obtain credentials for any client invoking an operation since we use secure RPC [Birrell 85]. At least two types of access are useful: read access allows a client to play a voice rope, while modify access allows the client to use it in editing operations. One reasonable approach might be to allow anyone to read a voice rope by default, but to restrict modify access to the object's creator. Operations are needed for updating the access control lists associated with a given voice rope. These lists could be updated by workstation software in much the same way that interests are registered. For example, the mail system could automatically set the read access of a voice message to be the set of recipients. Currently, we have not implemented such an access control scheme; clients are free to play or edit any voice rope for which they have a valid VRID.

Reliability

The voice file server, voice manager, and voice control server were implemented so that they could run on separate physical processors. That is, they all communicate among themselves and with voice clients using RPC. In practice, we run all three on the same Dorado. There is little to be gained by running them separately, since the voice file server cannot record or play back voice files if the control server is down, and the voice manager cannot record or play back voice ropes if the voice file server is down. For all practical purposes, voice also cannot be edited if the voice file server is down, since users invariably need to listen to the voice passages that they are editing. Thus, availability is not adversely affected by having the voice manager and file server colocated with the control server.

If this server crashes or is otherwise unavailable, then no operations can be performed on stored voice. For the most part, this is simply an inconvenience to users in the same way that unavailability of conventional file servers is an inconvenience. In Cedar, the file servers containing the important system files, fonts, and documentation are replicated to improve their availability. We have not found it necessary to pay the cost of providing a highly available voice file server.
The one exception to this concerns voice interests. It is often the case that clients wish to register or deregister interests in voice ropes independently of playing the referenced voice. For example, as previously mentioned, an interest of type "TiogaVoice" is registered when a voice-annotated document is copied from a personal workstation to a public file server. A user should not be prevented from performing such a copy simply because the voice manager is unavailable. We have also observed that the interests for voice messages fail to get properly registered or deregistered if a person saves or deletes a voice message while the voice server is down. This has led us to contemplate writing a program that enumerates a person's mail database and checks that all voice messages have properly registered interests. The better solution is to make the voice interest database highly available. Rather than fully replicating the database, we are planning to provide a mechanism whereby operations to RETAIN or FORGET a voice interest are logged locally by a user's workstation if the voice server is unavailable; the operations in this log will be retried when the workstation detects that the server is reachable. Such retries are safe because the interest operations are idempotent.

Performance

(Performance measurements of the system are not available for publication at this time, but we should be able to include some in the final paper.)

5. Related Work

[Note: We need to say something about other work on building voice storage systems, voice mail, etc., and how it differs from our work. I'm not sure if these comparisons should be in this separate section or should be spread out throughout the paper.]

Several companies provide speech message systems that can be accessed from standard telephones; one of the earliest examples of this type of system was IBM's experimental Speech Filing System, which was operational in 1975 [Gould and Boies 84]. Certainly the Etherphone system's facilities can be accessed from telephones, but that was not the driving application. We were interested in allowing voice to be easily integrated into a user's existing means of digital communications, rather than forcing users to learn a completely new system. The Sydis Information Manager provides workstation control over the recording, editing, and playing of voice, as in the Etherphone system, but requires special workstations called VoiceStations [Nicholson 83]. Ruiz also developed a prototype voice system that integrates voice and data into some simple workstation applications; however, he did not address the important issues of sharing stored voice [Ruiz 85].

Maxemchuk's speech storage system [Maxemchuk 80] provided many of the same facilities for recording, editing, and playing voice as our voice file server. (Actually, he provided much more control over the playback of voice than we do, such as the ability to vary playback speeds or adjust silence intervals.) Also, the division of function between a main computer and a storage computer is quite similar to the separation between our voice manager and voice file server. However, Maxemchuk's system edits voice using divide and join operations that modify the control sectors of stored voice messages. Our technique of building data structures that reference voice files better supports sharing, by making voice ropes immutable, and simplifies the requirements placed on the voice file server. For instance, our techniques are very amenable to write-once storage technologies such as optical disks.
Liskov and Ladin present an example of a distributed garbage collector [Liskov and Ladin 86]. Their approach requires all sites that store references to other objects to run a garbage collector locally and send information about non-local references to a reference server. In some sense, their use of a reference server is similar to our use of registered interests, but much more limited. One interesting contribution they make is showing how to build a highly available reference server; we could use similar techniques to build a highly available interest server. [Note: Say something about Diamond [Thomas et al. 85]. The Diamond system, for instance, does not face the problems we have with garbage collection since it is a closed system.]

6. Conclusions

The facilities for managing stored voice in the Etherphone system were designed in adherence to the following principles:

* permit sharing among various clients
Maintaining voice on a publicly accessible server facilitates sharing. Clients can freely pass around references to voice ropes without incurring the overhead of actually transmitting the voice itself. Voice ropes are immutable so that they can be incorporated into documents by reference, but exhibit copy semantics.

* support easy editing of voice by programs
The editing operations provided by the voice manager are similar to those in the Cedar Rope package. This is intentional, so that programmers can manipulate voice in the ways they are accustomed to for text. The basic facilities to support editing reside on a server; workstations are responsible for providing a user interface that is integrated with their programming environment.

* move voice data as little as possible
Once recorded in the voice file server, voice is never copied until a workstation sends a playback request; at this point the voice is transmitted directly to an Etherphone. In particular, although workstations initiate most of the operations in the Etherphone system, there is no reason for them to ever receive the actual voice data since they have no way of playing it. Furthermore, to efficiently support editing, we maintain a two-level storage hierarchy: voice ropes refer to intervals of voice files. A many-to-many relationship may exist between voice ropes and files. That is, a given voice rope may consist of intervals from several voice files, and a given voice file may be used by several voice ropes. A database stores these relationships. Editing operations simply create new objects from old ones and add them to the database.

* allow diverse workstations to be integrated into the system
All of the operations on stored voice are performed on a server, the voice manager. Due to the heterogeneous nature of our environment, we felt that it was better to implement these facilities once on a server than to require each different workstation programming environment to provide its own implementation. Moreover, the only requirements placed on a workstation in order to make use of the voice services are that it have an associated Etherphone and an RPC implementation. In particular, workstations need not have hardware support for encryption or voice I/O.

* do not restrict the uses of voice in client applications
Voice management is provided by a server exporting an RPC interface. The voice manager makes no assumptions about the way clients make use of its services. This particularly impacted the design of the voice garbage collector.
* provide a level of security at least as good as that of conventional file servers
We use secure RPC for all of the control functions in the Etherphone system and DES encryption for transmitted voice. Thus, promiscuous machines are prevented from listening to any communications in the system. Storing the voice in its encrypted form protects the voice on the server and also means that the voice need not be reencrypted on playback. All in all, the voice system actually provides better security than most file servers.

* automatically reclaim the storage occupied by unneeded voice
Garbage collection of voice ropes is done using a modified type of reference counting. Clients register interests in particular voice ropes. These interests are grouped into classes and can be invalidated according to a class-specific algorithm. For the most part, users of voice applications are not aware of how or when interests are registered since this is handled transparently by the application software.

The Etherphone system has provided an environment in which to explore the management of voluminous, shared data among distributed and heterogeneous workstation clients. The techniques presented in this paper are applicable to and beneficial for the management of various types of data, including voice, video, images, and music.

Acknowledgements

[Note: Be sure to give credit for the Voice File Server to John Ousterhout, and perhaps Larry Stewart and Stephen Ades. Others deserving mention include Polle Zellweger, Luis Cabrera, Severo Ornstein, perhaps Lea Adams. We must keep in mind that the design of voice ropes evolved over a long time and many people contributed to it. Stephen Ades' implementation of a voice editor has allowed us to get some experience with voice ropes. We can also acknowledge those who give us really sterling comments on this draft (ha ha).]

References

[Ades and Swinehart 86] S. Ades and D. Swinehart. Voice annotation and editing in a workstation environment. Proceedings AVIOS Voice Applications '86, September 1986, pages 13-28.

[Birrell 85] A. D. Birrell. Secure communication using remote procedure calls. ACM Transactions on Computer Systems 3(1):1-14, February 1985.

[Birrell and Nelson 84] A. D. Birrell and B. J. Nelson. Implementing remote procedure calls. ACM Transactions on Computer Systems 2(1):39-59, February 1984.

[Clark 85] D. D. Clark. The structuring of systems using upcalls. Proceedings Tenth Symposium on Operating Systems Principles, Orcas Island, Washington, December 1985, pages 171-180.

[Gould and Boies 84] J. D. Gould and S. J. Boies. Speech filing: an office system for principals. IBM Systems Journal 23(1):65-81, January 1984.

[Liskov and Ladin 86] B. Liskov and R. Ladin. Highly-available distributed services and fault-tolerant distributed garbage collection. Proceedings of the Symposium on Principles of Distributed Computing, Calgary, Alberta, Canada, August 1986, pages 29-39.

[Maxemchuk 80] N. Maxemchuk. An experimental speech storage and editing facility. Bell System Technical Journal 59(8):1383-1395, October 1980.

[Nicholson 83] R. Nicholson. Integrating voice in the office world. BYTE 8(12):177-184, December 1983.

[Ruiz 85] A. Ruiz. Voice and telephony applications for the office workstation. Proceedings 1st International Conference on Computer Workstations, San Jose, CA, November 1985, pages 158-163.

[Svobodova 84] L. Svobodova. File servers for network-based distributed systems. ACM Computing Surveys 16(4):353-398, December 1984.

[Swinehart et al. 83] D. C. Swinehart, L. C. Stewart, and S. M. Ornstein. Adding voice to an office computer network. Proceedings IEEE GlobeCom '83, November 1983. Also available as Xerox Palo Alto Research Center Technical Report CSL-83-8, February 1984.
[Swinehart et al. 86] D. Swinehart, P. Zellweger, R. Beach, and R. Hagmann. A structural view of the Cedar programming environment. ACM Transactions on Programming Languages and Systems 8(4):419-490, October 1986.

[Swinehart et al. 87] D. C. Swinehart, D. B. Terry, and P. T. Zellweger. An experimental environment for voice system development. IEEE Office Knowledge Engineering Newsletter, February 1987.

[Thomas et al. 85] R. H. Thomas, H. C. Forsdick, T. R. Crowley, R. W. Schaaf, R. S. Tomlinson, V. M. Travers, and G. G. Robertson. Diamond: a multimedia message system built on a distributed architecture. Computer 18(12):65-78, December 1985.

[Yankelovich et al. 85] N. Yankelovich, N. Meyrowitz, and A. van Dam. Reading and writing the electronic book. Computer 18(10):15-30, October 1985.

A paper on Voice Ropes to be submitted to SOSP, due February 1987.
Copyright 1987 by Xerox Corporation. All rights reserved.