LoganBerryDoc.tioga
Doug Terry, January 15, 1992 12:44 pm PST
Swinehart, June 17, 1991 10:13 am PDT
Polle Zellweger (PTZ) February 23, 1988 6:26:16 pm PST
Brian Oki, January 31, 1990 10:50:02 am PST
LOGANBERRY
PCEDAR 2.0 --
LoganBerry
a simple data management facility
Doug Terry (with Brian Oki)
© Copyright 1985, 1987, 1990 Xerox Corporation. All rights reserved.
Abstract: LoganBerry is a simple facility for managing databases. Data is stored in one or more log files and indexed using btrees. A database survives processor crashes, but the data management facility does not provide atomic transactions that span multiple operations. Databases may be shared by multiple clients and may reside on a workstation's local disk or be accessed remotely via RPC. A simple in-memory cache is provided that can improve performance in some instances.
Created by: Doug Terry
Maintained by: Doug Terry <Terry.pa>
Keywords: database, logs, btrees, query, servers, RPC
XEROX Xerox Corporation
Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304
Introduction
LoganBerry is a simple facility for managing databases. It allows various types of data to be stored persistently and be retrieved by key or key subrange. Data is stored in one or more log files and indexed using stable btrees. A database survives processor crashes, but the data management facility does not provide atomic transactions. Databases may be shared by multiple clients and accessed remotely via RPC.
To get started in Cedar on a Dorado, bringover [CedarChest®]<Top>LoganBerry.df. Clients should install LoganBerry. Additionally, clients that want access to remote databases should install LoganBerryStub, LoganBerryCourierStub, and/or LoganBerrySunStub. These can be installed at any time in any order. Installing LoganBerryMultiStub loads all three stubs.
To get started in PCedar on a Sun, no bringovers are necessary. Clients should simply put "Require PCedar LoganBerry LoganBerry" in their .require file. Additionally, clients that want access to remote databases should "Require PCedar LoganBerry LoganBerrySunStub".
LoganBerry.mesa contains definitions for the basic data types and operations. This interface is the main one used by LoganBerry client programs. LoganBerryEntry.mesa contains some useful utilities for manipulating LoganBerry entries. LoganBerry supports three additional interfaces. LoganBerryExtras.mesa allows one to register procedures, or "triggers", that get called when a database is updated. These have been used to build in-memory LoganBerry caches, for example. LoganBerryBackdoor.mesa exports some useful log-management and entry-management procedures. LoganBerryClass.mesa allows one to register new LoganBerry implementations (as explained below). These last three interfaces are for use by database maintainers or application specialists, and should not be needed by ordinary clients.
Additional tools, such as the LoganBerry browser as well as more complex query facilities, can be found in [CedarChest®]<Top>LoganBerryTools.df for DCedar or /PCedar/Top/LoganBerryTools-Suite.df for PCedar. LoganBerryTools also includes some useful commands for dealing with LoganBerry databases. See LoganBerryToolsDoc.Tioga.
Database organization
Data representation
A LoganBerry database is maintained as a collection of entries, where each entry is an ordered set of type:value attributes. Both the type and value of an attribute are simply text strings (i.e. ROPEs). However, the programmer interface treats attribute types as ATOMs. For instance, a database entry pertaining to a person might be [$Name: "Doug Terry", $Phone number: "494-4427", $Workstation: "Lake Champlain"].
The entries in a database need not all contain the same set of attributes. In fact, LoganBerry doesn't really care what attributes exist in entries that it stores except that each entry must contain a primary key attribute. The primary attribute value must be unique throughout the database. That is, for any LoganBerry database, there must be one attribute type T with the following properties: (1) every entry in the database has a value for T, (2) no two database entries have the same value for T, and (3) the database schema must declare a primary index for T. It is also a good idea, but not required, that each entry contain an attribute value for each key, i.e. each indexed attribute type (see Access methods).
Database entries are stored in a collection of logs. A database is updated by appending an entry to the end of a log; log entries are never overwritten (except when a log is compacted or replaced in its entirety). Updating individual attributes of an entry requires rewriting the complete entry. In many cases, all of the logs are readonly except for a single activity log. All updates are applied to the activity log (log 0) unless the client explicitly controls which logs are written.
Access methods
Only key-based access methods are supported on a database. These are provided by btrees. A separate btree index exists for each attribute type that serves as a key for the database. Two types of btree indices are supported: primary and secondary. A primary index must have unique attribute values for the associated attribute type, whereas secondary indices need not have unique values.
A sequence of database entries, sorted in btree order, can be accessed using either an enumerator or generator style of retrieval. Generators employ a cursor to indicate a logical position in the sequence being retrieved. A cursor can be used to traverse a sequence in increasing or decreasing order.
Database schema
A database must be "opened" before any of its data can be accessed. The Open operation reads a database schema, which contains information about all logs and indices that comprise a database, and returns a handle that identifies the database in subsequent operations.
A schema is created by clients of this package and stored in a DF file that can be used to back up the database. Lines of the file starting with "-->" contain schema information for the file named either at the end of the same line or on the following line of the DF file. The two types of schema entries are as follows:
--> log <logID> <readOnly or readWrite> <optional CR> <filename>
--> index <key> <primary or secondary> <optional CR> <filename>
The following is a sample database schema file:
============================
-- SampleDB.lbdf
Directory [Indigo]<LoganBerry>Backups>Top>
SampleDB.lbdf!3 12-Aug-85 13:16:12 PDT
Directory [Indigo]<LoganBerry>Backups>SampleDB>
--> log 0 readwrite
Activity.lblog!2 12-Aug-85 12:07:24 PDT
--> log 1 readonly
Readonly.lblog!1 5-Aug-85 11:50:51 PDT
--> index "Name" primary Name.lbindex
============================
Operations
Basic operations
Following is a summary of operations that can be performed on a LoganBerry database; see the interface, LoganBerry.mesa, for the complete truth. All of these operations can be invoked via remote procedure calls to a LoganBerry database server (see "Access to remote servers" below).
Open [dbName] -> db
Initiates database activity and checks for consistency. This can be called any number of times to get a new OpenDB handle or reopen a database that has been closed. Generally, it is not necessary to re-call open once a database handle has been obtained, even if the database is explicitly closed. An OpenDB handle remains valid as long as the machine on which it was issued is not rebooted. Moreover, all of the operations re-open the database automatically if necessary.
Indices are not automatically rebuilt if any are missing or if a partially-completed update left them out-of-date; instead, the error $BadIndex is raised. In this case, clients should catch the error, redo the Open operation to get a valid OpenDB handle, and then call BuildIndices. (Ideally, $BadIndex should be a signal that can be resumed to cause the indices to be rebuilt or else BuildIndices should take a database name rather than an OpenDB handle; this was not done since it would require changes to the LoganBerry interface.)
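For example, a client that wants to recover from missing or out-of-date indices might open a database roughly as follows. This is only a sketch: dbName stands for the name of the schema file, the error-code variable ec is the one bound by the catch phrase (as in the timestamp example later in this document), and it assumes, per the prescription above, that the re-issued Open returns a valid handle.
db: LoganBerry.OpenDB;
rebuild: BOOL ← FALSE;
db ← LoganBerry.Open[dbName: dbName
  ! LoganBerry.Error => IF ec=$BadIndex THEN {rebuild ← TRUE; CONTINUE}];
IF rebuild THEN {
  db ← LoganBerry.Open[dbName: dbName];  -- redo the Open to obtain a valid handle
  LoganBerry.BuildIndices[db: db];  -- reconstruct the indices from the logs
  };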
Describe [db] -> info
Returns schema information about the database.
ReadEntry [db, key, value] -> entry, others
Returns an entry that contains the given attribute value for the given key; returns NIL if none exists. If the key refers to the primary index then the unique entry is returned and others is FALSE. If the key is secondary and several entries exist with the same value for the key, then an arbitrary entry is returned and others is set to TRUE; use EnumerateEntries to get all of the matching entries.
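For example, a lookup by a secondary key might look like the following sketch, where db is an OpenDB handle obtained from Open and the key and value come from the sample entry shown earlier:
entry: LoganBerry.Entry;
others: BOOL;
[entry, others] ← LoganBerry.ReadEntry[db: db, key: $Workstation, value: "Lake Champlain"];
-- entry is NIL if no entry matches; others is TRUE if the key is secondary and more matching entries exist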
EnumerateEntries [db, key, start, end, proc]
Calls proc for each entry in the specified range of key values. An entry is in the range iff it contains an attribute of type key with value V and start <= V <= end where `<=' denotes case-insensitive lexicographic comparison. The enumeration is halted when either the range of entries is exhausted or proc returns FALSE. A NIL value for start represents the least attribute value, while a NIL value for end represents the largest attribute value. Thus, the complete database can be enumerated by specifying start=NIL and end=NIL with key equal to the primary key.
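For example, the following sketch visits every entry in a database whose primary key is $Name (as in the sample schema). The exact type of the callback procedure is defined in LoganBerry.mesa; it is assumed here to take an entry and return a BOOL that continues the enumeration.
VisitAll: PROC [db: LoganBerry.OpenDB] = {
  EveryEntry: PROC [entry: LoganBerry.Entry] RETURNS [continue: BOOL ← TRUE] = {
    -- examine entry here; return FALSE to halt the enumeration early
    };
  -- NIL bounds with the primary key cover the complete database
  LoganBerry.EnumerateEntries[db: db, key: $Name, start: NIL, end: NIL, proc: EveryEntry];
  };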
GenerateEntries [db, key, start, end] -> cursor
Similar to EnumerateEntries, but creates a cursor so that entries in the specified range of key values can be retrieved using NextEntry (defined below). Initially, the cursor points to the start of the sequence. A NIL value for start represents the least attribute value, while a NIL value for end represents the largest attribute value. Thus, the complete database can be enumerated by specifying start=NIL and end=NIL with key equal to the primary key.
NextEntry [cursor, dir] -> entry
Retrieves the next entry relative to the given cursor. The cursor, and thus the sequence of desired entries, must have been previously created by a call to GenerateEntries. The cursor is automatically updated so that NextEntry may be repeatedly called to enumerate entries. NIL is returned if the cursor is at the end of the sequence and dir=increasing or at the start of the sequence and dir=decreasing.
EndGenerate [cursor]
Releases the cursor; no further operations may be performed using the given cursor. This must be called once for every call to GenerateEntries.
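Used together, these operations support loops of the following form. This is a sketch; the cursor type name and the direction value increasing are assumed to match the definitions in LoganBerry.mesa.
entry: LoganBerry.Entry;
cursor: LoganBerry.Cursor ← LoganBerry.GenerateEntries[db: db, key: $Name, start: NIL, end: NIL];
DO
  entry ← LoganBerry.NextEntry[cursor: cursor, dir: increasing];
  IF entry = NIL THEN EXIT;  -- the sequence is exhausted
  -- process entry here
  ENDLOOP;
LoganBerry.EndGenerate[cursor: cursor];  -- always release the cursor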
WriteEntry [db, entry, log, replace]
Adds a new entry to the database. The entry is added to the activity log unless another log is explicitly requested. The entry must have an attribute for the primary key. The primary attribute value must be unique throughout the database unless replace=TRUE; in this case, an existing entry with the same primary attribute value is atomically replaced with the new entry (both must reside in the same log).
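For example, assuming updatedEntry is an entry (built, say, with the LoganBerryEntry utilities) carrying the same primary key value as an existing entry in the activity log, the following sketch replaces that entry in place; omitting replace would instead raise LoganBerry.Error, since the primary value would no longer be unique.
LoganBerry.WriteEntry[db: db, entry: updatedEntry, replace: TRUE];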
DeleteEntry [db, key, value]
Deletes the entry that contains the given attribute value for the given key. If the key is for a secondary index and the value is not unique then an Error[ValueNotUnique] is raised. Deletes are actually performed by logging the delete request (so that the log can be replayed at a later time) and then updating all of the indices.
Close [db]
Terminates database activity and closes all log and index files. Databases are always maintained consistently on disk so it doesn't hurt to leave a database open. The main reason for calling Close is to release the FS locks on the log files so they can be manually edited.
An important feature of the btree-log design is: the log is the truth! Thus, the database facility need only worry about maintaining a consistent set of logs in the presence of processor crashes. Logs are fully self-contained. In the event of inconsistencies arising between the logs and their indices, the associated btrees can be easily rebuilt by reading the logs.
BuildIndices [db]
Rebuilds the indices by scanning the logs and performing WriteEntry or DeleteEntry operations.
CompactLogs [db]
Removes deleted entries from the logs by enumerating the primary index and writing new logs.
Other operations
The LoganBerryExtras.mesa interface provides some additional operations on LoganBerry databases. These operations are for use by database maintainers or application specialists, and should not be needed by ordinary clients. For example, LoganBerryExtras allows one to register procedures, or "triggers", that get called when a database is updated. These procedures support the building of in-memory LoganBerry caches.
RegisterWriteProc [proc, db, ident, clientData]
The proc will be called whenever a write or delete operation is issued to db. This information is retained over close and reopen operations, so it need be called only once per session. ident should be unique per usage of this facility; subsequent registrations with the same ident will supersede earlier ones.
UnregisterWriteProc [db, ident]
There is no further interest in this registration.
LoganBerryExtras also includes StartTransaction and EndTransaction operations. While LoganBerry currently does not provide atomic transactions, using StartTransaction and EndTransaction to bracket a sequence of update operations can yield significant performance improvements. In this case, LoganBerry does not commit each update to disk immediately. Thus, until real atomic transactions are implemented, clients must be willing to recover the database in case of inopportune crashes. Neither the logs nor the B-Tree indices are committed after each write, so the recovery may involve manual correction of the log files. Moreover, these operations should not be used on databases that are shared by multiple clients/processes, since the transaction includes all operations, not just those performed by a particular client.
StartTransaction [db, wantAtomic]
Tells LoganBerry that a sequence of update operations, i.e. a transaction, is being performed. If wantAtomic=TRUE then all operations until the next EndTransaction operation should be executed as an atomic transaction (not implemented). If wantAtomic=FALSE then the outcome of subsequent operations is undefined if a crash occurs before the EndTransaction operation; this can significantly improve the performance for a sequence of update operations. This should only be called if the database is being accessed by a single client.
EndTransaction [db, commit] -> committed
Completes a sequence of update operations. If commit=TRUE then all operations since the last StartTransaction operation are completed; otherwise some or all may be aborted. Returns TRUE if all operations are successfully committed.
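For instance, a batch of writes might be bracketed as follows. This is only a sketch: newEntries is a hypothetical list of entries to be written, and the database is assumed not to be shared with other clients, per the warning above.
committed: BOOL;
LoganBerryExtras.StartTransaction[db: db, wantAtomic: FALSE];  -- defer per-update commits for speed
FOR e: LIST OF LoganBerry.Entry ← newEntries, e.rest UNTIL e = NIL DO
  LoganBerry.WriteEntry[db: db, entry: e.first];
  ENDLOOP;
committed ← LoganBerryExtras.EndTransaction[db: db, commit: TRUE];  -- TRUE if every update was committed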
LoganBerryBackdoor.mesa exports some useful log-management and entry-management procedures. These can be used, for instance, to create LoganBerry logs from programs.
Access to remote servers
LoganBerryStub provides easy access to local or remote LoganBerry database servers. It attempts to hide any RPC details. That is, it (1) catches RPC failures and attempts to either recover or turn them into LoganBerry errors, (2) imports RPC interfaces from multiple servers on demand, (3) uses secure RPC conversations, and (4) tries to keep LoganBerry databases open.
LoganBerryStub exports the LoganBerry.mesa interface so any LoganBerry application should be able to install LoganBerryStub and everything should continue to work. The major difference is that the application can now access remote databases as well as local ones.
When LoganBerryStub is running, the server portion of the database name is taken to be the instance under which the remote LoganBerry server has exported its LoganBerry interface. For example,
LoganBerry.Open[dbName: "/Strowger.lark//Database.schema"];
attempts to set up an RPC connection with Strowger.lark and open a database named "///Database.schema" on that server.
An alternative stub, LoganBerryMultiStub, allows clients to access LoganBerry servers using any of the Cedar, Courier, or SUN RPC protocols. Cedar RPC is used by default, so the default behavior is identical to LoganBerryStub. However, preceding a server name with "Courier$" or "Sun$" causes the Courier or SUN RPC protocol to be used, respectively. When LoganBerryMultiStub has been installed, each of the calls
LoganBerry.Open[dbName: "/Baobab//Database.schema"];
LoganBerry.Open[dbName: "/Courier$Baobab//Database.schema"];
LoganBerry.Open[dbName: "/Sun$Baobab//Database.schema"];
opens a database named "///Database.schema" on the machine named "Baobab", but they use different RPC protocols. Furthermore, subsequent calls to LoganBerry using the returned OpenDB handle will use the protocol specified in the Open call.
LoganBerry servers can export RPC interfaces simply by typing any or all of "LBExport", "LBCourierExport", and "LBSunExport" to a CommandTool.
Other issues
Editing logs by hand: be careful!
Logs are human readable and may be edited or created using facilities outside of LoganBerry, such as the Tioga text editor. You should do so cautiously. In particular, logs must be of the following format:
1) Attribute types and values are text strings separated by a `:'. Thus, attribute types may not contain colons.
2) Attributes are terminated by a CR. Attribute values may contain CR's if the complete value is enclosed in double-quotes `"' and any embedded quotes are expressed as `\"', i.e. the attribute value looks like a ROPE literal. (LoganBerry will automatically format values that it writes as ROPE literals if they contain CR's.)
3) Entries are terminated by a CR.
4) The end of the log must be the character 377C.
Do not delete or change entries in a log that has entries of the form: DELETED: number, since the number is a pointer into the log. It is always safe to add entries to the end of a log (right before the 377C character) or edit a log after it has been compacted.
You must remember to run LoganBerry.BuildIndices after manually editing a log!
The following is a sample log file (containing a single entry with three attributes):
============================
Name: Doug Terry
Phone number: 494-4427
Workstation: "Lake Champlain"
ÿ
============================
Creating a new database
To create a new LoganBerry database, create a schema file with the format described above (start with a DF file and add index and log entries). You must also explicitly create log files, as LoganBerry will refuse to do this for you. If you want an empty database, simply create zero-length log files with the proper names (you can copy Empty.lblog out of LoganBerry.df). Alternatively, if you have existing information, you can convert it to LoganBerry log format using Tioga or a variety of Cedar tools (see the previous section). Once you create the schema and log files, you will need to Open the database and call BuildIndices to create the index files (this can be tricky so see the fine point above under Open).
Using timestamps as primary keys
For some databases, there may be no natural attribute to serve as a key. In this case, a unique timestamp can be used as a key. LoganBerryEntry.NewTimestamp generates a timestamp that is close to the current time and such that subsequent calls to NewTimestamp yield different results provided (1) the calls are performed on the same machine, and (2) the machine has not crashed and been restarted in between calls. Even if NewTimestamp is called on different machines, the results will be unique with a high probability.
Timestamps generated by calls to NewTimestamp may be used directly as keys as in the following piece of code:
LoganBerry.WriteEntry[db: db, entry: LoganBerryEntry.AddAttr[entry, $Key, LoganBerryEntry.NewTimestamp[]]];
However, this may occasionally fail since NewTimestamp is not guaranteed to provide a unique result. Hence, the following is a safer way to use NewTimestamp:
WriteWithTimestamp: PROC [db: LoganBerry.OpenDB, entry: LoganBerry.Entry] = {
  timestampedEntry: LoganBerry.Entry ← LoganBerryEntry.AddAttr[entry, $Key, LoganBerryEntry.NewTimestamp[]];
  LoganBerry.WriteEntry[db: db, entry: timestampedEntry
    ! LoganBerry.Error => IF ec=$ValueNotUnique THEN {LoganBerryEntry.SetAttr[entry, $Key, LoganBerryEntry.NewTimestamp[]]; RETRY} ];
  };
Registering new classes
Mechanisms exist for registering new LoganBerry classes (see LoganBerryClass.mesa). A LoganBerry class is a collection of routines that implement the LoganBerry interface. In addition to the standard LoganBerry implementation, examples of LoganBerry classes include stubs for performing RPC calls to remote LoganBerry servers and "veneers" that enable other databases, such as Cypress, to be accessed as LoganBerry databases. New classes can be registered at any time in any order. All registered classes are accessed through the same LoganBerry interface.
A specific class is bound to a database handle during the Open call. On Open, each registered LoganBerry class is given the opportunity to Open the named database. The first class that returns a non-null open database handle is used for subsequent calls on that handle. Thus, the Open routine for each class should first check that the named database is one that is managed by the particular class. If it can't handle the named database then it should return LoganBerry.nullDB. If the database name is one that is managed by this class but the database cannot be successfully opened for some reason, then the Open procedure should raise an Error.
LoganBerry Entry Cache
One particularly useful class, included automatically with LoganBerry, intercepts ordinary LoganBerry calls to provide an in-memory cache of LoganBerry entries. For applications that read the same entries repeatedly, performance can be considerably improved by using an entry cache.
This cache resides on the client machine issuing the calls, not on the server containing the persistent copy of the database. It is therefore dangerous to use (not guaranteed to be up to date) when another client might be updating the database concurrently. It also provides cached access only for ReadEntry queries requesting a complete match on the primary key: secondary key accesses and accesses through the pattern enumeration operations do not use the cache.
The implementation uses the same Error instance as LoganBerry does to raise its own LoganBerry errors; it also lets most of the real ones through. This may sometimes be confusing, since the DB in effect at the time may be different.
Some of the more exotic features of LoganBerry, such as the ability to specify specific logs for individual writes, may not work properly when an entry cache is used. The client programmer must use judgement, tempered by consultation with the implementors, before enabling caching on a database.
The cache implementation is willing to do non-write-through caching of entries. That is, WriteEntry calls store the changed entry only in the cache, deferring the actual database update until a later time. This feature is supported only for those who have consulted the author(s). Non-write-through caching, which is enabled separately, is of limited value: any attempt to access entries through any key other than the primary one while non-write-through caching is asserted, or any concurrent attempts to write new entries, are fraught with peril. Also, many operations, such as entry deletion, will not work properly when database update is deferred in this way. The mechanism was created to improve the performance of a batch database creation process where, with performance further enhanced by the use of LoganBerry transactions, many entries are written several times before the process completes; deferring the writes reduces the total time and space required to log the entries each time.
Whenever the caching class is loaded (it is present in the standard LoganBerry configuration), its use can be requested by appending the suffix "-cached" to the name of the schema file in a LoganBerry Open operation. Databases opened using file name specifications that do not contain the suffix will use the ordinary local or remote LoganBerry class. However, the cache will only be maintained, and ReadEntry requests will only honor the cache contents, when LoganBerry caching is enabled by the commands below. When the feature is disabled, all requests will simply be passed on to the underlying persistent databases.
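For example, following the database name used in the LBCacheStats example below,
LoganBerry.Open[dbName: "/walnut/WalnutLB.df-cached"];
opens the database through the caching class, whereas opening "/walnut/WalnutLB.df" uses the ordinary local or remote class.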
Commands to enable caching and change parameters
name
LBParameter:
syntax
LBParam [-caching (on|off) | -writeCaching (on|off) |
-cacheSize <numEntries> | -cacheStretch <numEntries> | -dateCompare (on|off)]
description
Provides new values for several LoganBerry cache implementation parameters. If no parameters are specified, prints the current value of all of them.
caching: When on, enables caching for all databases supported by the cached entry class. Default: off.
writeCaching: When on, enables non-write-through caching for all databases supported by the cached entry class. This is effective only if entry caching is also enabled. Use of this feature is recommended only in carefully-controlled situations.
When turned off by this command, non-write-through caching is disabled for all databases supported by the cached entry class, and all dirty entries are then forced to their respective databases. An EndTransaction operation will also force all dirty entries to the affected database, but does not subsequently disable the feature. The user is urged to leave this feature disabled! Default: off.
cacheSize: Specifies the minimum number of cache entries that will be retained for any cached database. These sizes are not specific to individual databases at present. Default: 3000.
cacheStretch: Specifies the number of cache entries beyond the minimum that will be permitted. When the cache size exceeds the minimum plus this number, the cache is pruned back to the minimum size, based on an LRU aging algorithm. If non-write-through caching is enabled, dirty entries among the victims are written to the database as the cache is pruned. Default: 1500.
dateCompare: When on, enables a particularly arcane feature: If the primary key contains text, following a "@" trigger character, that appears to be a date and time, database updates that are deferred by the WriteCaching feature will be written in increasing date-time order when the cache is later pruned or flushed. If no date is found, or if the feature is not enabled, they will be written alphabetically. This ordering should increase the probability that later database enumerations will occur in the desired order. Default: off.
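For example, the command
% LBParam -caching on -cacheSize 5000
enables entry caching and raises the minimum cache size to 5000 entries (the command name and switches here follow the syntax line above).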
name
LBCachedDBs:
description
Prints a list of the names of all LoganBerry databases that have active entry caches. The report includes cache occupancy statistics for each database. This output includes some information useful only to the system developers.
name
LBCacheStats:
syntax
LBCacheStats dbname
description
Prints out some cache occupancy statistics useful to the system developers. The dbname should include the "-cached" suffix.
examples
% LBCacheStats /walnut/WalnutLB.df-cached
Porting LoganBerry to PCR/Sun OS -- Details of Changes
Philosophy: Preserve the original code as much as possible and change where absolutely necessary to maintain the same semantics.
Replaced occurrences of GeneralFS and FS with UFS. The convention I followed is to bracket the old FS (GeneralFS) statement with << >> and turn it into a comment. The new statement either precedes or follows the old statement. There are two changes worth mentioning. First, the only way files are opened is through UFS.StreamOpen in either read, write, or create mode. Second, log files are now never opened in append mode, only in write mode. In the old code, a file opened in append mode was mostly being used as if it had been opened in write mode.
File creation time problem: In the old FS, when a file is opened in either $write or $create mode, the file creation time is updated at open but does not change as the file is written; it changes again only when the file is reopened (after being closed). In Unix, the create date (mtime) is continually updated as the file is written to. I worried about this because IndicesOutOfDate assumes the former behavior. Added a new data structure LogCreateDate to remember the time the file is opened and a routine SetCreateTime to reset the file mtime when the file is closed. This is a hack to preserve the semantics the client sees and to avoid making incompatible changes. A better approach in the long run is to timestamp the file as a comment. This code is part of LoganBerryImpl so as to avoid changing UFSImpl's interface.
Compacting logs: Beware!! The old code assumes the existence of versions, so you never need to worry about clobbering the old log files. Not so under Unix. There's now a window of vulnerability during which the old log file can be completely clobbered while the bits from the compacted log are being copied. Since UFS hasn't implemented a Rename routine, I used UFS.Copy followed by UFS.Delete: Copy will copy the new bits to the old name, and Delete will delete the temporary file. If the system crashes during the copy, the old log will be clobbered and lost. If there's a way to do this atomically, someone let me know. (Peter Kessler says that DFS handles versions. Rather than using yet another file system, I'll wait until PFS becomes available.)
BTreeSimple interface change: used Russ Atkinson's changes to BTreeVMImpl as the basis.
Removed all the RPC multi-stub stuff, since it is mostly not portable, and kept the Sun RPC stuff. Sun RPC hasn't been tested yet; Bill Jackson is doing that. Hence, it is not currently possible to call LoganBerry operations remotely. For the moment, RPC.conversation is just a CARD32 type.
Michael Plass pointed out some problems with binary data: If you store data containing a CR, it gets stored in a rope literal as "\n". In the PCedar world, this would get read as LF, which is OK if it is really used as a newline, but if it is really binary data, it means trouble. LoganBerry (or maybe IO) should emit "\r" or "\015" to keep the bits unambiguous. These changes have been incorporated.
Known bugs and limitations
LoganBerryImpl raises some (most) Errors with the monitor locked.
BuildIndices should take a database name rather than an OpenDB handle as an argument. Also, it should be able to selectively build an index rather than rebuilding all indices.
The error raised when indices are out of date (currently Error[$BadIndex]) could be a signal. Resuming this signal should cause the indices to automatically be rebuilt.
More complex techniques for logging Deletes should be explored, i.e. how to remove the no-cross-log deletes/replaces restriction (there's a separate design document).
The LoganBerryExtras interface should be folded into LoganBerry when possible.
Performance
On a Dorado
The following output from the test program gives some indication of the performance obtained by LoganBerry running on a Dorado and performing operations on a local test database. The test database contains 1000 entries where each entry consists of two attributes, an integer and a variable-length rope.
% TestLoganBerry -r 1 17
Enumerating entries.
Test database contains 1000 entries.
Running time: 00:00:06.883456
Generating entries.
Running time: 00:00:06.294176
Reading and deleting 10 entries.
Running time: 00:00:03.179168
Verifying deletes.
Running time: 00:00:00.014176
Rewriting deleted entries.
Running time: 00:00:02.061792
Verifying rewrites.
Running time: 00:00:00.02992
Replacing (then restoring) 10 entries.
Running time: 00:00:07.09152
Reading 100 entries.
Running time: 00:00:01.606688
Checking data.
Running time: 00:00:00.236736
Closing and reopening the database.
Running time: 00:00:00.439136
Advanced test of generate.
Running time: 00:00:00.286336
Writing strange values and verifying them.
Running time: 00:00:03.093504
Checking generated errors.
Running time: 00:00:00.108768
Rebuilding indices.
Running time: 00:00:33.028064
Compacting log.
Running time: 00:01:55.818624
Counting number of entries.
Running time: 00:00:06.267104
Generating entries using secondary index.
Running time: 00:00:08.660896
0 Errors; elapsed time = 00:03:15.257888
%
On a Sun 4/260
The following output from the test program gives some indication of the performance obtained by LoganBerry running on a Sun 4 with 32 megabytes of RAM (Lyrane) and performing operations on a local test database. The test database contains 1000 entries where each entry consists of two attributes, an integer and a variable-length rope. We assume that the indices have already been created.
pcr: LoadAndRun /palain/boki/Cedar/Users/boki/loganberry/sun4/LoganBerryTest.c2c.o
LoganBerryTest
First test number: 1
Last test number: 17
Enumerating entries.
Test database contains 1000 entries.
Running time: 00:00:08.560004
Generating entries.
Running time: 00:00:05.439996
Reading and deleting 10 entries.
Running time: 00:00:30.560001
Verifying deletes.
Running time: 00:00:00.07
Rewriting deleted entries.
Running time: 00:00:17.349995
Verifying rewrites.
Running time: 00:00:00.099993
Replacing (then restoring) 10 entries.
Running time: 00:00:54.389994
Reading 100 entries.
Running time: 00:00:01.279998
Checking data.
Running time: 00:00:00.179996
Closing and reopening the database.
Running time: 00:00:02.38
Advanced test of generate.
Running time: 00:00:00.459993
Writing strange values and verifying them.
Running time: 00:00:21.429995
Checking generated errors.
Running time: 00:00:00.119998
Rebuilding indices.
Running time: 00:01:46.369996
Compacting log.
Running time: 00:09:05.989998
Counting number of entries.
Running time: 00:00:05.479996
Generating entries using secondary index.
Running time: 00:00:06.079993
0 Errors; elapsed time = 00:13:27.700003
Wall-clock 829620 msec. Local stats: User cpu 120450 msec. System cpu 26350 msec.
End of reading commands from file 'LoganBerry.pcr'.
pcr:
On a Sun SPARCstation 1
The following output from the test program gives some indication of the performance obtained by LoganBerry running on a Sun SPARCstation 1 with 28 megabytes of RAM (Kestrel) and performing operations on a test database that resides on an NFS server (Palain). The test database contains 1000 entries where each entry consists of two attributes, an integer and a variable-length rope. We assume that the indices have already been created. (Test performed October 25, 1990 9:51:59 am PDT, and does not yet reflect performance improvements made for operations occurring between StartTransaction/EndTransaction calls. Figures for a SPARCstation 2 appear in the next section.)
% TestLoganBerry -r 1 17
Enumerating entries.
Test database contains 1000 entries.
Running time: 00:00:05.262621
Generating entries.
Running time: 00:00:07.888103
Reading and deleting 10 entries.
Running time: 00:00:04.969256
Verifying deletes.
Running time: 00:00:00.010234
Rewriting deleted entries.
Running time: 00:00:03.99831
Verifying rewrites.
Running time: 00:00:00.031192
Replacing (then restoring) 10 entries.
Running time: 00:00:22.32567
Reading 100 entries.
Running time: 00:00:01.024134
Checking data.
Running time: 00:00:00.220013
Closing and reopening the database.
Running time: 00:00:01.291327
Advanced test of generate.
Running time: 00:00:00.153915
Writing strange values and verifying them.
Running time: 00:00:04.521016
Checking generated errors.
Running time: 00:00:00.250714
Rebuilding indices.
Running time: 00:00:42.994876
Compacting log.
Running time: 00:02:35.866546
Counting number of entries.
Running time: 00:00:03.990346
Generating entries using secondary index.
Running time: 00:00:03.984024
0 Errors; elapsed time = 00:04:19.517667
%
On a Sun SPARCstation 2
The following output from the test program gives some indication of the performance obtained by LoganBerry running on a Sun SPARCstation 2 with 28 megabytes of RAM (Kestrel) and performing operations on a test database that resides on an NFS server (Palain). Test was performed January 15, 1992.
% TestLoganBerry -r 1 17
Enumerating entries.
Test database contains 1000 entries.
Running time: 00:00:01.581755
Generating entries.
Running time: 00:00:01.59866
Reading and deleting 10 entries.
Running time: 00:00:05.165927
Verifying deletes.
Running time: 00:00:00.010991
Rewriting deleted entries.
Running time: 00:00:01.298343
Verifying rewrites.
Running time: 00:00:00.017329
Replacing (then restoring) 10 entries.
Running time: 00:00:03.419241
Reading 100 entries.
Running time: 00:00:01.024249
Checking data.
Running time: 00:00:00.084826
Closing and reopening the database.
Running time: 00:00:01.662453
Advanced test of generate.
Running time: 00:00:00.060094
Writing strange values and verifying them.
Running time: 00:00:02.074357
Checking generated errors.
Running time: 00:00:00.029002
Rebuilding indices.
Running time: 00:00:12.693149
Compacting log.
Running time: 00:00:13.021114
Counting number of entries.
Running time: 00:00:01.769305
Generating entries using secondary index.
Running time: 00:00:01.990349
0 Errors; elapsed time = 00:00:47.651323
%