HashTableDoc.tioga
Bertrand Serlet, August 1, 1985 11:19:59 am PDT
Mike Spreitzer January 7, 1987 4:47:18 pm PST
HashTable
CEDAR 6.1 — FOR INTERNAL XEROX USE ONLY
HashTable
Bertrand Serlet and Mike Spreitzer
© Copyright 1986 Xerox Corporation. All rights reserved.
Abstract: HashTable is a generalization of RefTab that allows client-supplied hash and equality functions. A HashTable grows as necessary to keep the density limited.
Created by: Bertrand Serlet, mostly stolen from RefTab (growing added by Mike Spreitzer)
Maintained by: Bertrand Serlet <Serlet.pa>, Mike Spreitzer <Spreitzer.pa>
Keywords: Hash, Re-Hash, RefTab, SymTab
XEROX  Xerox Corporation
   Palo Alto Research Center
   3333 Coyote Hill Road
   Palo Alto, California 94304

For Internal Xerox Use Only
1. History
This package was created as a generalization of RefTab and SymTab, which are packages that implement REF ANY to REF ANY and ROPE to REF ANY hash tables. Some time later, Mike Spreitzer added the idea of automatically growing the size of the table to limit the density. This was done because in many applications it is difficult to make a good size estimate at creation time, and growing the table can result in a net savings in execution time at a minimal cost in memory.
Below are some messages exchanged discussing the growing; following that are the files used in the benchmarks cited.
1.1. Discussion
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
29 Oct 85 serlet.pa Re: ReHashTable for CedarChest 6.0
Date: Tue, 29 Oct 85 15:10:30 PST
From: serlet.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Tue, 29 Oct 85 14:30:27 PST"
To: Spreitzer
Reply-to: Bertrand <Serlet.pa>
Cc: serlet
Made table modulus be powers of two, instead of prime numbers: that seems much more reasonable to me (computing the first 300 primes is NOT reasonable, I believe). Do you know what the effect on the distribution is if mod is not a prime? If it is not bad, what about transforming mod into a power of 2 (logModulus instead of mod)? That could replace divisions with shifts.
What about getting rid now of densityLimit and growthFactor (the defaults are just right)? That would also get rid of REAL numbers, which I find tasteless in that package.
Erase is meaningless. Jennifer added it, but it seems more rational to just NIL the REF, especially since the table is ReHashed!
Bertrand


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
29 Oct 85 To: CedarUse... Comparison of different kinds of tables
Date: Tue, 29 Oct 85 15:37:23 PST
From: Spreitzer.pa
Subject: Comparison of different kinds of tables
To: CedarUsers^.pa
Cc: Spreitzer
Reply-to: Spreitzer.pa
Introduction
I've done a little bit of experimental computer science, to try to evaluate some of the different ways we Cedar users have of making tables. More precisely, I've compared 4 ways of making REF ANY -> REF ANY tables (RefTab, HashTable, ReHashTable, and RedBlackTree), and 4 ways of making ROPE -> REF ANY tables (SymTab, HashTable, ReHashTable, and RedBlackTree). I was particularly interested in what happens for the hash tables when the initial size prediction is much too small.
The Experiment
A table is created; some insertions, lookups of existing entries, and lookups of non-existing entries are done, and then the table is explicitly destroyed. Garbage collections are done before and after. This is repeated 5 times, and the average and standard deviation are computed for the execution time.
For the ROPE -> REF ANY tables, the key used for lookup of existing entries is at a different address than the key used for insertion, although the characters are, of course, the same.
The ReHashTable rehashing parameters are: grow by a factor of 2.0 when the density passes 1.0.
For further details of the experiment, see [User]<Spreitzer>6.0Top>TableBench.DF.
The execution time data follow. The head of each section gives: domain, number of insertions, number of lookups of existing entries, number of lookups of non-existing entries, initial size prediction. Execution times are in seconds.
REF ANY, 1500, 15000, 7500, 7:
RefTab  10.7 ± 0.29
HashTab  12.4 ± 0.37
ReHashTab 01.4 ± 0.17
RedBlack  09.4 ± 0.30
REF ANY, 1500, 15000, 7500, 17:
RefTab  05.0 ± 0.25
HashTab  05.8 ± 0.27
ReHashTab 01.5 ± 0.17
RedBlack  09.1 ± 0.16
REF ANY, 1500, 15000, 7500, 2003:
RefTab  01.4 ± 0.30
HashTab  01.4 ± 0.28
ReHashTab 01.3 ± 0.26
RedBlack  09.1 ± 0.37
ROPE, 750, 7500, 3750, 7:
SymTab  15.2 ± 0.24
HashTab  22.3 ± 0.33
ReHashTab 02.7 ± 0.18
RedBlack  08.2 ± 0.26
ROPE, 750, 7500, 3750, 17:
SymTab  07.2 ± 0.27
HashTab  10.1 ± 0.25
ReHashTab 02.7 ± 0.17
RedBlack  08.2 ± 0.19
ROPE, 750, 7500, 3750, 2003:
SymTab  01.8 ± 0.17
HashTab  02.1 ± 0.21
ReHashTab 02.2 ± 0.23
RedBlack  08.1 ± 0.29
The instantaneous (as opposed to cumulative) memory usage of the different table implementations can be computed by inspection:
For RefTab, SymTab, and HashTable, it is, at the worst time (just before the table is destroyed) and to within a small offset, 2*P + 8*I, where P is the predicted size and I is the actual number of entries inserted. This is 2 words for each of the P entries in the sequence of bucket heads, and 8 words for each bucket entry.
For ReHashTable, it is (somewhere between 10 and 12)*I; again 2 words per bucket head, and 8 per bucket entry. Since the ReHashTable (in this experiment) doubles the length of its sequence of bucket heads, at times which are in general random with respect to the application, there will be somewhere between 1.0 and 2.0 bucket heads for each inserted entry.
For RedBlackTree, it is 16*I. This is 10 words for the RedBlackTree node, and 6 words for the client node that contains the key and value.
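For concreteness, the three estimates above can be restated as a small model (a sketch in Python, using only the word counts quoted above; P is the predicted size, I the number of inserted entries):

```python
# Instantaneous memory-usage models from the analysis above, in words.
# P = predicted (initial) table size, I = number of entries inserted.

def words_fixed_hash(P, I):
    # RefTab, SymTab, HashTable: 2 words per bucket head (P of them),
    # plus 8 words per bucket entry.
    return 2 * P + 8 * I

def words_rehash(I, heads_per_entry):
    # ReHashTable: still 2 words per head and 8 per entry, but the head
    # sequence doubles at times unrelated to the application, so there
    # are between 1.0 and 2.0 heads per inserted entry: (10..12)*I.
    assert 1.0 <= heads_per_entry <= 2.0
    return (2 * heads_per_entry + 8) * I

def words_red_black(I):
    # RedBlackTree: 10 words per tree node + 6 words per client node.
    return 16 * I

# Worst benchmark case above: predict 7, insert 1500.
print(words_fixed_hash(7, 1500))   # 12014
print(words_rehash(1500, 2.0))     # 18000.0 (the "up to 12*I" bound)
print(words_red_black(1500))       # 24000
```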
Conclusions
A size underestimate costs execution time for hash tables that don't rehash; it costs a factor of 10 in the worst case examined (predict 7, insert 1500). It saves a bit of memory.
Generalizing from RefTab or SymTab to HashTable or ReHashTable has a variable, smallish cost. The worst observed is almost 50% more execution time (awful size prediction, ROPE -> REF ANY). In good cases (REF ANY -> REF ANY, good size prediction) the difference is within experimental error. There is no cost in memory consumption.
Generalizing from HashTable to ReHashTable has no execution time cost that can be observed within the accuracy of this experiment. It can cost up to 20% more memory usage.
A RedBlackTree is better than a hash table that doesn't rehash when you can't make a good prediction of the number of entries to be inserted.
A ReHashTable is almost always better. The only time it loses is against a SymTab when you can make a good size prediction (this is due to the extra level of procedure calling to access the rope hashing and comparison operations), or when you are so foolish as to care about that up-to-20% extra memory usage.
Mike


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
29 Oct 85 To: Bertrand ... Re: ReHashTable for CedarChest 6.0
Date: Tue, 29 Oct 85 17:48:50 PST
From: Spreitzer.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Tue, 29 Oct 85 15:10:30 PST"
To: Bertrand <Serlet>
Cc: Spreitzer
I, too, wondered about the effect of changing the sort of modulus used. I finally decided to think harder about how the hash functions used would relate to a power-of-2 modulus. The REF hash (XOR high & low words) should be acceptable. The ROPE hash does rotates & XORs, and so should also be acceptable. Prompted by your message, I did some experiments:
Inserting 1500 REF ANYs into a HashTable (size 1511 (a prime)) and a ReHashTable (size 2048):
HashTable list size distribution: [570 of 0.0, 526 of 1.0, 296 of 2.0, 98 of 3.0, 18 of 4.0, 2 of 5.0, 1 of 6.0]
ReHashTable list size distribution: [1191 of 0.0, 413 of 1.0, 275 of 2.0, 142 of 3.0, 24 of 4.0, 3 of 5.0]
Inserting 1020 ROPEs into a HashTable (size 1021 (a prime)) and a ReHashTable (size 1024):
HashTable list size distribution: [399 of 0.0, 358 of 1.0, 169 of 2.0, 64 of 3.0, 25 of 4.0, 4 of 5.0, 2 of 6.0]
ReHashTable list size distribution: [516 of 0.0, 228 of 1.0, 143 of 2.0, 76 of 3.0, 38 of 4.0, 14 of 5.0, 7 of 6.0, 2 of 7.0]
Inserting 1020 REF ANYs into a HashTable (size 1021 (a prime)) and a ReHashTable (size 1024):
HashTable list size distribution: [398 of 0.0, 345 of 1.0, 189 of 2.0, 63 of 3.0, 22 of 4.0, 4 of 5.0]
ReHashTable list size distribution: [567 of 0.0, 161 of 1.0, 141 of 2.0, 82 of 3.0, 45 of 4.0, 19 of 5.0, 7 of 6.0, 2 of 7.0]
In this case we saw it choose only even indices in our sample of the first 32 buckets.
Recalling the graphs, the ReHashTable has a poorer distribution, but it's not catastrophic.
When comparing with HashTable, the distribution is irrelevant --- what we actually care about is execution time. We saw that the execution times didn't seem to suffer much from the poor quality of the distribution --- I think this is because the lists are kept shortish anyway. Your point about some application that might generate addresses of the form A0 + I*A1 is more worrisome, but I don't think it's likely to be a big problem. I think allocation in Cedar is sufficiently random that although you might find runs of the form A0 + I*A1, different runs would have different A0's, because a run can only occupy so much storage (probably 65536 words), and where the next run comes from is determined by the current usage of VM, which I expect generally to be somewhat fragmented and randomly worked over. More importantly, I don't even think it likely that you'll actually find runs of the form A0 + I*A1, because your application probably allocates (perhaps indirectly) things other than the objects handed to the table as keys, and there's probably some variation in the number and/or type of other objects allocated between insertions. You've probably noticed the great enthusiasm that Cedar has for allocating and reclaiming objects.
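The only-even-buckets effect mentioned above is easy to reproduce with a toy model (a Python sketch; the XOR-of-halves hash follows the description above, but the address constants A0 and A1 are hypothetical):

```python
# Toy reproduction of the "only even bucket indices" effect seen above:
# keys that are addresses of the form A0 + i*A1 with an even stride,
# hashed by XORing the high and low 16-bit halves, then reduced by a
# power-of-two modulus. A0 and A1 here are hypothetical values.

def xor_fold_hash(addr):
    # XOR of high and low words, as the REF hash is described above.
    return (addr >> 16) ^ (addr & 0xFFFF)

MOD = 32             # power-of-two modulus: bucket = low 5 bits of hash
A0, A1 = 0x20340, 8  # hypothetical run of addresses, even stride

buckets = {xor_fold_hash(A0 + i * A1) % MOD for i in range(1000)}
# Every address is even and the high half stays even over this run, so
# the low bit of every hash is 0: only even bucket indices are chosen.
print(sorted(buckets))   # [2, 10, 18, 26]
```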
Before fixing densityLimit and growthFactor I'd like to know how varying them affects performance, and might remain reluctant to fix them if it is revealed that they control a significant space/time tradeoff. I don't see why you find REAL numbers so distasteful --- the quantities they talk about really are not integers, and the uses of them are so infrequent that they don't take a lot of time.
Erase is comforting because you never know where a pointer might get "stuck" in Cedar --- it's nice to be able to "really smash" something that could hang on to a lot of stuff.
Mike


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1 Nov 85 To: Serlet.pa Comparing moduli
Date: Fri, 1 Nov 85 19:28:43 PST
From: Spreitzer.pa
Subject: Comparing moduli
To: Serlet.pa
Cc: Spreitzer
I conducted an experiment to compare: primes (selected from a progression at least a factor of 2^(1/4) apart), powers of two, and one less than powers of two. For the REF tables (presented in the left column of the pictures), I inserted 01021 times, looked up successfully 15000 times, and looked up unsuccessfully 7500 times, in a table initially estimated to be 01021 long. For the ROPEs, it was 00509, 7500, 3750, and 00509. Following are the results of conducting this experiment twice. Note that for the REFs the best seems to be powers of two, and for the ROPEs the best seems to be the primes.
[Artwork node; type 'ArtworkInterpress on' to command tool]
November 1, 1985 6:32:53 pm PST
ReHashTable[selectedPrime] REF list size distribution: [370 of 0.0, 387 of 1.0, 175 of 2.0, 75 of 3.0, 11 of 4.0, 3 of 5.0]
ReHashTable[powerOfTwo] REF list size distribution: [350 of 0.0, 405 of 1.0, 208 of 2.0, 47 of 3.0, 11 of 4.0, 3 of 5.0]
ReHashTable[oneLessThanPowerOfTwo] REF list size distribution: [375 of 0.0, 366 of 1.0, 208 of 2.0, 59 of 3.0, 13 of 4.0, 2 of 5.0]
ReHashTable[selectedPrime] ROPE list size distribution: [193 of 0.0, 180 of 1.0, 97 of 2.0, 24 of 3.0, 13 of 4.0, 1 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] ROPE list size distribution: [259 of 0.0, 108 of 1.0, 75 of 2.0, 43 of 3.0, 15 of 4.0, 10 of 5.0, 2 of 6.0]
ReHashTable[oneLessThanPowerOfTwo] ROPE list size distribution: [218 of 0.0, 156 of 1.0, 81 of 2.0, 37 of 3.0, 15 of 4.0, 4 of 5.0]
[Artwork node; type 'ArtworkInterpress on' to command tool]
November 1, 1985 7:02:57 pm PST
ReHashTable[selectedPrime] REF list size distribution: [367 of 0.0, 396 of 1.0, 177 of 2.0, 60 of 3.0, 15 of 4.0, 5 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] REF list size distribution: [341 of 0.0, 420 of 1.0, 195 of 2.0, 62 of 3.0, 5 of 4.0, 1 of 5.0]
ReHashTable[oneLessThanPowerOfTwo] REF list size distribution: [380 of 0.0, 368 of 1.0, 190 of 2.0, 70 of 3.0, 12 of 4.0, 3 of 5.0]
ReHashTable[selectedPrime] ROPE list size distribution: [198 of 0.0, 176 of 1.0, 90 of 2.0, 31 of 3.0, 11 of 4.0, 2 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] ROPE list size distribution: [264 of 0.0, 99 of 1.0, 80 of 2.0, 43 of 3.0, 16 of 4.0, 6 of 5.0, 1 of 6.0, 3 of 7.0]
ReHashTable[oneLessThanPowerOfTwo] ROPE list size distribution: [210 of 0.0, 164 of 1.0, 85 of 2.0, 38 of 3.0, 10 of 4.0, 3 of 5.0, 1 of 6.0]


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1 Nov 85 To: Serlet.pa ReHashTable experiment results
Date: Fri, 1 Nov 85 23:50:10 PST
From: Spreitzer.pa
Subject: ReHashTable experiment results
To: Serlet.pa
Cc: Spreitzer
1) Varying densityLimit and growthFactor over reasonable ranges does not have much of an effect.
2) There is a modest variation in execution times among the three different modulus choices, but the magnitude of the differences, and the ordering among those choices, changes from test case to test case. The greatest spread observed in one test is 6.2 (oneLessThanPowerOfTwo) vs. 5.5 (selectedPrime and powerOfTwo). selectedPrime lost only once, and then not by much (2.1±0.2 vs 2.2±0.2).
3) There is some variation in quality of distribution of list sizes. Again, the magnitude of the differences and the ordering among the choices vary with the details of the test case. Again, selectedPrime never loses by much.
Thus, I'd conclude: use densityLimit=1.0, growthFactor=2.0, and select the modulus from a table of 15 primes between 2 and 65536 (here's such a table: 00002, 00005, 00011, 00023, 00047, 00097, 00197, 00397, 00797, 01597, 03203, 06421, 12853, 25717, 51437; a table of all primes between 2 and 65536 can be found in [User]<Spreitzer>Stranded>Prime.Table).
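That conclusion can be sketched as follows (Python; the prime table is the 15-entry one quoted above, while next_modulus and should_rehash are illustrative helpers, not the Cedar implementation):

```python
import bisect

# The 15 tabled primes quoted above, each roughly a factor of 2 apart.
PRIME_TABLE = [2, 5, 11, 23, 47, 97, 197, 397, 797,
               1597, 3203, 6421, 12853, 25717, 51437]

DENSITY_LIMIT = 1.0   # rehash when entries per bucket head passes this
GROWTH_FACTOR = 2.0   # aim for a modulus about twice the current one

def should_rehash(entries, mod):
    return entries > DENSITY_LIMIT * mod

def next_modulus(mod):
    # Smallest tabled prime >= growthFactor * mod; clamps to the last
    # tabled prime once the table runs out.
    want = int(GROWTH_FACTOR * mod)
    i = bisect.bisect_left(PRIME_TABLE, want)
    return PRIME_TABLE[min(i, len(PRIME_TABLE) - 1)]

print(next_modulus(7))      # 23  (smallest tabled prime >= 14)
print(next_modulus(1021))   # 3203
```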
All the tests and data are in [User]<Spreitzer>6.0Top>TableBench.DF; I can go over any of it with you if you're interested.
Mike


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
2 Nov 85 serlet.pa Re: ReHashTable experiment results
Date: Sat, 2 Nov 85 19:05:19 PST
From: serlet.pa
Subject: Re: ReHashTable experiment results
In-reply-to: "Your message of Fri, 1 Nov 85 23:50:10 PST"
To: Spreitzer
Reply-to: Bertrand <Serlet.pa>
Cc: Serlet
Those are very interesting results (and I like the inclusion of graphics in your message). If you agree to get rid of densityLimit, growthFactor, and all uses of reals, I think it would be perfect either to call this package HashTable, or to use it to implement HashTable. Perhaps we should also implement RefTab and SymTab using ReHashTable and discuss with Russ about including that in Cedar 7.0?
Bertrand


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
3 Nov 85 serlet.pa Re: ReHashTable for CedarChest 6.0
Date: Sun, 3 Nov 85 18:49:46 PST
From: serlet.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Sun, 3 Nov 85 18:27:20 PST"
To: Spreitzer
Reply-to: Bertrand <Serlet.pa>
Cc: serlet
1) I do not understand this business of DInhibit. What is inhibitCount?
2) If you had a very large prime at the end of your primeTable, you could probably get rid of sizeLimit by having it be mod.
3) Since rehashing involves a lot of computation, the cost of recomputing pti is negligible, and you could recompute the new table.mod by just finding the smallest number in primeTable that is greater than table.mod * 2 - 1:
GetPrime: PROC [oldPti: NAT, min: CARDINAL] RETURNS [newPti: NAT, p: SeqIndex] = {
  max: CARDINAL = LAST[SeqIndex];
  -- p is prime.
  -- p <= max.
  -- If there are any primes in [min, max], p is the least such, otherwise p is the largest prime < min.
  pti: NAT ← oldPti;
  IF primeTable[pti] > max THEN ERROR;
  IF primeTable[pti] > min THEN ERROR;
  DO
    IF pti = PrimeTableSize OR primeTable[pti] > max THEN RETURN [pti-1, primeTable[pti-1]];
    IF primeTable[pti] >= min THEN RETURN [pti, primeTable[pti]];
    pti ← pti + 1;
  ENDLOOP;
  };
and (in ReHash)
  newPTI, newMod: CARDINAL;
  [newPTI, newMod] ← GetPrime[table.pti, table.mod * 2];
  IF newMod = table.mod THEN {table.sizeLimit ← LAST[INT]; RETURN};
  table.pti ← newPTI;
replaced by (in ReHash):
  newPTI: CARDINAL ← 0;
  newMod: CARDINAL ← table.mod * 2 - 1;
  WHILE primeTable[newPTI] < newMod DO newPTI ← newPTI + 1 ENDLOOP;
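Bertrand's simplified search can be sketched in Python (the prime table is the 15-entry one proposed in the earlier message; the guard against running off the end of the table is an addition the Mesa fragment omits):

```python
# Sketch of the simplified ReHash search suggested above: recompute the
# prime-table index from scratch, looking for the smallest tabled prime
# >= table.mod * 2 - 1. The end-of-table guard is an addition not
# present in the Mesa fragment.

PRIME_TABLE = [2, 5, 11, 23, 47, 97, 197, 397, 797,
               1597, 3203, 6421, 12853, 25717, 51437]

def simplified_new_mod(mod):
    target = mod * 2 - 1
    pti = 0
    while pti < len(PRIME_TABLE) - 1 and PRIME_TABLE[pti] < target:
        pti += 1
    return pti, PRIME_TABLE[pti]

print(simplified_new_mod(23))   # (4, 47)
print(simplified_new_mod(7))    # (3, 23)
```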

Bertrand


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
4 Nov 85 To: Bertrand... Re: ReHashTable for CedarChest 6.0
Date: Mon, 4 Nov 85 13:59:21 PST
From: Spreitzer.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Sun, 3 Nov 85 18:49:46 PST"
To: Bertrand <Serlet>
Cc: Spreitzer
1) While an enumeration (Pairs) is in progress, we can't rehash, since that would confuse the enumeration. inhibitCount is the number of enumerations going on.
2) The reason sizeLimit and mod are distinct variables is so that when size passes the last tabled prime (which must be <= LAST[SeqIndex]), sizeLimit can be set to LAST[INT] to disable consideration of rehashing in a natural way.
3) The real reason is that it's left over from when the table was larger. It's now a trade-off of negligible time against negligible space plus some simplicity (which, as you know, I think you often rate too highly in comparison to other things; but in this case I agree that there's not much else left, so I'll probably make the change you suggest).
Mike


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
4 Nov 85 serlet.pa Re: ReHashTable for CedarChest 6.0
Date: Mon, 4 Nov 85 15:48:17 PST
From: serlet.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Mon, 4 Nov 85 13:59:21 PST"
To: Spreitzer
Reply-to: Bertrand <Serlet.pa>
Cc: Bertrand <Serlet>
1) While an enumeration (Pairs) is in progress, we can't rehash, since that would confuse the enumeration. inhibitCount is the number of enumerations going on.
=> Use ENTRY to avoid that! Anyway, the notion that you can change a table while enumerating it seems confusing to me.
2) The reason sizeLimit and mod are distinct variables is so that when size passes the last tabled prime (which must be <= LAST[SeqIndex]), sizeLimit can be set to LAST[INT] to disable consideration of rehashing in a natural way.
=> What about having a very large prime (larger than the VM, on the order of 2^30 or so) at the end of the table, so that you never reach that many elements?

Bertrand


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
4 Nov 85 To: Bertrand... Re: ReHashTable for CedarChest 6.0
Date: Mon, 4 Nov 85 16:07:07 PST
From: Spreitzer.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Mon, 4 Nov 85 15:48:17 PST"
To: Bertrand <Serlet>
Cc: Spreitzer
1)
a) We want to be able to do other operations on the table inside an enumeration consumer. I have a number of pieces of code that do this (all on RedBlackTrees, since that's what I had been using until ReHashTable was created). For instance, one thing that makes sense is to enumerate all entries and delete the ones that meet certain criteria.
b) Making the enumeration be monitored (the other operations already are) doesn't allow doing other operations inside the enumeration --- you'd just hang forever, waiting to acquire the monitor lock you've already got for the enumeration.
2)
a) All the numbers in the table are potential moduli. Including 2^30 in the table means that when the size passes the previous entry, we try to allocate a Seq that is 2^30 long!
b) We could delete the mod field from the TableRec --- it's redundant with the max field in the Seq.
Mike


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
6 Nov 85 serlet.pa Re: ReHashTable for CedarChest 6.0
Date: Wed, 6 Nov 85 10:26:31 PST
From: serlet.pa
Subject: Re: ReHashTable for CedarChest 6.0
In-reply-to: "Your message of Mon, 4 Nov 85 16:07:07 PST"
To: Spreitzer
Reply-to: Bertrand <Serlet.pa>
Cc: Bertrand <Serlet>
1) Enumeration  => OK
2) sizeLimit  => Put half a dozen more primes in your table and that will be enough for Dorados, Dragons, and all 32-bit machines. The cost is the storage of half a dozen words, against one word per table plus all the bother of handling this sizeLimit field!
3) We could delete the mod field --- it's redundant with the max field. => YES, GREAT!
Bertrand
1.2. [User]<Spreitzer>6.0Top>TableBench.DF
1.2.1. TableBench.DF 01-Nov-85 23:27:32 PST
-- TableBench.DF
-- Spreitzer, November 1, 1985 11:19:34 pm PST

Exports [User]<Spreitzer>6.0Top>

TableBench.DF 01-Nov-85 23:27:32 PST

Exports [User]<Spreitzer>6.0TableBench>

TableBench.Mesa!8 01-Nov-85 22:56:11 PST
+TableBench.BCD!7 01-Nov-85 22:56:32 PST
TableBench.Load!2 29-Oct-85 16:54:07 PST
BenchTables.cm!3 29-Oct-85 13:47:26 PST
Bench.data!6 29-Oct-85 14:14:52 PST
Abstract.data!2 29-Oct-85 14:24:16 PST
BenchMethods.cm!1 01-Nov-85 18:23:29 PST
MethodBench.data!1 01-Nov-85 19:11:49 PST
BenchParms.cm!1 01-Nov-85 21:01:05 PST
ParmBench.data!1 01-Nov-85 22:35:27 PST
BenchMethodTimes.cm!1 01-Nov-85 22:59:54 PST
MethodTimeBench.data!1 01-Nov-85 23:19:07 PST

Imports [Cedar]<Cedar6.0>Top>BasicPackages.df Of ~=
Using [Random.BCD, RedBlackTree.BCD, RefTab.BCD, SymTab.BCD]

Imports [Cedar]<Cedar6.0>Top>BasicTime.df Of ~=
Using [BasicTime.BCD]

Imports [Cedar]<CedarChest6.0>Top>HashTable.DF Of ~=
Using [HashTable.BCD]

Exports Imports [Cedar]<CedarChest6.0>Top>HashTable.DF Of ~=

Imports [Cedar]<CedarChest6.0>Top>Histograms.df Of ~=
Using [Histograms.BCD]

Exports Imports [Cedar]<CedarChest6.0>Top>Histograms.df Of ~=

Imports [Cedar]<Cedar6.0>Top>IO.df Of ~=
Using [IO.BCD]

Imports [Cedar]<Cedar6.0>Top>MesaRuntime.df Of ~=
Using [Process.BCD]

Imports [Cedar]<Cedar6.0>Top>Real.df Of ~=
Using [Real.BCD, RealFns.BCD]

Imports [Cedar]<CedarChest6.0>Top>RedBlackTreeExtras.DF Of ~=
Using [RedBlackTreeExtras.BCD]

Exports Imports [Cedar]<CedarChest6.0>Top>RedBlackTreeExtras.DF Of ~=

Imports [User]<Spreitzer>6.0Top>ReHashTable.DF Of ~=
Using [ReHashTable.BCD]

Exports Imports [User]<Spreitzer>6.0Top>ReHashTable.DF Of ~=

Imports [Cedar]<Cedar6.0>Top>Rope.df Of ~=
Using [Rope.BCD]

Imports [Cedar]<Cedar6.0>Top>SafeStorage.df Of ~=
Using [SafeStorage.BCD]

Imports [Cedar]<Cedar6.0>Top>Viewers.df Of ~=
Using [ViewerClasses.bcd]

Imports [Cedar]<Cedar6.0>Top>VM.df Of ~=
Using [VMStatistics.BCD]
1.2.2. TableBench.Mesa!8 01-Nov-85 22:56:11 PST
TableBench.Mesa
Spreitzer, November 1, 1985 10:56:11 pm PST
DIRECTORY BasicTime, HashTable, Histograms, IO, Process, Random, Real, RealFns, RedBlackTree, RedBlackTreeExtras, RefTab, ReHashTable, Rope, SafeStorage, SymTab, VMStatistics;
TableBench: CEDAR PROGRAM
IMPORTS BasicTime, HashTable, Histograms, IO, Process, Random, RealFns, RedBlackTree, RedBlackTreeExtras, RefTab, ReHashTable, Rope, SafeStorage, SymTab, VMStatistics
SHARES HashTable, ReHashTable
=
BEGIN
ROPE: TYPE = Rope.ROPE;
REALList: TYPE = LIST OF REAL;
Subject: TYPE = REF SubjectRec;
SubjectRec: TYPE = RECORD [
name: ROPE,
Create: PROC [s: Subject] RETURNS [handle: REF ANY],
Insert: PROC [handle, key, value: REF ANY],
Lookup: PROC [handle, key: REF ANY],
Destroy: PROC [handle: REF ANY],
kind: Kind,
densityLimit: REAL ← 1.0,
growthFactor: REAL ← 2.0,
modMethod: ReHashTable.ModMethod ← selectedPrime
];
Kind: TYPE = {ref, rope};
TrialData: TYPE = RECORD [
seconds, words, faults: StatisticData ← []
];
TrialResults: TYPE = RECORD [
seconds, words, faults: StatisticResult ← []
];
StatisticData: TYPE = RECORD [
sum, sumOfSquares: REAL ← 0];
StatisticResult: TYPE = RECORD [
avg, s: REAL ← 0];
Seq: TYPE = REF Sequence;
Sequence: TYPE = RECORD [s: SEQUENCE length: NAT OF REF ANY];
nCollections: NAT ← 2;
histDist: BOOL ← FALSE;
histLog: IO.STREAM ← NIL;
reportStream: IO.STREAM ← IO.noWhereStream;
seed: INT;
sizeGoal, lookupGoal, missGoal, tryGoal, mod: INT;
seq, seq2: Seq;
cal: TrialResults;
Init: PROC [size, lookups, misses, tries: INT, useMod: INT ← 7] = {
ks: Random.RandomStream ← Random.Create[seed: -1];
sizeGoal ← size;
lookupGoal ← lookups;
missGoal ← misses;
tryGoal ← tries;
mod ← useMod;
seed ← ks.NextInt[];
seq ← NEW [Sequence[sizeGoal+100]];
seq2 ← NEW [Sequence[sizeGoal+100]];
FOR i: INT IN [0 .. seq.length) DO
k: INT = ks.NextInt[];
seq[i] ← IO.PutFR["%g", [integer[k]]];
seq2[i] ← IO.PutFR["%g", [integer[k]]];
ENDLOOP;
FOR i: INT IN [0 .. seq.length-1) DO
j: INT = ks.ChooseInt[i, seq.length-1];
IF i # j THEN {
{t: REF ANY ← seq [i]; seq [i] ← seq [j]; seq [j] ← t};
{t: REF ANY ← seq2[i]; seq2[i] ← seq2[j]; seq2[j] ← t};
};
ENDLOOP;
};
Report: PROC [subj: Subject, which: {none, reals, method} ← none, isCallibration: BOOL ← FALSE] = {
tr: TrialResults ← Try[subj, isCallibration];
IF tr.words.s # badS THEN {
reportStream.PutF[
"%12g",
[rope[subj.name]]
];
SELECT which FROM
none => NULL;
reals => reportStream.PutF[
"[%3.1f, %3.1f]",
[real[subj.densityLimit]],
[real[subj.growthFactor]]
];
method => reportStream.PutF[
"[%21g]",
[rope[rhtMethodName[subj.modMethod]]]
];
ENDCASE => ERROR;
reportStream.PutF[
"\t%5.2f %l+%l %4.2f\n",
[real[tr.seconds.avg]],
[rope["m"]],
[rope["M"]],
[real[tr.seconds.s]]
];
};
};
Try: PROC [subj: Subject, isCallibration: BOOL ← FALSE] RETURNS [tr: TrialResults] = {
td: TrialData ← [];
rs: Random.RandomStream ← Random.Create[seed: seed];
td ← [];
FOR try: INT IN [0 .. tryGoal) DO
THROUGH [0 .. nCollections) DO SafeStorage.ReclaimCollectibleObjects[] ENDLOOP;
{
startFaults: INT = VMStatistics.pageFaults;
startWords: INT = SafeStorage.NWordsAllocated[];
startPulses: BasicTime.Pulses = BasicTime.GetClockPulses[];
insertLimit: INT ← sizeGoal;
missLimit: INT ← sizeGoal + missGoal;
inserted: INT ← 0;
handle: REF ANY ← subj.Create[subj];
FOR toDo: INT ← sizeGoal + lookupGoal + missGoal, toDo - 1 WHILE toDo > 0 DO
guide: INT = rs.ChooseInt[1, toDo];
insert: BOOL = inserted = 0 OR guide <= insertLimit;
IF insert
THEN {
subj.Insert[handle, seq[inserted], seq[inserted]];
inserted ← inserted + 1;
insertLimit ← insertLimit - 1;
missLimit ← missLimit - 1;
}
ELSE {
miss: BOOL = guide <= missLimit;
source: Seq = IF subj.kind = ref THEN seq ELSE seq2;
seek: REF ANY;
SELECT miss FROM
FALSE => seek ← source[rs.ChooseInt[0, inserted-1]];
TRUE => {seek ← source[rs.ChooseInt[inserted, source.length-1]]; missLimit ← missLimit - 1};
ENDCASE => ERROR;
subj.Lookup[handle, seek];
};
ENDLOOP;
subj.Destroy[handle];
handle ← NIL;
THROUGH [0 .. nCollections) DO SafeStorage.ReclaimCollectibleObjects[] ENDLOOP;
{
pulsesNow: BasicTime.Pulses = BasicTime.GetClockPulses[];
wordsNow: INT = SafeStorage.NWordsAllocated[];
faultsNow: INT = VMStatistics.pageFaults;
td.seconds ← Update[td.seconds, BasicTime.PulsesToSeconds[pulsesNow - startPulses]];
td.words ← Update[td.words, wordsNow - startWords];
td.faults ← Update[td.faults, faultsNow - startFaults];
Process.CheckForAbort[];
}};
ENDLOOP;
tr.seconds ← FinalAnalysis[td.seconds, tryGoal];
tr.words ← FinalAnalysis[td.words, tryGoal];
tr.faults ← FinalAnalysis[td.faults, tryGoal];
IF isCallibration
THEN cal ← tr
ELSE tr ← Adjust[tr, cal];
};
Update: PROC [s: StatisticData, D: REAL] RETURNS [ns: StatisticData] = {
ns ← s;
ns.sum ← ns.sum + D;
ns.sumOfSquares ← ns.sumOfSquares + D*D;
};
FinalAnalysis: PROC [d: StatisticData, n: REAL] RETURNS [r: StatisticResult] = {
r.avg ← d.sum/n;
IF n = 1.0 THEN r.s ← badS ELSE {
add: REAL = d.sumOfSquares + n * r.avg*r.avg;
sub: REAL = 2 * r.avg * d.sum;
square: REAL ← add - sub;
mag: REAL = MAX[add, sub];
IF square < mag*-1.0E-4 THEN ERROR ELSE square ← MAX[square, 0.0];
r.s ← RealFns.SqRt[square / (n - 1.0)];
};
};
badS: REAL = Real.LargestNumber;
Adjust: PROC [tr, cal: TrialResults] RETURNS [ntr: TrialResults] = {
DoS: PROC [tr, cal: StatisticResult] RETURNS [ntr: StatisticResult] = {
ntr ← [
avg: tr.avg - cal.avg,
s: IF tr.s # badS AND cal.s # badS THEN RealFns.SqRt[tr.s*tr.s + cal.s*cal.s] ELSE badS
];
};
ntr ← [
seconds: DoS[tr.seconds, cal.seconds],
words: DoS[tr.words, cal.words],
faults: DoS[tr.faults, cal.faults]
]
};
rnSub: Subject ← NEW [SubjectRec ← ["Baseline", CreateNull, InsertNull, LookupNull, DestroyNull, ref]];
snSub: Subject ← NEW [SubjectRec ← ["Baseline", CreateNull, InsertNull, LookupNull, DestroyNull, rope]];
CreateNull: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← NIL;
};
InsertNull: PROC [handle, key, value: REF ANY] = {
};
LookupNull: PROC [handle, key: REF ANY] = {
};
DestroyNull: PROC [handle: REF ANY] = {
IF histDist THEN handle ← handle;
};
rtSub: Subject ← NEW [SubjectRec ← ["RefTab", CreateRefTab, InsertRefTab, LookupRefTab, DestroyRefTab, ref]];
CreateRefTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← RefTab.Create[mod];
};
InsertRefTab: PROC [handle, key, value: REF ANY] = {
ref: RefTab.Ref = NARROW[handle];
[] ← ref.Store[key, value];
};
LookupRefTab: PROC [handle, key: REF ANY] = {
ref: RefTab.Ref = NARROW[handle];
[] ← ref.Fetch[key];
};
DestroyRefTab: PROC [handle: REF ANY] = {
ref: RefTab.Ref = NARROW[handle];
DestroyPair: PROC [key, val: REF ANY] RETURNS [quit: BOOL] = {
[] ← ref.Delete[key];
quit ← FALSE;
};
[] ← ref.Pairs[DestroyPair];
IF histDist THEN handle ← handle;
};
stSub: Subject ← NEW [SubjectRec ← ["SymTab", CreateSymTab, InsertSymTab, LookupSymTab, DestroySymTab, rope]];
CreateSymTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← SymTab.Create[mod];
};
InsertSymTab: PROC [handle, key, value: REF ANY] = {
ref: SymTab.Ref = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Store[rope, value];
};
LookupSymTab: PROC [handle, key: REF ANY] = {
ref: SymTab.Ref = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Fetch[rope];
};
DestroySymTab: PROC [handle: REF ANY] = {
ref: SymTab.Ref = NARROW[handle];
DestroyPair: PROC [key: ROPE, val: REF ANY] RETURNS [quit: BOOL] = {
[] ← ref.Delete[key];
quit ← FALSE;
};
IF histDist THEN handle ← handle;
[] ← ref.Pairs[DestroyPair];
};
rhtSub: Subject ← NEW [SubjectRec ← ["HashTable", CreateRefHashTab, InsertRefHashTab, LookupRefHashTab, DestroyHashTab, ref]];
CreateRefHashTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← HashTable.Create[mod: mod];
};
InsertRefHashTab: PROC [handle, key, value: REF ANY] = {
ref: HashTable.Table = NARROW[handle];
[] ← ref.Store[key, value];
};
LookupRefHashTab: PROC [handle, key: REF ANY] = {
ref: HashTable.Table = NARROW[handle];
[] ← ref.Fetch[key];
};
DestroyHashTab: PROC [handle: REF ANY] = {
ref: HashTable.Table = NARROW[handle];
IF histDist THEN HistHTDist[ref];
ref.Erase[];
};
HistHTDist: PROC [table: HashTable.Table] = {
h: Histograms.Histogram ← Histograms.NewHistogram[];
FOR i: NAT IN [0 .. table.data.max) DO
n: NAT ← 0;
FOR rn: HashTable.Node ← table.data[i], rn.next WHILE rn # NIL DO
n ← n + 1;
ENDLOOP;
h.Increment[n];
ENDLOOP;
histLog.PutRope["\nHashTable list size distribution: "];
h.WriteTo[histLog];
histLog.PutRope["\n"];
[] ← h.Show[
viewerInit: [name: "HashTable list size", iconic: TRUE]
];
};
shtSub: Subject ← NEW [SubjectRec ← ["HashTable", CreateRopeHashTab, InsertRopeHashTab, LookupRopeHashTab, DestroyHashTab, rope]];
CreateRopeHashTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← HashTable.Create[mod: mod, equal: ReHashTable.RopeEqual, hash: ReHashTable.HashRope];
};
InsertRopeHashTab: PROC [handle, key, value: REF ANY] = {
ref: HashTable.Table = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Store[rope, value];
};
LookupRopeHashTab: PROC [handle, key: REF ANY] = {
ref: HashTable.Table = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Fetch[rope];
};
RHTSubGen: TYPE = PROC [densityLimit: REAL ← 1.0, growthFactor: REAL ← 2.0, modMethod: ReHashTable.ModMethod ← selectedPrime] RETURNS [s: Subject];
RRhtSub: RHTSubGen = {
s ← NEW [SubjectRec ← ["ReHashTable", CreateRefReHashTab, InsertRefReHashTab, LookupRefReHashTab, DestroyReHashTab, ref, densityLimit, growthFactor, modMethod]];
};
rrhtSub: Subject ← RRhtSub[];
CreateRefReHashTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← ReHashTable.Create[mod: mod, densityLimit: s.densityLimit, growthFactor: s.growthFactor, modMethod: s.modMethod];
};
InsertRefReHashTab: PROC [handle, key, value: REF ANY] = {
ref: ReHashTable.Table = NARROW[handle];
[] ← ref.Store[key, value];
};
LookupRefReHashTab: PROC [handle, key: REF ANY] = {
ref: ReHashTable.Table = NARROW[handle];
[] ← ref.Fetch[key];
};
DestroyReHashTab: PROC [handle: REF ANY] = {
ref: ReHashTable.Table = NARROW[handle];
IF histDist THEN HistRHTDist[ref];
ref.Erase[];
};
HistRHTDist: PROC [table: ReHashTable.Table] = {
h: Histograms.Histogram ← Histograms.NewHistogram[];
label: ROPE = Rope.Cat[
"ReHashTable[",
rhtMethodName[table.modMethod],
"] ",
SELECT table.equal FROM
NIL => "REF",
ReHashTable.RopeEqual, ReHashTable.RopeEqualModCase => "ROPE",
ENDCASE => ERROR,
" list size distribution"
];
FOR i: NAT IN [0 .. table.data.max) DO
n: NAT ← 0;
FOR rn: ReHashTable.Node ← table.data[i], rn.next WHILE rn # NIL DO
n ← n + 1;
ENDLOOP;
h.Increment[n];
ENDLOOP;
histLog.PutF["\n%g: ", [rope[label]] ];
h.WriteTo[histLog];
histLog.PutRope["\n"];
[] ← h.Show[
viewerInit: [name: label, iconic: TRUE]
];
};
rhtMethodName: ARRAY ReHashTable.ModMethod OF ROPE = [
selectedPrime: "selectedPrime",
powerOfTwo: "powerOfTwo",
oneLessThanPowerOfTwo: "oneLessThanPowerOfTwo"
];
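The three ModMethods differ in how the table size is chosen and hence how a hash value is reduced to a bucket index: with a power-of-two size the MOD collapses to a bit mask, avoiding a division, while the prime-based sizes need a true MOD. A hypothetical Python sketch of the reduction step (names are illustrative, not the Cedar implementation):

```python
def next_power_of_two(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def bucket_index(hash_value, size, method):
    """Map a hash value to a bucket index under each ModMethod."""
    if method == "powerOfTwo":
        # size must be 2^k, so the modulo is a single mask operation
        return hash_value & (size - 1)
    # selectedPrime / oneLessThanPowerOfTwo: ordinary division-based MOD
    return hash_value % size

size = next_power_of_two(1000)          # 1024
print(bucket_index(0x12345, size, "powerOfTwo"))
```

The trade-off measured in MethodTimeBench.data below is that the mask is cheaper per probe, but a non-prime modulus can interact badly with a weak hash function and lengthen some chains (visible in the ROPE distributions of MethodBench.data).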
SRhtSub: RHTSubGen = {
s ← NEW [SubjectRec ← ["ReHashTable", CreateRopeReHashTab, InsertRopeReHashTab, LookupRopeReHashTab, DestroyReHashTab, rope, densityLimit, growthFactor, modMethod]];
};
srhtSub: Subject ← SRhtSub[];
CreateRopeReHashTab: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← ReHashTable.Create[mod: mod, equal: ReHashTable.RopeEqual, hash: ReHashTable.HashRope, densityLimit: s.densityLimit, growthFactor: s.growthFactor, modMethod: s.modMethod];
};
InsertRopeReHashTab: PROC [handle, key, value: REF ANY] = {
ref: ReHashTable.Table = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Store[rope, value];
};
LookupRopeReHashTab: PROC [handle, key: REF ANY] = {
ref: ReHashTable.Table = NARROW[handle];
rope: ROPE = NARROW[key];
[] ← ref.Fetch[rope];
};
rrtSub: Subject ← NEW [SubjectRec ← ["RedBlackTree", CreateRefRedBlack, InsertRefRedBlack, LookupRefRedBlack, DestroyRedBlack, ref]];
CreateRefRedBlack: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← RedBlackTreeExtras.NewRefMap[];
};
InsertRefRedBlack: PROC [handle, key, value: REF ANY] = {
ref: RedBlackTree.Table = NARROW[handle];
[] ← RedBlackTreeExtras.Store[ref, key, value];
};
LookupRefRedBlack: PROC [handle, key: REF ANY] = {
ref: RedBlackTree.Table = NARROW[handle];
[] ← RedBlackTreeExtras.Fetch[ref, key];
};
DestroyRedBlack: PROC [handle: REF ANY] = {
ref: RedBlackTree.Table = NARROW[handle];
IF histDist THEN handle ← handle; -- no-op: no distribution histogram for RedBlackTree
ref.DestroyTable[];
};
srtSub: Subject ← NEW [SubjectRec ← ["RedBlackTree", CreateRopeRedBlack, InsertRopeRedBlack, LookupRopeRedBlack, DestroyRedBlack, rope]];
CreateRopeRedBlack: PROC [s: Subject] RETURNS [handle: REF ANY] = {
handle ← RedBlackTreeExtras.NewRopeMap[];
};
InsertRopeRedBlack: PROC [handle, key, value: REF ANY] = {
ref: RedBlackTree.Table = NARROW[handle];
k: ROPE = NARROW[key];
[] ← RedBlackTreeExtras.Store[ref, k, value];
};
LookupRopeRedBlack: PROC [handle, key: REF ANY] = {
ref: RedBlackTree.Table = NARROW[handle];
k: ROPE = NARROW[key];
[] ← RedBlackTreeExtras.Fetch[ref, k];
};
END.
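Each Subject record above bundles create, insert, lookup, and destroy procedures so that one driver can time every table implementation uniformly. A minimal Python analogue of that driver, under the assumption (all names hypothetical) that a run is one insert-then-lookup pass over a key set, repeated and averaged:

```python
import time

def time_subject(create, insert, lookup, destroy, keys, repeats=5):
    """Time one implementation, TableBench-Subject style.

    The caller supplies the four operations; the driver times
    `repeats` full create/insert/lookup/destroy passes and
    returns the average elapsed seconds.
    """
    elapsed = []
    for _ in range(repeats):
        start = time.perf_counter()
        handle = create()
        for k in keys:
            insert(handle, k, k)
        for k in keys:
            lookup(handle, k)
        destroy(handle)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

# A trivial dict-backed Subject, for illustration only:
avg = time_subject(
    create=lambda: {},
    insert=lambda h, k, v: h.__setitem__(k, v),
    lookup=lambda h, k: h.get(k),
    destroy=lambda h: h.clear(),
    keys=list(range(1000)),
)
print(f"avg seconds: {avg:.4f}")
```

Keeping the operations behind a uniform record is what lets the command files below swap implementations (rhtSub, rrhtSub, rrtSub, ...) without changing the measurement loop.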
1.2.3. TableBench.Load!2 29-Oct-85 16:54:07 PST
TableBench.Load
Spreitzer, October 29, 1985 4:54:06 pm PST
Run HashTableImpl
RedBlackTreeExtras
Run ReHashTableImpl
Histograms
Run TableBench
1.2.4. BenchTables.cm!3 29-Oct-85 13:47:26 PST
BenchTables.cm
Spreitzer, October 29, 1985 1:47:26 pm PST
← TableBench.Init[1500, 15000, 7500, 5, 7]
← TableBench.Try[TableBench.rnSub, TRUE]
← TableBench.Try[TableBench.rtSub]
← TableBench.Try[TableBench.rhtSub]
← TableBench.Try[TableBench.rrhtSub]
← TableBench.Try[TableBench.rrtSub]
← TableBench.Init[750, 7500, 3750, 5, 7]
← TableBench.Try[TableBench.snSub, TRUE]
← TableBench.Try[TableBench.stSub]
← TableBench.Try[TableBench.shtSub]
← TableBench.Try[TableBench.srhtSub]
← TableBench.Try[TableBench.srtSub]
← TableBench.Init[1500, 15000, 7500, 5, 17]
← TableBench.Try[TableBench.rnSub, TRUE]
← TableBench.Try[TableBench.rtSub]
← TableBench.Try[TableBench.rhtSub]
← TableBench.Try[TableBench.rrhtSub]
← TableBench.Try[TableBench.rrtSub]
← TableBench.Init[750, 7500, 3750, 5, 17]
← TableBench.Try[TableBench.snSub, TRUE]
← TableBench.Try[TableBench.stSub]
← TableBench.Try[TableBench.shtSub]
← TableBench.Try[TableBench.srhtSub]
← TableBench.Try[TableBench.srtSub]
← TableBench.Init[1500, 15000, 7500, 5, 2003]
← TableBench.Try[TableBench.rnSub, TRUE]
← TableBench.Try[TableBench.rtSub]
← TableBench.Try[TableBench.rhtSub]
← TableBench.Try[TableBench.rrhtSub]
← TableBench.Try[TableBench.rrtSub]
← TableBench.Init[750, 7500, 3750, 5, 2003]
← TableBench.Try[TableBench.snSub, TRUE]
← TableBench.Try[TableBench.stSub]
← TableBench.Try[TableBench.shtSub]
← TableBench.Try[TableBench.srhtSub]
← TableBench.Try[TableBench.srtSub]
1.2.6. Bench.data!6 29-Oct-85 14:14:52 PST
Bench.data
Spreitzer, October 29, 1985 2:14:52 pm PST
October 29, 1985 2:14:29 pm PST
% BenchTables
>
>
> ← TableBench.Init[1500, 15000, 7500, 5, 7]

> ← TableBench.Try[TableBench.rnSub, TRUE]
[seconds: [avg: 4.403827, s: 0.1675595], words: [avg: 78.0, s: 43.84062], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.rtSub]
[seconds: [avg: 10.72274, s: 0.2875807], words: [avg: 12112.4, s: 55.7315], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rhtSub]
[seconds: [avg: 12.36351, s: 0.3671875], words: [avg: 12145.6, s: 56.01786], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrhtSub]
[seconds: [avg: 1.356122, s: 0.1752161], words: [avg: 20502.4, s: 55.44366], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrtSub]
[seconds: [avg: 9.356869, s: 0.29785], words: [avg: 24237.6, s: 55.44366], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Init[750, 7500, 3750, 5, 7]

> ← TableBench.Try[TableBench.snSub, TRUE]
[seconds: [avg: 2.223488, s: 4.107371e-2], words: [avg: 62.4, s: 34.47898], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.stSub]
[seconds: [avg: 15.20653, s: 0.2398629], words: [avg: 6090.4, s: 34.47898], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.shtSub]
[seconds: [avg: 22.34474, s: 0.3316074], words: [avg: 6138.8, s: 55.1797], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srhtSub]
[seconds: [avg: 2.699238, s: 0.1845807], words: [avg: 10416.2, s: 49.36395], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srtSub]
[seconds: [avg: 8.186483, s: 0.2599509], words: [avg: 12183.2, s: 54.00740], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Init[1500, 15000, 7500, 5, 17]

> ← TableBench.Try[TableBench.rnSub, TRUE]
[seconds: [avg: 4.314976, s: 1.324674e-2], words: [avg: 46.0, s: 0.0], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.rtSub]
[seconds: [avg: 5.029946, s: 0.2490368], words: [avg: 12152.4, s: 42.33202], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rhtSub]
[seconds: [avg: 5.784314, s: 0.2666999], words: [avg: 12133.6, s: 53.66563], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrhtSub]
[seconds: [avg: 1.53737, s: 0.1691681], words: [avg: 20493.2, s: 43.08132], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrtSub]
[seconds: [avg: 9.142733, s: 0.1622623], words: [avg: 24267.2, s: 37.52333], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Init[750, 7500, 3750, 5, 17]

> ← TableBench.Try[TableBench.snSub, TRUE]
[seconds: [avg: 2.225606, s: 0.0384473], words: [avg: 46.0, s: 0.0], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.stSub]
[seconds: [avg: 7.205011, s: 0.2739636], words: [avg: 6081.2, s: 42.61455], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.shtSub]
[seconds: [avg: 10.11373, s: 0.2547424], words: [avg: 6112.4, s: 34.8712], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srhtSub]
[seconds: [avg: 2.660493, s: 0.1670179], words: [avg: 10376.6, s: 34.8712], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srtSub]
[seconds: [avg: 8.198586, s: 0.1944856], words: [avg: 12178.8, s: 39.19183], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Init[1500, 15000, 7500, 5, 2003]

> ← TableBench.Try[TableBench.rnSub, TRUE]
[seconds: [avg: 4.403117, s: 0.2000309], words: [avg: 58.8, s: 26.4424], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.rtSub]
[seconds: [avg: 1.390829, s: 0.2902281], words: [avg: 16084.6, s: 41.51144], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rhtSub]
[seconds: [avg: 1.412211, s: 0.2795153], words: [avg: 16094.6, s: 50.54899], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrhtSub]
[seconds: [avg: 1.333727, s: 0.2588683], words: [avg: 16194.6, s: 50.54899], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.rrtSub]
[seconds: [avg: 9.086156, s: 0.3654432], words: [avg: 24256.0, s: 45.90425], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Init[750, 7500, 3750, 5, 2003]

> ← TableBench.Try[TableBench.snSub, TRUE]
[seconds: [avg: 2.282637, s: 0.159909], words: [avg: 46.0, s: 0.0], faults: [avg: 0.0, s: 0.0]]
>
> ← TableBench.Try[TableBench.stSub]
[seconds: [avg: 1.838425, s: 0.1659381], words: [avg: 10092.6, s: 34.8712], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.shtSub]
[seconds: [avg: 2.124614, s: 0.2131525], words: [avg: 10108.2, s: 42.52058], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srhtSub]
[seconds: [avg: 2.192877, s: 0.2262529], words: [avg: 10192.6, s: 34.8712], faults: [avg: 0.0, s: 0.0]]
> ← TableBench.Try[TableBench.srtSub]
[seconds: [avg: 8.113274, s: 0.2895849], words: [avg: 12198.8, s: 42.70831], faults: [avg: 0.0, s: 0.0]]
>
1.2.7. Abstract.data!2 29-Oct-85 14:24:16 PST
abstract.data
Spreitzer, October 29, 1985 2:24:15 pm PST
REF, 1500, 15000, 7500, 7:
Ref/SymTab 10.7 ± 0.29
HashTab  12.4 ± 0.37
ReHashTab 01.4 ± 0.17
RedBlack  09.4 ± 0.30
ROPE, 750, 7500, 3750, 7:
Ref/SymTab 15.2 ± 0.24
HashTab  22.3 ± 0.33
ReHashTab 02.7 ± 0.18
RedBlack  08.2 ± 0.26
REF, 1500, 15000, 7500, 17:
Ref/SymTab 05.0 ± 0.25
HashTab  05.8 ± 0.27
ReHashTab 01.5 ± 0.17
RedBlack  09.1 ± 0.16
ROPE, 750, 7500, 3750, 17:
Ref/SymTab 07.2 ± 0.27
HashTab  10.1 ± 0.25
ReHashTab 02.7 ± 0.17
RedBlack  08.2 ± 0.19
REF, 1500, 15000, 7500, 2003:
Ref/SymTab 01.4 ± 0.30
HashTab  01.4 ± 0.28
ReHashTab 01.3 ± 0.26
RedBlack  09.1 ± 0.37
ROPE, 750, 7500, 3750, 2003:
Ref/SymTab 01.8 ± 0.17
HashTab  02.1 ± 0.21
ReHashTab 02.2 ± 0.23
RedBlack  08.1 ± 0.29
REF, 2K, 20K, 10K, 2003:
Ref/SymTab 02.1 ± 0.31
HashTab  02.1 ± 0.30
ReHashTab 02.1 ± 0.32
RedBlack  13.6 ± 0.26
1.2.8. BenchMethods.cm!1 01-Nov-85 18:23:29 PST
BenchMethods.cm
Spreitzer, November 1, 1985 6:23:28 pm PST
← TableBench.histDist ← TRUE
← TableBench.histLog ← TableBench.reportStream ← &stdout[]
← TableBench.Init[01021, 15000, 7500, 1, 01021]
← TableBench.Report[TableBench.rnSub, TRUE]
← TableBench.Report[TableBench.RRhtSub[1.0, 2.0, selectedPrime]]
← TableBench.Report[TableBench.RRhtSub[1.0, 2.0, powerOfTwo]]
← TableBench.Report[TableBench.RRhtSub[1.0, 2.0, oneLessThanPowerOfTwo]]
← TableBench.Init[00509, 7500, 3750, 1, 00509]
← TableBench.Report[TableBench.snSub, TRUE]
← TableBench.Report[TableBench.SRhtSub[1.0, 2.0, selectedPrime]]
← TableBench.Report[TableBench.SRhtSub[1.0, 2.0, powerOfTwo]]
← TableBench.Report[TableBench.SRhtSub[1.0, 2.0, oneLessThanPowerOfTwo]]
1.2.9. MethodBench.data!1 01-Nov-85 19:11:49 PST
MethodBench.data
Spreitzer, November 1, 1985 7:11:49 pm PST
[Artwork node; type 'ArtworkInterpress on' to command tool]
November 1, 1985 6:32:53 pm PST
ReHashTable[selectedPrime] REF list size distribution: [370 of 0.0, 387 of 1.0, 175 of 2.0, 75 of 3.0, 11 of 4.0, 3 of 5.0]
ReHashTable[powerOfTwo] REF list size distribution: [350 of 0.0, 405 of 1.0, 208 of 2.0, 47 of 3.0, 11 of 4.0, 3 of 5.0]
ReHashTable[oneLessThanPowerOfTwo] REF list size distribution: [375 of 0.0, 366 of 1.0, 208 of 2.0, 59 of 3.0, 13 of 4.0, 2 of 5.0]
ReHashTable[selectedPrime] ROPE list size distribution: [193 of 0.0, 180 of 1.0, 97 of 2.0, 24 of 3.0, 13 of 4.0, 1 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] ROPE list size distribution: [259 of 0.0, 108 of 1.0, 75 of 2.0, 43 of 3.0, 15 of 4.0, 10 of 5.0, 2 of 6.0]
ReHashTable[oneLessThanPowerOfTwo] ROPE list size distribution: [218 of 0.0, 156 of 1.0, 81 of 2.0, 37 of 3.0, 15 of 4.0, 4 of 5.0]
[Artwork node; type 'ArtworkInterpress on' to command tool]
November 1, 1985 7:02:57 pm PST
ReHashTable[selectedPrime] REF list size distribution: [367 of 0.0, 396 of 1.0, 177 of 2.0, 60 of 3.0, 15 of 4.0, 5 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] REF list size distribution: [341 of 0.0, 420 of 1.0, 195 of 2.0, 62 of 3.0, 5 of 4.0, 1 of 5.0]
ReHashTable[oneLessThanPowerOfTwo] REF list size distribution: [380 of 0.0, 368 of 1.0, 190 of 2.0, 70 of 3.0, 12 of 4.0, 3 of 5.0]
ReHashTable[selectedPrime] ROPE list size distribution: [198 of 0.0, 176 of 1.0, 90 of 2.0, 31 of 3.0, 11 of 4.0, 2 of 5.0, 1 of 6.0]
ReHashTable[powerOfTwo] ROPE list size distribution: [264 of 0.0, 99 of 1.0, 80 of 2.0, 43 of 3.0, 16 of 4.0, 6 of 5.0, 1 of 6.0, 3 of 7.0]
ReHashTable[oneLessThanPowerOfTwo] ROPE list size distribution: [210 of 0.0, 164 of 1.0, 85 of 2.0, 38 of 3.0, 10 of 4.0, 3 of 5.0, 1 of 6.0]
1.2.10. BenchParms.cm!1 01-Nov-85 21:01:05 PST
BenchParms.cm
Spreitzer, November 1, 1985 9:01:05 pm PST
{ OPEN TableBench; NULL}
← histDist ← FALSE
← reportStream ← &stdout[]
← &sqrt05 ← RealFns.SqRt[0.5]
← &sqrt20 ← RealFns.SqRt[2.0]
← &e ← RealFns.Exp[1.0]
{ densityLimits: REALList ← LIST[0.5, &sqrt05, 1.0, &sqrt20, 2.0]; NULL}
{ growthFactors: REALList ← LIST[&sqrt20, 2.0, &e, 3.0, 4.0, 6.0]; NULL}
{ TestR: PROC = {FOR dls: REALList ← densityLimits, dls.rest WHILE dls # NIL DO FOR gfs: REALList ← growthFactors, gfs.rest WHILE gfs # NIL DO Report[RRhtSub[dls.first, gfs.first, selectedPrime]] ENDLOOP; IO.PutRope[reportStream, "\n"]; ENDLOOP}; NULL}
{ TestS: PROC = {FOR dls: REALList ← densityLimits, dls.rest WHILE dls # NIL DO FOR gfs: REALList ← growthFactors, gfs.rest WHILE gfs # NIL DO Report[SRhtSub[dls.first, gfs.first, selectedPrime]] ENDLOOP; IO.PutRope[reportStream, "\n"]; ENDLOOP}; NULL}
← Init[800, 8000, 4000, 3, 7]
← Report[rnSub, TRUE]
← TestR[]
← Init[300, 3000, 1500, 3, 7]
← Report[snSub, TRUE]
← TestS[]
← Init[2000, 20000, 10000, 3, 7]
← Report[rnSub, TRUE]
← TestR[]
← Init[750, 7500, 3750, 3, 7]
← Report[snSub, TRUE]
← TestS[]
1.2.11. ParmBench.data!1 01-Nov-85 22:35:27 PST
ParmBench.data
Spreitzer, November 1, 1985 10:35:26 pm PST
% BenchParms
>
>
> ← histDist ← FALSE
FALSE
> ← reportStream ← &stdout[]
^[streamProcs: 6101362B^, streamData: NIL, propList: NIL, backingStream: 20217324B^]
> ← &sqrt05 ← RealFns.SqRt[0.5]
0.7071069
> ← &sqrt20 ← RealFns.SqRt[2.0]
1.414214
> ← &e ← RealFns.Exp[1.0]
2.718282
>
> { densityLimits: REALList ← LIST[0.5, &sqrt05, 1.0, &sqrt20, 2.0]; NULL}
> { growthFactors: REALList ← LIST[&sqrt20, 2.0, &e, 3.0, 4.0, 6.0]; NULL}
>
> { TestR: PROC = {FOR dls: REALList ← densityLimits, dls.rest WHILE dls # NIL DO FOR gfs: REALList ← growthFactors, gfs.rest WHILE gfs # NIL DO Report[RRhtSub[dls.first, gfs.first, selectedPrime]] ENDLOOP; IO.PutRope[reportStream, "\n"]; ENDLOOP}; NULL}
>
> { TestS: PROC = {FOR dls: REALList ← densityLimits, dls.rest WHILE dls # NIL DO FOR gfs: REALList ← growthFactors, gfs.rest WHILE gfs # NIL DO Report[SRhtSub[dls.first, gfs.first, selectedPrime]] ENDLOOP; IO.PutRope[reportStream, "\n"]; ENDLOOP}; NULL}
>
> ← Init[800, 8000, 4000, 3, 7]

> ← Report[rnSub, TRUE]
Baseline[1.0, 2.0]  2.4 ± 0.01

>
> ← TestR[]
ReHashTable[0.5, 1.4]  1.2 ± 0.10
ReHashTable[0.5, 2.0]  0.9 ± 0.17
ReHashTable[0.5, 2.7]  0.8 ± 0.04
ReHashTable[0.5, 3.0]  0.8 ± 0.04
ReHashTable[0.5, 4.0]  0.9 ± 0.16
ReHashTable[0.5, 6.0]  1.0 ± 0.06

ReHashTable[0.7, 1.4]  0.9 ± 0.04
ReHashTable[0.7, 2.0]  1.1 ± 0.15
ReHashTable[0.7, 2.7]  0.8 ± 0.04
ReHashTable[0.7, 3.0]  1.1 ± 0.19
ReHashTable[0.7, 4.0]  0.8 ± 0.02
ReHashTable[0.7, 6.0]  1.0 ± 0.01

ReHashTable[1.0, 1.4]  0.9 ± 0.17
ReHashTable[1.0, 2.0]  0.8 ± 0.02
ReHashTable[1.0, 2.7]  0.8 ± 0.02
ReHashTable[1.0, 3.0]  0.8 ± 0.19
ReHashTable[1.0, 4.0]  0.8 ± 0.05
ReHashTable[1.0, 6.0]  0.8 ± 0.01

ReHashTable[1.4, 1.4]  0.9 ± 0.19
ReHashTable[1.4, 2.0]  0.9 ± 0.07
ReHashTable[1.4, 2.7]  0.8 ± 0.03
ReHashTable[1.4, 3.0]  0.9 ± 0.20
ReHashTable[1.4, 4.0]  0.7 ± 0.06
ReHashTable[1.4, 6.0]  0.8 ± 0.04

ReHashTable[2.0, 1.4]  1.0 ± 0.14
ReHashTable[2.0, 2.0]  0.9 ± 0.01
ReHashTable[2.0, 2.7]  0.7 ± 0.02
ReHashTable[2.0, 3.0]  0.9 ± 0.15
ReHashTable[2.0, 4.0]  0.7 ± 0.02
ReHashTable[2.0, 6.0]  0.8 ± 0.02


>
> ← Init[300, 3000, 1500, 3, 7]

> ← Report[snSub, TRUE]
Baseline[1.0, 2.0]  1.3 ± 0.17

>
> ← TestS[]
ReHashTable[0.5, 1.4]  0.8 ± 0.17
ReHashTable[0.5, 2.0]  0.8 ± 0.17
ReHashTable[0.5, 2.7]  0.7 ± 0.17
ReHashTable[0.5, 3.0]  0.9 ± 0.26
ReHashTable[0.5, 4.0]  0.8 ± 0.17
ReHashTable[0.5, 6.0]  0.8 ± 0.17

ReHashTable[0.7, 1.4]  0.8 ± 0.17
ReHashTable[0.7, 2.0]  0.9 ± 0.23
ReHashTable[0.7, 2.7]  0.7 ± 0.17
ReHashTable[0.7, 3.0]  0.8 ± 0.17
ReHashTable[0.7, 4.0]  0.7 ± 0.17
ReHashTable[0.7, 6.0]  0.8 ± 0.17

ReHashTable[1.0, 1.4]  0.9 ± 0.23
ReHashTable[1.0, 2.0]  0.8 ± 0.17
ReHashTable[1.0, 2.7]  0.8 ± 0.17
ReHashTable[1.0, 3.0]  0.8 ± 0.18
ReHashTable[1.0, 4.0]  0.8 ± 0.24
ReHashTable[1.0, 6.0]  0.7 ± 0.17

ReHashTable[1.4, 1.4]  0.8 ± 0.17
ReHashTable[1.4, 2.0]  0.9 ± 0.17
ReHashTable[1.4, 2.7]  0.8 ± 0.25
ReHashTable[1.4, 3.0]  0.8 ± 0.17
ReHashTable[1.4, 4.0]  0.7 ± 0.17
ReHashTable[1.4, 6.0]  0.8 ± 0.18

ReHashTable[2.0, 1.4]  1.0 ± 0.25
ReHashTable[2.0, 2.0]  0.9 ± 0.17
ReHashTable[2.0, 2.7]  0.8 ± 0.17
ReHashTable[2.0, 3.0]  0.9 ± 0.17
ReHashTable[2.0, 4.0]  0.9 ± 0.26
ReHashTable[2.0, 6.0]  0.8 ± 0.17


>
> ← Init[2000, 20000, 10000, 3, 7]

> ← Report[rnSub, TRUE]
Baseline[1.0, 2.0]  5.9 ± 0.16

>
> ← TestR[]
ReHashTable[0.5, 1.4]  2.1 ± 0.18
ReHashTable[0.5, 2.0]  2.4 ± 0.20
ReHashTable[0.5, 2.7]  1.9 ± 0.24
ReHashTable[0.5, 3.0]  2.4 ± 0.28
ReHashTable[0.5, 4.0]  1.8 ± 0.28
ReHashTable[0.5, 6.0]  2.1 ± 0.17

ReHashTable[0.7, 1.4]  2.1 ± 0.28
ReHashTable[0.7, 2.0]  2.2 ± 0.22
ReHashTable[0.7, 2.7]  2.3 ± 0.30
ReHashTable[0.7, 3.0]  2.0 ± 0.26
ReHashTable[0.7, 4.0]  2.2 ± 0.21
ReHashTable[0.7, 6.0]  2.4 ± 0.18

ReHashTable[1.0, 1.4]  2.5 ± 0.24
ReHashTable[1.0, 2.0]  2.2 ± 0.24
ReHashTable[1.0, 2.7]  2.4 ± 0.28
ReHashTable[1.0, 3.0]  2.0 ± 0.25
ReHashTable[1.0, 4.0]  1.6 ± 0.17
ReHashTable[1.0, 6.0]  2.5 ± 0.17

ReHashTable[1.4, 1.4]  2.0 ± 0.24
ReHashTable[1.4, 2.0]  2.1 ± 0.23
ReHashTable[1.4, 2.7]  1.6 ± 0.17
ReHashTable[1.4, 3.0]  2.1 ± 0.21
ReHashTable[1.4, 4.0]  1.7 ± 0.23
ReHashTable[1.4, 6.0]  1.9 ± 0.18

ReHashTable[2.0, 1.4]  2.1 ± 0.24
ReHashTable[2.0, 2.0]  2.1 ± 0.17
ReHashTable[2.0, 2.7]  1.9 ± 0.24
ReHashTable[2.0, 3.0]  2.1 ± 0.30
ReHashTable[2.0, 4.0]  1.9 ± 0.35
ReHashTable[2.0, 6.0]  1.9 ± 0.19


>
> ← Init[750, 7500, 3750, 3, 7]

> ← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.4 ± 0.01

>
> ← TestS[]
ReHashTable[0.5, 1.4]  2.6 ± 0.05
ReHashTable[0.5, 2.0]  2.4 ± 0.17
ReHashTable[0.5, 2.7]  2.3 ± 0.07
ReHashTable[0.5, 3.0]  2.4 ± 0.17
ReHashTable[0.5, 4.0]  2.1 ± 0.06
ReHashTable[0.5, 6.0]  2.2 ± 0.18

ReHashTable[0.7, 1.4]  2.4 ± 0.07
ReHashTable[0.7, 2.0]  2.4 ± 0.16
ReHashTable[0.7, 2.7]  2.2 ± 0.06
ReHashTable[0.7, 3.0]  2.4 ± 0.20
ReHashTable[0.7, 4.0]  2.1 ± 0.04
ReHashTable[0.7, 6.0]  2.3 ± 0.19

ReHashTable[1.0, 1.4]  2.5 ± 0.07
ReHashTable[1.0, 2.0]  2.5 ± 0.24
ReHashTable[1.0, 2.7]  2.4 ± 0.07
ReHashTable[1.0, 3.0]  2.3 ± 0.04
ReHashTable[1.0, 4.0]  2.2 ± 0.05
ReHashTable[1.0, 6.0]  2.3 ± 0.17

ReHashTable[1.4, 1.4]  2.6 ± 0.04
ReHashTable[1.4, 2.0]  2.6 ± 0.21
ReHashTable[1.4, 2.7]  2.3 ± 0.02
ReHashTable[1.4, 3.0]  2.5 ± 0.18
ReHashTable[1.4, 4.0]  2.3 ± 0.04
ReHashTable[1.4, 6.0]  2.4 ± 0.21

ReHashTable[2.0, 1.4]  2.7 ± 0.03
ReHashTable[2.0, 2.0]  2.7 ± 0.22
ReHashTable[2.0, 2.7]  2.4 ± 0.01
ReHashTable[2.0, 3.0]  2.6 ± 0.15
ReHashTable[2.0, 4.0]  2.2 ± 0.02
ReHashTable[2.0, 6.0]  2.6 ± 0.15


>
% ← TestS[]
ReHashTable[0.5, 1.4]  2.6 ± 0.02
ReHashTable[0.5, 2.0]  2.3 ± 0.22
ReHashTable[0.5, 2.7]  2.1 ± 0.03
ReHashTable[0.5, 3.0]  2.3 ± 0.29
ReHashTable[0.5, 4.0]  2.1 ± 0.05
ReHashTable[0.5, 6.0]  2.3 ± 0.17

ReHashTable[0.7, 1.4]  2.5 ± 0.03
ReHashTable[0.7, 2.0]  2.4 ± 0.15
ReHashTable[0.7, 2.7]  2.2 ± 0.02
ReHashTable[0.7, 3.0]  2.3 ± 0.16
ReHashTable[0.7, 4.0]  2.2 ± 0.02
-- Aborted.
% ← Init[750, 7500, 3750, 5, 7]

% ← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.3 ± 0.03

% ← TestS[]
ReHashTable[0.5, 1.4]  2.7 ± 0.07
ReHashTable[0.5, 2.0]  2.3 ± 0.12
ReHashTable[0.5, 2.7]  2.3 ± 0.13
ReHashTable[0.5, 3.0]  2.3 ± 0.14
ReHashTable[0.5, 4.0]  2.2 ± 0.07
ReHashTable[0.5, 6.0]  2.2 ± 0.11

ReHashTable[0.7, 1.4]  2.5 ± 0.13
ReHashTable[0.7, 2.0]  2.4 ± 0.13
ReHashTable[0.7, 2.7]  2.3 ± 0.14
ReHashTable[0.7, 3.0]  2.3 ± 0.04
ReHashTable[0.7, 4.0]  2.2 ± 0.13
ReHashTable[0.7, 6.0]  2.2 ± 0.20

-- Aborted.
% ← Init[750, 7500, 3750, 3, 7]

% ← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.4 ± 0.15

% ← Try[SRhtSub[0.5, &e, selectedPrime]]
[seconds: [avg: 2.075776, s: 0.1586726], words: [avg: 12341.0, s: 5.656854], faults: [avg: 0.0, s: 0.0]]
% ← Try[SRhtSub[0.5, 3.0, selectedPrime]]
[seconds: [avg: 2.117301, s: 0.1690441], words: [avg: 14315.0, s: 45.25484], faults: [avg: 0.0, s: 0.0]]
% ← Try[SRhtSub[0.5, 4.0, selectedPrime]]
[seconds: [avg: 2.036086, s: 0.1642949], words: [avg: 12349.0, s: 5.656854], faults: [avg: 0.0, s: 0.0]]
% ← TestS[] --GC interval ← 4000
ReHashTable[0.5, 1.4]  2.7 ± 0.16
ReHashTable[0.5, 2.0]  2.4 ± 0.24
ReHashTable[0.5, 2.7]  2.2 ± 0.16
ReHashTable[0.5, 3.0]  2.3 ± 0.21
ReHashTable[0.5, 4.0]  2.2 ± 0.16
ReHashTable[0.5, 6.0]  2.3 ± 0.26

ReHashTable[0.7, 1.4]  2.6 ± 0.17
ReHashTable[0.7, 2.0]  2.5 ± 0.26
ReHashTable[0.7, 2.7]  2.3 ± 0.17
ReHashTable[0.7, 3.0]  2.3 ± 0.16
ReHashTable[0.7, 4.0]  2.3 ± 0.16
ReHashTable[0.7, 6.0]  2.2 ± 0.15

ReHashTable[1.0, 1.4]  2.8 ± 0.21
ReHashTable[1.0, 2.0]  2.4 ± 0.15
ReHashTable[1.0, 2.7]  2.6 ± 0.21
ReHashTable[1.0, 3.0]  2.3 ± 0.15
ReHashTable[1.0, 4.0]  2.5 ± 0.21
ReHashTable[1.0, 6.0]  2.3 ± 0.16

ReHashTable[1.4, 1.4]  2.8 ± 0.26
ReHashTable[1.4, 2.0]  2.6 ± 0.16
ReHashTable[1.4, 2.7]  2.5 ± 0.24
ReHashTable[1.4, 3.0]  2.4 ± 0.16
ReHashTable[1.4, 4.0]  2.3 ± 0.24
ReHashTable[1.4, 6.0]  2.4 ± 0.17

ReHashTable[2.0, 1.4]  3.0 ± 0.23
ReHashTable[2.0, 2.0]  2.6 ± 0.18
ReHashTable[2.0, 2.7]  2.7 ± 0.20
-- Aborted.
% ← Init[750, 7500, 3750, 4, 7] --GC interval ← 8000

%
← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.3 ± 0.05

% ← TestS[]
ReHashTable[0.5, 1.4]  2.7 ± 0.14
ReHashTable[0.5, 2.0]  2.3 ± 0.20
ReHashTable[0.5, 2.7]  2.2 ± 0.06
ReHashTable[0.5, 3.0]  2.3 ± 0.14
ReHashTable[0.5, 4.0]  2.2 ± 0.15
ReHashTable[0.5, 6.0]  2.1 ± 0.08

ReHashTable[0.7, 1.4]  2.6 ± 0.17
ReHashTable[0.7, 2.0]  2.4 ± 0.07
ReHashTable[0.7, 2.7]  2.3 ± 0.14
ReHashTable[0.7, 3.0]  2.3 ± 0.14
ReHashTable[0.7, 4.0]  2.2 ± 0.05
ReHashTable[0.7, 6.0]  2.1 ± 0.07

ReHashTable[1.0, 1.4]  2.6 ± 0.19
ReHashTable[1.0, 2.0]  2.3 ± 0.05
ReHashTable[1.0, 2.7]  2.4 ± 0.20
ReHashTable[1.0, 3.0]  2.4 ± 0.20
ReHashTable[1.0, 4.0]  2.3 ± 0.06
ReHashTable[1.0, 6.0]  2.3 ± 0.15

ReHashTable[1.4, 1.4]  2.6 ± 0.05
ReHashTable[1.4, 2.0]  2.5 ± 0.14
ReHashTable[1.4, 2.7]  2.5 ± 0.15
ReHashTable[1.4, 3.0]  2.3 ± 0.07
ReHashTable[1.4, 4.0]  2.4 ± 0.14
ReHashTable[1.4, 6.0]  2.4 ± 0.14

-- Aborted.
% ← Init[750, 7500, 3750, 5, 7] --GC interval ← 4000

%
← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.3 ± 0.04

% ← TestS[]
ReHashTable[0.5, 1.4]  3.0 ± 0.12
ReHashTable[0.5, 2.0]  2.5 ± 0.18
ReHashTable[0.5, 2.7]  2.4 ± 0.14
ReHashTable[0.5, 3.0]  2.5 ± 0.11
ReHashTable[0.5, 4.0]  2.4 ± 0.16
ReHashTable[0.5, 6.0]  2.4 ± 0.09

ReHashTable[0.7, 1.4]  2.8 ± 0.05
ReHashTable[0.7, 2.0]  2.6 ± 0.16
ReHashTable[0.7, 2.7]  2.5 ± 0.21
ReHashTable[0.7, 3.0]  2.5 ± 0.16
ReHashTable[0.7, 4.0]  2.4 ± 0.07
ReHashTable[0.7, 6.0]  2.4 ± 0.15

ReHashTable[1.0, 1.4]  2.8 ± 0.14
ReHashTable[1.0, 2.0]  2.6 ± 0.17
ReHashTable[1.0, 2.7]  2.7 ± 0.12
ReHashTable[1.0, 3.0]  2.6 ± 0.15
ReHashTable[1.0, 4.0]  2.6 ± 0.06
ReHashTable[1.0, 6.0]  2.5 ± 0.14

ReHashTable[1.4, 1.4]  2.9 ± 0.17
ReHashTable[1.4, 2.0]  2.7 ± 0.14
ReHashTable[1.4, 2.7]  2.6 ± 0.12
ReHashTable[1.4, 3.0]  2.6 ± 0.14
ReHashTable[1.4, 4.0]  2.4 ± 0.06
ReHashTable[1.4, 6.0]  2.5 ± 0.12

ReHashTable[2.0, 1.4]  3.0 ± 0.15
ReHashTable[2.0, 2.0]  2.7 ± 0.19
ReHashTable[2.0, 2.7]  2.7 ± 0.22
ReHashTable[2.0, 3.0]  2.7 ± 0.13
ReHashTable[2.0, 4.0]  2.5 ± 0.07
ReHashTable[2.0, 6.0]  2.8 ± 0.11


% ← Init[750, 7500, 3750, 3, 7] --GC interval ← 2000

%
← Report[snSub, TRUE]
Baseline[1.0, 2.0]  2.3 ± 0.03

% ← TestS[]
ReHashTable[0.5, 1.4]  3.3 ± 0.05
ReHashTable[0.5, 2.0]  2.8 ± 0.17
ReHashTable[0.5, 2.7]  2.5 ± 0.05
ReHashTable[0.5, 3.0]  2.6 ± 0.06
ReHashTable[0.5, 4.0]  2.4 ± 0.06
ReHashTable[0.5, 6.0]  2.5 ± 0.06

ReHashTable[0.7, 1.4]  3.2 ± 0.21
ReHashTable[0.7, 2.0]  2.7 ± 0.05
ReHashTable[0.7, 2.7]  2.8 ± 0.15
ReHashTable[0.7, 3.0]  2.7 ± 0.06
ReHashTable[0.7, 4.0]  2.7 ± 0.19
ReHashTable[0.7, 6.0]  2.6 ± 0.05

ReHashTable[1.0, 1.4]  3.2 ± 0.15
ReHashTable[1.0, 2.0]  2.8 ± 0.06
ReHashTable[1.0, 2.7]  2.9 ± 0.19
ReHashTable[1.0, 3.0]  2.7 ± 0.04
ReHashTable[1.0, 4.0]  2.8 ± 0.19
ReHashTable[1.0, 6.0]  2.4 ± 0.05

ReHashTable[1.4, 1.4]  3.2 ± 0.17
ReHashTable[1.4, 2.0]  2.8 ± 0.04
ReHashTable[1.4, 2.7]  2.7 ± 0.11
ReHashTable[1.4, 3.0]  2.6 ± 0.04
ReHashTable[1.4, 4.0]  2.7 ± 0.16
ReHashTable[1.4, 6.0]  2.7 ± 0.04

ReHashTable[2.0, 1.4]  3.3 ± 0.20
-- Aborted.
%
1.2.12. BenchMethodTimes.cm!1 01-Nov-85 22:59:54 PST
BenchMethodTimes.cm
Spreitzer, November 1, 1985 10:59:54 pm PST
{ OPEN TableBench; NULL}
← histDist ← FALSE
← reportStream ← &stdout[]
{ Test: PROC [Gen: RHTSubGen] = {FOR mm: ReHashTable.ModMethod IN ReHashTable.ModMethod DO Report[Gen[1.0, 2.0, mm], method] ENDLOOP}; NULL}
← Init[800, 8000, 4000, 5, 7]
← Report[rnSub, none, TRUE]
← Test[RRhtSub]
← Init[300, 3000, 1500, 5, 7]
← Report[snSub, none, TRUE]
← Test[SRhtSub]
← Init[2000, 20000, 10000, 5, 7]
← Report[rnSub, none, TRUE]
← Test[RRhtSub]
← Init[750, 7500, 3750, 5, 7]
← Report[snSub, none, TRUE]
← Test[SRhtSub]
← Init[5000, 50000, 25000, 5, 7]
← Report[rnSub, none, TRUE]
← Test[RRhtSub]
← Init[1875, 18750, 9375, 5, 7]
← Report[snSub, none, TRUE]
← Test[SRhtSub]
1.2.13. MethodTimeBench.data!1 01-Nov-85 23:19:07 PST
MethodTimeBench.data
Spreitzer, November 1, 1985 11:19:06 pm PST
% BenchMethodTimes
>
>
> ← histDist ← FALSE
FALSE
> ← reportStream ← &stdout[]
^[streamProcs: 12775322B^, streamData: NIL, propList: NIL, backingStream: 12672344B^]
>
> { Test: PROC [Gen: RHTSubGen] = {FOR mm: ReHashTable.ModMethod IN ReHashTable.ModMethod DO Report[Gen[1.0, 2.0, mm], method] ENDLOOP}; NULL}
>
> ← Init[800, 8000, 4000, 5, 7]

> ← Report[rnSub, none, TRUE]
Baseline  2.33 ± 0.03

> ← Test[RRhtSub]
ReHashTable[ selectedPrime]  0.86 ± 0.04
ReHashTable[ powerOfTwo]  0.90 ± 0.18
ReHashTable[oneLessThanPowerOfTwo]  0.91 ± 0.05

>
> ← Init[300, 3000, 1500, 5, 7]

> ← Report[snSub, none, TRUE]
Baseline  1.13 ± 0.17

> ← Test[SRhtSub]
ReHashTable[ selectedPrime]  0.92 ± 0.17
ReHashTable[ powerOfTwo]  0.98 ± 0.17
ReHashTable[oneLessThanPowerOfTwo]  1.05 ± 0.24

>
> ← Init[2000, 20000, 10000, 5, 7]

> ← Report[rnSub, none, TRUE]
Baseline  5.60 ± 0.16

> ← Test[RRhtSub]
ReHashTable[ selectedPrime]  2.21 ± 0.24
ReHashTable[ powerOfTwo]  2.08 ± 0.24
ReHashTable[oneLessThanPowerOfTwo]  2.29 ± 0.25

>
> ← Init[750, 7500, 3750, 5, 7]

> ← Report[snSub, none, TRUE]
Baseline  2.22 ± 0.04

> ← Test[SRhtSub]
ReHashTable[ selectedPrime]  2.66 ± 0.16
ReHashTable[ powerOfTwo]  2.80 ± 0.16
ReHashTable[oneLessThanPowerOfTwo]  2.73 ± 0.17

>
> ← Init[5000, 50000, 25000, 5, 7]

> ← Report[rnSub, none, TRUE]
Baseline 13.83 ± 0.26

> ← Test[RRhtSub]
ReHashTable[ selectedPrime]  5.47 ± 0.33
ReHashTable[ powerOfTwo]  5.49 ± 0.37
ReHashTable[oneLessThanPowerOfTwo]  6.19 ± 0.43

>
> ← Init[1875, 18750, 9375, 5, 7]

> ← Report[snSub, none, TRUE]
Baseline  5.21 ± 0.03

> ← Test[SRhtSub]
ReHashTable[ selectedPrime]  6.88 ± 0.25
ReHashTable[ powerOfTwo]  6.92 ± 0.19
ReHashTable[oneLessThanPowerOfTwo]  6.98 ± 0.23

>
>
%