FunctionCache.mesa
Michael Plass, June 19, 1985 9:42:48 pm PDT
Copyright © 1985 by Xerox Corporation. All rights reserved.
FunctionCache: CEDAR DEFINITIONS
~ BEGIN
Documentation:
This module provides a facility for writing cached implementations of REF-valued functions. The caches are managed with a move-to-front discipline, and trimmed according to client-supplied limits on the total number of entries or the total number of words consumed.
Cache: TYPE ~ REF CacheRep;
CacheInfo: TYPE ~ RECORD [nEntries: INT, maxEntries: INT, totalSize: INT, maxTotalSize: INT];
Create: PROC [maxEntries: INT ← 10, maxTotalSize: INT ← INT.LAST] RETURNS [Cache];
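For example, a client might make a private cache trimmed to 50 entries or 20000 words, whichever limit is reached first (a minimal sketch; the name ropeCache and the particular limits are illustrative, not part of the interface):
  ropeCache: Cache ← Create[maxEntries: 50, maxTotalSize: 20000];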
ClientID: TYPE ~ REF; -- to allow otherwise independent clients to share cache space.
Domain: TYPE ~ REF;
Eval: PROC [x: Cache, F: FuncProc, arg: Domain, match: Match, clientID: ClientID ← NIL] RETURNS [Range];
This is a convenience when the client already has an argument REF allocated; use the Lookup proc, below, to do more general matching or to avoid early argument allocation.
FuncProc: TYPE ~ PROC [Domain] RETURNS [value: Range, size: INT];
Match: TYPE ~ {refs, case, text} ← refs;
refs ~ ref equality
case ~ understands REF INT, REF REAL, REF CHAR, LIST OF REF, ROPE, ATOM; otherwise uses ref equality
text ~ like case, but ignores character case on REF CHARs, ATOMs, and ROPEs
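As an illustration (a minimal sketch, not part of the interface: ReverseRope is a hypothetical expensive function, ropeCache is the cache created above, and Range is assumed to be a REF like Domain), a client might cache a rope transformation keyed case-insensitively on its argument:
  ReverseF: PROC [arg: Domain] RETURNS [value: Range, size: INT] ~ {
    r: Rope.ROPE ~ NARROW[arg];            -- the argument is known to be a ROPE here
    reversed: Rope.ROPE ~ ReverseRope[r];  -- the expensive computation (hypothetical)
    RETURN [reversed, Rope.Size[reversed]];  -- report a size so the cache can be trimmed
    };
  CachedReverse: PROC [r: Rope.ROPE] RETURNS [Rope.ROPE] ~ {
    RETURN [NARROW[Eval[x: ropeCache, F: ReverseF, arg: r, match: text], Rope.ROPE]];
    };
On a hit, Eval can return the cached value without calling ReverseF; on a miss it evaluates ReverseF and records the result.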
Insert: PROC [x: Cache, argument: Domain, value: Range, size: INT, clientID: ClientID ← NIL];
CompareProc: TYPE ~ PROC [argument: Domain] RETURNS [good: BOOL];
Any: CompareProc; -- returns TRUE
Equal: PROC [a, b: Domain, match: Match] RETURNS [BOOL];
The fancy comparison used by Eval.
Lookup: PROC [x: Cache, compare: CompareProc, clientID: ClientID ← NIL] RETURNS [value: Range, ok: BOOL];
Allows the client to inspect each entry that matches clientID and decide whether it meets the requirements; compare is called under a monitor, so make sure it is bulletproof. If compare returns TRUE for some entry, Lookup returns the value of that entry with ok=TRUE; otherwise it returns [NIL, FALSE].
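A sketch of the lookup-or-compute pattern (FindOrCompute and SlowFunction are hypothetical; SlowFunction is assumed to be some FuncProc):
  FindOrCompute: PROC [x: Cache, wanted: Domain] RETURNS [result: Range] ~ {
    SameArg: CompareProc ~ {
      RETURN [Equal[argument, wanted, case]];  -- cheap and non-blocking, so safe under the monitor
      };
    ok: BOOL;
    [result, ok] ← Lookup[x: x, compare: SameArg];
    IF NOT ok THEN {
      size: INT;
      [result, size] ← SlowFunction[wanted];  -- compute outside the monitor
      Insert[x: x, argument: wanted, value: result, size: size];
      };
    };
This is roughly what Eval packages up; writing it out with Lookup lets the client match on criteria other than Equal, or defer allocating the argument REF.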
CacheEntry: TYPE ~ RECORD [clientID: ClientID, argument: Domain, value: Range, size: INT];
Obtain: PROC [x: Cache, compare: CompareProc, limit: INT ← 1, clientID: ClientID ← NIL] RETURNS [LIST OF CacheEntry];
Gains exclusive use of the entries by removing them from the cache. The result has no more than limit entries.
Release: PROC [x: Cache, list: LIST OF CacheEntry];
Puts the entries back in the cache.
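A sketch of the Obtain/Release pattern (InspectOne and ExamineValue are hypothetical names): remove at most one matching entry, work on it outside the monitor, then put it back.
  InspectOne: PROC [x: Cache, compare: CompareProc] ~ {
    grabbed: LIST OF CacheEntry ~ Obtain[x: x, compare: compare, limit: 1];
    IF grabbed # NIL THEN {
      ExamineValue[grabbed.first.value];  -- exclusive access while the entry is out of the cache
      Release[x: x, list: grabbed];       -- put it back when done
      };
    };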
GetInfo: PROC [x: Cache] RETURNS [CacheInfo];
GetList: PROC [x: Cache] RETURNS [LIST OF CacheEntry];
Copies the list under the monitor.
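For instance, a client could tally the space currently charged to it (a sketch; TotalSizeFor is a hypothetical helper):
  TotalSizeFor: PROC [x: Cache, clientID: ClientID] RETURNS [total: INT ← 0] ~ {
    FOR l: LIST OF CacheEntry ← GetList[x], l.rest UNTIL l = NIL DO
      IF l.first.clientID = clientID THEN total ← total + l.first.size;
      ENDLOOP;
    };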
SetLimits: PROC [x: Cache, maxEntries: INT ← 10, maxTotalSize: INT ← INT.LAST];
SetValueSize: PROC [x: Cache, value: Range, size: INT];
GlobalCache: PROC RETURNS [Cache];
Use this for applications where the entry sizes are large, so that all such clients can compete fairly for the available memory. Use individual caches when the entries are small; this will keep the cache lookup time down.
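For example (the variable names are illustrative only):
  bigResults: Cache ← GlobalCache[];             -- bulky entries compete in the shared pool
  smallResults: Cache ← Create[maxEntries: 32];  -- small entries get a short private list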
CacheRep: TYPE ~ PRIVATE MONITORED RECORD [
nEntries: INT,
maxEntries: INT,
totalSize: INT,
maxTotalSize: INT,
entryList: LIST OF CacheEntry
];