5. A 3-LISP Implementation Processor Program
The principal reason that the reflective processor program for 3-LISP cannot be translated into some other implementation language, such as LISP machine microcode or C, is that it is not a closed program. Because of reflective procedures, the locus of control may leave the set of primary processor procedures and venture off into code that was supplied by the user. It is this problem that will be tackled in this section: the development of a closed program that implements 3-LISP. This program will be expressed in a conservative subset of 3-LISP; no crucial use will be made of 3-LISP’s meta-structural, reflective, or higher-order function capabilities. The decision to write such a (true) meta-circular processor for 3-LISP makes it easy to suppress many implementation details that would necessarily surface if a different language were chosen. The most important omissions are the memory representation of the elements of the structural field, garbage collection, error detection and handling, and all I/O. While important, these concerns are shared with other LISP dialects and are not germane to this discussion of how to implement procedural reflection.
As noted in earlier sections, the 3-LISP implementation processor program will have the same basic structure as the reflective processor program itself. Thus the first step is to transform the primary processor procedures into an implementation processor program capable of normalising D<1 structures. The ability to handle D>1 structures by shifting up and down can then be incorporated into the implementation processor program.
5.1 The Basic Implementation Processor
The following collection of procedures is based directly on the primary processor procedures. For each primary processor procedure there is a corresponding implementation processor procedure bearing its source’s name prefixed by ‘‘&&’’; e.g., &&NORMALISE implements NORMALISE. As will be discussed later, each ‘‘&&’’ procedure takes an additional parameter named STATE that represents the absorbed state; it comes into play only when a shift up or down occurs (underlined code highlights these spots).
(define &&NORMALISE
(lambda simple [state exp env cont]
(cond [(normal exp) (&&call state cont exp)]
[(atom exp) (&&call state cont (binding exp env))]
[(rail exp) (&&normalise-rail state exp env cont)]
[(pair exp) (&&reduce state (car exp)
(cdr exp) env cont )])))
For each type of primary processor continuation there is a corresponding implementation procedure with a name of the form &&xxx-CONTINUATION. E.g., &&PROC-CONTINUATION implements the ‘‘PROC’’ type continuations, which field the result of normalising the procedure part of a redex (pair). Moreover, while the original continuations use a handful of free (non-global) variables, their implementation equivalents are passed these values as additional arguments. E.g., &&PROC-CONTINUATION is passed as arguments the bindings of the PROC, ARGS, ENV, and CONT variables from the incarnation of &&REDUCE that spawned it. &&EXPAND-CLOSURE (section 5.5) implements the last clause of the ‘‘ARGS’’ continuation, but does not correspond to a continuation of its own.
(define &&REDUCE
(lambda simple [state proc args env cont]
(&&normalise state proc env
(make-proc-continuation proc args env cont) )))
(define &&PROC-CONTINUATION
(lambda simple [state proc! proc args env cont]
(if (reflective proc!)
(&&call state
(de-reflect proc!) args env cont)
(&&normalise state args env
(make-args-continuation proc! proc args env cont) ))))
(define &&ARGS-CONTINUATION
(lambda simple [state args! proc! proc args env cont]
(if (or (primitive proc!) (kernel-utility proc!))
(&&call state cont ↑(proc! . args!))
(&&expand-closure state proc! args! cont) )))
The remaining ‘‘&&’’ procedures (see appendix), such as &&NORMALISE-RAIL and &&READ-NORMALISE-PRINT, are derived in an analogous manner.
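For example, assuming the reflective processor’s NORMALISE-RAIL has its standard form, its implementation counterpart and the first of the two rail-processing continuation procedures might be sketched as follows, with MAKE-FIRST-CONTINUATION and MAKE-REST-CONTINUATION presumed to be the builders for the ‘‘FIRST’’ and ‘‘REST’’ continuations (the actual definitions appear in the appendix):
(define &&NORMALISE-RAIL
  (lambda simple [state rail env cont]
    (if (empty rail)
        (&&call state cont (rcons))          ; an empty rail normalises to an empty rail
        (&&normalise state (1st rail) env
          (make-first-continuation rail env cont) ))))
(define &&FIRST-CONTINUATION
  (lambda simple [state first! rail env cont]
    (&&normalise-rail state (rest rail) env
      (make-rest-continuation first! rail env cont) )))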
5.2 Creating Continuation Closures Explicitly
In the implementation processor, all continuations must be created explicitly, whereas in the reflective processor program, this was accomplished with higher-order functional arguments. E.g., (make-proc-continuation proc args env cont) in &&REDUCE must normalise to exactly the same closure that (lambda simple [proc!] ...) in REDUCE would, given identical bindings for those four variables.
(define MAKE-PROC-CONTINUATION
(lambda simple [proc args env cont]
(ccons ’simple ↑(bind ’[proc args env cont reduce]
↑[proc args env cont reduce] global)
’[proc!]
’(if (reflective proc!)
        ((de-reflect proc!) args env cont)
        (normalise args env
          (lambda simple [args!]
            (if (primitive proc!)
                (cont ↑(proc! . args!))
(normalise (body proc!)
(bind (pattern proc!) args!
(environment proc!))
cont ))))) )))
The other continuation-building procedures are derived in an analogous manner.
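For instance, MAKE-ARGS-CONTINUATION, which must produce exactly the closure that the inner (lambda simple [args!] ...) of REDUCE would, might plausibly be rendered as follows (a sketch only; the binding list mirrors the one used in MAKE-PROC-CONTINUATION):
(define MAKE-ARGS-CONTINUATION
  (lambda simple [proc! proc args env cont]
    (ccons ’simple ↑(bind ’[proc! proc args env cont reduce]
                          ↑[proc! proc args env cont reduce] global)
           ’[args!]
           ’(if (primitive proc!)
                (cont ↑(proc! . args!))
                (normalise (body proc!)
                           (bind (pattern proc!) args!
                                 (environment proc!))
                           cont )))))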
5.3 Handling Calls
In the cases in which it is not possible to determine exactly which procedure to call, the implementation procedures defer this task to &&CALL. E.g., whereas NORMALISE calls the procedure designated by the local variable CONT, &&NORMALISE passes the buck to &&CALL. &&CALL inspects the closure designating the function to be called and dispatches according to whether it is that of a primary processor procedure or one corresponding to a processor continuation; the appropriate ‘‘&&’’ procedure is then invoked. In the case of the continuations, the non-global bindings captured by them are extracted and passed as the extra arguments to the implementation versions, as discussed earlier. (The two shift-up cases are discussed below.)
(define &&CALL
(lambda simple x
(let [[state (1st x)] [f (2nd x)] [a (rest (rest x))]]
(cond [(kernel-primary-closure ↑f)
(&&call-primary state f a)]
[(and (kernel-continuation-closure ↑f)
(= (length a) 1))
(&&call-continuation state f (1st a))]
[(or (primitive ↑f) (kernel-utility ↑f))
(&&call (shift-up state) ; Shift up for primitives
(reify-continuation state) ↑(f . a))]
[$t ; Shift up for non-primitives
(&&expand-closure (shift-up state)
↑f ↑a (reify-continuation state) )]))))
(define &&CALL-PRIMARY
(lambda simple [state f a]
(select (kernel-primary-type ↑f)
[’normalise
(&&normalise state (1st a) (2nd a) (3rd a))]
[’normalise-rail
(&&normalise-rail state (1st a) (2nd a) (3rd a))]
[’reduce
(&&reduce state (1st a) (2nd a) (3rd a) (4th a))]
[’read-normalise-print
(&&read-normalise-print state (1st a) (2nd a))]
[’if
(&&if state (1st a) (2nd a) (3rd a))]
[’lambda
(&&lambda state (1st a) (2nd a) (3rd a)) ])))
(define &&CALL-CONTINUATION
(lambda simple [state f arg]
(select (kernel-continuation-type ↑f)
[’proc (&&proc-continuation state arg
(extract ’proc f) (extract ’args f)
(extract ’env f) (extract ’cont f))]
[’args (&&args-continuation state arg
(extract ’proc! f) (extract ’proc f)
(extract ’args f) (extract ’env f)
(extract ’cont f))]
[’first (&&first-continuation state arg
(extract ’rail f) (extract ’env f)
(extract ’cont f))]
[’rest (&&rest-continuation state arg
(extract ’first! f) (extract ’rail f)
(extract ’env f) (extract ’cont f))]
[’reply (&&reply-continuation state arg
(extract ’level f) (extract ’env f))]
[’if (&&if-continuation state arg
(extract ’premise f) (extract ’c1 f)
(extract ’c2 f) (extract ’env f)
(extract ’cont f) )])))
5.4 Shifting Up
If the expression given to the implementation processor to normalise had D>1, this would become apparent at the time that a redex involving a user-defined reflective procedure was first processed by &&REDUCE. This would be manifested by &&CALL’s failure to recognise the closure as one for which there is an implementation equivalent. The underlined clauses in &&CALL handle this eventuality while ensuring that the locus of control remains within the code of the implementation processor program. As discussed earlier, the implementation processor must shift up, altering its internal state to accurately reflect what would have been happening at the next higher processing level in the tower.
The replacement of the underlined clauses of &&CALL with the single clause [$t (f . a)] would result in an equivalent version of &&CALL that more clearly reflected the computation to be performed, but which would fail to be a closed program. Instead of calling the function designated by f with the objectified argument designated by a, we call the processor on the structures to which f and a are bound; i.e., for (f . a) use (reduce ↑f ↑a ~ ~). Switching to the implementation processor, and assuming that f is bound to a non-reflective, non-primitive closure, the call to reduce can be rendered (&&expand-closure ~ ↑f ↑a ~). As discussed in section 4, the continuation that must be passed to &&EXPAND-CLOSURE can be popped from the absorbed state. REIFY-CONTINUATION is used to extract from STATE the continuation that corresponds to the next higher level of processing; SHIFT-UP returns the saved states for all levels above that.
Since primitive closures are never actually dissected by the reflective processor program, there is a special shift-up clause in &&CALL that ensures that &&EXPAND-CLOSURE is not used on them.
5.5 Shifting Down
The corresponding shift-down operation can occur whenever the implementation processor finds itself processing a structure that is part of the kernel. Bearing in mind that the locus of control should stay within the ‘‘&&’’ procedures, it is a simple matter of having &&EXPAND-CLOSURE detect when the closure it is about to expand corresponds to NORMALISE, or when it corresponds to one of the kernel processor continuations; in either case &&EXPAND-CLOSURE can shift down and call &&NORMALISE or &&CALL-CONTINUATION, respectively, to do the job directly. SHIFT-DOWN is used to absorb the continuation into the absorbed states of the higher levels.
(define &&EXPAND-CLOSURE
(lambda simple [state proc! args! cont]
(cond
[(and (= (kernel-primary-type proc!) ’normalise)
(plausible-arguments-to-normalise args!))
(&&normalise (shift-down cont state) ; Shift down #1
(1st args!) (2nd args!) (3rd args!))]
[(and (kernel-continuation-closure proc!) ; Shift down #2
(plausible-arguments-to-a-continuation args!))
(&&call-continuation (shift-down cont state)
proc! (1st args!))]
[$t (&&normalise state (body proc!)
(bind (pattern proc!) args! (environment proc!))
cont )])))
PLAUSIBLE-ARGUMENTS-TO-NORMALISE is used to ensure that certain invariants within the implementation processor are preserved. For example, we have been implicitly assuming that a) EXP in &&NORMALISE designates some structure, b) ENV designates a well-formed environment, and c) CONT designates a function of one argument that is designated by a simple closure (reflective closures used as continuations are dangerous).
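Although PLAUSIBLE-ARGUMENTS-TO-NORMALISE itself is not shown here, the following sketch indicates the kind of checks it presumably makes; ENVIRONMENT-DESIGNATOR and ONE-ARGUMENT-SIMPLE-CLOSURE are hypothetical helper predicates standing in for the tests described in (b) and (c):
(define PLAUSIBLE-ARGUMENTS-TO-NORMALISE
  (lambda simple [args!]
    (and (rail args!)
         (= (length args!) 3)
         (handle (1st args!))                          ; a) EXP designates a structure
         (environment-designator (2nd args!))          ; b) ENV designates an environment
         (one-argument-simple-closure (3rd args!)) ))) ; c) CONT: a simple one-argument closure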
5.6 Absorbed State Management
The way the infinite tower is initialized determines the exact form of the initial continuations at each level. The initialization process described in section 3 would result in a REPLY continuation per level. Naturally, the creation of the level n initial continuation is deferred until the implementation processor needs to reify it. Hence the absorbed state can be represented as a) the number of the highest level reached to date and b) a (finite) sequence of continuations for the intervening levels from there down to the current level of the implementation processor. The implementation processor is started off at level 1 in the code corresponding to READ-NORMALISE-PRINT; hence the absorbed state is initially just a (virtual) tower of initial continuations for levels 2 to ∞.
(define 3-LISP
(lambda simple []
(&&read-normalise-print (initial-tower 2) 1 global) ))
(define INITIAL-TOWER
(lambda simple [level] (scons level)) )
(define SHIFT-DOWN
(lambda simple [continuation state]
(prep continuation state) ))
(define REIFY-CONTINUATION
(lambda simple [state]
(if (= (length state) 1)
(make-reply-continuation (1st state) global)
(1st state) )))
(define SHIFT-UP
(lambda simple [state]
(if (= (length state) 1)
(scons (1+ (1st state)))
(rest state) )))
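MAKE-REPLY-CONTINUATION, which REIFY-CONTINUATION uses to manufacture the deferred initial continuation for the topmost saved level, can be derived in the same fashion as the other continuation builders. The following sketch assumes that READ-NORMALISE-PRINT has its usual form, with PROMPT&READ and PROMPT&REPLY handling the I/O that is otherwise suppressed in this section; the quoted body must, of course, match the reflective processor’s REPLY continuation exactly:
(define MAKE-REPLY-CONTINUATION
  (lambda simple [level env]
    (ccons ’simple ↑(bind ’[level env read-normalise-print]
                          ↑[level env read-normalise-print] global)
           ’[result]
           ’(block (prompt&reply result level)
                   (read-normalise-print level env) ))))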
5.7 Kernel Utilities
As was discussed in section 4, as long as a sufficiently broad basis is supported by the implementation processor to ensure that every call to a kernel procedure will ‘‘top out’’ at some finite level, there is no need for the implementation processor to handle many kernel utility procedures (e.g., NORMAL and BIND) in any special way. Nevertheless, we have included the appropriate code to handle these kernel utilities exactly as if they were primitive procedures. This treatment presupposes that they are well-behaved procedures that simply compute a result and have no unusual control side effects (e.g., QUITting); whether it is best viewed as a mere optimization or as something more essential we leave open.
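For example, the KERNEL-UTILITY predicate used in &&ARGS-CONTINUATION and &&CALL might amount to no more than a membership test over the (designators of the) kernel utility closures; the sketch below lists only the two utilities named above and assumes a standard MEMBER procedure:
(define KERNEL-UTILITY
  (lambda simple [closure]
    (member closure [↑normal ↑bind]) )) ; illustrative only; the real list covers every kernel utility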
5.8 Summary
It is easy to verify by inspection that all ‘‘&&’’ procedures are used in the following restrictive ways: 1) they are always called from other ‘‘&&’’ procedures, with the exception of 3-LISP, which is the root procedure; 2) they are always called from non-embedded contexts; 3) they never use, either directly or indirectly, any reflective procedure other than those for the standard control structures; 4) they are never passed as an argument, or returned as a result; 5) they are never remembered in a user data structure; and 6) barring an error, the chain of processing initiated by the call to 3-LISP is never broken (i.e., it will never return). It is a relatively straightforward final step to translate such a program into one’s favourite imperative language.