Jim des Rivières
The Implementation of Procedurally Reflective Languages
—————————————————————————————————————————————————
Summary for 1984 LISP and Functional Programming Conference
—————————————————————————————————————————————————
5. A 3-LISP Implementation Processor
[This section looks in detail at how 3-LISP can be implemented with a level-shifting implementation. The code for a meta-circular processor program for 3-LISP that demonstrates this level-shifting technique is given and explained.]
The principal reason that the reflective processor program for 3-LISP cannot be translated into some other implementation language, such as LISP machine microcode or C, is that it is not a closed program. Because of reflective procedures, the locus of control may leave the set of primary processor procedures and venture off into code that was supplied by the user. It is this problem that will be tackled in this section: the development of a closed program that implements 3-LISP. This program will be expressed in a conservative subset of 3-LISP; no crucial use will be made of 3-LISP’s meta-structural, reflective, or higher-order function capabilities. The decision to write a (true) meta-circular processor for 3-LISP makes it easy to suppress many implementation details that would necessarily surface if a different language were chosen. The most important omissions are the memory representation of the elements of the structural field, garbage collection, error detection and handling, and all I/O. While important, these are concerns that 3-LISP shares with other LISP dialects, and they are not germane to this discussion of how to implement procedural reflection.
As a first approximation of a 3-LISP implementation processor program, one can take the reflective processor program itself. This is justified simply by noting that 3-LISP minus user-defined reflective procedures is a LISP dialect reminiscent of SCHEME, and that the reflective processor program stripped of its ability to handle user-defined reflective procedures closely resembles a continuation-passing style version of a SCHEME meta-circular processor program. Thus the first step will be to transform the reflective processor program into an implementation processor that is capable of normalising D<1 structures. The ability to handle D>1 structures by shifting up and down will be added later.
5.1 First Approximation Implementation Processor Program
The following collection of procedures is based directly on the primary processor procedures. For each primary processor procedure there is a corresponding implementation processor procedure bearing its source’s name prefixed by ‘‘&&’’; e.g., &&NORMALISE implements NORMALISE. In anticipation of what is to follow, each ‘‘&&’’ procedure takes an additional parameter named ABSORBED-STATE, which, for the time being, is simply passed along. For uninteresting technical reasons it is most convenient for this extra argument to be written first.
(define &&NORMALISE
(lambda simple [absorbed-state exp env cont]
(cond
[(normal exp) (&&call absorbed-state cont exp)]
[(atom exp) (&&call absorbed-state cont (binding exp env))]
[(rail exp) (&&normalise-rail absorbed-state exp env cont)]
[(pair exp) (&&reduce absorbed-state (car exp) (cdr exp) env cont)])))
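The dispatch in &&NORMALISE, and the way the untouched ABSORBED-STATE argument is threaded through every call, can be sketched in a conventional language. The following Python fragment is purely illustrative; the names and the toy representation of structures are assumptions of the sketch, not part of 3-LISP:

```python
# Sketch: every processor procedure takes the absorbed state as its
# first argument and passes it along untouched, as the "&&" versions do.
def normalise(absorbed_state, exp, env, cont):
    if isinstance(exp, int):                  # already "normal": hand it to cont
        return call(absorbed_state, cont, exp)
    if isinstance(exp, str):                  # an atom: look up its binding
        return call(absorbed_state, cont, env[exp])
    raise NotImplementedError("rails and pairs omitted from this sketch")

def call(absorbed_state, cont, value):
    # stand-in for &&CALL; a full version would dispatch on the kind of cont
    return cont(absorbed_state, value)
```

Here `normalise([], "x", {"x": 3}, lambda s, v: v)` looks up the binding of the atom `x` and hands it to the continuation.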
For each type of primary processor continuation there is a corresponding implementation procedure with names of the form &&xxx-CONTINUATION. E.g., &&PROC-CONTINUATION implements the ‘‘PROC’’ type continuations, which field the result of normalising the procedure part of a redex (pair). Moreover, while the original continuations are designated by expressions that use non-global variables freely, their implementation equivalents are passed these data as additional arguments. E.g., &&PROC-CONTINUATION is passed as arguments the bindings of the PROC, ARGS, ENV, and CONT variables from the incarnation of &&REDUCE that spawned it. (&&EXPAND-CLOSURE implements the last clause of the ‘‘ARGS’’ continuation, but does not correspond to a continuation of its own. This procedure will be revised in a subsequent subsection.)
(define &&REDUCE
(lambda simple [absorbed-state proc args env cont]
(&&normalise absorbed-state proc env
(make-proc-continuation proc args env cont))))
(define &&PROC-CONTINUATION
(lambda simple [absorbed-state proc! proc args env cont]
(if (reflective proc!)
(&&call absorbed-state
(de-reflect proc!) args env cont)
(&&normalise absorbed-state args env
(make-args-continuation proc! proc args env cont)))))
(define &&ARGS-CONTINUATION
(lambda simple [absorbed-state args! proc! proc args env cont]
(if (or (primitive proc!) (kernel-utility proc!))
(&&call absorbed-state cont ↑(proc! . args!))
(&&expand-closure absorbed-state proc! args! cont))))
(define &&EXPAND-CLOSURE
(lambda simple [absorbed-state proc! args! cont]
(&&normalise absorbed-state (body proc!)
(bind (pattern proc!) args! (environment proc!))
cont)))
In the implementation processor, all continuations are created explicitly; in the reflective processor program, this is accomplished with higher-order functional arguments. E.g., (make-proc-continuation proc args env cont) in &&REDUCE will normalise to exactly the same closure that (lambda simple [proc!] ...) in REDUCE would, given identical bindings for those four variables.
(define MAKE-PROC-CONTINUATION
(lambda simple x
(ccons ’simple (bind ’[proc args env cont] ↑x (environment ↑reduce))
’[proc!]
’(if (reflective proc!)
((de-reflect proc!) args env cont)
(normalise args env
(lambda [args!]
(if (primitive proc!)
(cont ↑(proc! . args!))
(normalise (body proc!)
(bind (pattern proc!) args! (environment proc!))
cont))))) )))
In the few cases in which it is not possible to determine exactly which procedure to call, the implementation procedures defer this task to &&CALL (discussed in detail below). E.g., whereas NORMALISE calls the function designated by the local variable CONT, &&NORMALISE passes the buck to &&CALL. &&CALL inspects the closure designating the function to be called; it is handled according to whether it is that of a primary processor procedure or one corresponding to a processor continuation. The appropriate ‘‘&&’’ procedure is invoked accordingly. In the case of the continuations, the non-global bindings captured by them are extracted and passed as the extra arguments to the implementation versions, as discussed earlier.
(define &&CALL
(lambda simple x
(let [[absorbed-state (1st x)] [f (2nd x)] [a (rest (rest x))]]
(cond [(kernel-primary-closure ↑f)
(&&call-primary absorbed-state f a)]
[(and (kernel-continuation-closure ↑f) (= (length a) 1))
(&&call-continuation absorbed-state f (1st a))] ))))
(define &&CALL-PRIMARY
(lambda simple [absorbed-state f args]
(select (kernel-primary-type ↑f)
[’normalise (&&normalise (prep absorbed-state args))]
[’normalise-rail (&&normalise-rail (prep absorbed-state args))]
[’reduce (&&reduce (prep absorbed-state args))]
[’read-normalise-print (&&read-normalise-print (prep absorbed-state args))]
[’if (&&if (prep absorbed-state args))]
[’lambda (&&lambda (prep absorbed-state args))] )))
(define &&CALL-CONTINUATION
(lambda simple [absorbed-state f arg]
(select (kernel-continuation-type ↑f)
[’proc (&&proc-continuation absorbed-state arg (extract ’proc f)
(extract ’args f) (extract ’env f) (extract ’cont f))]
[’args (&&args-continuation absorbed-state arg (extract ’proc! f)
(extract ’proc f) (extract ’args f) (extract ’env f) (extract ’cont f))]
[’first (&&first-continuation absorbed-state arg (extract ’rail f)
(extract ’env f) (extract ’cont f))]
[’rest (&&rest-continuation absorbed-state arg (extract ’first! f)
(extract ’rail f) (extract ’env f) (extract ’cont f))]
[’reply (&&reply-continuation absorbed-state arg (extract ’level f)
(extract ’env f))]
[’if (&&if-continuation absorbed-state arg (extract ’premise f)
(extract ’c1 f) (extract ’c2 f) (extract ’env f) (extract ’cont f))] )))
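The transformation at work in &&CALL-CONTINUATION — replacing the processor’s closures by records of their captured bindings, together with a dispatcher that maps each record back onto code — is essentially what is known as defunctionalization. A minimal Python sketch of the ‘‘PROC’’ case follows; the record layout and helper names are hypothetical, invented for the illustration:

```python
# Continuations are represented as tagged records; the bindings that a
# closure would capture travel inside the record instead.
def make_proc_continuation(proc, args, env, cont):
    return ("proc", {"proc": proc, "args": args, "env": env, "cont": cont})

def call_continuation(absorbed_state, f, arg):
    # mirrors &&CALL-CONTINUATION: dispatch on the record's tag and pass
    # the saved bindings as extra arguments to the "&&" procedure
    tag, saved = f
    if tag == "proc":
        return proc_continuation(absorbed_state, arg, saved["proc"],
                                 saved["args"], saved["env"], saved["cont"])
    raise ValueError("no implementation equivalent for: " + tag)

def proc_continuation(absorbed_state, proc_bang, proc, args, env, cont):
    # stand-in for &&PROC-CONTINUATION; here it just reports its arguments
    return (proc_bang, args, cont)
```

The dispatcher extracts the saved bindings and passes them as the extra arguments, exactly as the EXTRACT calls do above.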
The ‘‘&&’’ procedures that were not shown above, such as &&NORMALISE-RAIL, appear in APPENDIX I. [Space permitting.]
5.2 Shifting Up
If the structure given to the implementation processor to normalise had D>1, this would become apparent at the time that a redex involving a user-defined reflective procedure was first processed by &&REDUCE. The fact would be manifested by &&CALL’s failure to recognize the closure as being one for which there was an implementation equivalent. This suggests that &&CALL must be modified in such a way as to handle this eventuality; at the same time, it is essential to ensure that the locus of control remains within the code of the implementation processor program. As discussed earlier, the device to be employed is for the implementation processor to shift up, altering its internal state to accurately reflect what would have been happening at the next higher processing level in the tower.
The following rendition of &&CALL accurately reflects the computation to be performed but fails to keep the program closed.
(define &&CALL
(lambda simple x
(let [[absorbed-state (1st x)] [f (2nd x)] [a (rest (rest x))]]
(cond [(kernel-primary-closure ↑f)
(&&call-primary absorbed-state f a)]
[(and (kernel-continuation-closure ↑f) (unit a))
(&&call-continuation absorbed-state f (1st a))]
[$t (f . a)] ))))
Intuitively, in a language that provides explicit access to the processor, there is always an alternative to performing a computation directly: invoke the processor explicitly on the structure that designates that computation. Instead of calling the function designated by f with the objectified argument designated by a, call the processor on the structures to which f and a are bound; i.e., for (f . a) use (reduce ↑f ↑a ~ ~). Switching to the implementation processor, and assuming that f is bound to a non-reflective, non-primitive closure, the call to reduce can be rendered (&&expand-closure ~ ↑f ↑a ~). From this it is readily apparent that a continuation must be supplied, and that this continuation formerly found its home at the next higher level of processing. For the moment, take it on faith that the continuation can be computed, and that it is not a function of the state of the current processing level; instead, it is derivable solely from the combined absorbed state of all higher levels of processing (to be described shortly). The level-shifting version of &&CALL is as follows:
(define &&CALL
(lambda simple x
(let [[absorbed-state (1st x)] [f (2nd x)] [a (rest (rest x))]]
(cond [(kernel-primary-closure ↑f)
(&&call-primary absorbed-state f a)]
[(and (kernel-continuation-closure ↑f) (unit a))
(&&call-continuation absorbed-state f (1st a))]
[$t (&&expand-closure (shift-up absorbed-state) ↑f ↑a
(reify-continuation absorbed-state))] ))))
There are two important things to realise about the reflective processor program. First, it implements a ‘‘tail-recursive’’ dialect of LISP; i.e., it is not procedure calls per se that cause the processor to accumulate state, but rather only embedded procedure calls. For example, with respect to the procedure given by (lambda simple [x] (f (g x))), the call (g x) is embedded in the first argument position of (f (g x)), and therefore requires the processor to save state until (g x) returns, just as in a conventional implementation of procedure calls; on the other hand, the call to f is not embedded, and can be implemented much like a GO TO statement, except that arguments must be passed as well. The fact that 3-LISP has a tail-recursive processor can be seen by inspecting the reflective processor program and observing that a) the number of bindings in an environment is a (more-or-less) linear function of the static nesting depth of programs, and b) when a call to a simple procedure is reduced, the continuation in effect upon entry to REDUCE is the one passed to NORMALISE for the body of the called procedure’s closure. The key implication is that when one procedure calls another from a non-embedded context, the continuation carried by the processor upon entry to the called procedure is the same as what it was upon entry to the calling procedure. The second crucial realization is that the primary processor procedures always call one another in non-embedded ways. Together with the first observation, this implies the following property of the reflective processor processing the reflective processor program itself: the continuation carried by the processor upon entry to any primary processor procedure is always the same and, in fact, is an instance of the REPLY continuation born within the call to READ-NORMALISE-PRINT that spawned that level of processing (barring interference from user-defined reflective procedures).
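The claim that a non-embedded call ‘‘can be implemented much like a GO TO statement’’ is the familiar tail-call property. Since Python does not eliminate tail calls, the following illustrative sketch (not part of 3-LISP) uses a trampoline: each tail call returns a thunk carrying the unchanged continuation, and a driver loop plays the role of the GO TO.

```python
# Each non-embedded (tail) call returns a thunk instead of growing the
# stack; the thunk carries the *unchanged* continuation, and the driver
# loop below plays the role of the GO TO.
def countdown(n, cont):
    if n == 0:
        return cont                        # answer the waiting continuation
    return lambda: countdown(n - 1, cont)  # tail call: same continuation

def trampoline(thunk):
    while callable(thunk):
        thunk = thunk()
    return thunk
```

`trampoline(countdown(100000, "done"))` completes in constant stack space, whereas a directly recursive version of the same loop would exhaust Python’s stack; the continuation passed along is the same object at every step.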
This assertion can be phrased in other ways: the (level 2) reflective processor processing the (level 1) reflective processor program processing a (level 0) D<1 structure always carries the same level 2 continuation at every trip through level 2 REDUCE when the level 2 PROC is bound to ’NORMALISE. At level 2, in REDUCE, if PROC is bound to ’NORMALISE (or ’REDUCE, or ’CONT), then CONT is always bound to the same closure. In NORMALISE, if EXP is bound to a structure that only occurs in a non-embedded context, then CONT is bound to the same closure.
Since the point in &&CALL where the up-shift is to take place logically corresponds to a non-embedded call within the reflective processor program (specifically, to either a (cont ...) or a ((de-reflect proc!) ...)), the continuation that must be passed to &&EXPAND-CLOSURE is not a function of the current level of processing. Instead, it is a function only of the original continuation at the next higher level. It is the state at all higher levels of processing that will be held in the variable ABSORBED-STATE. REIFY-CONTINUATION is used to extract the continuation that corresponds to the next higher level of processing, and SHIFT-UP returns the saved states for all levels above that. These procedures will be fleshed out in a subsequent section, once the minor complications introduced by shifting down have been laid bare.
5.3 Shifting Down
The corresponding ‘‘shift down’’ operation can occur whenever the implementation processor finds itself processing a structure that is part of the kernel. Bearing in mind that the locus of control should stay within the ‘‘&&’’ procedures, it is a simple matter of having &&EXPAND-CLOSURE detect when the closure which it is about to expand corresponds to NORMALISE; if it does, &&EXPAND-CLOSURE can shift down and call &&NORMALISE to do the job directly:
(define &&EXPAND-CLOSURE
(lambda simple [absorbed-state proc! args! cont]
(if (and (normalise-closure proc!)
(plausible-arguments-to-normalise args!)
(absorbable cont))
(&&normalise (shift-down cont absorbed-state)
(1st args!) (2nd args!) (3rd args!))
(&&normalise absorbed-state (body proc!)
(bind (pattern proc!) args! (environment proc!))
cont)
)))
The continuation in effect prior to shifting down must be recorded in the absorbed state, so that it may be brought into play at the next shift up. As mentioned earlier, this continuation will usually be just the REPLY continuation for that level of processing. However, it is quite possible for the user to call NORMALISE, REDUCE, etc. from an embedded context, and it is this eventuality that makes it essential to save the continuation each time a downward shift occurs.
[There are other questions to be answered. First, why only shift down on NORMALISE? Why not trigger REDUCE, NORMALISE-RAIL, and the processor continuations?]
5.4 Loose Ends
[More work needed here.]
(define 3-LISP
(lambda simple []
(&&read-normalise-print (initial-state 2) 1 global)))
(define INITIAL-STATE
(lambda simple [level] (scons level)))
(define SHIFT-DOWN
(lambda simple [continuation absorbed-state]
(prep continuation absorbed-state)))
(define REIFY-CONTINUATION
(lambda simple [absorbed-state]
(if (unit absorbed-state)
(make-reply-continuation (1st absorbed-state) global)
(1st absorbed-state))))
(define SHIFT-UP
(lambda simple [absorbed-state]
(if (unit absorbed-state)
(scons (1+ (1st absorbed-state)))
(rest absorbed-state))))
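Under this representation, the absorbed state is a rail whose final element is the number of the topmost processing level and whose earlier elements are the continuations saved by successive downward shifts. The same bookkeeping can be modelled in Python (a list stands in for the rail; this is a sketch of the definitions above, not a replacement for them):

```python
def initial_state(level):
    return [level]                        # rail holding only the level number

def shift_down(continuation, absorbed_state):
    return [continuation] + absorbed_state   # remember the saved continuation

def reify_continuation(absorbed_state):
    if len(absorbed_state) == 1:          # nothing saved: fabricate a REPLY
        return ("reply", absorbed_state[0])
    return absorbed_state[0]

def shift_up(absorbed_state):
    if len(absorbed_state) == 1:          # tower grows: invent the next level
        return [absorbed_state[0] + 1]
    return absorbed_state[1:]
```

Note the invariant this models: a shift up always finds either a continuation saved by an earlier shift down, or, at the top, a freshly fabricated REPLY continuation for a newly invented level.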
The user can verify, by inspection, that all ‘‘&&’’ procedures are used in the following ways: 1) they are always called from other ‘‘&&’’ procedures, with the exception of 3-LISP, which is the root procedure; 2) they are always called from non-embedded contexts; 3) they never use, either directly or indirectly, any reflective procedure other than those for the standard control structures; 4) they are never passed as an argument, or returned as a result; 5) they are never remembered in a user data structure; and 6) barring an error, the chain of processing initiated by the call to 3-LISP is never broken (i.e., it never returns). It is an easy next step to translate such a program into one’s favourite imperative language.
Notes
{Note Hardware}. Can one describe what causes that microcode program to run using the vocabulary of this paper, or is there something fundamentally different about the nature of the next level?
{Note Restartable computations}. The re-startability of a computation does not imply that external world side effects (e.g., I/O) are out of the question for a procedurally reflective system. All that would be required is for all interactions with the external world to be remembered by G. Since the restarted computation will merely retrace the steps up to the point that G detected the problem, the computation up to that point could be replayed without having to interact with the external world. (Moreover, the performance degradation would be likely to go unnoticed.)
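The remember-and-replay scheme described in this note can be sketched as follows; the class and method names are invented for the illustration, with the wrapper standing in for G’s memory of the external world. Interactions are logged, and after a restart the log is replayed so that the retraced computation never re-performs them.

```python
class ReplayingWorld:
    """Records external interactions; on restart, replays the log."""
    def __init__(self):
        self.log = []        # remembered (request, reply) pairs
        self.cursor = None   # None = live; an index = replaying

    def interact(self, request, perform):
        if self.cursor is not None and self.cursor < len(self.log):
            req, reply = self.log[self.cursor]
            self.cursor += 1
            assert req == request      # a restart merely retraces its steps
            return reply
        reply = perform(request)       # genuinely touch the external world
        self.log.append((request, reply))
        if self.cursor is not None:
            self.cursor += 1
        return reply

    def restart(self):
        self.cursor = 0                # replay the log from the beginning
```

On the second pass, `interact` answers from the log without invoking `perform`, which is what allows the restarted computation to be replayed without further external side effects.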
{Note No-RPLACA}. Although 3-LISP has primitive procedures that ‘‘smash’’ structures, in this paper we will pretend that there aren’t any. Without this simplifying assumption, bothersome technicalities would tend to obscure the otherwise straightforward solution. The interested reader is referred to the Interim 3-LISP Reference Manual [Smith & des Rivières 84], which contains a correct implementation for the unabridged language.
References
Allen, J. Anatomy of LISP. New York: McGraw-Hill (1978).
Henderson, P. Functional Programming, Application and Implementation. Englewood Cliffs, N.J.: Prentice-Hall (1980).
McCarthy, J., et al. LISP 1.5 Programmer’s Manual. Cambridge, Mass.: The MIT Press (1965).
Muchnick, S., and Pleban, U. ‘‘A Semantic Comparison of LISP and SCHEME’’, 1980 LISP Conference, Stanford, pp. 56-64 (1980).
Smith, B. Reflection and Semantics in a Procedural Language, M.I.T. Laboratory for Computer Science Report MIT-TR-272 (1982a).
Smith, B. ‘‘Reflection and Semantics in Lisp’’, 1984 ACM POPL Conference, Salt Lake City, Utah, pp. 23-35 (January 1984a).
Smith, B., and des Rivières, J. ‘‘Interim 3-LISP Reference Manual’’, Xerox PARC ISL Report, Palo Alto (1984, forthcoming).
Stoy, J. E. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. Cambridge: MIT Press (1977).
Sussman, G., and Steele, G. ‘‘SCHEME: An Interpreter for Extended Lambda Calculus’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-349 (1975).
Sussman, G., Holloway, J., Steele, G., and Bell, A. ‘‘SCHEME-79 LISP on a Chip’’, IEEE Computer, pp. 10-21 (July 1981).
Steele, G., and Sussman, G. ‘‘LAMBDA: The Ultimate Imperative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-353 (1976a).
Steele, G. ‘‘LAMBDA: The Ultimate Declarative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-379 (1976b).
Steele, G. ‘‘RABBIT: A Compiler for SCHEME (A Study in Compiler Optimization)’’, M.I.T. Artificial Intelligence Laboratory Technical Report AI-TR-474 (1977a).
Steele, G. ‘‘Debunking the ‘Expensive Procedure Call’ Myth’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-443 (1977b).
Steele, G., and Sussman, G. ‘‘The Revised Report on SCHEME, A Dialect of LISP’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-452 (1978a).
Steele, G., and Sussman, G. ‘‘The Art of the Interpreter, or, The Modularity Complex (Parts Zero, One, and Two)’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-453 (1978b).
Steele, G., and Sussman, G. ‘‘Design of a LISP-Based Microprocessor’’, CACM 23, 11, pp. 628-645 (November 1980).
                              ⋮
——————————————————————————————
Reflective processor program running at level 3.
——————————————————————————————
Reflective processor program running at level 2.
——————————————————————————————
Reflective processor program running at level 1.
——————————————————————————————
User program running at level 0.
——————————————————————————————
FIGURE 1. The numbering of processing levels in a reflective tower.
——————————————————————————————
Implementation processor G running at level n.
——————————————————————————————
Reflective processor program with D=1 running at level n-1.
——————————————————————————————
                              ⋮
——————————————————————————————
Reflective processor program with D=n-3 running at level 3.
——————————————————————————————
Reflective processor program with D=n-2 running at level 2.
——————————————————————————————
Reflective processor program with D=n-1 running at level 1.
——————————————————————————————
User program with D=n running at level 0.
——————————————————————————————
FIGURE 2. How to run a D=n program with a black-box processor that can only handle D=1 programs.