—————————————————————————————————————————————————
Summary for 1984 LISP and Functional Programming Conference
—————————————————————————————————————————————————
The Implementation of Procedurally Reflective Languages
Jim des Rivières
Xerox Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto, CA 94304; and
Center for the Study of Language and Information
Stanford University, Stanford, CA 94305
—————————————————————————————————————————————————
Note to Reviewers: I have been working with Brian C. Smith on procedurally reflective programming languages [Smith 82a and Smith 84a]. If this paper is accepted, Brian expects to participate in preparing the final version, which will then be jointly authored. We would particularly welcome reviewers' comments on whether emphasis should be placed on a) explaining our 3-LISP implementation in detail, in order to demonstrate more convincingly that it is "right", or b) expanding the moderately abstract explanation of reflective implementation, halfway between the very general presentation (section 2) and the implementation (sketched in section 4 and fleshed out in the appendix).
—————————————————————————————————————————————————
1. Introduction
As described in [Smith 82a; Smith 84a], a procedurally reflective programming language is one in which all programs are executed not through the agency of a primitive and inaccessible "interpreter", but rather by the explicit running of a program that represents that interpreter. Since the latter program, which we call the reflective processor program (we use the term processor in place of interpreter in order to avoid semantic confusion), is written in the same reflective language as the user program, it too is executed by the explicit running of a copy of itself, and so on ad infinitum. In other words, in the virtual machine at least, no program is ever run directly by the actual implementation; every program is run indirectly, through the explicit running of the reflective processor program. It follows that in the implementation of any reflective language there are (at least virtually) an infinite number of levels of processing, all simultaneously active (in exactly the same way that a program written in language L and the program implementing language L are simultaneously active). A simple user’s program is said to run at level 0; it is run by the reflective processor program running at level 1, which is in turn run by (another activation of) the reflective processor program running at level 2, and so on. Each level has its own local state, distinct from the state of neighbouring levels (i.e., there is one ‘‘control stack’’ per level).
The architecture resembles an infinite tower of continuation-passing meta-circular interpreters [ref??], except that (again as discussed in [Smith 84a]) there are crucial causal connections between the levels. A program running at one level can cause non-system code to be run at the next higher level (that of the program’s processor), thereby gaining explicit access to the formerly implicit state of the computation. This enables the user to define new control constructs or implement debuggers without requiring special hooks into the actual implementation processor. In fact, all control constructs can be defined in the language in terms of a handful of primitive data-manipulation procedures.
Reflection is a powerful tool to add to any language designer’s toolbox. Even if one decides that reflection is too powerful a capability to make generally available to regular users, one may find that the task of producing a correct and complete implementation (e.g., one including debugging facilities) is simplified by adopting a reflective architecture. Also, reflection has interesting (and unique) properties that are a direct effect of making it possible to view a computation from more than one vantage point at the same time. For example, a purely functional procedurally reflective language, entirely lacking primitive functions or special constructs with side effects, can nevertheless use reflection to define an assignment statement. In general, reflection is a technique whereby a theory of a language, embedded within that language, can convey remarkable and otherwise unobtainable power.
Given a virtual machine consisting of an infinite number of levels of processing, it is clear that one of the most important questions to ask about a reflective language is whether, and why, it is computationally tractable. This paper addresses that question by considering the general problem of producing an efficient implementation of a procedurally reflective language. After presenting principles and techniques in a rather general discussion, in a way that should apply to reflective variants of any standard applicative or imperative programming language, we describe an efficient implementation of a particular reflective dialect, 3-LISP, of the sort described above.
The level-shifting implementation processor for 3-LISP that we derive is a rational reconstruction of certain mysterious aspects of several existing language implementations, providing an explanation for such assorted phenomena as a) a LISP system’s treatment of calls between compiled and interpreted code, b) spaghetti stacks, c) the ‘‘microcode punts’’ of INTERLISP-D, and d) SMALLTALK-80’s explicit use of the compiled-code interpreter during debugging.
2. Towers of Processing
Each level of processing is numbered, for convenience. Let the level at which the user’s program is running be 0; assign 1 to the level at which the processor program that runs the user’s program is running; and so on. In general, the structures at any given level represent, and therefore designate, the implicit state of the computation at one level below; thus level n+1 is one level ‘‘meta’’ to level n. This whole arrangement, which we call a tower, is depicted in FIGURE 1. Finite heterogeneous towers of processing (i.e., towers with a finite number of different languages) are commonplace: a BASIC program running at level 0, being run by the BASIC processor (interpreter), which is a machine-language program running at level 1, which in turn might be run by the emulator, a microcode program running at level 2. {Note Hardware} In the case of procedurally reflective architectures, however, the tower is infinite and homogeneous. The user’s program (at level 0) is run by the reflective processor program (running at level 1), which is in turn run by another incarnation of that same reflective processor program (at level 2). And so on.
When we say that a user’s program runs at level 0, we are actually lying: procedurally reflective languages allow user code to run at level 1 or higher, giving user programs explicit access to the data structures encoding their own state, and therefore the power to direct the course of the computation. What we will call the real implementation (that process that mimics the virtual infinite tower) must therefore be able to make explicit those structures that normally encode the implicit state of the user’s program. It is this fact that makes procedurally reflective systems more difficult to implement than systems without introspective capabilities. Before showing how to do this, however, we need to discharge the threat of the infinite.
The key observation is that the activity at most levels (in fact, at all but a finite number of the lowest levels) will be monotonous: the reflective processor program will primarily process the same old expressions, namely those that make up the reflective processor program itself. Identify as kernel those expressions in the reflective processor program that are used in the course of processing the reflective processor program one level below. {Note Kernel} Call a processing level boring if the only expressions it processes (in the course of a computation) are kernel expressions. Define the degree of introspection (D) of a program to be the least m such that, when the program is run at level 0, all levels numbered m and higher are boring. All programs consisting entirely of kernel expressions have D=0. Normal programs will have D=1, meaning that no out-of-the-ordinary processing is required at level 1; i.e., the processing of the level 0 program will not entail running non-kernel code at level 1. D=2 would be assigned to programs that involve running user code at levels 0 and 1, but not at the second meta level. And so on. Just as a correct implementation of recursion is not required to terminate when a procedure recurses indefinitely, a correct implementation of a procedurally reflective system need terminate only on computations having a finite degree of introspection, provided that a single computation step can increase the degree of introspection by only a finite amount (in 3-LISP it is by one).
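To make the definition concrete, here are a few degrees of introspection (our illustrations, not from the text; QUIT is the reflective procedure defined in the transcript in section 3):

; (+ 2 2)  has D=1: level 0 processes non-kernel expressions, but
;          levels 1 and above run only kernel code.
; (quit)   has D=2: QUIT is a reflective procedure, so user code is
;          run at level 1 (see the transcript in section 3).
; The body of NORMALISE, run as a level 0 program, has D=0: it
;          consists entirely of kernel expressions.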
It is now possible to formulate a general plan for implementing a procedurally reflective system. Suppose that one has an implementation processor G that engenders the behaviour of the processor for the language, provided that the program it is given to run has D=1. The existence of G is not necessarily a big ‘‘if’’, since a procedurally reflective language minus the ability for the user to use the meta levels is likely to be quite conventional. For example, 3-LISP minus the ability to define reflective procedures is a simple SCHEME-like language that will succumb to standard implementation techniques [Allen78, Steele77a, Henderson80]. The essential observation is that the overall degree of introspection of a reflective processor program that is running some D=n computation is D=n-1 (this follows directly from the definition of D). So if, instead of having the user program run directly by G, it is run indirectly by the reflective processor program, which is itself run directly by G, then any D≤2 user program will be processed correctly. In general, any D≤n program can be run correctly by G provided that n-1 levels of genuine reflective processor program are placed in between. This result is depicted in FIGURE 2.
Since it is unlikely that a program’s D can be determined without processing it, the insight employed above is more useful when incorporated into a slightly different implementation strategy. Assume initially that D=1, and give the program to G to run directly. If G detects that the program it is running has D>1, start the whole computation over again, but this time run it indirectly, with one more level of intervening reflective processor program. Repeat this last step until G does not protest. This procedure is guaranteed to terminate for any computation with a finite degree of introspection. It requires only that G be able to recognize, at some point during its processing, that a computation has D>1, and that the computation be re-startable. {Note Restartable computations} Both of these assumptions are reasonable.
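The following is a minimal sketch of this restart strategy (ours, in 3-LISP style, and purely illustrative). G-RUN is a hypothetical entry point that asks G to run a program structure directly and that returns the atom ’D-TOO-HIGH when G detects that the degree of introspection exceeds one; PCONS (the pair constructor), the LET form, and an identity continuation ID are likewise assumed here rather than taken from the kernel presented below:

(define RUN-RESTARTING
  (lambda simple [program!]              ; program! designates a program structure
    (let [[result (g-run program!)]]     ; ask G to run the program directly
      (if (= result ’d-too-high)
          ;; wrap one more explicit level of the reflective processor
          ;; program around the computation, and start over from scratch:
          (run-restarting (pcons ’normalise (rcons program! ’global ’id)))
          result))))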
It would be far better, of course, if there were some computationally tractable way of inferring the instantaneous state of the level n+1 reflective processor program from the instantaneous state of the level n one. This suggestion is not as unlikely as it might first seem. The processing that goes on at adjacent levels is always strongly correlated (since, after all, level n+1 essentially implements level n); adjacent levels are related by ‘‘meta’’-ness, and it is not as if different levels had ‘‘minds of their own’’. If it were possible to take such a step, one could refine the implementation strategy so as not to restart the computation when an impasse was reached, but rather to manufacture the state that would have existed one level up had the implementation been explicitly running up there right from the beginning. In other words, the real implementation processor would be able to make an instantaneous shift up to where it would have been had an extra level of explicit reflective processor program been in effect since the start. Thus a D=n program would be run directly by the implementation processor G until it was discovered that n>1, at which time the internal state of G would be used to create the explicit state that would be passed to the explicit reflective processor program taking over the running of the user program. After modifying its own internal state to reflect what would have been the state one level up, G would devote its attention to running the reflective processor program. This means that the original program will now be run indirectly. It will continue to be run that way until such time as it is revealed that n>2, at which time it will start being run double-indirectly. And so on. {Note Shift-points}
Invariably, each additional level of indirection can only degrade the system’s performance with respect to the user program. This is alarming, given that interpretive overhead is typically measured in orders of magnitude. What is really wanted is an implementation that will never run any higher than necessary. Not only should the implementation be able to shift up easily, but it should be able to shift back down whenever it discovers that things have locally become boring (this requires local, rather than global, notions of boredom and introspective degree, but those are relatively straightforward extensions). That is, when it appears that the program that G is running directly has a local D=0, the implementation processor should compensate by absorbing the explicit state of the reflective processor program it was previously running directly, and proceed to take direct responsibility for running the computation formerly two levels down. This ensures maximum utilization of G’s capability to run arbitrary D=1 computations directly. A real implementation will be called optimal if it never processes a kernel expression indirectly.
There are two subtleties here. First, it is not necessarily reasonable to expect that every reflective processor program will permit this determination of local boredom. Once the user has been able to run code at a meta level, there is no telling what might have been done there: in particular, a time bomb may have been planted that will detonate at some later point in time. The local notions of boredom just cited must be able to say that a portion of a program is boring, even if some of its embedding context is not. Then the implementation can depend on the fact that it is safe to turn its back on an arbitrary number of boring levels of processing, as long as it can turn around and shift back up the moment any of them becomes interesting again. In general, it would seem to be very difficult to determine whether it is safe to shift down. On the other hand, as the 3-LISP example will show in some detail, there are some eminently reasonable assumptions and techniques that enable optimality at least to be approached.
Second, we said above that, when shifting down, the implementation should absorb the explicit state of the reflective processor program it was previously running directly; just what it is to absorb this state, in such a way that it can later be rendered explicit should the need arise, requires some care, as the discussion of 3-LISP will show.
In sum, a correct implementation is one that engenders the same computation as that specified by the limit, as n→∞, of a tower of n reflective processor levels. The base case requires an independent specification of the capabilities of an implementation processor capable only of running D=1 programs. The induction step shows that adding an extra level of processing engenders exactly the same computation while increasing by one the maximum degree of introspection that can be handled. In order to produce a level-shifting implementation we also need computationally effective rules for determining when and how to shift up and back down.
3. 3-LISP is a Procedurally Reflective Dialect of LISP (or SCHEME)
In order to make all of this a little more specific, we need a specific reflective language. 3-LISP [Smith 82a] is a reduction-based, higher-order, lexically scoped dialect of LISP whose closest ancestor is SCHEME. Although there are many minor differences between the languages, readers familiar with SCHEME should have little difficulty understanding 3-LISP programs. The reader is referred to [Smith 84a] for an introduction to both the language and the intuitions that guided its development. The 3-LISP reflective processor program, shown below, is continuation-passing, very much in the style of the meta-circular interpreters discussed in ‘‘the Lambda papers’’ [Sussman and Steele 1975, 76a, 76b, 77a, 77b, 78a, 78b, 80]:
(define READ-NORMALISE-PRINT
  (lambda simple [level env]
    (normalise (prompt&read level) env
      (lambda simple [result]                          ; REPLY continuation
        (block (prompt&reply level result)
               (read-normalise-print level env))))))

(define NORMALISE
  (lambda simple [exp env cont]
    (cond [(normal exp) (cont exp)]
          [(atom exp) (cont (binding exp env))]
          [(rail exp) (normalise-rail exp env cont)]
          [(pair exp) (reduce (car exp) (cdr exp) env cont)])))

(define REDUCE
  (lambda simple [proc args env cont]
    (normalise proc env
      (lambda simple [proc!]                           ; PROC continuation
        (if (reflective proc!)
            ((de-reflect proc!) args env cont)
            (normalise args env
              (lambda simple [args!]                   ; ARGS continuation
                (if (primitive proc!)
                    (cont ↑(↓proc! . ↓args!))
                    (normalise (body proc!)
                               (bind (pattern proc!) args! (environment proc!))
                               cont)))))))))

(define NORMALISE-RAIL
  (lambda simple [rail env cont]
    (if (empty rail)
        (cont (rcons))
        (normalise (1st rail) env
          (lambda simple [first!]                      ; FIRST continuation
            (normalise-rail (rest rail) env
              (lambda simple [rest!]                   ; REST continuation
                (cont (prep first! rest!)))))))))

(define LAMBDA
  (lambda reflect [[kind pattern body] env cont]
    (cont (ccons kind ↑env pattern body))))

(define IF
  (lambda reflect [[premise c1 c2] env cont]
    (normalise premise env
      (lambda simple [premise!]                        ; IF continuation
        (normalise (ef ↓premise! c1 c2) env cont)))))
3-LISP is based on a notion of expression reduction, rather than evaluation: the processor (NORMALISE, in place of the more standard EVAL) returns a co-designating normal-form expression for each expression it is given; see [Smith 84a]. We write X ⇒ Y to mean that X normalises to Y.
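For concreteness, a few normalisations in this notation (our examples, consistent with the processor code above):

(+ 2 2)            ⇒ 4
[(+ 1 1) (+ 2 2)]  ⇒ [2 4]       ; rails normalise element-wise (NORMALISE-RAIL)
’(+ 2 2)           ⇒ ’(+ 2 2)    ; handles are already in normal form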
All the procedures used in the processor code above, other than those explicitly defined, are straightforward, side-effect-free data-manipulation functions. None has any special control responsibilities (except COND, DEFINE, and BLOCK, whose definitions have been omitted only to shorten the presentation). PROMPT&READ and PROMPT&REPLY perform input and output, respectively, but are otherwise innocuous; ↑ and ↓ mediate between a sign and what is signified (examples: ↑(+ 2 2) ⇒ ’4, ↑↑(+ 2 2) ⇒ ’’4, ↓’’4 ⇒ ’4, ↓’’(+ 2 2) ⇒ ’(+ 2 2), but ↓’(+ 2 2) is an error). There are no hidden procedures; user programs may use CCONS (the closure constructor), BODY, NORMALISE, etc., even ↑ and ↓, with impunity. In fact, by defining his own reflective procedures (using LAMBDA REFLECT), the user may augment this processor; these reflective procedures are handled by the ((de-reflect proc!) args env cont) line of REDUCE. When the level 1 processor encounters (foo e1 ... en) in the program it is running, the reflective procedure associated with the name foo is called at the same level as the processor with exactly three arguments: a designator of the unnormalised argument structure (from the level 0 redex) ’[e1 ... en], the variable binding environment, and the continuation. In this way, the user’s program may gain access to all of the state information maintained by the processor that is running his program. From this unique vantage point, it is easy to realize new control constructs, such as CATCH and THROW, or to implement a resident debugger. The infinite tower appears to the user exactly as if the system had been initialized in the following manner:
.
.
.
4> (read-normalise-print 3 global)
3> (read-normalise-print 2 global)
2> (read-normalise-print 1 global)
1>
The user can verify this by defining a QUIT procedure that returns a result instead of calling the continuation, thereby causing one level of processing to cease to exist:
1> (define QUIT (lambda reflect [args env cont] ’DONE))
1= ’QUIT
1> (quit)                              ; QUIT is run as part of the level 1
2= ’DONE                               ; processor, which it kills.
2> (+ 2 (quit))                        ; This time QUIT terminates the level 2
3= ’DONE                               ; processor.
3> (read-normalise-print 1 global)     ; But levels can be re-created at will.
1> (read-normalise-print 2001 global)  ; Level numbers are arbitrary.
2001> (quit)
1= ’DONE
1> (quit)
3= ’DONE
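As a further illustration (our example, not part of the standard system), a one-armed conditional can be defined by mimicking the processor’s own IF, handing the continuation the boolean $F when the premise fails:

(define WHEN
  (lambda reflect [[premise body] env cont]
    (normalise premise env
      (lambda simple [premise!]
        ;; like IF, but with the missing arm defaulting to $F:
        (normalise (ef ↓premise! body ’$F) env cont)))))

Thus (when (= 1 1) (+ 2 2)) ⇒ 4, while (when (= 1 2) (+ 2 2)) ⇒ $F. WHEN runs as part of the level 1 processor, yet is defined entirely in user code.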
To some extent, a meta-circular interpreter or a reflective processor program can be viewed as a theory of a language (or at least of how it is processed) that is written within that language. As such, it "explains" various things about how the language is processed, but depending on the theory, it can account for more or less of what is the case. In particular, it is important to realize what the 3-LISP reflective processor program does and does not explain. This reflective processor was designed to be similar to standard Scott-Strachey continuation-based semantic accounts of λ-calculus-based languages (e.g., [Stoy77, Muchnick80]). Its primary purpose is to explain the variable binding mechanisms and the flow of control in the course of error-free computations. This account intentionally does not say anything about how errors are processed, nor does it shed any light on how data structures are implemented, nor on how I/O is carried out. These details are buried in the primitive procedures, and the reflective processor carefully avoids accounting for what they actually do. A different theory that did explain these aspects of the language could be written, yielding a different reflective processor program and a different reflective dialect, all of which would require a different implementation. But the basic architecture and strategies we employ would generalize to such other circumstances.
4. 3-LISP has an Efficient Level-Shifting Implementation Processor
In order to understand how to build a level-shifting implementation of 3-LISP, it is necessary to study the reflective processor program in some depth. We begin by observing that the state explicitly maintained at each level of processing by the reflective processor consists of the expressions, environments, and continuations that are passed as arguments among what we will call the primary processor procedures: NORMALISE, REDUCE, NORMALISE-RAIL, LAMBDA, IF, READ-NORMALISE-PRINT, and their embedded continuations (which we call the PROC, ARGS, FIRST, REST, IF, and REPLY continuations). Not captured at any particular level are the global state of the I/O streams and the structural field itself; however, the reflective processor does not use side-effecting primitives to remember state information except when the program that it is running forces it to. {Note No RPLACA}
As defined earlier, the kernel consists of those parts of the reflective processor program that are used in the course of processing the reflective processor program one level below. For 3-LISP, this includes the closures and all of the code for the primary processor procedures, utilities like BINDING, BIND, and NORMAL, and primitives like CAR, CDR, ↑, ↓, and READ. Hence, the D≤1 implementation processor need only be capable of processing a subset dialect of LISP reminiscent of SCHEME; the construction of such an implementation processor is, therefore, a solved problem. What is important is to determine the criteria by which the implementation processor will decide that it is necessary to shift up, the criteria by which it will decide that it is safe to shift down, and the mechanisms for achieving these transitions.
4.1 When and How to Shift Up
Shifting up will have to occur when control would otherwise leave the implementation code corresponding to the kernel. This can happen only at a handful of places in the reflective processor program: at one of the continuation calls, (cont ...), and where a reflective procedure is called, ((de-reflect proc!) ...). The real question is: where in the implementation processor should the shift up take us? It is all very well to note where one needs to leave the level below and shift up; it is much less clear where, in the level above, we should arrive. For example, if we assume that exp and cont normalise to exp and the simple closure cont, respectively, the implementation processor could shift from ‘‘doing’’ (cont exp) to doing any of
(normalise ’(cont exp) e? c?)
(reduce ’cont ’[exp] e? c?)
(reduce cont ’[exp] e? c?)
(reduce cont ↑[exp] e? c?)
In fact, the natural seam at which to dissect the computation occurs in the ARGS continuation, at the instant NORMALISE is about to be called on the body of the (simple) closure:

(normalise (body ↑cont)
           (bind (pattern ↑cont) ↑[exp] (environment ↑cont))
           c?)
Since this is the place where there is a discontinuity in the environment, it is not necessary to go to any extra work constructing one; only a continuation needs to be ‘‘pulled out of the hat’’. What should this continuation be?
There are two critical things to realise about the reflective processor program. First, it implements a ‘‘tail-recursive’’ dialect of LISP (e.g., SCHEME); i.e., it is not procedure calls per se that cause the processor to accumulate state, but only embedded procedure calls. For example, with respect to a call to the procedure given by (lambda simple [x] (f (g x))), the call (g x) is embedded in the first argument position of (f (g x)), and therefore requires the processor to save state until (g x) returns, just as in a conventional implementation of procedure calls; on the other hand, the call to f is not embedded with respect to the initial call (rather, it substitutes for it), and can be implemented much like a GOTO statement, except that arguments must be passed as well. That 3-LISP has a tail-recursive processor can be seen by inspecting the reflective processor program and observing that a) the number of bindings in an environment is a (more-or-less) linear function of the static nesting depth of programs, and b) when a call to a simple procedure is reduced, the continuation in effect upon entry to REDUCE is the one passed to NORMALISE for the body of the called procedure’s closure. The key implication is that when one procedure calls another from a non-embedded context, the continuation carried by the processor upon entry to the called procedure is the same as it was upon entry to the calling procedure. The second crucial realization is that the primary processor procedures always call one another in non-embedded ways. Together with the first observation, this implies the following property of the reflective processor processing the reflective processor program itself: the continuation carried by the processor upon entry to any primary processor procedure is always the same. This assertion can be phrased more precisely: the (level 2) reflective processor processing the (level 1) reflective processor program processing a (level 0) D≤1 structure always carries the same level 2 continuation on every trip through level 2 REDUCE when the level 2 PROC is bound to ’NORMALISE. In other words, if one were to ‘‘watch’’ the level 2 state upon entry to REDUCE, one would find that CONT was always bound to the same closure whenever PROC was bound to the atom ’NORMALISE (or ’REDUCE, or ’CONT, etc.).
Since the points where the up-shift is to take place logically correspond to non-embedded calls within the reflective processor program (specifically, to either a (cont ...) or a ((de-reflect proc!) ...) call), the continuation that must be reified is not a function of the current level of processing. Instead, it is a function only of the original continuation at the next higher level, assuming that user-defined code has never been run at that level before.
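Concretely, absent prior user code at a level, the only continuation that can be in force there is the original REPLY continuation born in READ-NORMALISE-PRINT at system genesis. A sketch of how the implementation might manufacture it (ours; the closure below is just the REPLY continuation from section 3 with LEVEL and ENV filled in):

(define MAKE-REPLY-CONT
  (lambda simple [level env]
    (lambda simple [result]                  ; the original REPLY continuation
      (block (prompt&reply level result)     ; of the given level
             (read-normalise-print level env)))))

When G first shifts up from level n, the level n+1 continuation it must supply is then exactly (make-reply-cont (+ n 1) global).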
4.2 When and How to Shift Down
Deciding when to shift down is similarly straightforward. The implementation processor should shift down whenever it is processing kernel code instead of user code. In practice, it is not necessary to shift down as soon as possible (i.e., full optimality need not be achieved); it suffices to recognize only the situation where the implementation processor is processing a call to NORMALISE, since all paths through the reflective processor program will pass through this procedure. The situation can be detected in the code corresponding to the ARGS continuation (i.e., is PROC! bound to the closure for NORMALISE?). It is also essential that the arguments passed to NORMALISE be scrutinized, to ensure that they are ‘‘reasonable’’ (of proper type and so forth). If they are, the implementation processor can make the transition from
(normalise (body ↑normalise)
(bind (pattern ↑normalise) args! (environment ↑normalise))
cont)
to
(normalise (1st args!) (2nd args!) (3rd args!))
The continuation in effect prior to shifting down must be recorded in the absorbed state. It will usually be the same REPLY continuation each time: the original one for that level of processing, born within the call to READ-NORMALISE-PRINT that created that level at system genesis. However, since it is possible for the user to call NORMALISE from an embedded context, it is essential to save the continuation each time a downward shift occurs, so that it may be brought back into play the next time the processor shifts up to this level. What allows a non-kernel continuation to be absorbed? The answer is simply that the continuation cannot come into play until the computation at the lower level returns a result. Since each primary processor procedure ends in a tail-recursive call, this chain can break down only if some non-primary processor procedure is called that returns a result instead of calling the continuation passed to it. But it is precisely these calls that always cause a shift up; hence, the implementation processor will automatically find its way back to the appropriate level whenever a non-kernel continuation would be called at a higher level.
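The bookkeeping can be pictured as follows (our sketch, in 3-LISP-style pseudocode; the ABSORBED stack, PUSH, REASONABLE-ARGS?, and CONTINUE-AS-USUAL stand in for G’s internal mechanisms and are not part of 3-LISP):

;; Inside G’s code for the ARGS continuation:
(if (and (= proc! ↑normalise)              ; a call to the processor itself?
         (reasonable-args? args!))         ; are the arguments ‘‘reasonable’’?
    (block (push [level cont] absorbed)    ; record this level’s continuation
           ;; ...then run the object-level computation directly:
           (normalise (1st args!) (2nd args!) (3rd args!)))
    (continue-as-usual proc! args! cont))  ; otherwise, the normal ARGS path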
Notes
{Note Hardware}. In a finite tower, there is one level which is run ‘‘by the hardware’’, at which point there is no further program, and therefore no question of who runs it. See [Smith 82b].
{Note Kernel}. There are three classes of expressions that one might think of as the relevant base for the induction: those that are primitive, those that are simple (i.e., that do not involve reflection), and those that are kernel. In 3-LISP the three classes are distinct. It is the kernel ones that are key to a correct implementation.
{Note Restartable computations}. The re-startability of a computation does not imply that external world side effects (e.g., I/O) are out of the question for a procedurally reflective system. All that would be required is for all interactions with the external world to be remembered by G. Since the restarted computation will merely retrace the steps up to the point that G detected the problem, the computation up to that point could be replayed without having to interact with the external world. (Moreover, the performance degradation would be likely to go unnoticed.)
{Note No RPLACA}. Although 3-LISP has primitive procedures that ‘‘smash’’ structures, in this paper we will pretend that there aren’t any. Without this simplifying assumption, bothersome technicalities would tend to obscure the otherwise straightforward solution. The interested reader is referred to the Interim 3-LISP Reference Manual [Smith & des Rivières 84], which contains a correct implementation for the unabridged language.
{Note Shift-points}. We are assuming (not unreasonably) that the point at which it is determined that D>1 is a point at which all upper levels would have been boring so far, even if they had been run explicitly. A more formal treatment would make this explicit.
References
Allen, J. Anatomy of LISP. New York: McGraw-Hill (1978).
Henderson, P. Functional Programming: Application and Implementation. Englewood Cliffs, N.J.: Prentice-Hall (1980).
McCarthy, J., et al. LISP 1.5 Programmer’s Manual. Cambridge, Mass.: The MIT Press (1965).
Muchnick, S., and Pleban, U. ‘‘A Semantic Comparison of LISP and SCHEME’’, 1980 LISP Conference, Stanford, pp. 56-64 (1980).
Smith, B. Reflection and Semantics in a Procedural Language, M.I.T. Laboratory for Computer Science Report MIT-TR-272 (1982a).
Smith, B. ‘‘The Computational Metaphor’’, presented at a Conference on Real-Time Processing of ..., (1982b), available from the author.
Smith, B. ‘‘Reflection and Semantics in Lisp’’, 1984 ACM POPL Conference, Salt Lake City, Utah, pp. 23-35 (January 1984a).
Smith, B., and des Rivières, J. ‘‘Interim 3-LISP Reference Manual’’, Xerox PARC ISL Report, Palo Alto (1984, forthcoming).
Stoy, J. E. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory, Cambridge: MIT Press (1977).
Sussman, G., and Steele, G. ‘‘SCHEME: An Interpreter for Extended Lambda Calculus’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-349 (1975).
Sussman, G., Holloway, J., Steele, G., and Bell, A. ‘‘SCHEME-79 LISP on a Chip’’, IEEE Computer, pp. 10-21 (July 1981).
Steele, G., and Sussman, G. ‘‘LAMBDA: The Ultimate Imperative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-353 (1976a).
Steele, G. ‘‘LAMBDA: The Ultimate Declarative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-379 (1976b).
Steele, G. ‘‘RABBIT: A Compiler for SCHEME (A Study in Compiler Optimization)’’, M.I.T. Artificial Intelligence Laboratory Technical Report AI-TR-474 (1977a).
Steele, G. ‘‘Debunking the ‘Expensive Procedure Call’ Myth’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-443 (1977b).
Steele, G., and Sussman, G. ‘‘The Revised Report on SCHEME, A Dialect of LISP’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-452 (1978a).
Steele, G., and Sussman, G. ‘‘The Art of the Interpreter, or, The Modularity Complex (Parts Zero, One, and Two)’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-453 (1978b).
Steele, G., and Sussman, G. ‘‘Design of a LISP-Based Microprocessor’’, CACM 23, 11, pp. 628-645 (November 1980).




.
.
.
——————————————————————————————
Reflective processor program running at level 3.
——————————————————————————————
Reflective processor program running at level 2.
——————————————————————————————
Reflective processor program running at level 1.
——————————————————————————————
User program running at level 0.
——————————————————————————————
FIGURE 1. The numbering of processing levels in a reflective tower.




——————————————————————————————
Implementation processor G running at level n.
——————————————————————————————
Reflective processor program with D=1 running at level n-1.
——————————————————————————————
.
.
.
——————————————————————————————
Reflective processor program with D=n-3 running at level 3.
——————————————————————————————
Reflective processor program with D=n-2 running at level 2.
——————————————————————————————
Reflective processor program with D=n-1 running at level 1.
——————————————————————————————
User program with D=n running at level 0.
——————————————————————————————
FIGURE 2. How to run a D=n program with a black-box processor that can only handle D=1 programs.