—————————————————————————————————————————————————
Summary for 1984 POPL Conference
—————————————————————————————————————————————————
The Implementation of Procedurally Reflective Languages
Jim des Rivières
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
(415) 494-4384
—————————————————————————————————————————————————
Filed: [phylum]<3-LISP>implementing-reflection.bravo
Last Edited: August 19, 1983 3:58 AM
—————————————————————————————————————————————————
Abstract
[To be supplied.]
1. Introduction
Reflection, in the philosophical sense, is defined as ‘‘the mode, operation, or faculty by which the mind has knowledge of itself and its operations, or by which it deals with the ideas received from sensation and perception.’’ [OED]. A procedurally reflective programming language [Smith82a] allows a running program to deal not only with a given subject domain but also with how that program itself is being run. This ability to introspect necessitates the existence of some theory of how programs in the language are run. A procedural theory of how a programming language is processed is called a processor (or, more commonly, an interpreter). A processor is a program, expressed in some meta language, that traffics in data structures representing fragments of the object language program being processed, along with other structures encoding the implicit state of the processing, such as environments and continuations. It is possible to formulate theories (processors) that are user-extensible, so that a program can not only gain access to data structures that encode its normally implicit state, but also use these structures to change the course of the program’s computation in ways not necessarily even expressible at the object level.
The question that immediately arises is: what should the meta language look like? Although it is conceivable to use a meta language different from the object level one, this is arguably uninteresting because it would not be possible to reflect on the processing of the meta level programs (unless, of course, there were a meta meta language, ...). Once it has been agreed that the object language and the meta language should be one and the same, it necessarily follows that the processor itself can be thought of as ‘‘just another program’’, and hence that above each processor level there is another processor that is ‘‘meta’’ to it. Although at each meta level the same program is being run (the code for the processor), each level has its own local state.
Two problems are addressed in this paper. First, how is it possible to implement a procedurally reflective language, given the need for an infinite tower of processing levels? Much as the perceived threat of infinite regress posed by recursion can be discharged, so can the infinite tower of processors. Second, under what circumstances might efficient implementations be feasible? Different theories (processors) will admit different implementations. One class of processors, the ‘‘tail-recursive’’ ones, admits a particularly efficient implementation strategy.
The goal throughout the paper will be to expose those key ideas that underlie all procedurally reflective architectures. However, since dealing with any new concept purely in the abstract is fraught with problems, the discussion will be grounded by an extended example: to wit, the procedurally reflective dialect of LISP called 3-LISP [Smith82a, Smith&desRivières83, Smith84].
The following section contains a description of 3-LISP and introduces its form of procedural reflection. Section 3 looks at the infinite tower of processing, shows how it can be tamed, and outlines the properties that a processor should have if there is to be an efficient implementation. 3-LISP will again be the focus in section 4 where just such an efficient implementation will be shown to exist. Future research paths will be sketched in the final section.
2. 3-LISP is a Procedurally Reflective Dialect of LISP (or SCHEME)
[This section is intended to introduce the reader to the 3-LISP language and the form of procedural reflection that it supports.]
3-LISP is a variant of LISP [McCarthy65] and SCHEME [Sussman&Steele78]. It is a higher-order, statically-scoped, meta-structural, λ-calculus-based language with an applicative order, normalization-based, tail-recursive processor {Note 1}.
The two most significant ways in which 3-LISP differs from its ancestors are as follows {Note 2}. First, the notion of evaluation is rejected in favour of a rationalised semantics based on the orthogonal notions of reference (what an expression designates, stands for, refers to) and simplification (how an expression is handled by the 3-LISP processor; what is returned). Second, the language is procedurally reflective: it has facilities whereby a program may cause a user-defined procedure to be run at the level of the program’s processor.
All 3-LISP expressions are taken as designating something. For example, 1, 100, and -6 always designate the abstract numbers one, one hundred, and negative six, respectively; $T always designates truth and $F always designates falsity; [] always designates the empty sequence, and [1 2 3] always designates the sequence whose elements are the numbers one, two, and three, in that order. More generally, expressions of the form [x1 x2 ... xn] designate the abstract sequence of length n consisting of the objects designated by the xi in the specified order. Expressions of the form (f . a) designate the result of applying the function designated by f to the argument object designated by a. The common case of applying a function to a sequence of n (>0) arguments, (f . [x1 x2 ... xn]), is abbreviated (f x1 x2 ... xn). Variables, such as Knights, λ, and Calculus (case is unimportant), designate different objects in different contexts. The most common context is the global one, which associates names with all of the standard functions. For example, with respect to the global context, + designates the addition function.
There is a wide assortment of standard functions defined over numbers, truth-values, and sequences. For numbers, +, -, and * designate the conventional arithmetic functions. The standard sequence operations are named EMPTY, 1ST, REST, and PREP (corresponding to LISP 1.5’s NULL, CAR, CDR, and CONS, respectively). (1ST (PREP X S)) and X are co-designating expressions, as are (REST (PREP X S)) and S; (EMPTY S) designates true iff S designates an empty sequence.
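The laws relating these sequence operations can be modeled concretely. The following Python sketch (illustrative only; names like prep and first are stand-ins for the 3-LISP operations, and Python tuples stand in for sequences) checks the co-designation laws just quoted:

```python
# A toy Python model of 3-LISP's standard sequence operations,
# using tuples for abstract sequences. Purely illustrative.

def prep(x, s):          # PREP, cf. LISP 1.5's CONS
    return (x,) + s

def first(s):            # 1ST, cf. CAR
    return s[0]

def rest(s):             # REST, cf. CDR
    return s[1:]

def empty(s):            # EMPTY, cf. NULL
    return s == ()

# The co-designation laws from the text:
assert first(prep(1, (2, 3))) == 1       # (1ST (PREP X S)) co-designates X
assert rest(prep(1, (2, 3))) == (2, 3)   # (REST (PREP X S)) co-designates S
assert empty(()) and not empty((1,))     # (EMPTY S) true iff S is empty
```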
There are several control operators (such as λ, IF, and COND) that do not fit the standard mold, in that they cannot be characterized solely in terms of what their arguments designate. λ-expressions are the most common way to designate other functions. For example, (λ [X] X) designates the identity function (of one argument); (λ [X] (1ST (REST X))) designates the function that selects the second element from the sequence designated by parameter X. Uses of free variables are always satisfied in the lexically enclosing environment. (Unless otherwise indicated, all uses of standard names will assume their global meaning.) Both IF and EF designate functions that select either their second or third argument depending on whether their first argument is true or false, respectively; they differ only in that not all of IF’s arguments are processed.
Here is an example of a global definition for the function that appends two sequences.
(define APPEND
  (λ [s1 s2]
    (if (empty s1)
        s2
        (prep (1st s1) (append (rest s1) s2)))))
In addition to the preceding informal description of what 3-LISP expressions designate, it is necessary to explain what a 3-LISP processor does with expressions. The answer: it simplifies them, subject to the constraints that both the ‘‘before’’ and ‘‘after’’ expressions designate the same object, and that the ‘‘after’’ expression be in the simplest form possible. This process is also called normalisation, and the simplest resultant expression is called a normal form expression. In 3-LISP, a normal form expression is one containing no variables or function applications. The symbol ⇒ should be understood as the binary relationship ‘‘normalises to’’. Examples: (+ 2 2) ⇒ 4; [1 (* 5 5)] ⇒ [1 25]; (REST (REST [1 2 3])) ⇒ [3].
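The ⇒ relation on this fragment can be sketched as a recursive simplifier. The Python below is a hypothetical model (integers for numerals, Python lists for rails, tuples for applications); real 3-LISP normalisation is defined over its own internal structures, but the examples above fall out directly:

```python
# A sketch of the normalisation relation ⇒ on a tiny fragment:
# numerals (ints), rails (lists), and applications of primitive
# functions (tuples). Illustrative model only, not 3-LISP itself.

PRIMITIVES = {'+': lambda a: a[0] + a[1],
              '*': lambda a: a[0] * a[1],
              'rest': lambda a: a[0][1:]}

def normalise(exp):
    if isinstance(exp, int):            # numerals are already in normal form
        return exp
    if isinstance(exp, list):           # a rail normalises element-wise
        return [normalise(x) for x in exp]
    f, *args = exp                      # an application (f . args)
    return PRIMITIVES[f]([normalise(a) for a in args])

assert normalise(('+', 2, 2)) == 4                      # (+ 2 2) ⇒ 4
assert normalise([1, ('*', 5, 5)]) == [1, 25]           # [1 (* 5 5)] ⇒ [1 25]
assert normalise(('rest', ('rest', [1, 2, 3]))) == [3]  # ⇒ [3]
```

Note how the result designates the same object as the input, only in simplest form: the constraint that defines normalisation.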
The 3-LISP base language sketched above would be an adequate formalism for expressing programs that deal with numbers, truth values, functions, and sequences; call the union of these abstract domains D. (More precisely, the functions are mappings from D to D, and the sequences contain only elements of D.) However, to formulate a theory of how the 3-LISP processor works it will be necessary to mention 3-LISP expressions, and this will require that the language be augmented with expressions that are taken as designating other 3-LISP expressions.
The expression ’x is used to designate the expression x. For each different form of x, there is a specially-named collection of designators. For example, ’100 designates the numeral 100; ’$T designates the boolean $T; ’[1 2] designates the rail [1 2]; ’FOO designates the atom FOO; ’(X . Y) designates the pair (X . Y). There are also normal form function designators, which have no adequate printed representation in 3-LISP; however, if {+closure} were agreed upon as designating the addition function in all contexts, then the expression ’{+closure} would designate the closure {+closure}. The expressions ’’FOO, ’’[1], and ’’’’$F designate the handles ’FOO, ’[1], and ’’’$F, respectively. Note that numerals, booleans, rails, atoms, pairs, closures, and handles are all considered to be structures that live inside the computer, whereas numbers, truth values, sequences, and functions do not.
The standard functions NUMERAL, BOOLEAN, RAIL, ATOM, PAIR, CLOSURE, and HANDLE are characteristic functions for the seven kinds of internal structures. The standard operations on sequences apply equally well to rails. For example, (1ST ’[A B]) designates the atom A; i.e., (1ST ’[A B]) ⇒ ’A. The additional standard operation RCONS can be used to construct new rails: (RCONS) designates the empty rail []. The standard operations on pairs are named PCONS, CAR, and CDR; (PCONS ’A ’B) designates the pair (A . B); (CAR ’(A . B)) designates the atom A; and (CDR ’(A . B)) designates the atom B {Note 3}. The standard operations on closures are named CCONS, ENVIRONMENT, REFLECTIVE, BODY, and PATTERN.
There are a few more relevant standard functions defined over internal structures. (= X Y) designates true iff X and Y designate the same internal structure. (UP X), abbreviated ↑X, designates a normal form designator of what X designates; e.g., ↑(+ 5 5) designates the numeral 10. Its converse, (DOWN X), abbreviated ↓X, designates the object designated by the normal form structure designated by X; e.g., ↓’10 designates the number ten. (From a more operational standpoint, ↑10 ⇒ ’10, ↓’10 ⇒ 10. Just remember that ↑ adds a quote mark and ↓ removes one.) For any expression x, ↓↑x is co-designative with x.
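The ‘‘adds a quote mark / removes one’’ reading of ↑ and ↓ can be modeled in a few lines. In this hypothetical Python sketch, a Quote wrapper stands in for a handle; up and down are illustrative names, not 3-LISP's actual machinery:

```python
# A toy model of UP (↑) and DOWN (↓): handles are Quote wrappers,
# ↑ wraps (adds a quote mark), ↓ unwraps (removes one). Illustrative.

class Quote:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return isinstance(other, Quote) and self.x == other.x

def up(x):        # ↑x : a designator of x's normal form
    return Quote(x)

def down(h):      # ↓h : the object the handle designates
    assert isinstance(h, Quote), "DOWN applies only to handles"
    return h.x

assert up(10) == Quote(10)      # ↑10 ⇒ ’10
assert down(Quote(10)) == 10    # ↓’10 ⇒ 10
assert down(up(42)) == 42       # ↓↑x co-designates with x
```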
Having expressions that designate other 3-LISP expressions, it is now possible to specify exactly what properties we want of any theory of the 3-LISP processor formulated in terms of some other 3-LISP expression.
Definition of a meta-circular processor program for language L: an L-expression that designates the function computed by the processor for language L. That is, if for all L-expressions x and y, x ⇒ y implies (Y ’x) ⇒ ’y, then the L-expression Y is a meta-circular processor expression (program) for L.
For any given language there may be several flavours of meta-circular processor programs, each making explicit some details of how the language is processed while absorbing others. Example: the standard LISP 1.5 meta-circular processor [McCarthy65] makes explicit the management of variable bindings (environments) without making explicit any control information: procedure calls/returns in the object language are mirrored by procedure calls/returns in the meta-circular processor program. The 3-LISP meta-circular processor program goes a step further and explicitly records control information in the form of continuations, which are functions from intermediate results to final results [Stoy77, Sussman&Steele76b]. Still, there are aspects of the 3-LISP processor that are not exposed, namely error detection and handling, I/O, the structural field (= the collection of internal structures), and garbage collection.
The 3-LISP meta-circular processor program is shown in Figure 1. NORMALISE is the principal procedure (cf. LISP 1.5’s EVAL). Its three arguments are: exp, which designates the expression to be normalised; env, which designates the environment, a sequence of atom/binding pairings (i.e., an a-list); and cont, which designates the continuation, a function from normal form structures to final results. A typical use of NORMALISE would be (NORMALISE ’(+ 2 2) GLOBAL ID), which designates the object that results from applying the function designated by NORMALISE to the pair (+ 2 2) in the standard global environment, and with the identity function as a continuation. If NORMALISE does indeed designate the function computed by the 3-LISP processor, we would expect this expression to designate the numeral 4. NORMALISE basically does a type dispatch on the expression under consideration. All normal form expressions (handles, numerals, booleans, closures, and rails thereof) need no further processing. They are passed to the continuation (i.e., returned) unchanged. With the help of BINDING, atoms are looked up in the current environment, with the normal form co-designative expression associated with the atom being passed along. The remaining cases, (non-normal form) rails and all pairs, are delegated to NORMALISE-RAIL and REDUCE, respectively. NORMALISE-RAIL (cf. LISP 1.5’s EVLIS) normalises each component of the rail, working from left to right. The results of these normalisations are collected into a (normal form) rail which is eventually passed to the continuation.
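The continuation-passing shape of NORMALISE and NORMALISE-RAIL can be sketched outside 3-LISP. The Python below is a minimal illustrative model (integers for numerals, strings for atoms, lists for rails, tuples for pairs, and a reduce that handles only primitive applications), not the actual Figure 1 code:

```python
# A CPS sketch of NORMALISE / NORMALISE-RAIL / REDUCE over a toy
# fragment. exp is the expression, env a dict of bindings, cont a
# Python function standing in for the continuation. Illustrative only.

def normalise(exp, env, cont):
    if isinstance(exp, int):                 # normal form: pass through
        return cont(exp)
    if isinstance(exp, str):                 # atoms: look up the binding
        return cont(env[exp])
    if isinstance(exp, list):                # rails: delegate
        return normalise_rail(exp, env, cont)
    return reduce(exp[0], list(exp[1:]), env, cont)   # pairs

def normalise_rail(rail, env, cont):         # cf. LISP 1.5's EVLIS
    if not rail:
        return cont([])
    return normalise(rail[0], env,
        lambda first: normalise_rail(rail[1:], env,
            lambda rest: cont([first] + rest)))

def reduce(proc, args, env, cont):           # primitive applications only
    return normalise(proc, env,
        lambda proc_: normalise_rail(args, env,
            lambda args_: cont(proc_(args_))))

GLOBAL = {'+': lambda a: a[0] + a[1]}
ID = lambda x: x

assert normalise(('+', 2, 2), GLOBAL, ID) == 4   # cf. (NORMALISE ’(+ 2 2) GLOBAL ID)
```

Note that intermediate results flow only through continuations; nothing is returned ‘‘around’’ them, which is what makes the control state explicit.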
The real work, the handling of function applications, is performed by REDUCE (cf. LISP 1.5’s APPLY, but not SCHEME’s). Function applications are dealt with in one of three ways based on the properties of the closure that is designated by the result of explicitly normalising the function part of the expression:
• Reflective procedures, which take their arguments unprocessed. It is through this mechanism that idiosyncratic control operators such as IF, λ, and COND are handled. Reflective procedures are explained below.
• Primitive procedures (all of which take their arguments processed). The term primitive is used to describe any of the standard functions that cannot be (or is not) described in terms of others. For example, UP and DOWN are primitives, as are +, 1ST, and PCONS. Of the approximately 35 primitive closures recognized by PRIMITIVE, a few are for I/O, but most have to do with the structural field. Note that all of the primitives are well-behaved (partial) functions of their arguments and the state of the structural field; no control information is being swept under the rug. The reader should attempt to convince himself that the cryptic expression ↑(↓proc! . ↓args!) simply converts the object language function application into the corresponding meta level one {Note 4}.
• Non-primitive procedures that take their arguments processed. Most procedures fit into this category; these are ones like APPEND that are DEFINEd with λ-expressions (there are other ways). The code in REDUCE for these highlights the following properties of 3-LISP’s processor: a) the argument expression is always normalised prior to calling the procedure (i.e., an applicative order, or call by value, parameter passing regimen); b) the environment in which the body of the λ-expression is processed is obtained by matching the λ-expression’s parameter pattern against the processed argument expression and adding the resulting variable bindings to the environment captured when the λ-expression was processed (i.e., a static, or lexical, scoping discipline); and c) the continuation in effect when the processing of the body of the λ-expression begins is the one that was handed in when REDUCE was called (i.e., function application is tail recursive; much more on this later).
Instead of placing the special case code for control operators in REDUCE, this work is done in auxiliary functions. For example, the auxiliary functions that handle the normalisation of λ and IF expressions can be expressed {Note 6}:
(define REDUCE-λ
  (λ [[pattern body] env cont]
    (cont (ccons ’simple ↑env pattern body))))
(define REDUCE-IF
  (λ [[premise c1 c2] env cont]
    (normalise premise env
      (λ [premise!]
        (normalise (ef ↓premise! c1 c2) env cont)))))
A reflective procedure is represented by specially tagging the closure of one of these auxiliary functions. It is these reflective closures that are associated with the names of the control constructs in the standard environment. When REDUCE encounters a reflective closure, it strips off the tag (using de-reflect), adjusts it to the correct designational level, and calls it with the as-yet-unprocessed argument structure, the current environment, and the current continuation. Using another control operator, λr, to flag a λ-expression that is to be tagged as reflective, the reader may verify that the following definition accurately describes SCHEME’s CATCH operator:
(define CATCH
  (λr [[escape-name exp] env cont]
    (normalise exp
               (bind escape-name
                     ↑(λr [[throw-exp] env2 junk]
                         (normalise throw-exp env2 cont))
                     env)
               cont)))
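The trick in CATCH is that the escape procedure discards the continuation in force at throw time (the junk parameter) and resumes the continuation captured when CATCH was entered. That idea can be sketched in CPS Python (catch and escape are hypothetical stand-ins, not 3-LISP or SCHEME primitives):

```python
# A CPS sketch of the CATCH idea: the escape procedure ignores the
# continuation handed to it at throw time and invokes the one
# captured at catch time. Illustrative only.

def catch(body, cont):
    # body receives (escape, k); calling escape(v, junk_cont)
    # abandons junk_cont and resumes the outer cont with v.
    def escape(value, junk_cont):
        return cont(value)       # env2 and junk are ignored, as in CATCH
    return body(escape, cont)

# Normal return: body just calls its continuation.
assert catch(lambda esc, k: k(1 + 1), lambda v: v) == 2

# Escaping: the pending (lambda v: v * 100) work is thrown away.
assert catch(lambda esc, k: esc(7, lambda v: v * 100),
             lambda v: v) == 7
```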
As stated earlier, a meta-circular processor program must designate the function computed by the real processor for the language; if it didn’t, it would not deserve the title ‘‘meta-circular’’. In 3-LISP, the claim that the expression (λ [exp] (normalise exp global id)) designates the function computed by the top-level processor for 3-LISP is taken utterly seriously. Your 3-LISP program is not being run directly by the ‘‘real’’ 3-LISP processor; instead, it is being run by the 3-LISP meta-circular processor program described above. And who’s running it, you ask? Oddly enough, it isn’t being run by the ‘‘real’’ 3-LISP processor either. It too is being run by the 3-LISP meta-circular processor program. And so on. Far from mere idle speculation, it is quite simple to prove conclusively that this is indeed exactly what is going on. One may begin by verifying that NORMALISE, REDUCE, et al. are present in the standard global environment, and that the patterns and bodies of the closures bound to these atoms are as advertised. Then simply use λr to define your own reflective procedure and use it in the function part of an expression. Lo and behold! Your reflective procedure will be passed the unprocessed argument structure, environment, and continuation that are being passed around in the meta-circular processor program that is running your expression.
In honour of its truly special status in a procedurally reflective language, the meta-circular processor program that sees service at each of the infinitely many meta levels is called the language’s reflective processor program.
3. Towers of Processing
[This section analyzes the infinite tower and shows how it can be implemented, albeit so inefficiently as to be impractical. The discussion then turns to properties to look for in the reflective processor program that may allow an efficient ‘‘level-shifting’’ strategy to be employed.]
For the sake of clarity, each level of processing will be numbered. Let the level at which the user’s program is being run be 0; let 1 be the level at which the reflective processor program that runs the user’s program is itself run; and so on. Thus level n+1 is ‘‘meta’’ to level n. This arrangement is depicted in Figure 2. Finite heterogeneous towers of processing are commonplace: your BASIC program runs at level 0, being run by the BASIC processor, a machine language program running at level 1, which, in turn, is being run by the emulator, a microcode program running at level 2. {Note 7} In the case of procedurally reflective architectures, the tower is infinite and homogeneous except for level 0. The problem, then, is to show how such an architecture could ever be implemented.
Observe that at each level of processing there are explicit data structures that represent the implicit state of the computation at one level below. In the case of 3-LISP, this state consists of the expressions, environments, and continuations that are passed among the primary processor procedures NORMALISE, REDUCE, and NORMALISE-RAIL.
Also notice that most of the activity at most levels is monotonous: the reflective processor program will spend most of its time normalising the same old expressions, namely the source code for the reflective processor program. Identify as the kernel those expressions of the reflective processor program that are used in the course of processing the reflective processor program one level below. For 3-LISP, this includes the closures and all of the code for NORMALISE, REDUCE, and NORMALISE-RAIL, utilities like BINDING, NORMAL, and REDUCE-λ, and all of the primitives like CAR, CDR, and RCONS.
Call a processing level boring if the only expressions it processes are ones in the kernel. Define the degree of introspection (D) of a program to be the least m such that when the program is run at level 0, all levels numbered higher than m are boring. All programs consisting entirely of kernel expressions have D=0. Normal programs will have D=1, meaning that no out-of-the-ordinary processing is required at level 1 or above; i.e., the processing of the level 0 program will not entail running user code at level 1. D=2 would be assigned to programs that involve running user code at levels 0 and 1, but not at the second meta level. And so on. Just as a correct implementation of recursion is not required to terminate when a procedure recurses indefinitely, it is probably reasonable to insist only that a correct implementation of a procedurally reflective system terminate on computations having a finite degree of introspection.
It is now possible to formulate a general plan for implementing a procedurally reflective system. Suppose that one has an implementation processor G that engenders the behaviour of the processor for the language provided that the program it is given to run has D=1. The existence of G is not necessarily a big ‘‘if’’, since a procedurally reflective language minus the ability for the user to mess with the meta levels is likely to be quite conventional. For example, take away from 3-LISP the ability for the user to define new reflective procedures and one is left with a SCHEME-like language that will succumb to standard implementation techniques [Allen78, Steele77a, Henderson80]. The essential observation is that the overall degree of introspection of a reflective processor program that is running some D=n computation is only n-1. This follows directly from the definition of D. So if, instead of having the user program run directly by G, it is run indirectly by the reflective processor program, which itself is run directly by G, then any D≤2 user program will be processed correctly. In general, any D≤n program can be run correctly by G provided that n-1 levels of genuine reflective processor program are placed in between. This result is depicted in Figure 3.
Since it is unlikely that D will be known beforehand, the insight employed above may be incorporated into a slightly different implementation strategy. Assume that D=1, and give the program to G to run directly. If G detects that the program it is running has D>1, start the whole computation over again, this time running it indirectly via one more level of reflective processor program. Repeat this last step until G does not protest. This procedure is guaranteed to terminate for any computation with a finite degree of introspection. It requires only that G be able to recognize when a computation has D>1, and that the computation be re-startable {Note 8}. Both of these assumptions are reasonable. Although there may be some question as to what G should deem to be beyond its capabilities, there should be little doubt that G would have to make this determination somehow. It can also be argued that D must be an effectively computable property of programs if there is to be any effective implementation of a reflective system.
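The restart loop can be sketched abstractly. In the hypothetical Python below, run_under_g stands in for G (a program is modeled as a (degree, answer) pair, and G ‘‘protests’’ by raising an exception when what it runs directly has D>1); none of these names correspond to anything in 3-LISP:

```python
# A sketch of the restart strategy: assume D=1, run directly under G,
# and on each protest restart with one more explicit level of
# reflective processor program interposed. Hypothetical stand-ins.

class NeedsAnotherLevel(Exception):
    """Raised when G discovers the computation it runs has D > 1."""

def run_under_g(program, explicit_levels):
    # Stand-in for G: each interposed reflective-processor level lowers
    # the degree of what G runs directly by one (the essential
    # observation of the previous paragraphs).
    degree, answer = program
    if degree - explicit_levels > 1:
        raise NeedsAnotherLevel
    return answer

def implement(program):
    levels = 0                        # start by assuming D = 1
    while True:
        try:
            return run_under_g(program, levels)
        except NeedsAnotherLevel:     # restart with one more level
            levels += 1

assert implement((1, 'ok')) == 'ok'   # D=1: runs directly, no restart
assert implement((4, 'ok')) == 'ok'   # D=4: three restarts, then succeeds
```

Termination is immediate for any finite degree of introspection, since each restart lowers by one the degree of what G runs directly.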
What if there were some computationally tractable way of inferring the instantaneous state of the level n+1 reflective processor program given the instantaneous state of the level n one? This suggestion is not as wild as it might first seem: the processing that goes on at adjacent levels is always strongly correlated. Adjacent levels are related by ‘‘meta’’-ness, so it is not as if the different levels had ‘‘minds of their own’’, so to speak. In any event, if it were possible to do this, one could refine the implementation strategy so as not to restart the computation when an impasse was reached, but rather to manufacture the state that would have existed one level up had the implementation been explicitly running up there right from the beginning. In other words, the implementation processor would be able to make an instantaneous shift up to where it would have been had an extra level of explicit reflective processor program been present from square one. Thus a D=n program would be run directly by the implementation processor G until it was discovered that n>1, at which time the internal state of G would be used to create the explicit state that would be passed to the explicit reflective processor program that would take over the running of the user program; after modifying its own internal state to reflect what would have been the state one level up, G would devote its attention to running the reflective processor program. This means that the original program will now be run indirectly, and it will continue to be run that way until such time as it is revealed that n>2, at which time it will start being run doubly-indirectly. And so on. Invariably, each additional level of indirection can only degrade the system’s performance with respect to the user program; this is the familiar ‘‘interpretive overhead’’ (think of the paradox that would result if this were not the case!).
Given that interpretive overhead is typically measured in orders of magnitude, one should probably remain skeptical of the practicality of procedurally reflective systems.
What is really wanted is an implementation that will never run any higher than necessary. Not only should the implementation be able to shift up easily, it should also be able to shift back down whenever it discovers that things have become boring. That is, when it appears that the program that G is running directly has D=0, the implementation processor should compensate by absorbing the explicit state of the reflective processor program it was previously running directly, and proceed to take direct responsibility for running the computation formerly two levels down. This ensures maximum utilization of the capability of G to directly run arbitrary D=1 computations.
Unfortunately, it is probably unreasonable to expect that every reflective processor program will permit this. Once the user has been able to run code at a meta level, there is no telling what might have been done there. A time bomb may have been planted, so to speak, that will detonate at some later point in time. Before the implementation processor absorbs a level and shifts down, it must be sure that it is not unwittingly swallowing a time bomb that could be triggered unexpectedly at some point in the future. The correctness of the implementation depends on its being safe to turn its back on an arbitrary number of boring levels of processing, just so long as the implementation can turn around the moment any of them becomes interesting. In general, it would seem to be very difficult to determine whether it is safe to shift down.
4. 3-LISP has an Efficient Level-Shifting Implementation Processor
[This section will look in detail at how 3-LISP can be implemented with a level-shifting implementation. The code for a new meta-circular processor program for 3-LISP that demonstrates this level-shifting technique will be given and explained.]
5. Conclusions and Future Research
[This section will briefly discuss the current activities of the author and his colleagues and point out some of the interesting outstanding problems having to do with both the theory and the practice of building reflective systems.]
Notes
1. In contrast to 3-LISP, LISP 1.5 is a first-order, dynamically-scoped, meta-structural, λ-calculus-based language with an applicative order, evaluation-based, non-tail-recursive processor. On the other hand, SCHEME is a higher-order, statically-scoped, non-meta-structural, λ-calculus-based language with an applicative order, evaluation-based, tail-recursive processor. The λ-calculus itself is a higher-order, statically-scoped, non-meta-structural, λ-calculus-based language with a normal order, normalization-based processor.
2. These two aspects are independent. A procedurally reflective dialect could be based on a non-rationalised semantics, and a rationalised semantics need not imply reflective capabilities (e.g., 2-LISP [Smith82a]). Smith argues that the principles underlying procedural reflection are most readily understood in the context of a rationalised base. The reason is quite straightforward. Whereas an object level program will use expressions to designate objects in some domain (e.g., the domain of abstract numbers and sequences thereof), the program at the meta level will need a way of mentioning these object level expressions; equivalently, it will have to use expressions that designate fragments of object level source code. The simplest way to avoid use/mention errors is to ensure that the use of a symbol is distinguished from the mention of one, and that all but a select few level-crossing operators are semantically flat. That way the programmer will always have to explicitly indicate whether use or mention is intended, and write extra code whenever the use/mention boundary is to be crossed. According to Smith’s analysis, LISP’s evaluation is not semantically flat, and should therefore be rejected. An alternate viewpoint treats evaluation as semantically flat, with the problem actually being that use/mention distinctions can only be made contextually. Whatever the reasoning, it is fair to say that the procedurally reflective patient is best dissected on a semantically flat table, so to speak.
3. Note that rails and pairs are distinct data types in 3-LISP, in contrast to most other LISPs, which represent lists with nested dotted-pairs (conses).
4. It works as follows. proc! designates a closure; therefore ↓proc! designates the function designated by that closure (i.e., the function of the application). Similarly, args! designates a (normal form) designator of the argument of the application; therefore ↓args! designates the argument of the application. Hence (↓proc! . ↓args!) designates the object level function applied to the argument. Finally, ↑ is then used to bring the result back to its appropriate designational level (recall, normalisation is a semantically flat operation). {Note 5}
5. The interested reader should read the dialogue entitled ‘‘Little Harmonic Labyrinth’’ in GEB [Hofstadter79]. The treatment of primitives in 3-LISP causes a ripple to flow all the way up the tower and back down. For just an instant, the processors at every level will all align in the midst of processing this expression, each of them waiting to pass along the answer from GOD.
‘‘Genie: I wanted not only to make a request of the Meta-Genie, but also of all the Djinns over her. The recursive acronym method accomplishes this quite naturally. You see, when the Meta-Genie received my request, she then had to pass it upward to her GOD. So she forwarded a similar message to the Meta-Meta-Genie, who then did likewise to the Meta-Meta-Meta-Genie ... Ascending the chain this way transmits the message to GOD.
Achilles: I see. You mean GOD sits up at the top of the ladder of djinns?
Genie: No, no, no! There is nothing ‘‘at the top’’, for there is no top. That is why GOD is a recursive acronym. GOD is not some ultimate djinn; GOD is the tower of djinns above any given djinn.’’
6.A white lie. 3-LISP actually has no standard operator named λ. Instead, it has a more general lambda-abstraction operator called lambda, but for the purposes of this paper its additional power is outweighed by its lack of perspicuity.
7.Can one describe what causes that microcode program to run using the vocabulary of this paper, or is there something fundamentally different about the nature of the next level?
8. The re-startability of a computation does not imply that external-world side effects (e.g., I/O) are out of the question for a procedurally reflective system. All that would be required is for all interactions with the external world to be remembered by G. Since the restarted computation will merely retrace the steps up to the point at which G detected the problem, the computation up to that point could be replayed without having to interact with the external world. (Moreover, the performance degradation would likely go unnoticed.)
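The record/replay idea in this note can be sketched in Python (all names ours, purely illustrative): the implementation processor logs each external interaction the first time it happens, and a restarted computation consumes the log instead of re-performing the I/O.

```python
# Illustrative sketch of the note's record/replay scheme: external
# interactions are remembered so a restarted computation can retrace
# its steps without touching the external world again.

class ExternalWorld:
    def __init__(self):
        self.log = []        # interactions remembered by the processor
        self.cursor = 0      # position while retracing

    def interact(self, action):
        """Perform `action` for real, or replay its recorded result if
        this step has already happened on a previous run."""
        if self.cursor < len(self.log):
            result = self.log[self.cursor]   # replay: no real I/O
        else:
            result = action()                # first time: really do it
            self.log.append(result)
        self.cursor += 1
        return result

    def restart(self):
        self.cursor = 0      # retrace the computation from the start

calls = []
def read_sensor():
    calls.append(1)          # stands in for a real external effect
    return 42

world = ExternalWorld()
a = world.interact(read_sensor)    # genuine interaction
world.restart()
b = world.interact(read_sensor)    # replayed: no second real effect
```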
Acknowledgements
[To be supplied.]
References
The Shorter Oxford English Dictionary. 3rd Edition, Oxford University Press, London (1959).
Allen, J. Anatomy of LISP. New York: McGraw-Hill (1978).
Henderson, P. Functional Programming, Application and Implementation. Prentice-Hall, Englewood Cliffs, N.J. (1980).
Hofstadter, D. Gödel, Escher, Bach. Basic Books, N.Y. (1979).
McCarthy, J., et al., LISP 1.5 Programmer’s Manual. Cambridge, Mass.: The MIT Press (1965).
Smith, B. Reflection and Semantics in a Procedural Language, M.I.T. Laboratory for Computer Science Report MIT-TR-272 (1982a).
Smith, B. ‘‘Linguistic and Computational Semantics’’, ACL Conference (1982b).
Smith, B., and des Rivières, J. ‘‘Interim 3-LISP Reference Manual’’, Xerox PARC Report CIS-nn, Palo Alto (1983, forthcoming).
Stoy, J. E., Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory, Cambridge: MIT Press (1977).
Sussman, G., and Steele, G. ‘‘SCHEME: An Interpreter for Extended Lambda Calculus’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-349 (1975).
Sussman, G., Holloway, J., Steele, G., and Bell, A. ‘‘SCHEME-79: LISP on a Chip’’, IEEE Computer, pp. 10-21 (July 1981).
Steele, G., and Sussman, G. ‘‘LAMBDA: The Ultimate Imperative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-353 (1976a).
Steele, G. ‘‘LAMBDA: The Ultimate Declarative’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-379 (1976b).
Steele, G. ‘‘RABBIT: A Compiler for SCHEME (A Study in Compiler Optimization)’’, M.I.T. Artificial Intelligence Laboratory Technical Report AI-TR-474 (1977a).
Steele, G. ‘‘Debunking the ‘Expensive Procedure Call’ Myth’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-443 (1977b).
Steele, G., and Sussman, G. ‘‘The Revised Report on SCHEME, A Dialect of LISP’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-452 (1978a).
Steele, G., and Sussman, G. ‘‘The Art of the Interpreter, or, The Modularity Complex (Parts Zero, One, and Two)’’, M.I.T. Artificial Intelligence Laboratory Memo AIM-453 (1978b).
Steele, G., and Sussman, G., ‘‘Design of a LISP-Based Microprocessor’’, CACM 23, 11, pp. 628-645 (November 1980).


(define NORMALISE
  (λ [exp env cont]
    (cond [(normal exp) (cont exp)]
          [(atom exp) (cont (binding exp env))]
          [(rail exp) (normalise-rail exp env cont)]
          [(pair exp) (reduce (car exp) (cdr exp) env cont)])))

(define REDUCE
  (λ [proc args env cont]
    (normalise proc env
      (λ [proc!]                                  ; Continuation C-PROC!
        (if (reflective proc!)
            (↓(de-reflect proc!) args env cont)
            (normalise args env
              (λ [args!]                          ; Continuation C-ARGS!
                (if (primitive proc!)
                    (cont ↑(↓proc! . ↓args!))
                    (normalise (body proc!)
                               (bind (pattern proc!) args! (environment proc!))
                               cont)))))))))

(define NORMALISE-RAIL
  (λ [rail env cont]
    (if (empty rail)
        (cont (rcons))
        (normalise (1st rail) env
          (λ [first!]                             ; Continuation C-FIRST!
            (normalise-rail (rest rail) env
              (λ [rest!]                          ; Continuation C-REST!
                (cont (prep first! rest!)))))))))
FIGURE 1. The 3-LISP meta-circular (reflective) processor program.
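For readers unfamiliar with continuation-passing LISP, the shape of Figure 1 can be loosely transliterated into Python. This is our own simplification, not the paper’s code: it keeps the NORMALISE / REDUCE / NORMALISE-RAIL structure and the named continuations, but drops reflective procedures and non-primitive closures, and represents atoms as strings, rails as Python lists, and pairs as two-element tuples.

```python
# Loose Python transliteration of the CPS processor of Figure 1
# (simplified: primitives only, no reflective procedures).
import operator

def normalise(exp, env, cont):
    if isinstance(exp, (int, bool)):       # already in normal form
        return cont(exp)
    if isinstance(exp, str):               # an atom: look up its binding
        return cont(env[exp])
    if isinstance(exp, list):              # a rail of expressions
        return normalise_rail(exp, env, cont)
    proc, args = exp                       # a pair: an application
    return reduce(proc, args, env, cont)

def reduce(proc, args, env, cont):
    def c_proc(proc_bang):                 # continuation C-PROC!
        def c_args(args_bang):             # continuation C-ARGS!
            return cont(proc_bang(*args_bang))   # primitive application
        return normalise(args, env, c_args)
    return normalise(proc, env, c_proc)

def normalise_rail(rail, env, cont):
    if not rail:
        return cont([])
    def c_first(first_bang):               # continuation C-FIRST!
        def c_rest(rest_bang):             # continuation C-REST!
            return cont([first_bang] + rest_bang)
        return normalise_rail(rail[1:], env, c_rest)
    return normalise(rail[0], env, c_first)

# (+ 1 (* x 2)) with x bound to 3:
env = {"+": operator.add, "*": operator.mul, "x": 3}
result = normalise(("+", [1, ("*", ["x", 2])]), env, lambda v: v)
```

Note how control never returns up the Python call chain with a value; every result is passed forward to an explicit continuation, exactly as in the 3-LISP original.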




.
.
.
——————————————————————————————
Reflective processor program running at level 3.
——————————————————————————————
Reflective processor program running at level 2.
——————————————————————————————
Reflective processor program running at level 1.
——————————————————————————————
User program running at level 0.
——————————————————————————————
FIGURE 2. The numbering of processing levels in a reflective tower.




——————————————————————————————
Implementation processor G running at level n.
——————————————————————————————
Reflective processor program with D=1 running at level n-1.
——————————————————————————————
.
.
.
——————————————————————————————
Reflective processor program with D=n-3 running at level 3.
——————————————————————————————
Reflective processor program with D=n-2 running at level 2.
——————————————————————————————
Reflective processor program with D=n-1 running at level 1.
——————————————————————————————
User program with D=n running at level 0.
——————————————————————————————
FIGURE 3. How to run a D=n program with a black-box processor that can only handle D=1 programs.
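The arithmetic behind Figure 3 can be rendered schematically in Python (our own model; the degree tags and names are ours): a black-box engine that can only animate degree-1 programs runs a degree-n program by interposing n-1 copies of the reflective processor program, each copy absorbing one degree, so the program nearest the engine always has D=1.

```python
# Schematic rendering of the Figure 3 construction: stack n-1 copies
# of the reflective processor beneath the engine so the topmost
# program presented to the black box always has degree 1.

REFLECTIVE_PROCESSOR = "reflective processor program"

def build_tower(user_degree):
    """Return the stack of programs, topmost (nearest the engine) first."""
    tower = []
    for d in range(1, user_degree):            # degrees 1 .. n-1
        tower.append((REFLECTIVE_PROCESSOR, d))
    tower.append(("user program", user_degree))
    return tower

tower = build_tower(4)
```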
—————————————————————————————————————————————————
[Is this relevant?]
Procedural reflection might be viewed as the ultimate goal of any strictly procedural calculus seeking fulfillment. Procedurally reflective architectures can be seen as uniting several practices, tying them up in a neat little ball.
• Meta-circular processors.
• Explicit access to an EVAL primitive.
• FEXPRs and NLAMBDAs.
• Program-data equivalence.
• Explicit access to control and environment (e.g., spaghetti stacks).
• Amalgamation of object language and meta language.
—————————————————————————————————————————————————
[The distinction between a processor (= surface behaviour) and a program/interior processor pair should be described.]
[Process, processor, and program: terminological clarification needed.]
[What about REPLACE?]
—————————————————————————————————————————————————
[Should we bother to throw in this viciously circular definition?]
(define λr
  (λr [[pattern body] env cont]
    (cont (ccons ’reflect ↑env pattern body))))
—————————————————————————————————————————————————
[What about information bleeding between levels by the kernel? E.g., tack the continuation into each closure in addition to the current environment.]