**DRAFT --- NOT FOR QUOTATION OR REFERENCE**


                            Comments on a Draft of
                 "Reification: Reflection without Metaphysics"


                               Jim des Rivieres
                       Xerox PARC and Stanford CSLI
                               August 4, 1984


N.B.  I have not seen the final version of Friedman and Wand's paper which will appear in the proceedings of the 1984 LISP and Functional Programming Conference.  My comments are based on their extended summary, dated February 1984.

Terminology: In order to avoid needless confusion, I will consistently use the term "reflection" when talking about Smith's notion, use the pair of terms "reification" and "absorption" (not "reflection") when referring to Friedman and Wand's notions, and use "meta-level capabilities" when I want to evoke the most liberal notion that meaningfully covers this whole topic.


From Friedman and Wand's conclusions:

   "Our goal in this work was to attempt to understand the idea of reflection, as advocated by Smith, in a way which was motivated by our understanding of semantics, and without being overly concerned with overriding philosophical positions.  For us, reification is just a way of giving the interpreter's data to the program, and reflection [absorbtion] is a way of loading the interpreter's registers from the program."

And, from their abstract:

   "We show how these processes may be considered as an extension to the fexpr concept in which not only the form and the environment, but also the continuation, are made available to the body of the procedure."


   To begin with, BROWN is not a reflective language (or at least it does not have what Smith calls a reflective architecture), despite what one might expect from the title of the paper.  If Friedman and Wand did intend BROWN to exhibit a reflective architecture, they have most surely missed out on perhaps the most crucial aspect of Smith's reflection.  On the other hand, if they did not intend BROWN to be reflective (something which I believe to be the case), then they have shown clearly some of the thorny problems faced by the language designer when he tries to give programs access to the interpreter's data structures --- problems which a reflective architecture, such as 3-LISP, squarely addresses.  Regardless of the intent, the numerous similarities and subtle, crucial differences between BROWN and 3-LISP will help to show just what Smith's reflective architecture is and is not.

    BROWN belongs to the class of non-reflective architectures that successfully provide access to the interpreter's data structures in a machine-independent way (other members include Smalltalk-80, with its MethodContexts, and the spaghetti stacks of Interlisp-D).  All of the systems that I have seen suffer from the same fundamental problem, the need for *detachment*, and BROWN is no exception.

    Simply put, the problem of detachment is this.  When a program invokes a reifying operation that hands the program objects representing some or all of the current state of the interpreter, the normal course of events will see those state objects, perhaps slightly altered, subsequently absorbed --- i.e., later re-instated as the state of the interpreter.   In the period between reification and absorption, the interpreter has continued to run the user's program, changing internal state in the process.  In most cases, the interpreter cannot both absorb the state supplied by the user and continue with the accumulated internal state.  One of these states is going to have to be discarded, and the only real choice is the internal one.  The net result is that absorption-causing operations tend to be rather "jumpy" (e.g., BROWN's "black hole" continuations).  Like non-local GOTOs, these operations involve a massive irreversible state change.  In a procedure-oriented language, "black hole" type behaviour in a procedure seems to be readily incorporated into most programmers' conceptual model of how programs are processed.  Nevertheless, there will always be the risk that absorption-causing operations will feel out of place when compared to the normal, more disciplined control regimes offered by the high-level language.
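
Lest the talk of "jumpy" operations seem mysterious, here is a minimal Scheme sketch (using call/cc rather than BROWN itself; the names are mine) of the state-discarding behaviour in question:

    (define saved-k #f)

    ; Reify: capture the control state and hand it to the program.
    (+ 1 (call-with-current-continuation
           (lambda (k)
             (set! saved-k k)
             100)))                ; first time through, this returns 101

    ; Absorb: calling the captured continuation discards whatever work has
    ; accumulated since the capture and re-instates the old (+ 1 []) context;
    ; the "call" never returns, and 201 simply appears in the old context.
    (saved-k 200)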

     Smith was well aware of this problem; section 5.a of his dissertation is devoted to the topic, from which the following passages are taken:

"...some crucial properties that a reflective theory would have to have --- of which adequate causal connection and sufficient detachment are two of primary importance. ... A great many of the technical problems in reflection, in other words, are best viewed as facets of two general issues: where you stand and what you have access to."
 
He goes on to say:

"The crucial insight ... is this:

        Reflective code should be run at the same level as the 
        meta-circular processor; it should not be processed by 
        the meta-circular processor."


   In other words, the key idea behind reflection is to reify the state of the interpreter not by passing the interpreter's data structures to the program that it is running, but rather by lifting that part of the program desiring access up to the same level as the interpreter and then giving it access to the interpreter's data structures.  Think of the interpreter as being on top and the program it is running as below it.  It is not the interpreter's data structures that shift down when reified and up when absorbed; instead, it is the part of the program that will receive access that is shifted up.  The interpreter's state always stays at the same level; what changes is who's mucking with it (system vs user).  

   Lest anyone think that this amounts to the same thing, let me assure them most emphatically that THIS IS NOT THE CASE.  The differences are quite significant, as I hope I'll convince you.

   What, then, does reflection buy us on the problem of achieving sufficient detachment?   What it does is ensure that no state information need ever be discarded, and that no massive state changes need ever happen.   Why is this so?  Because when the program is working with reified interpreter structures, it is not being run by the same interpreter that was running the user program that caused the reification to happen; it is being run by the interpreter that was running the interpreter that was running the program.   The implicit state of the user program at the bottom is explicit at the level of the interpreter just above it; however, when a part of the user program runs at the level of the interpreter, its implicit state is another matter entirely --- it is only made explicit at the level of the interpreter above it (and so on... the infamous infinite tower of processors, each with its own implicit and explicit state).

   Let's look in detail at what's going on in BROWN and in 3-LISP by considering what happens during the course of processing the following roughly equivalent expressions:


    BROWN:      (add1 ((abs reify (e r k) (k 100))))
    3-LISP:     (add1 ((lambda reflect [a e c] (c '100))))

Both of these expressions will return 101.  Some carefully chosen questions and answers should help give you a feel for the fundamentally different ways in which these expressions are intended to be read.


Q. 1  In a few words, describe what happens when your expression is processed.

BROWN:  While processing the call to add1, the first argument expression calls a reifier.  This causes the then-current state of the interpreter --- the form, environment, and the continuation --- to be reified and bound to the variables e, r, and k, respectively.  The expression (k 100) is then processed.  This causes the argument to k to be processed (trivial in this case) and passed to the continuation bound to k.  Calling the continuation with 100 has the net effect of returning 100 as the result of the call that caused the reification; i.e., causing the redex ((abs reify (e r k) (k 100))) to return 100.

3-LISP:  While processing the call to add1, the first argument expression calls a reflective procedure.  One level up, this causes the then-current state of the interpreter --- the unprocessed argument expressions, environment, and the continuation --- to become bound to the variables a, e, and c, respectively.  The expression (c '100) is then processed.  This causes the argument to c to be processed (trivial in this case) and passed to the continuation bound to c.  Calling the continuation with '100 has the net effect of returning 100 as the result of the call that caused the reification; i.e., causing the redex ((lambda reflect [a e c] (c '100))) to return 100.  Aside:  at the meta level, all structures in the program below it will appear to be quoted.  For example, continuations are always called with a quoted expression such as '100.  Don't worry about this too much, though; it has to do with levels of designation, and is somewhat independent of the levels of processing on which we are focussing here.

Q. 2  Who runs the expression (k 100) in BROWN/ (c '100) in 3-LISP?

BROWN:  The expression (k 100) is processed by the same interpreter that processed the call to add1.  In fact, there is only one interpreter, so it runs everything.

3-LISP:  The expression (c '100) is not processed by the interpreter that processed the call to add1; rather, it runs as part of the interpreter that processed the call to add1.  Therefore it is run by the interpreter that normally runs the interpreter.


Q. 3  Is k / c a regular function?

BROWN:  Not quite.   The arguments to k will get processed as per a normal function call, but the call to k is more like an unconditional branch than a procedure call; i.e., that call to k won't ever return.  At the level of the interpreter for BROWN, a continuation is a normal (SCHEME) function.  When it is exported into the user's world, it is specially wrapped so that it receives special treatment when called.
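
To make the wrapping concrete, here is a hedged host-level Scheme sketch (my own reconstruction, not Friedman and Wand's actual code; export-continuation is a hypothetical name):

    ; The exported continuation is wrapped as a procedure that receives its
    ; caller's continuation but deliberately never invokes it; hence a call
    ; to the wrapped k cannot return.  This is the "black hole" behaviour.
    (define (export-continuation k)
      (lambda (v caller-k)
        (k v)))                    ; caller-k is ignored; control just jumps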

3-LISP:  Yes.  c is just like any other garden-variety 3-LISP function.  Under normal circumstances, c won't return.  That's because 3-LISP's top level puts the interpreter into a loop.  But if the user were to subsequently use a function like quit, then it is possible for that call to c to return something.   So, at the level of the interpreter, a continuation is just a normal function.  Since we don't export continuations --- we instead import the user's code that wants access to the continuation --- we never have to do anything out-of-the-ordinary to them.


Q. 4  What if the body expression instead read (add1 (k 100)) / (add1 (c '100))?

BROWN:  Wouldn't make any difference.  Once k is called, it won't ever return.  The (add1 (k 100)) bit becomes ancient history.

3-LISP:  You would have a non-tail-recursive call to a continuation, something that one doesn't normally write because it causes the interpreter to build up some implicit state (something which it carefully avoids doing, in order that the interpreter's explicit continuation capture all of the implicit control information for the program it is running).  You could do it, and most everything would work as before, except that, if you were ever to quit the interpreter with some result, one would get added to it!
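
The underlying tail-call point can be seen in a host-level Scheme sketch (my own illustration; the two run- procedures are hypothetical):

    (define (run-tail c)     (c 100))        ; interpreter retains no state
    (define (run-non-tail c) (+ 1 (c 100)))  ; interpreter must remember the
                                             ; pending (+ 1 []): implicit
                                             ; state, invisible until c returns

    (run-non-tail (lambda (v) v))            ; => 101: the pending add1 gets
                                             ; applied if c ever returns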


Q. 5  What if the body expression instead read just 100?

BROWN:  That would cause an error.  The only two ways that you can get out of the body of a reifying abstraction are a) calling a continuation, and b) calling the meaning function.  It is not permitted to simply return a result.  [Aside:  Friedman and Wand could have allowed this, but chose not to.  The result returned by the body could have been sent to the continuation automatically simply by replacing the "wrong" in the definition of <reify> by "k".]

3-LISP:  That would cause the interpreter that was processing the program to quit, returning 100, probably to the top level at which the now-defunct interpreter was started.  The system doesn't come to a screeching halt because there are lots of levels of processing, and you just killed the lowest one.   All this is quite predictable because continuations and the processor program itself (normalise) are ordinary 3-LISP functions.  All the normal rules apply when trying to figure out what will happen.  The only novel thing is that you are applying these rules to the interpreter that was running your program, in addition to applying them to your base-level program.


Q. 6  How does one explicitly invoke the interpreter on an expression?

BROWN:  (meaning e r k) calls the interpreter explicitly to process the expression e in the environment r with continuation k.

3-LISP:  (normalise a e c) calls the interpreter explicitly to process the expression a in the environment e with continuation c.


Q. 7  Is meaning / normalise a regular function?

BROWN:  Yes.  Mind you, it's a primitive --- the user himself would be unable to define it.  But it does behave normally:  (meaning '(+ 2 2) initenv (abs simple (x) x)) would return 4.

3-LISP: Yes.  Quite regular, and it isn't even primitive (though it is built-in and does have a rather central role to play).  
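
For instance (assuming, from memory, that global names the standard 3-LISP global environment):

    (normalise '(+ 2 2) global (lambda simple [x] x))

would return '4, a handle rather than the number 4, since normalise maps structures to structures.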

[Lengthy aside:  However, there is a bug in Friedman and Wand's SCHEME definition of meaning as

    (schemeF-to-brown
       (scheme-to-schemeF
          (lambda (e r k)
             ((denotation e)
              (brown-to-schemeU r)
              (brown-to-schemeK k)) )))

which, when expanded, is equivalent to

    (lambda (e r k)
       ((denotation-list e)
        r
        (lambda (v*)
           (k ((denotation (first v*))
                (brown-to-schemeU (second v*))
                (brown-to-schemeK (third v*)))) )))

where denotation-list evaluates a list as in the definition of <simple>.   Notice that the processor call is embedded within the call to k.   This implies that the BROWN interpreter builds up implicit state every time the program it is running calls meaning.  Note: this is much more serious than saying that the BROWN interpreter processes all calls to meaning in a non-tail-recursive manner (which it also does).  The obvious fix is to instead define meaning as

     (lambda (e r k)
        ((denotation-list e)
         r
         (lambda (v*)
            ((denotation (first v*))
             (brown-to-schemeU (second v*))
             (brown-to-schemeK (third v*)))) ))


This would make meaning a "black hole" operation, which, when called, would discard the implicit continuation in favour of the supplied one.  It would also mean that expressions such as 

        (add1 (meaning '(+ 2 2) initenv (abs simple (x) x)))

would cause the BROWN interpreter to stop with the answer 4.  To plug this leak, it is necessary to change the "(lambda (x) x)" in brown-to-schemeK to "wrong".  One would have to be careful to always furnish as a third argument to meaning some expression that'll end up calling a black hole continuation.   Unfortunately, all these fixes would make it difficult to mediate between normal and continuation-passing style, as exemplified by the above expression.  This significantly degrades the utility of such an operation.

Here's a better solution.  Redefine brown-to-schemeK to take an old continuation as its second argument:

     (define brown-to-schemeK
         (lambda (bf k)                ; k is the old (host) continuation
             (lambda (v) (bf `((quote ,v)) initenv k)) ))

and redefine meaning as

     (lambda (e r k)
        ((denotation-list e)
         r
         (lambda (v*)
            ((denotation (first v*))
             (brown-to-schemeU (second v*))
             (brown-to-schemeK (third v*) k))) ))

In other words, the interpreter's continuation at the time meaning is called will be pushed onto the end of the explicitly supplied continuation.  That way, if the explicitly supplied one eventually terminates, the result will be returned as the result of the call to meaning.   The only remaining criticism is that no call to meaning is tail-recursive, due to brown-to-schemeK always wrapping an extra layer around the old continuation.  Fortunately this isn't too serious, since it will only have a bearing on those calls to meaning that fail to ever call the explicit continuation, and such calls are unusual.  The common cases, such as in the read/eval/print loop, are not adversely affected.
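
With these redefinitions (assuming I have correctly understood the intended semantics), the troublesome expression from above,

        (add1 (meaning '(+ 2 2) initenv (abs simple (x) x)))

would finally behave usefully: the explicit identity continuation terminates with 4, that result is handed back to the saved continuation k, and the whole expression returns 5.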

End of lengthy aside.]


   The preceding discussion should have convinced you that, while 3-LISP and BROWN are very similar, there are some significant differences between them, especially with regard to the models under which each is intended to be understood.  Let's summarize the major differences between 3-LISP and BROWN, and ask whether one language is superior to the other:

Infinite Tower of Processing Levels
   Clearly, 3-LISP's model of processing requires an infinite tower whereas BROWN's does not.  Although, by and large, the user of 3-LISP need never think of the infinite tower --- he instead need only think of every level having a meta level (more akin to standing inside an infinitely tall skyscraper, with visible staircases leading up (and down) from each floor) --- there is nevertheless substantial reluctance to adopt this model (Friedman and Wand are definitely on the side of the majority).  On the other hand, fexprs and "black hole" procedures have not encountered much resistance (except, perhaps, Kent Pitman's attack on the former).  Thus it appears that 3-LISP is conceptually more complex than BROWN, a property that clearly favours BROWN.

Achieving Sufficient Detachment
   As discussed at length in the question/answer period, above, 3-LISP uses a meta-level shift to achieve sufficient detachment, whereas BROWN uses "black hole" continuations.  The meta-level shift allows 3-LISP to use regular procedures at the meta level, whereas BROWN is forced to use some form of "self-detaching" continuations.  So it would seem that 3-LISP's way is simpler and more general than BROWN's (but, of course, you'd have to buy the infinite tower bit first).   

Granularity of the Account
   With regard to the level of detail observable from the reified objects, 3-LISP is clearly much more fine-grained: it has closures, environments, and continuations that can be taken apart by the program.  BROWN uses a functional representation for its procedures, environments, and continuations; this gives the language much coarser-grained access to these meta-level objects.  Each has its merits and drawbacks.  On the one hand, 3-LISP's account is sufficiently fine-grained that it is possible to write a resident 3-LISP debugger that can provide backtraces, etc.  This could not be done in BROWN, since all the information needed to display a backtrace is hidden from the user.  It would also be impossible to write a pretty-printer for procedures.  Again, the problem is that the source expressions for a procedure are just not made accessible to the BROWN program, whereas they are in 3-LISP.  On the other hand, 3-LISP's use of closures as function designators, a-lists as environment designators, and functions as continuations is far from ideal when it comes to designing a debugger in 3-LISP.  Although all the information that is maintained at a level is accessible (as much as there ever is for a tail-recursive processor), it is quite awkward to get at it.  If not too fine-grained, 3-LISP's account at least runs cross-grain to an adequate model for a debugger's purposes (perhaps one more like the spaghetti stack model).  All this is not too surprising, given that Smith wanted the reflective processor for 3-LISP to have roughly the feel of a denotational semantic account of a functional language.  The grain of such an account should facilitate the design of the control constructs of the language; by and large, 3-LISP succeeds in this area.  But it could also have done so with a coarser-grained account analogous to BROWN's (however, some of the other pedagogical design principles underlying 3-LISP, such as category alignment, would have had to go out the window; and side-effects would have been more difficult, as Friedman and Wand point out).  So it seems that BROWN's level of grain is more appropriate than 3-LISP's, given the things that 3-LISP was primarily intended to do (and tends to do) best.  Also, it should be clear that this issue is quite orthogonal to the preceding two.
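
To make the grain contrast concrete, here is a minimal Scheme sketch (my own illustration, not code from either language):

    ; Coarse grain (BROWN-style): an environment represented as a function
    ; can only be applied; a debugger cannot enumerate its bindings.
    (define (functional-env var)
      (case var ((x) 1) ((y) 2) (else 'unbound)))

    ; Fine grain (3-LISP-style): an a-list environment is a structure that
    ; a resident debugger can take apart.
    (define alist-env '((x . 1) (y . 2)))

    (functional-env 'x)      ; => 1, and that is all one can ask of it
    (map car alist-env)      ; => (x y): the bindings can be enumerated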


   So when 3-LISP and BROWN are objectively compared toe to toe, BROWN comes out on top.  The languages have very nearly the same expressive power. The designers of 3-LISP ask the user to swallow a big pill --- the infinite tower;  the designers of BROWN show how to live quite happily without it.

   Given that BROWN is the easier to learn of the two languages, should 3-LISP be considered a failure?  I would argue definitely not.  The strength of 3-LISP is not in its merits as a usable programming language.  Smith has stated all along that the true strength of 3-LISP lies in the underlying ideas for which it was designed to be an exemplar --- that the truly important idea is "... a general architecture, called procedural reflection, to support self-directed reasoning in a serial programming language. ...  The strategy of presenting a general architecture by developing a concrete instance of it was selected on the grounds that a genuine theory of reflection (perhaps analogous to the theory of recursion) would be difficult to motivate or defend without first taking this first, more pragmatic, step".  (Smith, POPL'84)
 
   Smith's analogy between reflection and recursion is especially apt: 

o  Due to its apparent circularity, recursion is perplexing at first.  Reflection, with its infinite tower, meta-circular program, and reflective procedures, is even more bewildering.     
 
o  Recursion is more powerful than its competitors: iteration constructs and non-recursive procedure calls.  A reflective language is more powerful than a non-reflective one.  Each takes a significant effort to master.

o  The user's model must be made more complex in order to understand recursion.  (E.g., FORTRAN's static mapping from a subroutine's variable names to memory locations will not suffice.)  Same for reflection:  the user's model must include the meta-level view of his computation. 

o  Recursion, though theoretically sufficient, is not always the best way to approach some problems; many nuts don't require a sledgehammer to crack.  Similarly, use of a reflective language's formidable capabilities is seldom required;  however, there are real problems that do require it.         

o  Even if the language that you are programming in doesn't support recursion, you may still find it easier to solve a problem using recursion, and then use standard techniques to transform the resulting program to eliminate the recursion.  With reflection, this is also the case.  (More on this below.)

o  Although a mathematical theory underwrites our ability to use recursion in programming languages, many programmers will never need to understand much about the formal theory at all.  A mathematical theory of reflection is still undeveloped, but we're all muddling along quite merrily.  A theory seems only to be a matter of time, provided we can convince someone of the importance of such an endeavour.

o  With a clear understanding of what recursion amounts to, it is a straightforward matter to design tests to see whether an allegedly correct implementation of recursion is sound.   This applies equally well to reflection, or other forms of meta-level access.  (E.g., my mental model of a reflective system helped me predict where problems might be found in the implementation of BROWN; viz., in the absorption code, where the issue of detachment must rear its ugly head.)

o  An implementor's life is easier if he doesn't have to allow for recursion in the object language.  (All activation records can be allocated statically; all non-parametric variables can be addressed directly; stacks, displays, and static links aren't needed.)  Same story for reflection:  the implementor of a reflective language has to work harder (use level shifting).  In both cases, the penalty for success is that normal users will invariably fail to even begin to appreciate all the crud the implementor had to worry about. 


   In conclusion, I believe that the effort it takes to learn 3-LISP, insofar as it assists one in understanding procedural reflection, will have lasting rewards.  You'll learn a powerful way of looking at any system that offers meta-level facilities, whether or not it is reflective in Smith's sense.  My own experience with 2-LISP and 3-LISP has convinced me that the job of designing and implementing a language that has some meta-level capabilities should be tackled as one would a fully reflective language in which the additional meta-level facilities are left unpublished (if necessary).  After designing the reflective processor program, it is then a matter of transforming it into a level-shifting implementation processor program (as discussed in des Rivieres and Smith, LISP&FP'84) that does not make use of the reflective capabilities.  This two-stage process greatly helps in guiding the subsequent implementation effort.