EXAMPLES FOR LECTURE #10 LISP: LANGUAGE AND LITERATURE May 15, 1984
————————————————————————————————————————————

Examples for Lecture #10: Control Structure
Filed as: [phylum]<3-lisp>course>notes>Lecture-10.examples
User.cm: [phylum]<BrianSmith>system>user.classic
Last edited: May 15, 1984 1:17 PM
————————————————————————————————————————————
Some limitations on our definition of OBJECT:
Need to have state variables other than those set up upon initialization. For example, repeating sequence:
(define REPEATING-SEQUENCE
  (object [seq]
          [next (lambda []
                  (if (null seq) (set seq initial-value))   ; but INITIAL-VALUE is not bound anywhere: this is the problem
                  (first seq))]))
or
(define REPEATING-SEQUENCE
  (object [seq]
          [first! (lambda [] (first seq))]
          [rest! (lambda []
                   (repeating-sequence (append (rest seq)
                                               [(first seq)])))]))
To do the first, can expand our template for object definitions to be:
(object [init-var-1 ... init-var-k]
        [[state-var-1 initial-binding-1]
         ...
         [state-var-j initial-binding-j]]
        [method-name-1 procedure-1]
        ...
        [method-name-n procedure-n])
Thus for example:
(define REPEATING-SEQUENCE
  (object [initial]
          [[current initial]]
          [next (lambda []
                  (if (null current) (set current initial))
                  (let [[answer (first current)]]
                    (set current (rest current))      ; advance, so that the sequence actually repeats
                    answer))]))
Similarly, need more flexible means of initialization. Imagine the geography routines: could start up a "map" with the names of all the roads, and indications of the intersections; before accepting requests for shortest routes and the like, would like to run an initialization routine that sets up the internal tables. Could extend the template further:
(object [init-var-1 ... init-var-k]
        [[state-var-1 initial-binding-1]
         ...
         [state-var-j initial-binding-j]]
        initialization-procedure
        [method-name-1 procedure-1]
        ...
        [method-name-n procedure-n])
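For instance, the geography example might then look roughly like the following. This is only a sketch: GEOGRAPHY-MAP, BUILD-TABLES, and FIND-ROUTE are purely illustrative names (nothing in these notes defines them), and we are assuming that the initialization procedure is a procedure of no arguments run once when an instance is created:
(define GEOGRAPHY-MAP
  (object [roads intersections]                              ; initialization variables
          [[tables "not yet built"]]                         ; internal state, filled in by initialization
          (lambda []                                         ; initialization procedure (assumed to run once per instance)
            (set tables (build-tables roads intersections)))
          [shortest-route (lambda [from to]                  ; requests are answered from the internal tables
                            (find-route tables from to))]))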
Also, our current scheme doesn’t give an (instance of an) object any way to refer to itself. This is a severe limitation. Suppose for example you wanted a BALLOON object, which could change in size but without any change in the mass of air within it, defined as follows:
(define BALLOON
  (object [radius mass]
          [radius (lambda [] radius)]
          [volume (lambda [] (/ (* 4 (* pi (cube radius))) 3))]
          [density (lambda [] (/ mass ... my volume ...))]   ; no way to refer to this balloon's own volume
          [new-radius (lambda [new] (set radius new))]))
Again, could extend the definition, so that (SELF), say, always referred to the instance in question. Thus:
(define BALLOON
  (object [radius mass]
          [radius (lambda [] radius)]
          [volume (lambda [] (/ (* 4 (* pi (cube radius))) 3))]
          [density (lambda [] (/ mass (volume (self))))]
          [new-radius (lambda [new] (set radius new))]))
And then there are questions of how to put them together into hierarchies, how to default method selection, and so forth (we talked some about this last time). For example, a printing method for a sub-type might want to use the printing method of a super-type, and then append, say, an asterisk. Suppose you were to define a special type for normal-form rails, so that they would print out with a trailing asterisk; i.e., you would have:
1> [1 2 (+ 2 3)]
1= [1 2 5]*
Could imagine yet a more complex template for objects:
(object [init-var-1 ... init-var-k]
        [[state-var-1 initial-binding-1]
         ...
         [state-var-j initial-binding-j]]
        [super-type-1 ... super-type-m]
        initialization-procedure
        [method-name-1 procedure-1]
        ...
        [method-name-n procedure-n])
Except this is all getting out of hand in terms of complexity.
More seriously, we don’t really have a full theory of this, in anything like the way we have a theory of "function-based" procedures. Admittedly, the latter is not purely derivative from its functional base, and it has its own open questions (side effects on closed-over variables, "multiple-valued returns", etc.), but it is much better developed. So be it.
General Control Structures:
FORCE and DELAY:
1> (set test [(print ps "Hello there " cr)
(delay (print ps "folks" cr))])
Hello there
1=
...
1> (first test)
1= ’ok
1> (force (second test))
folks
1= ’ok
Can be used to define STREAMS, so that (W> means macro-expansion):
(STREAM-CONS x y)   W>   [x (delay y)]
(STREAM-FIRST x)    W>   (first x)
(STREAM-REST x)     W>   (force (second x))
Need something called THE-EMPTY-STREAM. Then can have:
1> (define INFINITE-LIST-OF-SQUARES
(lambda [n]
(stream-cons (* n n)
(infinite-list-of-squares (+ n 1)))))
1= ’infinite-list-of-squares
1> (set x (infinite-list-of-squares 3))
1=
...
1> (stream-first x)
1= 9
1> (stream-first (stream-rest x))
1= 16
1> (stream-first (stream-rest (stream-rest x)))
1= 25
1>
... ; etc. forever
The point is that, by using FORCE and DELAY (how they are defined we will explain in a moment), we can play with when structures are processed. The techniques just illustrated are what is known as "lazy" — if 3-LISP were a dialect in which CONS always worked like this, we would say that 3-LISP had a lazy processor.
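As a small illustration of consuming such a stream, here is a sketch of a hypothetical STREAM-NTH (not part of the notes), written using only the stream operations defined above and the usual arithmetic primitives:
(define STREAM-NTH                      ; illustrative only; assumes ZERO and - behave as elsewhere in these notes
  (lambda [n s]
    (if (zero n)
        (stream-first s)
        (stream-nth (- n 1) (stream-rest s)))))
Thus (stream-nth 2 (infinite-list-of-squares 3)) should yield 25, forcing only as much of the stream as is actually needed.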
This scheme is a little inefficient, though: it normalizes the rest of the stream every time it is accessed. I.e., it leads to:
1> (set y (stream-cons (begin (print ps cr "Processing first arg" cr)
10)
(stream-cons (begin (print ps cr "Processing second arg" cr)
20)
the-empty-stream)))
Processing first arg
1= ...
1> (stream-first (stream-rest y))
Processing second arg
1= 20
1> (stream-first (stream-rest y))
Processing second arg ; even though it has happened already.
1= 20
So this leads us to propose a cleverer algorithm. First, though, how are FORCE and DELAY defined? Very simply, in terms of procedure definitions:
(FORCE x)   W>   (x)
(DELAY x)   W>   (lambda [] x)
Thus we know the following:
(set THE-EMPTY-STREAM (lambda [] []))
Also (ugly!):
(define STREAM-NULL
(lambda [s]
(= ↑s ↑the-empty-stream)))
So, noting how these definitions work, we can define:
(define MEMO
(lambda [delayed-structure]
(let [[computed $false]
[result "not computed yet"]]
(lambda []
(if computed
result
(begin (set result (force delayed-structure))
(set computed $true)
result))))))
Then we could have:
(STREAM-CONS x y)   W>   [x (memo (delay y))]
(STREAM-FIRST x)    W>   (first x)
(STREAM-REST x)     W>   (force (second x))
This would mean:
1> (set y (stream-cons (begin (print ps cr "Processing first arg" cr)
10)
(stream-cons (begin (print ps cr "Processing second arg" cr)
20)
the-empty-stream)))
Processing first arg
1= ...
1> (stream-first (stream-rest y))
Processing second arg
1= 20
1> (stream-first (stream-rest y))
1= 20 ; no repetitious processing of second argument.
The first definition of STREAMs employed what in ALGOL was called call-by-name; what we have just defined is known as call-by-need. The standard parameter passing protocols are called call-by-value — although of course we don’t use the word "value" for the result of processing an expression or structure.
Other sorts of games one can play with control. Consider the problem of determining whether two trees have the same fringe (cf. FRINGE from the problem set). A natural definition would be the following: use = on the results of applying FRINGE to the two trees. I.e. (this is lifted from the problem set solution):
(define FRINGE
(lambda [x]
(cond [(leaf x) [x]]
[(null x) []]
[$true (append (fringe (first x))
(fringe (rest x)))])))
or, using an even more elegant version:
(define FRINGE
(lambda [x]
(if (leaf x)
[x]
(append . (map fringe x)))))
We assume that the predicate LEAF is true of leaves of the tree; an obvious definition would be:
(define LEAF
(lambda [e]
(not (or (sequence e) (rail e)))))
We can then define:
(define SAME-FRINGE
(lambda [t1 t2]
(= (fringe t1) (fringe t2))))
But this is pretty inefficient: it computes the entire flattened fringe of both trees, even if they obviously differ at the very first leaf. For example:
1> (same-fringe [2 [[[[[[[[4 [[[[[5 [[[[[[[6 7]]]]] 8]]]]]] 9 10 11]]]]]]]]]]
[3 [[[[[[[[[[[[[[[[[[[[4 5]]]]]]]]]]]]]]]]]]]]])
1= $false
But our algorithm went to a tremendous amount of work that it didn’t need to do, since one can see right away that the two trees differ on the very first terminal. So: define a new kind of FRINGE, using our object routines:
(define FRINGE
(object [remaining]
[next (lambda []
(let [[l-and-r (leaf-and-residue remaining [])]]
(set remaining (second l-and-r))
(first l-and-r)))]
[fringe-null (lambda [] (null-tree remaining))]))
(define LEAF-AND-RESIDUE
(lambda [tree residue]
(cond [(leaf tree) [tree residue]]
[(null tree)
(if (null residue)
(error "Leaf of a null tree?" ↑tree)
(leaf-and-residue (first residue) (rest residue)))]
[$true (leaf-and-residue (first tree)
(if (null residue)
(rest tree)
(cons (rest tree) residue)))])))
(define NULL-TREE
(lambda [tree]
(and (not (leaf tree))
(or (null tree)
(and (null-tree (first tree))
(null-tree (rest tree)))))))
Or, to put it more modularly:
(letrec
[[LEAF-AND-RESIDUE
(lambda [tree residue]
(cond [(leaf tree) [tree residue]]
[(null tree)
(if (null residue)
(error "Leaf of a null tree?" ↑tree)
(leaf-and-residue (first residue) (rest residue)))]
[$true (leaf-and-residue (first tree)
(if (null residue)
(rest tree)
(cons (rest tree) residue)))]))]
[NULL-TREE (lambda [tree]
(and (not (leaf tree))
(or (null tree)
(and (null-tree (first tree))
(null-tree (rest tree))))))]]
(define FRINGE
(object [remaining]
[next (lambda []
(let [[l-and-r (leaf-and-residue remaining [])]]
(set remaining (second l-and-r))
(first l-and-r)))]
[fringe-null (lambda [] (null-tree remaining))])))
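To get a feel for how such a fringe object behaves before putting it to use, here is a sketch of a possible interaction; the particular values noted in the comments are just what the definitions above should produce, not a transcript from an actual session:
(set f (fringe [1 [[2] 3]]))     ; a small tree whose fringe is 1, 2, 3
(next f)                         ; should yield 1
(next f)                         ; should yield 2
(next f)                         ; should yield 3
(fringe-null f)                  ; should then yield $true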
Then, we can define a different version of SAME-FRINGE:
(define SAME-FRINGE
(letrec [[helper
(lambda [f1 f2]
(cond [(fringe-null f1) (fringe-null f2)]
[(= (next f1) (next f2)) (helper f1 f2)]
[$true $false]))]]
(lambda [t1 t2]
(helper (fringe t1) (fringe t2)))))
or, less modularly but perhaps more perspicuously:
(define SAME-FRINGE
(lambda [t1 t2]
(sf-helper (fringe t1) (fringe t2))))
(define SF-HELPER
  (lambda [f1 f2]
    (cond [(fringe-null f1) (fringe-null f2)]
          [(= (next f1) (next f2))
           (sf-helper f1 f2)]               ; note side-effects to f1 and f2!
          [$true $false])))
Then this would lead to the same behaviour:
1> (same-fringe [2 [[[[[[[[4 [[[[[5 [[[[[[[6 7]]]]] 8]]]]]] 9 10 11]]]]]]]]]]
[3 [[[[[[[[[[[[[[[[[[[[4 5]]]]]]]]]]]]]]]]]]]]])
1= $false
but much more quickly.
An obvious way to think about this is to imagine the two fringes as processes in their own right, generating new elements each time they are asked. It is as if you told two friends each to recite numbers to you, and you stopped them both the first time they told you different numbers. This general technique is called co-routining, since the two "routines" (F1 and F2) can be imagined to be running at the same time. Of course they aren’t actually; rather, we have created objects that contain enough state that we can do a little work on one, then a little work on the other, then resume the first, and so on.
I.e., we have used functional procedures to model complex objects with state; we have here used objects with state to model (at least very tentative approximations to) independent processes. This approach can be continued.
Also, the technique generalizes to infinite sequences, in a way that the original definition of SAME-FRINGE clearly does not. For example:
(define INFINITE-SEQUENCE
(object [initial function incrementer]
[[current initial]]
[next (lambda []
(let [[answer (function current)]]
(set current (incrementer current))
answer))]
[re-initialize (lambda [] (set current initial))]))
Thus:
(set s1 (infinite-sequence 1 (lambda [n] (* n n)) 1+))
(set s2 (infinite-sequence 1 (lambda [n] (* n 2)) 1+))
(set s3 (infinite-sequence 1 (lambda [n] (+ n n)) 1+))
The first designates an infinite sequence of squares; the latter two designate the same infinite sequence of positive even integers:
1> (next s1)
1= 1
1> (next s1)
1= 4
1> (next s1)
1= 9
1> (next s1)
1= 16
1>
... ; etc.
Can define an equality operation on such infinite things, which will terminate just in case there is a finite index at which the two sequences part company. I.e.:
(define STREAM-EQUAL
(lambda [s1 s2]
(and (= (next s1) (next s2))
(stream-equal s1 s2))))
leads to:
1> (re-initialize s1)
1=
...
1> (re-initialize s2)
1=
...
1> (stream-equal s1 s2)
1= $false
1> (re-initialize s2)
1=
...
1> (re-initialize s3)
1=
...
1> (stream-equal s2 s3)
... wait for a very long time.
One final, strange kind of control structure, very different from the ones we have looked at so far. There is in 3-LISP a control operator called CATCH. The basic form is:
(CATCH tag body)
In general, processing this expression has the effect of processing the body expression; i.e., the CATCH and tag part are essentially invisible. If, however, during the course of processing the body expression (i.e., within the lexical scope of the CATCH, which can be thought of as a variable-binding operator [!]), the form
(tag x)
is processed, then the result of processing x will be the result of the whole CATCH expression. For example:
1> (define TEST
(lambda [n]
(catch oops
(+ 1000
(factorial (/ 18
(if (zero n)
(oops 0)
n)))))))
1= ’test
1> (test 3)
1= 1720
1> (test 0)
1= 0
Here is a routine that multiplies together all of the numbers at the leaves of an arbitrary binary tree:
(define TREE-PRODUCT
(lambda [tree]
(catch top
(letrec [[helper (lambda [tree]
(if (number tree)
(if (zero tree)
(top 0)
tree)
(* (helper (first tree))
(helper (second tree)))))]]
(helper tree)))))
Thus:
1> (tree-product [[1 2] [3 4]])
1= 24
1> (tree-product [10 [20 [30 40]]])
1= 240000
1> (tree-product [[10 0] [30 40]])
1= 0
Operators like this are often called "escape" operators, for the obvious reason.
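For one more (purely illustrative) use of this kind of escape, here is a sketch of a hypothetical ANY-ZERO, not from the notes, which uses CATCH to abandon a MAP as soon as a zero turns up:
(define ANY-ZERO
  (lambda [seq]
    (catch found
      (begin (map (lambda [x]
                    (if (zero x) (found $true) x))    ; escape the moment a zero is encountered
                  seq)
             $false))))                               ; the map ran to completion: no zeros
Thus (any-zero [1 2 0 3]) should return $true without ever looking at the 3, while (any-zero [1 2 3]) should return $false.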