LECTURE NOTES #16 LISP: LANGUAGE AND LITERATURE June 5, 1984
————————————————————————————————————————————
Lecture Notes #16 Computation in the Large
Filed as:[phylum]<3-lisp>course>notes>Lecture-16.notes
User.cm:
[phylum]<BrianSmith>system>user.classic
Last edited:
June 5, 1984 1:41 PM
————————————————————————————————————————————
A. Introductory Notes
Two lectures this (final) week, taking a step back and reviewing what computation is, and what role it plays in theories of language and mind:
Today: "Computation in the Large":
a review of implementation, interpreters and compilers, physical realization, etc.
talk about serial and concurrent languages, why internal structures are internal, etc.
Thursday: "The Computational Claim on Mind":
Review Haugeland’s, Fodor’s, Pylyshyn’s, etc., analyses of what the computational claim comes to; talk about a genuinely semantic notion of computation, etc.
Specifically, review the "formality condition".
Problem Set #3 distributed on Thursday, along with questionnaires, etc.
<LLL.baggins> type directories on Turing will be maintained over the summer.
People’s solutions to problem set #1 available today; solutions to problem set #2 available on Thursday.
B. (Abstract) Processes as Subject Matter
Assume subject matter is interpretable or semantic processes
Haven’t said what a process is before: something like a connected series of events, except:
what it is for them to be connected, and
what sorts of objects participate in them
is far from clear.
Also, don’t necessarily want to assume discreteness, or that the objects are primary
i.e., don’t want to take machines as primitive, and talk derivatively about behaviour, but rather take process or activity as primitive, and subsidiarily define the machine as an abstraction across time.
The temporal nature of computation is certainly primary:
One hears that one can abstract away from this, reducing a computation to a tree or a function computed or something, but this throws out one of the crucial things that is unique about computation.
To the extent that computational processes are used to model things, the temporal dimension is (almost) never used to model anything other than the temporal dimension of the thing modelled.
That activity, and the concomitant notions of agency or anima, are tremendously important both in and of themselves, and in terms of illuminating things like how reference could work.
I.e., autonomous systems, of a certain sort.
Not necessarily physical, except on a pretty liberal notion of physical
Unlike pencils, which are physical objects even though the notion of being a pencil is functionally, not physically, defined.
Start, stop, run a process on a different machine, etc., all without reference to any sort of physical coordinates.
Might be defined over physical events, in the sense that it is emergent from, or supervenient upon, physical events, but that doesn’t make it physical in and of itself.
Temporal but abstract is strange: but then static and abstract (i.e., mathematics) is strange too, and we (at least some of us) live pretty well with that.
Evidence: whether a token is a physical object:
canvassed grad students while at M.I.T.: linguists and philosophers said yes; computer science graduate students said no: there was a physical representation of the token, but the token itself was no more physical than the type of which it was a token.
If not physical, then unclear what the connectivity is of a process, so as to make it a single coherent object
note use of "object"
But then physics doesn’t tell us what the boundary conditions on physical objects are, either.
Like other objects, processes are compositionally constituted:
they are made up of things: have internal parts, etc.
We will talk of the surface of a process, which is (over time) a set or sequence of events (if you like event talk) in which objects external to the process are participants:
like displays on the screen, if you take the computer as a machine whose behaviour is a process, or
the relationship between the 3-LISP processor and the processes that you write programs for.
The notion of input/output would then be definable as a certain delimited class of surface events (cf. Maturana’s structural coupling or structural perturbations).
And computing a function would be yet another abstraction, defined in terms of input/output, but ignoring the internal structure and temporality.
General facts about computational processes:
Interpretable (semantics again: more on Thursday)
There is always an internal process
It is processes "all the way down"
Don’t get activity except out of other activity.
Not all ingredients are active: some (like memories and programs) may be passive (not quite the same as static, but close enough for now). Point is only that somewhere there is activity, out of which the whole was generated.
Cf. Erector or Meccano set: need a motor or crank or something, in order to build something that exhibits any active behaviour.
Historicity Principle:
The history of a process can affect its future only to the extent that the process has memory.
In the case of discrete automata: the state of the machine and the output it emits, at time Tk, must be a function only of its state, and of the inputs it receives, at time Tk-1.
Can have state.
Have seen this many times already.
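The Historicity Principle for discrete automata can be sketched as a small state machine: the output and next state at each step depend only on the current state and input, so history reaches the future only through state. The function names and the parity example are illustrative, not from the lecture.

```python
# A minimal discrete automaton illustrating the Historicity Principle:
# the state and output at step k are a function only of the state and
# input at step k-1; history influences the future only via memory (state).

def run(transition, output, initial_state, inputs):
    """transition, output: functions of (state, symbol); returns the output trace."""
    state, trace = initial_state, []
    for symbol in inputs:
        trace.append(output(state, symbol))
        state = transition(state, symbol)
    return trace

# Hypothetical example: a parity machine that remembers only whether it has
# seen an even or odd number of 1s; its entire "history" is one bit of state.
parity_trace = run(
    transition=lambda s, x: s ^ x,   # flip the remembered bit on each 1
    output=lambda s, x: s,           # emit the remembered bit
    initial_state=0,
    inputs=[1, 1, 0, 1],
)
# parity_trace == [0, 1, 0, 0]
```

Two input streams with the same effect on state are, from then on, indistinguishable to the machine: exactly the sense in which history matters only via memory.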
Where do these constraints come from:
From physical realizability, in the end?
Keep this in mind. Next time (Thursday) will talk about the permeating influence of these realizability constraints.
C. Serial and Concurrent Architectures
What kinds of architecture (types of process) have been designed? Two important ones (actually many; these are just salient because they are common):
Serial (draw picture): single internal process, and a field (memory) of structures with which it can interact
3-LISP case obvious.
Turing machine: internal process is the controller; field is the tape.
Concurrent (picture): multiple interacting processes
Combinations of the two:
Blackboard systems like Hearsay, etc.
Why these two?
Probably modelled on people reading text, and people talking to each other (unsurprisingly).
Other models being designed.
Definitions:
A processor is the internal process of a serial process.
Field is mutable, typically: it can be the repository of state.
Thus more accurate to say that it is passive than that it is static.
Not on its own the origin of activity.
Implementation:
Talk about the internal structure of processors.
That is what the last week has been about:
Draw picture of implementing PROLOG in 3-LISP.
SMALLTALK and PLASMA as double reductions: the SMALLTALK system gives you a processor and a field (programs) with which to implement a concurrent process.
Quite different from genuine forking primitives etc. in the language, as in CEDAR, CCS, etc.
No type-type theory reduction across implementation boundaries, either
analog of a point made last time, when we said that semantics doesn’t cross implementation boundaries.
Therefore, if you describe a system at the wrong implementation level, your theoretical claims may in fact be deeply flawed: not just a question of elegance, etc.; the claims may not support counter-factual conditionals, etc., all the standard nice properties scientific generalizations are meant to have.
This is a terribly serious moral for theorists of mind.
Locality considerations:
FORTRAN’s COME-FROM, mentioned earlier: violates locality assumptions.
Similar sorts of locality permeate many systems, although more and more abstract architectures are being defined all the time (usually, though not always, subject to the constraint that there be a physically realizable implementation).
D. Programs, Fields, and Data Types and Structures
One of the most confusing aspects of 3-LISP, it has been clear, is the ontological status of the structural field.
Am more and more convinced that it is what computer science calls abstract data types, which are not thought of as internal at all.
I have reasons for calling it internal, which I will review in a moment.
First point is merely that they are not linguistic in the sense of being expressions (notation, which has to do with interaction, we haven’t brought up yet), nor syntactic in any simple sense.
And yet they are the stuff of computation, such that if computation is formal symbol manipulation, they are all there is to manipulate formally.
I have clearly had difficulty in describing them in otherwise-accessible terminology:
Confusing if they are described as syntactic, which I have sort of done (by calling some of them numerals, defining semantic properties like designation over them, etc.)
And yet equally confusing to call them semantic, because if so, you tend to think that giving an account of them is giving the semantics, thereby finessing the issue of giving what I think is the genuine semantic story.
Probably should invent a whole new category of terms, although this too has its problems.
Programs as meta-structural:
Review Φ, Ψ, etc., and double-Φ, double-Ψ, etc.: merge, under this notion, if the model and the abstract data types are merged.
Cf. picture and regimen at the top of page 37 in CM.
Some definitions:
Define a program to be a structural field arrangement of an (internal) processor
i.e., more specific than simply a structural field arrangement in general
Note this doesn’t make a program be a linguistic object that describes a computation: that is something else.
Define a data structure to be an ingredient in a structural field.
Means that whether something is a program and whether it is a data structure are different questions: programs are a kind of data structure.
Makes sense of the intuition that programs, from some point of view, are just more data structures.
But doesn’t validate the thought-to-be-conclusion, that therefore the declarative/procedural controversy is vacuous.
What is normally called an interpreter (i.e., a processor for a given language) is really a processor within a processor:
the internal processor within a serial internal process.
[draw picture]
Similarly, a "programming language" (i.e., a computational architecture) is (a set of possible arrangements of) an internal structural field and a behaviourally specified processor that processes arrangements of that field.
Note that all of these definitions are restricted to serial processes: would need a different set of notions for concurrent processes.
E. Input/Output, Notation, and Communication
Tell standard story about θ, internalisation, etc.: programs aren’t run when they are internalized, etc.
Nothing new, but review.
Talk about how much work could be done at "internalization" time (cf. problem set #2); indexicals, etc.
No need to have communication itself represented within the field (although that too is possible, of course, in a pseudo-reflective way).
Note ambiguity with respect to Turing machines, as to whether the tape is communication, or internal field
Really it plays both roles, for simplicity but at the expense of clarity.
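The internalization story above, that notation is read into internal structures without being run, can be sketched as a toy reader for s-expression-like notation. The function name and details are illustrative; this is not 3-LISP’s actual reader.

```python
# A minimal sketch of internalization: external notation (a string) is
# turned into internal structures, without being run. Reading a program
# is not processing it; internalization involves only one processor's field.

def internalize(notation):
    """Read one s-expression from a string into nested lists and atoms."""
    tokens = notation.replace('(', ' ( ').replace(')', ' ) ').split()
    def read(i):
        if tokens[i] == '(':
            lst, i = [], i + 1
            while tokens[i] != ')':
                item, i = read(i)
                lst.append(item)
            return lst, i + 1
        tok = tokens[i]
        return (int(tok) if tok.lstrip('-').isdigit() else tok), i + 1
    structure, _ = read(0)
    return structure

structure = internalize("(+ 1 (* 2 3))")
# structure == ['+', 1, ['*', 2, 3]]: an internal arrangement, not yet processed.
```

Note that indexical or context-dependent work (cf. problem set #2) could in principle be done at this stage, before any processing happens.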
F. Compilation
Basically a notion of translation:
like translating a recipe for chocolate truffles from English to French.
a transformation or translation of (an arrangement of) a structural field S1 to another (arrangement of) a structural field S2, so that the surface of the process R1 that would result from the processing of S1 by its processor P1 would be equivalent (modulo some metric) to the process R2 that would result from the processing of S2 by its processor P2.
i.e., a directed relationship, from a source to a target.
Some facts about compilation:
1. Defined over static structures, even though the constraints that it must satisfy are active, behavioural constraints.
2. Defined relative to two (presumably different) serial processes.
3. Mandated only to ensure surface-to-surface equivalence.
4. Our definition admits compilations of arbitrary structural fields, although in fact it is usually defined only over programs.
5. Completely different relationship from internalization: the former is between fields, and involves two processors; the latter is between external notation and internal structures, and involves only one.
6. Has nothing to do with increased efficiency, or inscrutability (like playing a piano).
That is why one compiles, typically, but not essential to the notion itself.
7. Has nothing to do with theories or descriptions of computational processes.
Can extend the notion, clearly, to apply to external notations:
a relationship between two notations N1 and N2 just in case the appropriate internalization of N1 is a compilation of the appropriate internalization of N2.
Also: define a compiler to be a process that effects a compilation, etc.
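The definition above, a field-to-field translation mandated only to preserve surface equivalence, can be sketched in miniature: a tree-structured source field S1 with a recursive processor P1, a stack-machine target field S2 with processor P2, and a compiler between them. All names and the expression language are illustrative.

```python
# A toy illustration of compilation as a directed, field-to-field translation:
# S1 is an expression tree processed by P1 (a recursive evaluator);
# S2 is a list of stack-machine instructions processed by P2.
# The compilation must ensure only that the two surfaces (results) agree.

def p1(expr):                      # processor P1 over field S1
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = p1(left), p1(right)
    return a + b if op == '+' else a * b

def compile_expr(expr):            # the compilation: S1 -> S2 (static structures)
    if isinstance(expr, int):
        return [('push', expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op,)]

def p2(code):                      # processor P2 over field S2
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == '+' else a * b)
    return stack[0]

s1 = ('+', 2, ('*', 3, 4))         # a source field arrangement
s2 = compile_expr(s1)              # its compilation
assert p1(s1) == p2(s2) == 14     # surface-to-surface equivalence
```

Note how the sketch exhibits facts 1, 2, and 3 above: the translation itself is defined over static structures, relates two serial processes, and is constrained only by agreement of surfaces.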
Useful morals to remember:
Compiled programs are often more "inaccessible" than non-compiled ones; gratuitous fact.
To assume that anything about the human mind involves a compilation would mean that we have two mentaleses, not one.
It is sometimes said (Fodor) that a program has to be compiled into machine language in order to run: false. Compilers don’t get you food; they merely translate the recipe into another language. If you have a foreign-language cook, that may be a step, but it isn’t dessert.
G. Implementation and Realisation
Example from CM:
Asked to choreograph the half-time show for the next Stanford football game. Tired of simplistic initials and slogans, devise a set of intricate marching orders for each musician, so that, when viewed from high up in the stands, it turns out that the patterns on the field look like a full reduction machine for the λ-calculus (hope to have lots of musicians). Band members go charging around the arena in such a way that, completely unbeknownst to them, it "implements" a whole set of α- and β-reductions for a long proof of why our team will win.
Semantics doesn’t cross implementation boundaries, as we said last time.
Question is what does an implementation have to honor?
Full information, in some sense.
Two notions:
To implement a process or behaviour merely means to construct a structural field arrangement S for some processor R so that the surface of the process that results from the processing of S by R yields the desired behaviour
for example: implement a check-book reviewer, or a sorting algorithm.
To implement a machine or architecture (programming language), construct a serial process P consisting of the structural field and processor in question.
Total mappings, no type-type reductions; token-token mappings aren’t even consistent across time.
Moral: computer science is a "special science" (on Fodor and Block’s notion of that term) with respect to implementation.
A physical device D realises process P just in case the behaviour of D can be understood as the surface of P.
This is completely orthogonal to the notion of implementing one process in another: cf. the example of many machine languages of Gordon Bell.
H. A Turing Machine Review
[See CM section 3.8, pp. 59-61.]
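The controller-plus-tape picture of section C, with the controller as internal processor and the tape as structural field, can be sketched as a minimal Turing machine. The transition table (a unary incrementer) is an illustrative example, not taken from CM.

```python
# A minimal Turing machine in the serial mould of section C: the controller
# (a finite transition table) is the internal processor; the tape is the
# structural field, and also, ambiguously, the locus of input/output.

def run_tm(table, tape, state='start', halt='halt', pos=0, steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; blank cells read as '_'
    for _ in range(steps):
        if state == halt:
            break
        symbol = tape.get(pos, '_')
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# Controller table: scan right over 1s, append one more 1, then halt.
increment = {
    ('start', '1'): ('1', 'R', 'start'),
    ('start', '_'): ('1', 'R', 'halt'),
}
result = run_tm(increment, '111')
# result == '1111'
```

The sketch also makes the section E point concrete: it is a matter of stipulation whether the final tape contents count as communication (output) or as internal field state.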
I. Four Concluding Morals for Cognitive Science:
1. Internal interpretability:
Computational processes are those where the internal structures and processes are interpretable (semantic), not the surface. If it weren’t so, then the human mind would be computational tautologically.
Consequences: I will argue that the controller of a Turing machine is not a computer. So be it. Problem is that the controller is the processor: if we interpret the tape that it deals with, then the whole Turing machine is a computer. But the processor, as usual, is behaviourally (superficially) specified.
2. Strong claims don’t cross implementation boundaries
This is why implementation is so important: it gives you too much freedom.
3. A Process is not its Processor
The suggestion that when I think about trees I inspect my mental representation of the concept "tree" is false, crazy, and a category error.
The plausibility of Searle’s Chinese room example hinges exactly on his assuming that he is his own processor. Confusions of this sort are legion, within and without AI.
4. Internalization, Compilation, Processing, (Semantic) Interpretation, and Realisation: Five Different Concepts.
Confusing them will clearly undermine any argument.