Heading:
LECTURE NOTES #13 LISP: LANGUAGE AND LITERATURE May 24, 1984
————————————————————————————————————————————
Lecture Notes #13 Semantics, Mind, and Failure
Filed as: [phylum]<3-lisp>course>notes>Lecture-13.notes
User.cm:
[phylum]<BrianSmith>system>user.classic
Last edited:
May 24, 1984 1:36 PM
————————————————————————————————————————————
A. Introductory Notes
Problem set #2; people who find it too easy should get in touch; code for some of the things mentioned at the end exists.
B. Semantic Assessment
Over the last few weeks, have dealt with several kinds of complexity, and introduced programming techniques for coping with them.
Have also suggested that these techniques can be generalized, and used in extraordinarily complex situations.
And that the process of building computer systems is largely one of design and construction. Software is the ultimate erector set.
From these claims, one might conclude that we could do anything; that there is no limit to the power or subtlety of computational process we could build.
But that is not so; will talk today about limits.
As usual, this will lead us into talk about semantics, modelling, etc.
Will therefore review that, in preparation.
And, in passing, we will examine how the sorts of semantics we have outlined for computational processes relate to AI’s computational claim on mind.
I.e., what do our semantic analyses lead us to think about the claim that the mind is a computational device?
But will then turn to limits and failures in reliability.
This will involve us in so-called specification languages, and will introduce notions of software verification.
It will also lead us into discussions of assumptions and intent, the very notions that computation and AI are supposed (according, for example, to Dennett) to help us understand.
C. Semantics and Modelling
Talked a couple of classes ago about relationships among designation, modelling, and abstraction, a complex of issues that you should not be misled into thinking we clearly understand.
Nonetheless, we were able to characterize a number of different approaches to dealing with information about an embedding world, as summarized in the following two diagrams:
A semantical diagram for a process with direct facilities for abstraction and abstract data types:
[diagram omitted]
In contrast, a semantical diagram for a process using an embedded language (of data structures or whatever) with its own separate semantic interpretation function:
[diagram omitted]
In discussion following that class, it became clear that there is another distinction that has to be made, with respect to the α (modelling) relationships, to make clear what I was trying to say.
Imagine, for example, the complex number case: we modelled complex numbers as two-element sequences of reals.
So we could say that <2.0, 3.4> modelled the complex number 2+3.4i.
But consider the geography program:
We modelled intersections as two-element sequences of locations on roads, where each location was in turn modelled by a road and a milepost.
Except, more literally, we modelled intersections as two-element sequences of models of locations on roads, where each location was in turn modelled by a model of a road and a model of a milepost.
Over this model we defined a delimited set of operations so that details about the model, like the length of a location-model, for example, were ruled out of court.
We defined, in other words, what would be called an abstract data type for intersections, another for locations on roads, etc., where the abstract data type abstracted away (hence its name) from gratuitous facts about what we might call the representational data type.
However, an abstract data type for a road or an instance of such a type is of course no more a road than the two-element mathematical sequence; it too is still a model.
Roads, on the other hand, aren’t models at all; they are quite real things, on which porcupines’ defenses don’t work well, meaning that they tend to get squished.
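For concreteness, here is a minimal sketch of the complex-number case in a 3-LISP style (the procedure names are illustrative, not from the problem sets, and the standard rail operations 1ST, REST, and LENGTH are assumed). The two-element rail is the representational data type; the delimited set of operations is the abstract one:

    ; Representational data type: a two-element rail of reals.
    (define MAKE-COMPLEX (lambda simple [re im] [re im]))

    ; Abstract data type: the only sanctioned questions and operations.
    (define REAL-PART (lambda simple [c] (1st c)))
    (define IMAG-PART (lambda simple [c] (1st (rest c))))
    (define COMPLEX-PLUS
      (lambda simple [c1 c2]
        (make-complex (+ (real-part c1) (real-part c2))
                      (+ (imag-part c1) (imag-part c2)))))

(LENGTH c) remains perfectly askable of the representation even though it is meaningless of the number; the abstract type rules it out of court by fiat of interface. But note: an instance of MAKE-COMPLEX is still a model, not a complex number, just as the road-model is not a road.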
So: complicate the diagram a little, splitting α into α1 and α2. Similarly, define Φ and Φ′, where Φ′ is the designation relationship into the "real" world (what other world is there?):
[diagram omitted]
And similarly:
[diagram omitted]
Given these observations, we will back off a little and look more carefully at the overall picture of computation we are building up.
I am concerned, in particular, that some distinctions between programs and processes are liable to be ignored if we stay at the foregoing level.
Basic framework in which we are working:
[diagram omitted]
Explanation:
Process R is the computational process that is one of our constant subject matters; that’s what this class is about.
World W is our other constant subject matter; the fact that R bears some relationship to W is what makes R computational.
at least according to our claim of what computation is!
Program P is (for now) roughly what you type into the EMACS buffer; a piece of language.
So the P−R relationship is something we have modelled with a combination of Θ and Ψ.
As well as the three primary ingredients, we introduce two models, MR and MW.
They aren’t quite things distinct from R and W; they are closer to ways of classifying or characterizing R and W.
But, you may object, any account of R or W will embody some way of looking at it. So why aren’t R and W models, too?
There is in the end a crucial difference (which will matter when we get to questions of reliability):
The point is that a model is related to what it models by theory, not by actuality; it is not the same thing as what it models.
The thing itself supervenes on some physical substrate, for example, whereas the model does not.
Process R, for example, supervenes on the transistors; if one of them breaks, the process will likely experience at least a twinge.
The model, however, may become false, but its own existence won’t be affected; it won’t feel it.
The model is not causally connected to the thing itself.
(even if it is a model of causality!)
The rains may wash away Highway 1, but they will not wash away your model of it.
Nor are MR and MW the same as each other; to say so would be to say that the process and the world have merged, which is wrong.
Remember that all these modelling relationships α will be splittable into α1 and α2, as described earlier.
So this is the basic picture.
Our primary concern is with the semantics of R, and particularly with the RW relationship.
I.e., with what processes mean, not with what programs mean.
In order to understand that, however, and in particular to understand it given just P (which, after all, is all we theorists are likely ever to see), we need to understand the other relationships diagrammed.
Various points to be made about computational processes R.
As implied by our discussion of models, the fact that we may classify a computational process in terms of real-world objects doesn’t mean that we can make that process out of real-world objects.
This is crucial.
It’s also OK, because you can’t compute anything other than abstract objects of various sorts (paradigmatically, set-theoretic, linguistic, etc.): things you can have "new ones of" pretty much for free.
What would it mean to "compute" a table, or the strike zone, or 55 degrees, or detente? Nothing.
In general, however, we will want there to be some sort of correlation between MR and MW, although it is not exactly clear what sort, nor is it the same from one application to another.
Perhaps they are built out of some of the same building blocks.
MR = MW just in case of simulation
Note particularly the internal temporal relationships: time or temporal flow in R versus time or temporal flow in W.
I.e., when MR = MW, then Φ′ is something like a simulacrum or correspondence.
Note, however, that if a process is about mathematical objects, then three things (MR, MW, and W itself) all collapse into one.
This is why mathematical objects are so poor for developing semantical intuitions.
Being about W isn’t all that is true of computational processes R; such processes also interact with the world, through sensors, effectors, and via human interaction.
Rather, aboutness is what distinguishes computers from other things that also participate in the world.
I.e., computers are just those objects whose behaviour bears Φ′.
Note the analogy with programs and processes: the former are not just about the latter; they also effect (engender) the latter.
I.e., the view of program as describing a computation doesn’t say how program engenders the computation.
One way to view programming language semantics is as the study of the P−R and P−MR relationships.
Operational semantics being primarily concerned with P−R;
Denotational semantics being primarily concerned with P−MR;
This is why operational and denotational semantics can be proved equivalent, for a given program or programming language.
Note that this is not what the Φ/Ψ distinction was trying to get at: it was trying to separate out the relationship between structures and behaviour within R, and the Φ′ relationship between R and W.
In other words, I am not disposed to call any of the relationships among P, R, and MR semantic.
D. The Computational Claim on Mind
In a few moments we will be able to use this picture to discuss reliability and failure. But first, look at the enterprise of AI in these terms.
We have assumed that R is computational (meaning that it bears Φ′).
Can clearly use R to "model" the weather, say, or traffic on Highway 101.
This sense of "model" is legit, even on our reconstruction, because R is a simulation, in the MR = MW sense.
If MR, which is a model of R, is the same as MW, which in turn is a model of W, then a model of R is a model of W, which probably licenses talk of R being a model of W directly.
Depending, of course, on exactly what you take modelling to require, something we haven’t taken up here.
However, from the fact that R can model the weather or traffic, it doesn’t follow that the weather, or traffic, is computational.
AI’s claim is not that computational processes can model the mind (since that would be vacuous: nothing is said about a phenomenon just because you can model it), but rather that the mind itself is computational.
I.e., suppose we add a mind B to our diagram.
Being traditionalists, we will locate it inside a head:
[diagram omitted]
The computational claim on mind certainly makes sense,
since B certainly bears Φ (thoughts are about the world).
In fact, could B not be computational?
Question is whether anything that bears Φ is computational. This takes us even further into the philosophy of computation than we are already.
my favourite subject, but not quite the subject matter of this class.
Leads, specifically, to questions of formality, about which I also have a great deal to say, but I won’t say anything here.
Except my conclusion: that the notion of "computational" that will reconstruct not only current practice but also the direction in which computer science is headed, including new work on VLSI circuits and so forth, is a semantical notion of computation, under which I suspect that the mind is computational.
On the other hand, I know of almost no one who agrees with this, so leave it for now.
What we can do, however, is to add a new relationship, β, between R and the mind B.
Presumably, if we have a mind B, we can have a model of mind MB.
AI’s claim is presumably not that R = B, but rather that MR = MB.
This, too, is a kind of modelling.
I.e., AI claim is that computational process R can simulate the mind, according to our previous notion of simulation.
Except that before, since it was MR = MW, the relationship Φ′ became the "simulation" relationship.
Now, since MR = MB, the relationship β is the "simulation" relationship, but there is a crucial difference:
The F" relationship R−W is presumably also supposed to simulate the F2" B−W relationship.
I.e., a relationship-to-relationship correspondence, not a correspondence between two objects or models.
a second-order relationship.
Nothing like this occurred in the case of simulating weather, or traffic.
Also, it would not occur if you were to construct a computational model of mind in the sense of modelling the neurons: in that case the particular W you were modelling would be the neurons of a brain; MR = MW (because it is simulation); and no β would enter the picture. Nor would Φ′2, the semanticity of the mind.
Nor, from any success in modelling the neurons, would (or at least should) you be disposed to conclude that the brain is computational.
And if this is not already complex enough, note that we have yet to talk about the use of language.
To model natural language understanding, a new entity is introduced into the diagram, some language L, along with two more relationships:
The relationship between L and W; and
The relationship between L and R.
But we won’t pursue this.
E. Reliability and Failure
So what does all of this have to do with reliability and failure?
Because they are semantic devices, computers have the property of being able to be wrong.
Because they are actual things, connected into the real world and capable of exhibiting actual behaviour, they can fail.
The combination leads to an extraordinarily subtle interplay of issues surrounding the notion of whether a computer is what we will call reliable.
It will become clear during the discussion, but should say at the outset, that this notion of reliability is considerably richer than simply whether the transistors work, or whether it comes to a complete halt (though it includes those things).
Won’t talk much about failures in underlying hardware, etc., but they are hard, not completely eliminable, and, in the end, very much like the sorts of failures we will talk about.
First, failures in reliability are real, as anyone who has experienced computers is fully aware.
Admit that this issue concerns me a great deal; not so much the actual failures that are experienced, but the theoretical equipment with which to understand such things.
Issues come up in disarmament, including Soviet automated launch-on-warning systems that are supposed to respond to Pershing II’s six-minute flight time to Moscow, and similar flight times from submarine-based missiles off the East Coast (the number of which was increased just yesterday, according to NPR).
But the basic issues arise across the board; won’t focus on any specific application here.
Second, failures in reliability are ineliminable, an inherent consequence of the way the world is: not just a question of better or worse designs, etc.
Proofs of correctness deal with only a very small part of the problem.
This ineliminability of error, which follows from the model of computation we have already sketched, is what I want to get across in the remaining part of the lecture.
Also, not saying people are any more reliable: this is not a comparison between the two types of entity, but an analysis of the inherent properties of computation itself.
Start: how are computers constructed? They are defined in terms of the model of the world MW.
As we have seen (geography, for example), one builds a computational process by formulating rules or other structures that represent, in terms of a model of the world, the situations that the computer will encounter, and by specifying a range of behaviours that it should take in response to them.
Then, when you run the program, the processor, with lightning speed and unparalleled accuracy, in a maddeningly literal-minded fashion, "executes" your program.
I.e., careful pre-planning in advance, and instantaneous response when it matters most.
However, the behaviour of the system depends entirely on the structure of the programs: on the rules and the ways in which they are put together.
Classical programs, of the 3-LISP sort, are not only rigid in terms of the basic set of rules, but rigid as well in the procedures defined over them.
AI systems rely on the same kinds of underlying rules, but put them together in more flexible ways.
So, the behaviour is limited to responses that can be formulated in terms of the formulated-in-advance model of the world MW.
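A hypothetical cartoon of this limitation, in the same 3-LISP style (the features, thresholds, and responses are all invented for illustration): the model of the world is baked into a finite set of clauses, so anything W does that the clauses don’t mention can only fall through to the catch-all.

    (define RESPOND
      (lambda simple [blip-speed blip-size]
        (cond [(> blip-speed 1000) 'alert]     ; "fast things are missiles"
              [(> blip-size 100)   'watch]     ; "big things are bombers"
              [$T                  'ignore]))) ; everything the model
                                               ; didn’t anticipate

A rising moon or a flock of geese (see the NORAD examples below) gets classified by whichever clause its measured features happen to satisfy; nothing in the program can notice that its parse of the situation is wrong.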
Even with respect to this model, there are problems with ensuring that the program does the "right" thing.
Limits of proof: all but the simplest programs are potentially intractable, in terms of our being able to prove properties of them.
Huge subject, which we won’t go into here, but various incompleteness results, the halting problem, and so forth, get in the way.
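To give the flavour, here is the classic obstruction, sketched in the same style (HALTS? is hypothetical; the handle 'DIAGONAL is 3-LISP’s way of mentioning a structure rather than using it). Suppose HALTS? decided, for any procedure-designator P and argument A, whether the designated procedure halts on A:

    (define DIAGONAL
      (lambda simple [p]
        (if (halts? p p)
            (diagonal p)    ; predicted to halt: loop forever instead
            'halted)))      ; predicted to loop: halt at once

(DIAGONAL 'DIAGONAL) would then halt just in case it doesn’t, so no such HALTS? can exist; general proofs about programs inherit limits of this kind.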
However, the state of the art is that only the simplest of programs (of a complexity not terribly much greater than the programs we have been examining in this class) can be "proven" correct (will get back to what this "correct" means in a moment).
Examples where, presumably, such things hadn’t been proved:
Niagara Falls in 1965: doing exactly what it was programmed to do, but probably not "correct", in some broader sense.
Fired on a Mexican fishing vessel 180° off of intent.
What is the "right thing", anyway?
This enters the domain of specification languages and program verification.
In order to "prove" that a program is correct, need to specify what it should do.
This requires another specification, in a formal language, of the behaviour that the process R is supposed to exhibit.
But wasn’t this exactly what program P was?
Not quite: program P had to satisfy two requirements:
It had to describe some computational process R, or a model of a computational process MR.
It had to lead, explicitly or implicitly, to the engendering of that process.
I.e., it had to say how R should come into existence, as well as what R should be.
A specification S need specify only the first of these.
For example, for SQUARE-ROOT, could simply say that if Y = (SQUARE-ROOT X), then Y*Y = X.
This wouldn’t do for the SQUARE-ROOT program itself, because it doesn’t implicitly encode an algorithm.
So specification languages are designed and used, in which the behaviour of the process R is described without the extra requirement of effective computability.
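A hypothetical sketch of the difference, again in the 3-LISP style, using integer square roots so that only integer arithmetic (plus AND and <=) is assumed; a real-number version would add a tolerance. The specification says only what counts as a correct answer; the program says how as well, and thereby both describes and engenders a process:

    ; Specification: relates input to acceptable output; no algorithm implied.
    (define MEETS-SQRT-SPEC
      (lambda simple [x y]
        (and (<= (* y y) x) (< x (* (+ y 1) (+ y 1))))))

    ; Program: a deliberately naive search intended to satisfy the spec.
    (define SQUARE-ROOT (lambda simple [x] (sqrt-from x 0)))
    (define SQRT-FROM
      (lambda simple [x y]
        (if (meets-sqrt-spec x y) y (sqrt-from x (+ y 1)))))

Proving SQUARE-ROOT correct would mean proving that the process it engenders always returns a Y satisfying MEETS-SQRT-SPEC; note that the specification is itself just more language, resting on the same model of the world MW.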
Proving a program correct means proving that the two specifications (S and P) are equivalent:
More specifically, that the process that P engenders will satisfy description S.
However, specification S could also be wrong. And it is certainly limited by the same model of the world MW on which P depends.
So, even when a program is proved correct, it can exhibit all kinds of failures.
Most notably, when the process encounters behaviour that it wasn’t designed for.
NORAD system, early on:
Radar reflections off the rising moon.
Flocks of geese.
These blend into the previous kinds of errors:
PARC mail system coming down a year ago, when a UNIX message had an address that was too long (greater than 80 characters).
Standard way around this is to allow human intervention.
I.e., cover other sorts of failure with human assessment.
Examples: threat assessment conferences
But then world W includes human behaviour, which means a much higher chance of inadequacy in the model of the world MW on which the program is based, partly because human behaviour is so hard to predict.
Wrong tape drives in command control center.
Happened again just a few months ago in Pennsylvania.
Three Mile Island.
How in fact is reliability achieved?
Heavily tested in situ.
Errors slowly decrease; sometimes one "fix" interacts with something else.
Acceptability, not perfection, is what is achieved.
What about testing it ahead of time (different from trying to prove that it is correct)?
Leads to the notion of simulation again: build simulators, embed the program within them.
Problem is that the simulation again relies on the adequacy of MW, among other things.
Examples: Club of Rome prediction of the demise of energy by the year 2000.
Soviet Peace Academy: models of American government and psychology.
In sum:
There are at least two kinds of failure:
doing the "wrong" thing with respect to that parse.
failing to "parse" the world correctly;
Latter stem from:
limits of the model:
from the fact that computational processes are designed in terms of that model
and from the fact that computational processes aren’t able to change the model.
Except for extremely simple or highly formalized domains, the complexity of the world transcends the adequacy of language to describe it.
This is one reason why it is so hard to specify behaviour exactly.
Imagine, for example, writing full specifications for a refrigerator:
What if you insert a toaster into it, or put in a chemical soup that reacts and gives off heat?
Or put the whole thing inside a huge microwave oven?
Or send it into space, where there is no air to carry away the heat?
And even in terms of an understood world, what it really is for a system to behave "correctly" is to do what we intend
or would have intended, if only we had thought about it.
or would intend after it is all over, with 20/20 hindsight.
or ought to have intended.
It is because correctness and reliability, in other words, are ultimately defined with respect to W, not with respect to MW, that they are unattainable ideals, beyond the reach of formal languages and processes.