Introduction
The charter of the Intelligent Systems Laboratory is to create new kinds of intelligent systems. VLSI-based systems are becoming very powerful, and VLSI enables forms of systems that are not possible with existing hardware. These opportunities take several forms: processors that support Interlisp, special-purpose chips for tasks like language parsing, and highly parallel machines supporting AI tasks like belief revision. VLSI has matured to the point where it is available as a tool for researchers with software backgrounds. The ability to create VLSI-based systems may now be a requirement to lead in the AI field.
I am proposing a course of research based on creating VLSI-based intelligent systems. This research subsumes my personal directions in the short term and, potentially, the directions of a small group of researchers in the longer term.
Background
The complexity of VLSI chips has reached a threshold where interesting systems can be built. A few years ago, when chip complexity was limited to 16-bit microprocessors, enough circuitry to handle the needs of special-purpose systems could not fit onto a chip. That has changed: processes with 2-micron features can now fabricate 256K-bit RAMs and 32-bit microprocessors. VLSI has reached the point where the limits are set not by the technology but by our ability to conceive of uses for the silicon.
Building systems in VLSI may now be easier than in TTL. Placing a 2-input NAND gate in a VLSI design is faster than plugging a TTL DIP into its socket. There are, of course, other relevant costs to consider, but overall the amount of effort is similar. Likewise, the inherent performance of the two technologies is equivalent.
Programming is a tool for someone who is building systems. Early in the development of computers, programming was performed by professional programmers at the request of the application area expert. Over time, this changed to permit the person who understands the application area to directly do the programming. The ability to build VLSI-based systems has moved from the hands of the professional engineer to the system architect.
An example of this is Jim Clark of Silicon Graphics, Inc. Jim has an excellent understanding of computer graphics but had no background in VLSI. Through the available courses, he learned to design VLSI without becoming a professional engineer. He developed a novel set of chips that enable systems that were not possible previously. This set of chips could not have been designed by a professional engineer, who would not have had the graphics background. Jim's ability to make tradeoffs in both fields allowed the creation of the Geometry Engine system.
The ability to build VLSI-based systems may be required for us to stay competitive in the AI field. Not having this capability would exclude us from experimenting with state-of-the-art systems. One example is the speech understanding research being done by Richard Lyon at Fairchild. He has built special-purpose hardware to emulate the cochlea of the ear. Large-scale experiments with his ear models are only possible on the custom hardware. He will be able to perform experiments to understand the algorithms of the ear which we won't be able to do.
A similar example exists in the logic programming field, where custom hardware for high-performance inference engines is being built in the United States and Japan. If these machines succeed and the researchers who have access to them gain a 1000-fold performance increase, it will be impossible for us to keep at the state of the art.
VLSI-Based Opportunities
I envision opportunities in three classes of systems. These are VLSI-based Lisp machines, special purpose chips, and highly parallel architectures.
Evolving series of Lisp processors
The hardware supporting Interlisp at Xerox was not designed around Lisp. There have always been serious mismatches between the hardware and the requirements of the Lisp language. Major mismatches include the 16-bit word size, inapplicable instruction buffers, unusable hardware stacks, etc. These problems have made Lisp run slower than the machine's native language. The hardware features required to fix these problems are not complex; they are just different from those required for Mesa. Similar mismatches exist between Interlisp and the commercial microprocessors.
By using VLSI, a machine can be created that is well suited to Interlisp. It should be possible to create machines that equal or improve on existing Lisp processors at a much lower cost and smaller size.
This would be an evolving line of processors. To gain higher performance, a new processor would be needed. However, the peripherals (display, Ethernet, and disk controllers) and the memory system would remain the same. Many of the internal pieces of the processor chip could be reused. The experience gained would allow optimized design decisions in later versions.
The initial version would be a low-cost, medium-performance machine. It would be similar in size and cost to an IBM XT. The market would be a low-cost delivery vehicle for the many companies building commercial AI systems. The machine would require a minimum of software changes in the Lisp system.
With more design effort, a more complex processor can be built that is several times faster than a Dorado running Interlisp.
As process technology continues to permit more circuitry on a chip, still better Lisp processors can be created.
Special hardware
VLSI systems will start to include special-purpose devices beyond processors. Silicon Graphics used custom chips to provide graphics capabilities that could not be provided any other way. I believe that our laboratory has similar applications in which special-purpose chips provide an enabling condition for radically new tasks.
A few possible domains include chips to aid speech research, chips to aid natural language recognition, and chips to facilitate symbolic communication.
Highly parallel machines
The ability of VLSI electronics to scale over a long period has allowed a tremendous increase in computing power. The limits to VLSI scaling are approaching, however. To further increase our computing power, parallel computer architectures are needed. The Dragon project is investigating small-scale parallelism. I propose that we investigate large-scale parallelism.
Large-scale parallelism means that on the order of 10,000 processors form a single system. The granularity of each processor would be much smaller than that of current microprocessors. The methods to program or use such a machine still need to be developed. Our lab has several projects that could drive the development of programming methods. These include belief revision systems, constraint-based systems, logic programming, and reflective languages.
Plan
Pursuing all these possibilities would require several people. My current plan is to initially work on the low-cost Lisp Processor. Most of my effort will go into building the hardware for the initial prototype. An effort to modify the software will need to be started by either AISBU or other people in ISL. After the prototype is finished, the hardware would be handed off to AISBU for productization.
The next project would be either a higher-performance version or an effort to build a highly parallel machine. This would be determined at the time based on the needs of the lab and the available resources.