Today’s Computers, Intelligent Machines and Our Future
Hans Moravec, Stanford University
July 21, 1976 (this version 1978)

Introduction:

The unprecedented opportunities for experiments in complexity presented by the first modern computers in the late 1940s raised hopes in early computer scientists (e.g., John von Neumann and Alan Turing) that the ability to think, our greatest asset in our dealings with the world, might soon be understood well enough to be duplicated. Success in such an endeavor would extend mankind’s mind in the same way that the development of energy machinery extended his muscles.
In the thirty years since then computers have become vastly more capable, but the goal of human performance in most areas seems as elusive as ever, in spite of a great deal of effort. The last ten years, in particular, have seen thousands of person-years devoted directly to the problem, a field referred to as Artificial Intelligence, or AI. Attempts have been made to develop computer programs which do mathematics, computer programming and common sense reasoning, which understand natural languages, interpret scenes seen through cameras and spoken language heard through microphones, and play games humans find challenging.
There has been some progress. Samuel’s checker program can occasionally beat checker champions. Chess programs regularly play at a good amateur level, and in March 1977 a chess program from Northwestern University, running on a CDC Cyber-176 (which is about 20 times as fast as previous computers used to play chess), won the Minnesota Open Championship against a slate of class A and expert players. A ten-year effort at MIT has produced a system, Mathlab, capable of doing symbolic algebra, trigonometry and calculus operations better in many ways than most humans experienced in those fields.
Programs exist which can understand English sentences with restricted grammar and vocabulary, given the letter sequence, or interpret spoken commands from hundred word vocabularies. Some can do very simple visual inspection tasks, such as deciding whether or not a screw is at the end of a shaft. The most difficult tasks to automate, for which computer performance to date has been most disappointing, are those that humans do most naturally, such as seeing, hearing and common sense reasoning. A major reason for the difficulty has become very clear to me in the course of my work on computer vision.
It is simply that the machines with which we are working are still a hundred thousand to a million times too slow to match the performance of human nervous systems in those functions for which humans are specially wired. This enormous discrepancy is distorting our work, creating problems where there are none, making others impossibly difficult, and generally causing effort to be misdirected. In the early days of AI the thought that existing machines might be much too small was widespread, but people hoped that clever mathematics and advancing computer technology could soon make up the difference.
The idea that available computing power might still be vastly inadequate has since been swept under the rug. This is due partly to wishful thinking, partly to a feeling that there was nothing to be done about it anyway, and partly to a fear that voicing such an opinion could cause AI to be considered impractical, resulting in reduced funding. This attitude has had some bad effects, one of them being that AI research has been centered on computers less powerful than the task absolutely requires. The first section of this essay discusses natural intelligence.
It notes two major branches of the animal kingdom in which intelligence evolved independently, and suggests that it is easier to construct than is sometimes assumed. The second part compares the information processing ability of present computers with intelligent nervous systems. The factor of one million is derived in two different ways. Section three examines the development of electronics, and concludes that the state of the art can provide more power than is now available, and that the one million gap could be closed in ten years.
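The ten-year projection is, at bottom, doubling arithmetic: a factor of one million is about twenty doublings. A minimal sketch follows; the 10^6 gap is from the text, while the doubling intervals (six months to one year for hardware price-performance) are illustrative assumptions, not figures stated in this excerpt:

```python
import math

gap = 1_000_000  # speed shortfall relative to human nervous systems (from the text)
doublings = math.ceil(math.log2(gap))  # ~20 doublings close a 10^6 gap

# Illustrative assumption: computing power per dollar doubles every
# six months to a year.
years_optimistic = doublings * 0.5   # 6-month doubling -> about 10 years
years_conservative = doublings * 1.0  # 12-month doubling -> about 20 years

print(doublings, years_optimistic, years_conservative)
```

Under the faster doubling assumption the gap closes in roughly ten years, matching the essay’s estimate; a slower doubling rate stretches this to twenty.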
Part four introduces some hardware and software aspects of a system which would be able to make use of the advancing technology, providing a means for achieving human equivalence, perhaps by the next decade. Part five considers the implications of the emergence of intelligent machines, and concludes that they are the final step in a revolution in the nature of life. Classical evolution based on DNA, random mutations and natural selection may be completely replaced by the much faster process of intelligence mediated cultural and technological evolution.
Section 1: The Natural History of Intelligence

Product lines:

Natural evolution has produced a continuum of complexities of behavior, from the mechanical simplicity of viruses to the magic of mammals. In the higher animals most of the complexity resides in the nervous system. Evolution of the brain began in early multi-celled animals a billion years ago with the development of cells capable of transmitting electrochemical signals. Because neurons are more localized than hormones, they allow a greater variety of signals in a given volume.
They also provide evolution with a more uniform medium for experiments in complexity. The advantages of implementing behavioral complexity in neural nets seem to have been overwhelming, since all modern animals more than a few cells in size have them [animal refs.]. Two major branches of the animal kingdom, vertebrates and mollusks, contain species which can be considered intelligent. Both stem from one of the earliest multi-celled organisms, an animal something like a hydra, made of a double layer of cells and possessing a primitive nerve net.