Abstract or Summary
- "Fifty years after the pioneering work of McCulloch and Pitts, the study of neural nets is alive and active. In this paper, I have discussed some of the work that is of current interest to me and my co-workers. I would, perhaps, be remiss if I failed to mention some of the current hype about neural nets. Can neural nets quickly solve NP-complete problems? No. A look at the proposed nets will show that the questions of whether the net will converge, and where it will converge to, are as difficult as the original NP-complete problem. This does not prevent a neural net from giving an approximate solution to a hard optimization problem, but no one has yet proven any approximation bounds. Hard problems are only hard in the worst case, so there may be many easy instances of a hard problem. Nothing prevents a neural net from solving these easy instances quickly. Can analog neural nets compute things not computable by a Turing machine? Yes. But any analog device with infinite precision has more computational power than a Turing machine, so a neural net with unlimited precision would be a very powerful device. In practice, however, devices are constructed with limited precision, and these limited-precision devices have no more power than a Turing machine. Can neural nets compute faster than other parallel models? No. Neural nets are in fact equivalent to the usual parallel models; the only difference arises if the neural net has infinite precision, which, as noted above, is highly unlikely. Does learning in neural nets make programming unnecessary? No. As we saw in the discussion of learning, learning rules must be devised, and it seems that different learning tasks will require different learning rules. Further, the kind of net to use for a particular task is an important decision. In our decoding example, some network topologies did not lead to good decoders, while others did.
Neural nets will not replace programmers, but will give programmers another paradigm in which to program. In spite of the hype, I believe that neural nets will be useful both as biological models and as programming paradigms. Finally, according to an often-told tale, there was a golden age of neural nets which suddenly ended in 1970. Depending on the version of the tale, the golden age ended because of the Vietnam war, Minsky and Papert's book on perceptrons, cuts in funding, or the rise of artificial intelligence. But I hope that the reader of this paper and the rest of this volume will see that the death of Warren McCulloch had a most profound effect on the field. We miss him as a brilliant scientist, as a warm human being, and as the greatest story-teller of our age."--Conclusion.