Connectionism: computing with neurons
Modern ‘connectionists’ are exploring the idea of using artificial neurons (artificial brain cells) to compute. Many see connectionist research as the route not only to artificial intelligence (AI) but also to achieving a deep understanding of how the human brain works. It is less well known than it should be that Turing was the first pioneer of connectionism.

Digital computers are superb number crunchers. Ask them to predict a rocket’s trajectory or calculate the financial figures for a large multinational corporation and they can churn out the answers in seconds. But seemingly simple actions that people routinely perform, such as recognizing a face or reading handwriting, have been devilishly tricky to program. Perhaps the networks of neurons that make up a brain have a natural facility for these and other tasks that standard computers simply lack (Fig. 29.1).

Scientists have therefore been investigating computers modelled more closely on the biological brain. Connectionism is the science of computing with networks of artificial neurons. Currently researchers usually simulate the neurons and their interconnections within an ordinary digital computer, just as engineers create virtual models of aircraft wings and skyscrapers. A training algorithm that runs on the computer adjusts the connections between the neurons, honing the network into a special-purpose machine dedicated to performing some particular function, such as forecasting international currency markets.

In a famous demonstration of the potential of connectionism in the 1980s, James McClelland and David Rumelhart trained a network of 920 neurons to form the past tenses of English verbs. Verbs such as ‘come’, ‘look’, and ‘sleep’ were presented (suitably encoded) to the layer of input neurons.
The automatic training system noted the difference between the actual response at the output neurons and the desired response (such as ‘came’) and then mechanically adjusted the connections throughout the network in such a way as to give the network a slight push in the direction of the correct response. About 400 different verbs were presented to the network one by one, and after each presentation the network’s connections were adjusted. By repeating this whole procedure approximately 200 times, the connections were honed to meet the needs of all the verbs in the training set. The network’s training was now complete, and without further intervention it could form the past tenses of all the verbs in the training set.
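The training procedure described above — present an encoded input, compare the actual response with the desired one, and give the connections a slight push toward the correct answer, repeating over the whole training set many times — can be sketched in miniature. The toy feature vectors, network sizes, and the simple delta rule below are illustrative stand-ins: the real McClelland–Rumelhart model used a far larger network and a much richer encoding of the verbs.

```python
# A minimal sketch of error-driven training with threshold neurons.
# The "verbs" here are tiny made-up binary feature vectors standing in
# for the encoded verb stems; the targets stand in for encoded past
# tenses. None of these encodings come from the original model.
training_set = [
    ([1, 0, 0, 1], [0, 1, 0, 1]),
    ([0, 1, 1, 0], [1, 0, 1, 0]),
    ([1, 1, 0, 0], [0, 0, 1, 1]),
]

n_in, n_out = 4, 4
# One adjustable connection per input-output pair, plus a bias
# (threshold) for each output neuron.
weights = [[0.0] * n_in for _ in range(n_out)]
bias = [0.0] * n_out
rate = 0.5  # size of the "slight push" toward the correct response

def forward(x):
    # Each output neuron fires (1) if its weighted input sum exceeds 0.
    return [1 if sum(w * xi for w, xi in zip(weights[j], x)) + bias[j] > 0
            else 0
            for j in range(n_out)]

# Present every item, nudge the connections after each presentation,
# and repeat the whole pass many times, mirroring the ~200 passes
# over ~400 verbs described in the text.
for _ in range(200):
    for x, target in training_set:
        out = forward(x)
        for j in range(n_out):
            err = target[j] - out[j]  # desired minus actual response
            for i in range(n_in):
                weights[j][i] += rate * err * x[i]
            bias[j] += rate * err

# After training, the network reproduces every target in its
# training set without further intervention.
for x, target in training_set:
    assert forward(x) == target
```

Because each presentation moves the connections only slightly, no single verb dominates; it is the accumulation of many small adjustments over repeated passes that hones the network to handle the whole training set at once.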