Decidability and the Entscheidungsproblem
In 1936 Turing invented a mathematical model of computation, known today as the Turing machine. He intended it as a representation of human computation, and in particular as a vehicle for refuting a central part of David Hilbert’s early 20th-century programme to mechanize mathematics. By a nice irony it came to define what is achievable by non-human computers and has become deeply embedded in modern computer science. A simple example is enough to convey the essentials of a Turing machine. We then describe the background to Hilbert’s programme and Turing’s challenge—and explain how Turing’s response to Hilbert resolves a host of related problems in mathematics and logic.

If I had to portray, in less than 30 seconds, what Alan Turing achieved in 1936, it seems to me that drawing the picture shown in Fig. 37.1 would be a reasonable thing to do. That this might be so is a testament to the quite extraordinary merging of the concrete and the abstract in Turing’s 1936 paper on computability. It is regarded by, I suppose, a large majority of mathematical scientists as his greatest work.

The details of our picture are not especially important. As it happens, it is a machine for deciding which whole numbers, written in binary form, are multiples of 3. It works thus: suppose the number is 105, whose binary representation is 1101001, because (1 × 2⁶) + (1 × 2⁵) + (0 × 2⁴) + (1 × 2³) + (0 × 2²) + (0 × 2¹) + (1 × 2⁰) = 64 + 32 + 8 + 1 = 105. We start at the node labelled A and use the binary digits to drive us from node to node. The first couple of 1s take us to node B and back to A again. The third digit, 0, loops us around at A. Now a 1 and a 0 take us across to node C; and the final 0 and 1 take us back via B to A once more.
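The state diagram of Fig. 37.1 is not reproduced here, but the behaviour the walkthrough describes can be sketched in a few lines of Python. This is an illustrative sketch, not anything from Turing’s paper: the transition table and the function name `divisible_by_3` are my own, chosen so that nodes A, B, and C track the remainder 0, 1, or 2 modulo 3, with each new binary digit b turning remainder r into (2r + b) mod 3.

```python
# Sketch of the three-node machine described in the text.
# A, B, C stand for remainders 0, 1, 2 modulo 3; reading a binary
# digit b sends remainder r to (2*r + b) mod 3.
TRANSITIONS = {
    ("A", "0"): "A", ("A", "1"): "B",
    ("B", "0"): "C", ("B", "1"): "A",
    ("C", "0"): "B", ("C", "1"): "C",
}

def divisible_by_3(binary: str) -> bool:
    """Return True if the binary numeral is a multiple of 3."""
    state = "A"                       # start at node A (remainder 0)
    for digit in binary:
        state = TRANSITIONS[(state, digit)]
    return state == "A"               # a multiple of 3 ends back at A
```

Tracing 1101001 through this table reproduces the route in the text: the first two 1s go A → B → A, the 0 loops at A, the next 1 and 0 go via B across to C, and the final 0 and 1 return via B to A, so 105 is accepted.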