Turing and the paranormal

Author(s):  
David Leavitt

Of the nine arguments against the validity of the imitation game that Alan Turing anticipated and refuted in advance in his ‘Computing machinery and intelligence’, the most peculiar is probably the last, ‘The argument from extra-sensory perception’. So out of step is this argument with the rest of the paper that most writers on Turing (myself included) have tended to ignore it or gloss over it, while some editions omit it altogether.1 An investigation of the parapsychological research done in the years leading up to Turing’s breakthrough paper, however, provides some context for the argument’s inclusion, as well as some surprising insights into Turing’s mind.

Argument 9 begins with a statement that to many of us today will seem remarkable. Turing writes: ‘… I assume that the reader is familiar with the idea of extra-sensory perception and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition, and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming….’

To what ‘statistical evidence’ is Turing referring? In all likelihood it is the results of some experiments carried out in the early 1940s by S. G. Soal (1899–1975), a lecturer in mathematics at Queen Mary College, University of London, and a member of the London-based Society for Psychical Research (SPR). To give some background, the SPR had been founded in 1882 by Henry Sidgwick, Edmund Gurney, and F. W. H. Myers—all graduates of Trinity College, Cambridge—for the express purpose of investigating ‘that large body of debatable phenomena designated by such terms as mesmeric, psychical and spiritualistic . . . in the same spirit of exact and unimpassioned enquiry which has enabled science to solve so many problems, once no less obscure nor less hotly debated’. Although the membership of the SPR included numerous academics and scientists—most notably William James, Sir William Crookes, and Lord Rayleigh, a Nobel laureate in physics—it had no academic affiliation. Indeed, in the view of their detractors, the ‘psychists’, as they were known, occupied the same fringe as the mediums and mind-readers whose claims the SPR sought to verify—or disclaim.
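The ‘overwhelming’ statistical evidence Turing had in mind comes from forced-choice card-guessing of the kind Soal ran, where each guess has roughly a 1-in-5 chance of being right by chance alone. The sketch below, using purely hypothetical figures rather than Soal’s actual data, shows the kind of calculation behind such claims; a large z-score quantifies departure from chance, but it says nothing about the soundness of the experimental controls.

```python
import math

def above_chance_z(hits: int, trials: int, p_chance: float = 0.2) -> float:
    """Normal-approximation z-score for scoring `hits` out of `trials`
    on a forced-choice guessing task with chance probability `p_chance`."""
    mean = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - mean) / sd

# Hypothetical figures for illustration only -- not Soal's actual data.
z = above_chance_z(hits=1500, trials=6000)
print(f"z = {z:.1f}")  # about 9.7; a departure this large is what 'overwhelming' means statistically
```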

2022
pp. 165-182
Author(s):  
Emma Yann Zhang

With advances in HCI and AI, and the increasing prevalence of commercial social robots and chatbots, humans are communicating with computer interfaces for various applications in a wide range of settings. Kissenger is designed to bring HCI to the masses. To investigate the role of robotic kissing in HCI, the authors conducted a modified version of the imitation game described by Alan Turing, extended to include the use of the Kissenger kissing machine. Results show that robotic kissing has no effect on the winning rates of the male and female players during human-human communication, but that it increases the winning rate of the female player when a chatbot is involved in the game.


Author(s):  
Tyler J. Renshaw
Nathan A. Sonnenfeld
Matthew D. Meyers

Alan Turing developed the imitation game – the Turing Test – in which an interrogator is tasked with discriminating between and identifying two subjects by asking a series of questions; the challenge to the interrogator is to identify those subjects correctly on the basis of their responses. Applying this concept to the discrimination of reality from virtual reality is essential as simulation technology progresses toward a virtual era, in which we experience equal or greater presence in virtuality than in reality. It is important to explore the conceptual and theoretical underpinnings of the Turing Test in order to avoid possible issues when adapting the test for virtual reality. This requires an understanding of how users judge virtual and real environments, and how these environments influence their judgement. Turing-type tests, the constructs of reality judgement and presence, and measurement methods for each are explored. Following this brief review, the researchers contribute a theoretical foundation for future development of a Turing-type test for virtual reality, based on the universal experience of the mundane.


Author(s):  
Gary Hatfield

The perception of space was a central topic in the philosophy, psychology, and sensory physiology of the nineteenth century. William James engaged all three of these approaches to spatial perception. On the prominent issue of nativism versus empirism, he supported nativism, holding that space is innately given in sensory perception. This chapter focuses on James’s discussions of the physiology and psychology of spatial perception in his Principles of Psychology. It first examines the historical context for James’s work, guided by (and commenting on) his own account of that history. Included here are his arguments for nativism. It then examines central aspects of his theory of spatial sensation, perception, and conception. Finally, it touches on the reception of his nativism, his phenomenological holism, his characterization of perception as involving active processes of discernment and construction, and his conception of perceiving organisms as environmentally embedded.


2020
Vol 30 (4)
pp. 589-615
Author(s):  
Matthew Crosby

In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, and unpredictable, and it must lead to actionable research. The Imitation Game is only partially successful in this regard, and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing a top-down approach for building thinking machines. I argue that to fix shortcomings with modern AI systems a nonverbal operationalisation is required. This is provided by the recent Animal-AI Testbed, which translates animal cognition tests for AI and provides a bottom-up research pathway for building thinking machines that create predictive models of their environment from sensory input.
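As a purely illustrative sketch of the bottom-up idea (an agent that builds a predictive model of its environment from raw sensory observations), the toy Python below learns a transition table in a stand-in one-dimensional world. It is not the Animal-AI API; the environment and agent names are hypothetical.

```python
import random

class ToyWorld:
    """Toy 1-D stand-in for a sensory environment (not the Animal-AI API):
    the agent observes its position and can move left (-1) or right (+1)."""
    def __init__(self, size: int = 10):
        self.size = size
        self.pos = size // 2

    def step(self, action: int) -> int:
        self.pos = max(0, min(self.size - 1, self.pos + action))
        return self.pos  # the "sensory" observation

class PredictiveAgent:
    """Builds a minimal predictive model bottom-up from experience:
    a table mapping (observation, action) -> next observation."""
    def __init__(self):
        self.model = {}

    def act(self) -> int:
        return random.choice([-1, 1])  # explore at random

    def update(self, obs: int, action: int, next_obs: int) -> None:
        self.model[(obs, action)] = next_obs

    def predict(self, obs: int, action: int):
        return self.model.get((obs, action))  # None if never experienced

env, agent = ToyWorld(), PredictiveAgent()
obs = env.pos
for _ in range(500):                  # explore and learn the world's dynamics
    action = agent.act()
    next_obs = env.step(action)
    agent.update(obs, action, next_obs)
    obs = next_obs

print(agent.predict(5, 1))            # learned prediction for "at 5, move right" (6 once seen)
```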


IdeBahasa
2021
Vol 3 (2)
pp. 81-92
Author(s):  
Julius
Ambalegin Ambalegin

This research aims to find out the types of negative politeness strategies expressed by the main character in the movie titled The Imitation Game. This research is categorised as descriptive qualitative research. The data of the research were taken from utterances identified as negative politeness strategies by the main character “Alan Turing” and analysed with the theory proposed by Brown and Levinson (1988). Data were collected using the observation and non-participatory method. Additionally, to analyse the data, the pragmatic identity method was used. The results discovered in this research are: 5 instances of being conventionally indirect, 16 of question and hedge, 1 of being pessimistic, 6 of giving deference, 4 of impersonalising the interlocutor, and 4 of stating the FTA as a general rule, totalling 36 indications of negative politeness strategies. Question and hedge was the most frequently used strategy, as the main character tends to assume unwillingness to comply on the part of the other characters in The Imitation Game movie.


2021
pp. 16-32
Author(s):  
Simone Natale

The relationship between AI and deception was initially explored by Alan Turing, who famously proposed in 1950 a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly called the Turing test, located the prospects of AI not just in improvements of hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. The Turing test, by placing humans at the center of its design as judges and as conversation agents alongside computers, created a space to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through the use of strategies and techniques exploiting humans’ liability to illusion and deception.


Author(s):  
Alan Turing

Together with ‘On Computable Numbers’, ‘Computing Machinery and Intelligence’ forms Turing’s best-known work. This elegant and sometimes amusing essay was originally published in 1950 in the leading philosophy journal Mind. Turing’s friend Robin Gandy (like Turing a mathematical logician) said that ‘Computing Machinery and Intelligence’ ‘. . . was intended not so much as a penetrating contribution to philosophy but as propaganda. Turing thought the time had come for philosophers and mathematicians and scientists to take seriously the fact that computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent; he sought to persuade people that this was so. He wrote this paper—unlike his mathematical papers—quickly and with enjoyment. I can remember him reading aloud to me some of the passages—always with a smile, sometimes with a giggle.’ The quality and originality of ‘Computing Machinery and Intelligence’ have earned it a place among the classics of philosophy of mind.

‘Computing Machinery and Intelligence’ contains Turing’s principal exposition of the famous ‘imitation game’ or Turing test. The test first appeared, in a restricted form, in the closing paragraphs of ‘Intelligent Machinery’ (Chapter 10). Chapters 13 and 14, dating from 1951 and 1952 respectively, contain further discussion and amplification; unpublished until 1999, this important additional material throws new light on how the Turing test is to be understood.

The imitation game involves three participants: a computer, a human interrogator, and a human ‘foil’. The interrogator attempts to determine, by asking questions of the other two participants, which of them is the computer. All communication is via keyboard and screen, or an equivalent arrangement (Turing suggested a teleprinter link). The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (So the computer might answer ‘No’ in response to ‘Are you a computer?’ and might follow a request to multiply one large number by another with a long pause and a plausibly incorrect answer.) The foil must help the interrogator to make a correct identification.
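The protocol just described can be made concrete with a small sketch. The Python below is a hypothetical, deliberately simplified simulation of the three-party setup; the witness functions and the naive interrogation strategy are illustrative inventions, not anything Turing specified.

```python
import random

def machine_witness(question: str) -> str:
    """Stand-in for the computer, which may lie to force a wrong identification."""
    if "are you a computer" in question.lower():
        return "No."
    return "Give me a moment..."  # e.g. a long pause before a plausibly wrong answer

def human_foil(question: str) -> str:
    """Stand-in for the human foil, who tries to help the interrogator."""
    return "I am the human; ask me something only a person would know."

def imitation_game(questions) -> bool:
    """One simplified run of the three-party game: each question goes to the two
    hidden witnesses over a text channel, and the interrogator must then say
    which label hides the machine. Returns True if the identification is correct."""
    labels = ["A", "B"]
    random.shuffle(labels)
    hidden = dict(zip(labels, [machine_witness, human_foil]))

    transcript = [(label, q, witness(q))
                  for q in questions
                  for label, witness in hidden.items()]

    # A deliberately naive interrogation strategy, just to close the loop:
    # suspect whichever witness flatly denies being a computer.
    guess = next((label for label, _, answer in transcript if answer == "No."),
                 random.choice(labels))
    machine_label = next(label for label, fn in hidden.items() if fn is machine_witness)
    return guess == machine_label

print(imitation_game(["Are you a computer?", "What is 7919 times 7907?"]))
```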


2010
Vol 1 (2)
pp. 12-37
Author(s):  
Jordi Vallverdú
Huma Shah
David Casacuberta

The Chatterbox Challenge is an annual web-based contest for artificial conversational entities (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing’s influential disquisition ‘Computing Machinery and Intelligence’. Loosely based on Turing’s viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine’s capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into emotion content in the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, have on the whole advanced little beyond Eliza in terms of expressing emotion in dialogue, more than half a century after Weizenbaum’s natural language understanding experiment. This may be a failure on the part of the academic AI community, which has ignored the Turing test as an engineering challenge.
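To make the ‘little beyond Eliza’ point concrete, the toy responder below (a hypothetical sketch in the spirit of Weizenbaum’s program, not its actual rule set) shows the surface pattern-matching at issue: the reply merely reflects the user’s wording back and involves no representation of emotion at all.

```python
import re

# A few Eliza-style rules: surface pattern -> canned, affect-free reflection.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def eliza_reply(utterance: str) -> str:
    """Pattern-match the input and reflect it back; no model of emotion involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(eliza_reply("I feel lonely tonight."))  # -> "Why do you feel lonely tonight?"
```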

