The Turing test and the interface problem: a role for the imitation game in the methodology of cognitive science

PARADIGMI ◽  
2016 ◽  
pp. 129-148 ◽  
Author(s):  
Marcello Frixione
Author(s):  
Tyler J. Renshaw ◽  
Nathan A. Sonnenfeld ◽  
Matthew D. Meyers

Alan Turing developed the imitation game – the Turing Test – in which an interrogator is tasked with discriminating between two hidden subjects and, on the basis of their answers to a series of questions, correctly identifying each. Applying this concept to the discrimination of reality from virtual reality is essential as simulation technology progresses toward a virtual era, in which we experience equal or greater presence in virtuality than in reality. It is important to explore the conceptual and theoretical underpinnings of the Turing Test in order to avoid possible issues when adapting the test for virtual reality. This requires an understanding of how users judge virtual and real environments, and how these environments influence their judgement. Turing-type tests, the constructs of reality judgement and presence, and measurement methods for each are explored. Following this brief review, the researchers contribute a theoretical foundation for future development of a Turing-type test for virtual reality, based on the universal experience of the mundane.


2021 ◽  
pp. 16-32
Author(s):  
Simone Natale

The relationship between AI and deception was initially explored by Alan Turing, who famously proposed in 1950 a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly called the Turing test, located the prospects of AI not just in improvements of hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. The Turing test, by placing humans at the center of its design as judges and as conversation agents alongside computers, created a space to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through the use of strategies and techniques exploiting humans’ liability to illusion and deception.


Author(s):  
Alan Turing

Together with ‘On Computable Numbers’, ‘Computing Machinery and Intelligence’ forms Turing’s best-known work. This elegant and sometimes amusing essay was originally published in 1950 in the leading philosophy journal Mind. Turing’s friend Robin Gandy (like Turing a mathematical logician) said that ‘Computing Machinery and Intelligence’ ‘. . . was intended not so much as a penetrating contribution to philosophy but as propaganda. Turing thought the time had come for philosophers and mathematicians and scientists to take seriously the fact that computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent; he sought to persuade people that this was so. He wrote this paper—unlike his mathematical papers—quickly and with enjoyment. I can remember him reading aloud to me some of the passages—always with a smile, sometimes with a giggle.’ The quality and originality of ‘Computing Machinery and Intelligence’ have earned it a place among the classics of philosophy of mind.

‘Computing Machinery and Intelligence’ contains Turing’s principal exposition of the famous ‘imitation game’ or Turing test. The test first appeared, in a restricted form, in the closing paragraphs of ‘Intelligent Machinery’ (Chapter 10). Chapters 13 and 14, dating from 1951 and 1952 respectively, contain further discussion and amplification; unpublished until 1999, this important additional material throws new light on how the Turing test is to be understood.

The imitation game involves three participants: a computer, a human interrogator, and a human ‘foil’. The interrogator attempts to determine, by asking questions of the other two participants, which of them is the computer. All communication is via keyboard and screen, or an equivalent arrangement (Turing suggested a teleprinter link).
The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (So the computer might answer ‘No’ in response to ‘Are you a computer?’ and might follow a request to multiply one large number by another with a long pause and a plausibly incorrect answer.) The foil must help the interrogator to make a correct identification.
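The three-participant protocol described above can be sketched as a simple interaction loop. This is an illustrative toy, not a real test: the `Responder` class, the canned answers, and the `naive_judge` are all invented for this sketch.

```python
import random

class Responder:
    """One hidden participant: answers the interrogator's questions."""
    def __init__(self, is_computer, answers):
        self.is_computer = is_computer
        self.answers = answers  # question -> canned reply

    def reply(self, question):
        # The computer may answer deceptively; the foil answers honestly.
        return self.answers.get(question, "I'd rather not say.")

def imitation_game(interrogator, participants, questions):
    """The interrogator questions both hidden participants over a
    text-only channel, then names the one it believes is the computer."""
    transcripts = {label: [(q, p.reply(q)) for q in questions]
                   for label, p in participants.items()}
    guess = interrogator(transcripts)  # label of the suspected computer
    truth = next(label for label, p in participants.items() if p.is_computer)
    return guess == truth              # did the interrogator identify correctly?

# A computer that denies being a computer, as Turing allows it may.
computer = Responder(True,  {"Are you a computer?": "No."})
foil     = Responder(False, {"Are you a computer?": "No, I am not."})
participants = {"X": computer, "Y": foil}

# A judge with no strategy at all guesses at random between the labels.
naive_judge = lambda transcripts: random.choice(list(transcripts))
won = imitation_game(naive_judge, participants, ["Are you a computer?"])
```

A chance-level judge like `naive_judge` identifies the computer about half the time, which is exactly the baseline Turing's test measures a machine against.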


2010 ◽  
Vol 1 (2) ◽  
pp. 12-37 ◽  
Author(s):  
Jordi Vallverdú ◽  
Huma Shah ◽  
David Casacuberta

Chatterbox Challenge is an annual web-based contest for artificial conversational systems (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing’s influential disquisition ‘Computing Machinery and Intelligence’. Loosely based on Turing’s viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine’s capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into emotion content in the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum’s natural language understanding experiment, little further along than Eliza in expressing emotion in dialogue. This may reflect a failure on the part of the academic AI community to take up the Turing test as an engineering challenge.


2021 ◽  
Vol 12 ◽  
Author(s):  
Güler Arsal ◽  
Joel Suss ◽  
Paul Ward ◽  
Vivian Ta ◽  
Ryan Ringer ◽  
...  

The study of the sociology of scientific knowledge distinguishes between contributory and interactional experts. Contributory experts have practical expertise—they can “walk the walk.” Interactional experts have internalized the tacit components of expertise—they can “talk the talk” but are not able to reliably “walk the walk.” Interactional expertise permits effective communication between contributory experts and others (e.g., laypeople), which in turn facilitates working jointly toward shared goals. Interactional expertise is attained through long-term immersion in the expert community in question. To assess interactional expertise, researchers developed the imitation game—a variant of the Turing test—to test whether a person, or a particular group, possesses interactional expertise with respect to another group. The imitation game, which has been used mainly in sociology to study the social nature of knowledge, may also be a useful tool for researchers who focus on cognitive aspects of expertise.

In this paper, we introduce a modified version of the imitation game and apply it to examine interactional expertise in the context of blindness. Specifically, we examined blind and sighted individuals’ ability to imitate each other in a street-crossing scenario. In Phase I, blind and sighted individuals provided verbal reports of their thought processes associated with crossing a street—once while imitating the other group (i.e., as a pretender) and once responding genuinely (i.e., as a non-pretender). In Phase II, transcriptions of the reports were judged as either genuine or imitated responses by a different set of blind and sighted participants, who also provided the reasoning for their decisions. The judges comprised blind individuals, sighted orientation-and-mobility specialists, and sighted individuals with infrequent socialization with blind individuals. Decision data were analyzed using probit mixed models for signal-detection-theory indices. Reasoning data were analyzed using natural-language-processing (NLP) techniques.

The results revealed evidence that interactional expertise (i.e., relevant tacit knowledge) can be acquired by immersion in the group that possesses and produces the expert knowledge. The modified imitation game can be a useful research tool for measuring interactional expertise within a community of practice and evaluating practitioners’ understanding of true experts.
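The signal-detection analysis mentioned above can be illustrated with a minimal sketch. The study itself used probit mixed models; what follows is only the basic arithmetic of the indices (sensitivity d′ and bias c) on hypothetical judge counts invented for this example.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection indices d' (sensitivity) and c (bias)
    from a judge's decision counts, using a log-linear correction
    (adding 0.5 per cell) to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical judge: correctly labels 18 of 20 genuine reports,
# but accepts 6 of 20 imitated reports as genuine.
d, c = sdt_indices(hits=18, misses=2, false_alarms=6, correct_rejections=14)
```

A d′ near zero would mean the judge cannot tell genuine from imitated reports—the pretenders' interactional expertise is passing the test—while a large positive d′ means the imitation is detectable.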


Author(s):  
Jeremy Howick ◽  
Jessica Morley ◽  
Luciano Floridi

2005 ◽  
Vol 13 (3) ◽  
pp. 501-514 ◽  
Author(s):  
Stevan Harnad

Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking — only whether it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive potential in ways that were not possible in oral, written or print interactions.


2004 ◽  
Vol 15 (08) ◽  
pp. 1041-1047
Author(s):  
RUTH ADAM ◽  
URI HERSHBERG ◽  
YAACOV SCHUL ◽  
SORIN SOLOMON

We are fascinated by the idea of giving life to the inanimate. The fields of Artificial Life and Artificial Intelligence (AI) attempt to use a scientific approach to pursue this desire. The first steps of this approach hark back to Turing and his suggestion of an imitation game as an alternative answer to the question “can machines think?”. To test his hypothesis, Turing formulated the Turing test to detect human behavior in computers. But how do humans pass such a test? What would you say if you learned that they do not pass it well? What would it mean for our understanding of human behavior? What would it mean for our design of tests of the success of artificial life? We report below an experiment in which men consistently failed the Turing test.


2020 ◽  
Vol 17 (2-3) ◽  
pp. 8-18
Author(s):  
Vincent Le

In 1950, Turing proposed to answer the question “can machines think” by staging an “imitation game” where a hidden computer attempts to mislead a human interrogator into believing it is human. While the cybercrime of bots defrauding people by posing as Nigerian princes and lascivious e-girls indicates humans have been losing the Turing test for some time, this paper focuses on “deepfakes,” artificial neural nets generating realistic audio-visual simulations of public figures, as a variation on the imitation game. Deepfakes blur the lines between fact and fiction, making it possible for the mere fiction of a nuclear apocalypse to make itself real. Seeing oneself becoming another, doing and saying strange things as if demonically possessed, triggers a disillusionment of our sense of self as human cloning and sinister doppelgängers become a reality that’s open-source and free. Along with electronic club music, illicit drugs, movies like Ex Machina and the coming sex robots, the primarily pornographic deepfakes are how the aliens invade by hijacking human drives in the pursuit of a machinic desire. Contrary to the popular impression that deepfakes exemplify the post-truth phenomenon of fake news, they mark an anarchic, massively distributed anti-fascist resistance network capable of sabotaging centralized, authoritarian institutions’ hegemonic narratives. That the only realistic “solutions” for detecting deepfakes have been to build better machines capable of exposing them ultimately suggests that human judgment is soon to be discarded into the dustbin of history. From now on, only a machine can win the Turing test against another machine.

Title (English): The Deepfakes to Come: A Turing Cop’s Nightmare
Journal Reference: Identities: Journal for Politics, Gender and Culture, Vol. 17, No. 2-3 (Winter 2020), pp. 8-18
Publisher: Institute of Social Sciences and Humanities - Skopje

Author Biography: Vincent Le is a PhD candidate in philosophy at Monash University. He has taught philosophy at Deakin University and The Melbourne School of Continental Philosophy. He has published in Hypatia, Cosmos and History, Art + Australia, Šum, Horror Studies and Colloquy, among other journals. His recent work focuses on the reckless propagation of the will to critique.

