Turing's Test
Recently Published Documents


TOTAL DOCUMENTS: 22 (five years: 4)

H-INDEX: 4 (five years: 1)

2020 ◽ Vol 30 (4) ◽ pp. 487-512
Author(s): Diane Proudfoot

2020 ◽ Vol 30 (4) ◽ pp. 513-532
Author(s): Michael Wheeler

Abstract: The Turing Test is routinely understood as a behaviourist test for machine intelligence. Diane Proudfoot ('Rethinking Turing's Test', Journal of Philosophy, 2013) has argued for an alternative interpretation. According to Proudfoot, Turing's claim that intelligence is what he calls 'an emotional concept' indicates that he conceived of intelligence in response-dependence terms. As she puts it: 'Turing's criterion for "thinking" is…: x is intelligent (or thinks) if in the actual world, in an unrestricted computer-imitates-human game, x appears intelligent to an average interrogator'. The role of the famous test is thus to provide the conditions in which to examine the average interrogator's responses. I shall argue that Proudfoot's analysis falls short. The philosophical literature contains two main models of response-dependence, which I shall call the transparency model and the reference-fixing model. Proudfoot resists the thought that Turing might have endorsed one of these models to the exclusion of the other. But the details of her own analysis indicate that she is, in fact, committed to the claim that Turing's account of intelligence is grounded in a transparency model rather than a reference-fixing one. By contrast, I shall argue that while Turing did indeed conceive of intelligence in response-dependence terms, his account is grounded in a reference-fixing model rather than a transparency one. This is fortunate (for Turing), because, as an account of intelligence, the transparency model is arguably problematic in a way that the reference-fixing model isn't.


2019 ◽ Vol 10 (2) ◽ pp. 52-67
Author(s): Peter Remmers

A defining goal of research in AI and robotics is to build technical artefacts as substitutes for, assistants to, or enhancements of human action and decision-making. But both in reflecting on these technologies and in interacting with the respective technical artefacts, we sometimes encounter certain kinds of human likeness. To clarify its significance, I highlight three aspects. First, I will broadly investigate some relations between humans and artificial agents by recalling certain points from the debates on Strong AI, on Turing's Test, on the concept of autonomy, and on anthropomorphism in human-machine interaction. Second, I will argue that there are no serious ethical issues involved in the theoretical aspects of technological human likeness. Third, I will suggest that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies that use anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.


2014 ◽ Vol 57 (12) ◽ pp. 8-9
Author(s): CACM Staff

2014 ◽ Vol 25 (6) ◽ p. 76
Author(s): Dwayne Godwin, Jorge Cham

2013 ◽ Vol 110 (7) ◽ pp. 391-411
Author(s): Diane Proudfoot

Author(s): Diane Proudfoot, B. Jack Copeland

This article presents the central philosophical issues concerning human-level artificial intelligence (AI). AI largely changed direction in the 1980s and 1990s, concentrating on building domain-specific systems and on sub-goals such as self-organization, self-repair, and reliability. Computer scientists aimed to construct intelligence amplifiers for human beings rather than imitation humans. Turing based his test on a computer-imitates-human game, describing three versions of this game in 1948, 1950, and 1952. The famous version appears in a 1950 article in Mind, 'Computing Machinery and Intelligence' (Turing 1950). Turing's test is standardly interpreted as providing an operational definition of intelligence (or thinking) in machines, in terms of behavior. 'Intelligent Machinery' sets out the thesis that whether an entity is intelligent is determined in part by our responses to the entity's behavior. Wittgenstein frequently employed the idea of a human being acting like a reliable machine. A 'living reading-machine' is a human being or other creature who is given written signs, for example Chinese characters, arithmetical symbols, logical symbols, or musical notation, and produces text spoken aloud, solutions to arithmetical problems, and proofs of logical theorems. Wittgenstein notes that an entity that manipulates symbols genuinely reads only if it has a particular history, involving learning and training, and participates in a social environment that includes normative constraints and further uses of the symbols.

