Turing, Alan Mathison (1912–54)

Author(s):  
James H. Moor

Alan Turing was a mathematical logician who made fundamental contributions to the theory of computation. He developed the concept of an abstract computing device (a ‘Turing machine’) which precisely characterizes the concept of computation, and provided the basis for the practical development of electronic digital computers beginning in the 1940s. He demonstrated both the scope and limitations of computation, proving that some mathematical functions are not computable in principle by such machines. Turing believed that human behaviour might be understood in terms of computation, and his views inspired contemporary computational theories of mind. He proposed a comparative test for machine intelligence, the ‘Turing test’, in which a human interrogator tries to distinguish a computer from a human by interacting with them only over a teletypewriter. Although the validity of the Turing test is controversial, the test and modifications of it remain influential measures for evaluating artificial intelligence.
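
The 'Turing machine' referred to above can be made concrete with a few lines of code. The following is a minimal sketch of a one-tape machine simulator, with an assumed example transition table (incrementing a binary number); none of the names or rules come from the entry itself.

```python
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). The machine halts when no rule
    is defined for the current (state, symbol) pair.
    """
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        key = (state, cells[head])
        if key not in transitions:            # no applicable rule: halt
            break
        state, cells[head], move = transitions[key]
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# Hypothetical rule set: add 1 to the binary number written on the tape.
increment = {
    ("start", "0"): ("start", "0", +1),   # scan right to the end of the number
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),   # step back onto the last digit
    ("carry", "1"): ("carry", "0", -1),   # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("done",  "1", -1),   # 0 plus carry -> 1, finished
    ("carry", "_"): ("done",  "1", -1),   # carry past the leftmost digit
}

print(run_turing_machine(increment, "1011"))   # prints 1100
```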

Author(s):  
Seth Lloyd

Before Alan Turing made his crucial contributions to the theory of computation, he studied the question of whether quantum mechanics could throw light on the nature of free will. This paper investigates the roles of quantum mechanics and computation in free will. Although quantum mechanics implies that events are intrinsically unpredictable, the ‘pure stochasticity’ of quantum mechanics adds randomness only to decision-making processes, not freedom. By contrast, the theory of computation implies that, even when our decisions arise from a completely deterministic decision-making process, the outcomes of that process can be intrinsically unpredictable, even to—especially to—ourselves. I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will. Finally, I propose a ‘Turing test’ for free will: a decision-maker who passes this test will tend to believe that he, she, or it possesses free will, whether the world is deterministic or not.
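
Lloyd's claim that a fully deterministic decision process can still be intrinsically unpredictable is, in spirit, a diagonal argument from the theory of computation: a decider that asks any proposed predictor what it will do and then does the opposite cannot be forecast except by actually running it. The toy sketch below is an assumed illustration of that idea, not Lloyd's own construction.

```python
def contrarian_decider(predictor):
    """A completely deterministic decision procedure that consults the
    predictor about its own forthcoming choice and then does the opposite."""
    forecast = predictor(contrarian_decider)
    return "tea" if forecast == "coffee" else "coffee"

def naive_predictor(decider):
    """A would-be predictor. Faithfully simulating the decider would call
    this predictor again and recurse forever, so it falls back on a fixed
    guess, which the decider is then guaranteed to contradict."""
    return "coffee"

# The only way to learn the decision is to let the process run.
print(contrarian_decider(naive_predictor))    # prints "tea", refuting the forecast
```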


2017 ◽  
Author(s):  
Jean E. Tardy

"The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes consciousness is the key to achieve this goal and proposes we adopt an understanding of synthetic consciousness that is suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in depth analysis. Ultimately, the author also rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. Basing himself on this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.


Author(s):  
Huma Shah ◽  
Kevin Warwick

The Turing Test, originally configured as a game in which a human tries to distinguish, through a text-based conversational measure of gender, between an unseen and unheard man and woman, is the ultimate test of deception and hence of thinking. So Alan Turing conceived it when he introduced a machine into the game. His idea was that once a machine deceives a human judge into believing that it is the human, that machine should be attributed with intelligence. What Turing missed is the presence of emotion in human dialogue: without expressing it, an entity can appear non-human. Indeed, humans have been mistaken for machines (the 'confederate effect') during instantiations of the Turing Test staged in the Loebner Prizes for Artificial Intelligence. We present results from recent Loebner Prizes and two parallel conversations from the 2006 contest in which two human judges, both native English speakers, each concomitantly interacted with a non-native-English-speaking hidden human and with Jabberwacky, the 2005 and 2006 Loebner Prize bronze-prize winner for most human-like machine. We find that machines in those contests appear conversationally worse than the non-native hidden humans and, as a consequence, attract a downward trend in the highest scores awarded to them by human judges across the 2004, 2005 and 2006 Loebner Prizes. Analysing the Loebner 2006 conversations, we see that a parallel can be drawn with autism: the machine was able to broadcast but it did not inform; it talked but it did not emote. The hidden humans were easily identified through their emotional intelligence: their ability to discern the emotional state of others and to contribute their own 'balloons of textual emotion'.


2014 ◽  
pp. 10-20
Author(s):  
Heinz Muhlenbein

The work of Alan Turing and John von Neumann on machine intelligence and artificial automata is reviewed. Turing's proposal to create a child machine with the ability to learn is discussed. Von Neumann doubted that teacher-based learning would make it possible to create artificial intelligence; he concentrated his research instead on the issues of complication, probabilistic logic, and self-reproducing automata. The problem of creating artificial intelligence is far from being solved. In the last sections of the paper I review the state of the art in probabilistic logic, complexity research, and transfer learning, the topics that Turing and von Neumann identified as essential components of artificial intelligence.
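
Von Neumann's probabilistic logic, mentioned above, addressed how to compute reliably with unreliable components, for example by replicating a noisy gate and taking a majority vote. The sketch below is an assumed illustration of that redundancy idea, not code from the paper.

```python
import random

def noisy_nand(a, b, error_rate=0.05):
    """A NAND gate that flips its output with probability error_rate."""
    out = not (a and b)
    return (not out) if random.random() < error_rate else out

def majority_nand(a, b, copies=15, error_rate=0.05):
    """Run several copies of the unreliable gate and take a majority vote."""
    votes = sum(noisy_nand(a, b, error_rate) for _ in range(copies))
    return votes > copies // 2

# Compare empirical failure rates of a single gate and the redundant organ.
trials = 10_000
single_errors = sum(noisy_nand(1, 1) is not False for _ in range(trials))
redundant_errors = sum(majority_nand(1, 1) is not False for _ in range(trials))
print(f"single gate error rate:    {single_errors / trials:.3f}")     # about 0.05
print(f"majority-of-15 error rate: {redundant_errors / trials:.4f}")  # far smaller
```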


Author(s):  
Mahesh K. Joshi ◽  
J.R. Klein

New technologies like artificial intelligence, robotics, machine intelligence, and the Internet of Things are moving repetitive tasks away from humans and onto machines. Humans cannot become machines, but machines can become more human-like. The traditional model of educating workers for the workforce is fast becoming irrelevant, and there is a massive need for retooling human workers. Humans need to be trained to remain focused in a society that is constantly bombarded with information. The two basic elements of physical and mental capacity are slowly being taken over by machines and artificial intelligence, changing the fundamental role of the global workforce.


Author(s):  
Mahesh K. Joshi ◽  
J.R. Klein

The world of work has been transformed by technology. Work is different from what it was in the past because of digital innovation. Labor-market opportunities are becoming polarized between high-end and low-end skilled jobs. Migration and its effects on employment have become a sensitive political issue. From Buffalo to Beijing, public debates are raging about the future of work. Developments like artificial intelligence and machine intelligence are contributing to productivity, efficiency, safety, and convenience, but they are also having an impact on jobs, skills, wages, and the nature of work. The "undiscovered country" of the workplace today is the combination of the changing landscape of work itself and the availability of ill-fitting tools, platforms, and knowledge to train for the requirements, skills, and structure of this new age.


2020 ◽  
pp. 43-58
Author(s):  
Desireé Torres Lozano

Abstract: This article aims to define AI and to bring its social influence into discussion, together with the ethical consequences this entails, since the formation of the contemporary human being must take into account how we deal with these systems. We define what intelligence is and how the processes of machines have come to be called intelligence, and from there we establish a dialogue about the ethical influence that dealing with such systems carries.
Keywords: Artificial intelligence; Ethics; Systems; Technology; Man


Author(s):  
Alejandro Londoño-Valencia

For several decades the term Artificial Intelligence, coined by John McCarthy in 1956, has been used for complex computational solutions to everyday problems and for the development of technologies based on conceptualizations of human intelligence, with the aim of imitating it as closely as possible. Although there have been major advances in this field, it has not yet been possible to create a computer or a sufficiently complex algorithm that makes artificial intelligence indistinguishable from human intelligence, as proposed by Alan Turing in his famous test. For this reason it is important to reflect on why this ambitious goal has not been reached, so this paper presents an analytical and comparative proposal on the limits of AI set against the psychobiological characteristics and processes that underlie intelligence in humans.
Keywords: artificial intelligence, human intelligence, adaptation, development, biology, evolution.

