The Creation of a Conscious Machine

2017 ◽  
Author(s):  
Jean E. Tardy

"The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes consciousness is the key to achieve this goal and proposes we adopt an understanding of synthetic consciousness that is suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in depth analysis. Ultimately, the author also rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. Basing himself on this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.


2018 ◽  
Author(s):  
Jean E. Tardy

"The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes consciousness is the key to achieve this goal and proposes we adopt an understanding of Synthetic Consciousness that is suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of Artificial Intelligence, is the subject of an in depth analysis. Ultimately, the author also rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. Basing himself on this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favor of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that are suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.


2021 ◽  
pp. 7
Author(s):  
Vasily I. Zhukov

The author analyzes the accumulation of knowledge in the fields of philosophy and law in order to create an epistemological basis for the perception of justice in the paradigm of the Philosophy of Law. The analytical review draws on philosophical, theological, historical and other theories developed from ancient times to the present. The author focuses on the works of ancient thinkers (first of all Plato, his disciple Aristotle, their followers, and Roman authors) and on the scholars who created original concepts and enriched jurisprudence in the Middle Ages and in the early modern and modern periods. Special attention is paid to the interpretation of theories that brought science closer to the creation of a theory of justice in the context of the Philosophy of Law. The author also describes the theories of justice developed by the leading scholars of the twentieth century, including J. Rawls, O. Höffe, F. von Hayek and Ph. Selznick. The article considers the contribution to the development of knowledge about justice in the paradigm of the Philosophy of Law made by Russian legal scholars, from Soviet scientists to the leading modern specialists in the field, including V.D. Zorkin and V.I. Khairullin. Based on the results of the analytical review, the main conclusions are drawn and the author's definition of justice in the format of the Philosophy of Law is given.


2020 ◽  
pp. 49-62
Author(s):  
Joshua Grimm

The evolution of artificial intelligence in science fiction film has showcased an array of technological marvels, and yet each reflects the era in which the film was made, be it what the device looks like, the extent of its power, or the ethical and moral issues surrounding its existence. Ex Machina is no different, with the development of AI firmly embedded in the tech industry. Caleb’s entire purpose for being at Nathan’s compound is to determine whether Nathan has, in fact, created artificial intelligence or whether Ava is simply imitating human interactions. This is the Turing Test, which has been around for nearly 70 years and has been rigorously debated for almost its entire existence. Ex Machina pushes this debate by accepting and challenging key assumptions of the Turing Test while positing its own question: what role affection, attraction, and love might play in the process. Considering these emotional components (as expressed toward the creation rather than from it) grounds the discussion in terms of morality and soul, something previous films have treated more as a by-product of artificial intelligence.


2017 ◽  
Author(s):  
Jean E. Tardy

The Meca Sapiens project follows a top-down process to develop the conceptual foundations of synthetic consciousness. The Creation of a Conscious Machine corresponds to the Requirements and Specifications document of this process. It describes the extraordinary intellectual benefits to be gained from the implementation of conscious machines. It surveys historical attempts to define and implement machine intelligence and the insights they reveal. In particular, it analyzes the Turing Test in detail through multiple variations and finds it to be both excessive and insufficient as a measure of machine intelligence. The text concludes by introducing a new understanding of consciousness as an observable system capability that can be expressed as specification objectives compatible with software implementation. This understanding is the basis for The Meca Sapiens Blueprint, a complete system architecture for implementing synthetic consciousness using conventional computers and standard techniques.


Author(s):  
José Hernández-Orallo ◽  
Adolfo Plasencia

In this dialogue comprising eight widely diverse sections, José Hernández-Orallo, a specialist in AI, reflects on a variety of topics surrounding Natural Intelligence and Artificial Intelligence (AI): 1. On what is measurable in intelligence, and what its ingredients are; 2. On how to universally measure intelligence; 3. On the Turing test; 4. On compared intelligences and the IQ (Intelligence Quotient); 5. On AI software agents; 6. On whether the human condition (and happiness) can be mathematized; 7. On the relationship between intelligence and humor; and 8. On whether there are universal ingredients in what we call intelligence. Toward the end, he discusses the current science and technology debate on whether the evolution of AI and its latest, most disturbing incarnations (e.g., lethal autonomous weapons) can become an existential threat to humans. His reflections culminate in arguments concerning a real danger: that someone, or something, might modify the present natural distribution of intelligence on the planet, which could end up being controlled by a global oligopoly.



2019 ◽  
Vol 62 (7) ◽  
pp. 73-95
Author(s):  
Albert R. Efimov

The article discusses the main trends in the development of artificial intelligence systems and robotics (AI&R). The central question considered in this context is whether artificial systems are going to become more and more anthropomorphic, both intellectually and physically. The author analyzes the current state and prospects of the technological development of artificial intelligence and robotics, and also identifies the main aspects of the impact of these technologies on society and the economy, noting the geopolitical, strategic nature of this influence. The author considers various approaches to the definition of artificial intelligence and robotics, focusing on the subject-oriented and functional ones. AI&R abilities and human abilities are compared in such areas as categorization, pattern recognition, planning, and decision making. Based on this comparison, it is concluded in which areas AI&R’s performance is inferior to a human’s and in which it is superior. Modern achievements in the field of robotics and artificial intelligence create the necessary basis for further discussion of the applicability of the Turing test as a form of goal setting in engineering. It is shown that the development of AI&R is associated with certain contradictions that impede the application of Turing’s methodology in its usual format. These basic contradictions imply a transition to a post-Turing methodology for assessing engineering implementations of AI&R, in which, on the one hand, the “Turing wall” is removed and, on the other, artificial intelligence acquires a physical embodiment.


Author(s):  
James H. Moor

Alan Turing was a mathematical logician who made fundamental contributions to the theory of computation. He developed the concept of an abstract computing device (a ‘Turing machine’) which precisely characterizes the concept of computation, and provided the basis for the practical development of electronic digital computers beginning in the 1940s. He demonstrated both the scope and limitations of computation, proving that some mathematical functions are not computable in principle by such machines. Turing believed that human behaviour might be understood in terms of computation, and his views inspired contemporary computational theories of mind. He proposed a comparative test for machine intelligence, the ‘Turing test’, in which a human interrogator tries to distinguish a computer from a human by interacting with them only over a teletypewriter. Although the validity of the Turing test is controversial, the test and modifications of it remain influential measures for evaluating artificial intelligence.
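The interrogation protocol this abstract describes (an interrogator conversing blindly with a hidden human and a hidden machine) can be sketched in a few lines of code. The respondents and the naive interrogator below are hypothetical stand-ins for illustration only; they are not drawn from Turing's paper or from any of the works listed here.

```python
import random

def run_imitation_game(interrogator, machine_reply, human_reply, questions):
    """Blind protocol: respondents A and B are a machine and a human in random
    order; the interrogator sees only the transcript and must name the machine."""
    labels = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:  # hide which label is the machine
        labels = {"A": human_reply, "B": machine_reply}
    transcript = [(q, {lbl: fn(q) for lbl, fn in labels.items()})
                  for q in questions]
    guess = interrogator(transcript)  # "A" or "B"
    truth = "A" if labels["A"] is machine_reply else "B"
    return guess == truth  # True means the machine was identified

# Hypothetical stand-in respondents:
machine = lambda q: "That is an interesting question."
human = lambda q: f"Honestly? {q.split()[0]}... let me think."

def naive_interrogator(transcript):
    """Flags the respondent with the fewest distinct replies as the machine."""
    answers = {"A": set(), "B": set()}
    for _, replies in transcript:
        for lbl, text in replies.items():
            answers[lbl].add(text)
    return min(answers, key=lambda lbl: len(answers[lbl]))

detected = run_imitation_game(naive_interrogator, machine, human,
                              ["What is courage?", "Where did you grow up?"])
print("machine identified:", detected)  # True: the canned replies give it away
```

The point of the sketch is structural: the interrogator's verdict rests only on text, which is exactly the property of the test that the surveyed works debate.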


2020 ◽  
Author(s):  
Christopher Welker ◽  
David France ◽  
Alice Henty ◽  
Thalia Wheatley

Advances in artificial intelligence (AI) enable the creation of videos in which a person appears to say or do things they did not. The impact of these so-called “deepfakes” hinges on their perceived realness. Here we tested different versions of deepfake faces for Welcome to Chechnya, a documentary that used face swaps to protect the privacy of Chechen torture survivors who were persecuted because of their sexual orientation. AI face swaps that replace an entire face with another were perceived as more human-like and less unsettling compared to partial face swaps that left the survivors’ original eyes unaltered. The full-face swap was deemed the least unsettling even in comparison to the original (unaltered) face. When rendered in full, AI face swaps can appear human and avoid aversive responses in the viewer associated with the uncanny valley.

