Discerning Artificial Consciousness from Artificial Intelligence - A Thought Experiment

2019 ◽  
Author(s):  
Jack Charles

This paper attempts to provide a starting point for future investigations into artificial consciousness by proposing a thought experiment that aims to elucidate, and offer a potential 'test' for, the phenomenon known as consciousness in an artificial system. It suggests a method for determining the presence of conscious experience within an artificial agent, in a manner that is informed by, and understood as a function of, anthropomorphic conceptions of consciousness. The aim of this paper is to open a route to progress: to propose that we reverse engineer anthropic sentience by using machine sentience as a guide, much as an equation may be solved through inverse operations. The idea is this: the manifestation of an existential crisis in an artificial agent is the metric by which the presence of sentience can be discerned. It is this that marks out ACI as distinct from AI, and discrete from AGI.

CCIT Journal ◽  
2019 ◽  
Vol 12 (2) ◽  
pp. 170-176
Author(s):  
Anggit Dwi Hartanto ◽  
Aji Surya Mandala ◽  
Dimas Rio P.L. ◽  
Sidiq Aminudin ◽  
Andika Yudirianto

Pac-Man is a labyrinth-style game that employs artificial intelligence, composed of several algorithms embedded in the program. This work implements Dijkstra's algorithm as a method for solving the minimum-route problem for the Pac-Man ghosts, whose role is to chase the player. Dijkstra's algorithm uses a principle similar to the greedy algorithm: it starts from an initial node and repeatedly extends to connected neighboring nodes toward the destination, comparing accumulated path costs from the starting point and matching each candidate path against the others. The testing phase found that Dijkstra's algorithm solves the minimum-route problem for pursuing the player quite well, yielding a path cost of 13, in agreement with manual calculation.
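For illustration, here is a minimal sketch of Dijkstra's algorithm on a small graph. The nodes, edge costs, and resulting value below are invented for this example; the paper's maze and its cost of 13 come from its own manual calculation.

```python
import heapq

def dijkstra(graph, start, goal):
    """Return the minimum path cost from start to goal, or None.
    graph: dict mapping node -> list of (neighbor, edge_cost) pairs."""
    dist = {start: 0}
    queue = [(0, start)]              # min-heap of (cost so far, node)
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost               # first pop of the goal is optimal
        if cost > dist.get(node, float("inf")):
            continue                  # stale entry; a shorter path exists
        for neighbor, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return None                       # goal unreachable from start

# Hypothetical maze fragment: ghost at "G" chasing the player at "P".
maze = {
    "G": [("A", 4), ("B", 2)],
    "B": [("A", 1), ("P", 10)],
    "A": [("P", 9)],
    "P": [],
}
print(dijkstra(maze, "G", "P"))       # -> 12 for this toy graph
```

The greedy-like behavior the abstract describes is the heap pop: the cheapest known frontier node is always expanded next, which is what guarantees the minimum route.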


2021 ◽  
Vol 54 (4) ◽  
pp. 243-245
Author(s):  
Fabíola Macruz

Abstract There is great optimism that artificial intelligence (AI), as it disrupts the medical world, will provide considerable improvements in all areas of health care, from diagnosis to treatment. In addition, there is considerable evidence that AI algorithms have surpassed human performance in various tasks, such as analyzing medical images, as well as correlating symptoms and biomarkers with the diagnosis and prognosis of diseases. However, the mismatch between the performance of AI-based software and its clinical usefulness is still a major obstacle to its widespread acceptance and use by the medical community. In this article, three fundamental concepts observed in the health technology industry are highlighted as possible causative factors for this gap and might serve as a starting point for further evaluation of the structure of AI companies and of the status quo.


Metaphysica ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 137-155
Author(s):  
Sean Allen-Hermanson

Abstract I criticize Bourget's intuitive and empirical arguments for thinking that all possible conscious states are underived if intentional. An underived state is one that need not be realized, even in part, by intentional states distinct from itself. The intuitive argument depends upon a thought experiment about a subject who exists for only a split second while undergoing a single conscious experience. This, however, trades on an ambiguity in "split second." Meanwhile, Bourget's empirical argument is question-begging. My critique also has implications for debates about the essential temporality and unity of conscious experience, and for phenomenal atomism.


2011 ◽  
pp. 66-89 ◽  
Author(s):  
Joanna J. Bryson

Many architectures of mind assume some form of modularity, but what is meant by the term ‘module’? This chapter creates a framework for understanding current modularity research in three subdisciplines of cognitive science: psychology, artificial intelligence (AI), and neuroscience. This framework starts from the distinction between horizontal modules that support all expressed behaviors vs. vertical modules that support individual domain-specific capacities. The framework is used to discuss innateness, automaticity, compositionality, representations, massive modularity, behavior-based and multi-agent AI systems, and correspondence to physiological neurosystems. There is also a brief discussion of the relevance of modularity to conscious experience.
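As a concrete illustration of the horizontal/vertical distinction (the interfaces below are invented for this sketch, not drawn from the chapter): a horizontal module is a shared resource that every expressed behavior draws on, while a vertical module packages one domain-specific capacity end to end.

```python
class WorkingMemory:
    """Horizontal module: a shared resource used by all behaviors."""
    def __init__(self):
        self.items = {}
    def store(self, key, value):
        self.items[key] = value
    def recall(self, key):
        return self.items.get(key)

class FaceRecognition:
    """Vertical module: one domain-specific capacity, input to output."""
    def __init__(self, memory):
        # A vertical module may still rely on horizontal resources.
        self.memory = memory
    def process(self, image):
        identity = f"person_{hash(image) % 100}"  # stand-in for a model
        self.memory.store(image, identity)
        return identity

memory = WorkingMemory()            # one horizontal store, shared
faces = FaceRecognition(memory)     # one vertical capacity among many
print(faces.process("frame_001"))
```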


2020 ◽  
Vol 43 (8) ◽  
pp. 385-455
Author(s):  
A. Diaspro ◽  
P. Bianchini

Abstract This article deals with the development of optical microscopy towards nanoscopy. Basic concepts of the methods implemented to obtain spatial super-resolution are described, along with concepts related to the study of biological systems at the molecular level. Fluorescence, as a mechanism of contrast and spatial resolution, will be the starting point for developing a multi-messenger optical microscope tunable down to the nanoscale in living systems. Moreover, the integration of optical nanoscopy with scanning probe microscopy, and the charming possibility of using artificial intelligence approaches, will be briefly outlined.
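For context, the classical barrier that super-resolution ("nanoscopy") methods circumvent is Abbe's diffraction limit, a standard textbook result not quoted from the article itself:

```latex
% Abbe diffraction limit: smallest resolvable distance d for light of
% wavelength \lambda focused through an objective of numerical aperture NA.
d = \frac{\lambda}{2\,\mathrm{NA}}
% Example: \lambda = 500\,\mathrm{nm} and \mathrm{NA} = 1.4 give
% d \approx 180\,\mathrm{nm}; nanoscopy resolves well below this scale.
```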


2018 ◽  
Vol 39 (1) ◽  
pp. 61-64 ◽  
Author(s):  
Peter Buell Hirsch

Purpose: Artificial intelligence and machine learning have spread rapidly across every aspect of business and social activity. The purpose of this paper is to examine how this rapidly growing field of analytics might be put to use in the area of reputation risk management.
Design/methodology/approach: The approach taken was to examine in detail the primary and emerging applications of artificial intelligence to determine how they could be applied to preventing and mitigating reputation risk, by using machine learning to identify early signs of behaviors that could lead to reputation damage.
Findings: This review confirmed that there are at least two areas in which artificial intelligence could be applied to reputation risk management: the use of machine learning to analyze employee emails in real time to detect early signs of aberrant behavior, and the use of algorithmic game theory to stress-test business decisions to determine whether they contain perverse incentives leading to potential fraud.
Research limitations/implications: Because this viewpoint is by its nature a thought experiment, the author has not yet tested the practicality or feasibility of the uses of artificial intelligence it describes.
Practical implications: Should the concepts described prove viable in real-world application, they would create extraordinarily powerful tools for companies to identify risky behaviors in development, long before they had run far enough to create major reputation risk.
Social implications: By identifying risky behaviors at an early stage and preventing them from turning into reputation risks, the methods described could help restore and maintain trust in the relationship between companies and their stakeholders.
Originality/value: To the best of the author's knowledge, artificial intelligence has never been described as a potential tool in reputation risk management.
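As a sketch of the first finding, unsupervised anomaly detection could surface unusual communication patterns without labeled examples of misconduct. The features, values, and contamination setting below are invented for this illustration, not drawn from the paper:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-employee email features:
# [messages per day, share sent after hours, external-recipient ratio]
features = np.array([
    [25, 0.05, 0.10],
    [30, 0.08, 0.12],
    [28, 0.06, 0.09],
    [120, 0.70, 0.85],   # unusual pattern worth a closer look
])

# Isolation Forest isolates outliers without needing labeled cases of
# misconduct, which rarely exist before an incident occurs.
model = IsolationForest(contamination=0.25, random_state=0)
labels = model.fit_predict(features)   # -1 flags an anomaly
print(labels)                          # e.g. [ 1  1  1 -1]
```

A flag of this kind would only prompt human review, consistent with the paper's framing of machine learning as a source of early-warning signs rather than verdicts.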


Author(s):  
Ryosuke Yokoi ◽  
Kazuya Nakayachi

Objective: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs.
Background: Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared values determine trust in risk managers. Previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs.
Method: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), providing participants with different moral-dilemma scenarios. Trust in ACs was measured through a questionnaire.
Results: Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we did not find statistical evidence that shared deontological belief had an effect on trust in ACs.
Conclusion: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans.
Application: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans in order to enhance trust in ACs.

