Journal of Artificial Intelligence and Consciousness
Latest Publications


TOTAL DOCUMENTS: 45 (five years: 45)

H-INDEX: 1 (five years: 1)

Published by World Scientific Pub Co Pte Ltd

ISSN: 2705-0785, 2705-0793

Author(s):  
Eduardo C. Garrido-Merchán ◽  
Martín Molina ◽  
Francisco M. Mendoza-Soto

This work studies the beneficial properties that an autonomous agent can obtain by imitating a cognitive architecture similar to that of conscious beings. Throughout this document, a cognitive model of an autonomous agent based on a global workspace architecture is presented. We hypothesize that consciousness is an evolutionary advantage, so if our autonomous agent can be potentially conscious, its performance will be enhanced. We explore whether an autonomous agent implementing a cognitive architecture like the one proposed in global workspace theory can be conscious from a philosophy of mind perspective, with special emphasis on functionalism and multiple realizability. The purposes of our proposed model are, first, to create autonomous agents that can navigate an environment composed of multiple independent magnitudes, adapting to their surroundings to find the best possible position according to their inner preferences, and second, to test the effectiveness of many of the model's cognitive mechanisms: an attention mechanism for magnitude selection, possession of inner feelings and preferences, use of a memory system to store beliefs and past experiences, and incorporation of the consciousness bottleneck into the decision-making process, which controls and integrates the information processed by all the subsystems of the model, as in global workspace theory. We show in a large set of experiments how potentially conscious autonomous agents can benefit from having a cognitive architecture such as the one described.
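The architecture the abstract describes can be sketched in a few lines. The following is an illustrative toy of our own construction, not the authors' code: specialist subsystems propose contents with a salience derived from assumed inner preferences, an attention mechanism selects one winner for the conscious bottleneck, and the winning content is broadcast to every subsystem and stored in memory.

```python
# Toy global-workspace agent (illustrative sketch, not the paper's model).
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    preference: float            # assumed inner preference for its magnitude
    inbox: list = field(default_factory=list)   # receives broadcasts

    def propose(self, magnitudes):
        # Salience = how far the sensed magnitude deviates from preference.
        value = magnitudes[self.name]
        return abs(value - self.preference), (self.name, value)

class GlobalWorkspaceAgent:
    def __init__(self, subsystems):
        self.subsystems = subsystems
        self.memory = []         # past conscious contents (beliefs)

    def step(self, magnitudes):
        # Attention: the most salient proposal wins the bottleneck.
        salience, content = max(s.propose(magnitudes) for s in self.subsystems)
        # Broadcast: every subsystem receives the winning content.
        for s in self.subsystems:
            s.inbox.append(content)
        self.memory.append(content)
        return content

agent = GlobalWorkspaceAgent([
    Subsystem("temperature", preference=20.0),
    Subsystem("light", preference=0.5),
])
winner = agent.step({"temperature": 35.0, "light": 0.6})  # temperature deviates most
```

The single `max` over all proposals is the "bottleneck": only one content per step becomes globally available, however many subsystems compete.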


Author(s):  
Wenjie Huang ◽  
Antonio Chella ◽  
Angelo Cangelosi

Many theories have been developed and artificial systems implemented in the area of machine consciousness, yet none has achieved it. As a possible approach, we are interested in implementing a system that integrates different theories. Along this line, this paper proposes a model based on global workspace theory and an attention mechanism, providing a foundational framework for our future work. To examine this model, two experiments are conducted. The first demonstrates the agent's ability to shift attention over multiple stimuli, which accounts for the dynamics of conscious content. The second simulates attentional blink and lag-1 sparing, two well-studied effects in the psychology and neuroscience of attention and consciousness, to assess the agent's compatibility with human brains. In summary, the main contributions of this paper are: (1) adaptation of the global workspace framework using separated workspace nodes, reducing unnecessary computation while retaining the potential for global availability; (2) embedding an attention mechanism into the global workspace framework as the competition mechanism for conscious access; and (3) a synchronization mechanism in the global workspace that supports the lag-1 sparing effect while retaining the attentional blink effect.
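The qualitative shape of the two effects the second experiment simulates can be reproduced by a simple gating toy (our construction, not the paper's mechanism): after a target is consciously reported, the gate stays open briefly (so a target at lag 1 is spared) and then shuts for a refractory window (so targets at lags 2-5 are missed, the "blink"). The window lengths here are assumed for illustration.

```python
# Toy attentional-blink / lag-1-sparing gate (illustrative, assumed parameters).
def report_targets(stream, spared_lags=1, blink_lags=4):
    """Return indices of targets in a binary stimulus stream that
    reach conscious report under a simple open/refractory gate."""
    reported = []
    open_until = -1      # last index inside the open gate (lag-1 sparing)
    blocked_until = -1   # last index inside the refractory window (blink)
    for i, is_target in enumerate(stream):
        if not is_target:
            continue
        if i <= open_until:
            reported.append(i)           # spared: gate still open
        elif i <= blocked_until:
            pass                         # missed: attentional blink
        else:
            reported.append(i)           # fresh conscious report
            open_until = i + spared_lags
            blocked_until = open_until + blink_lags
    return reported

both = report_targets([1, 1, 0, 0, 0, 0, 0])   # T2 at lag 1: spared
blink = report_targets([1, 0, 0, 1, 0, 0, 0])  # T2 at lag 3: missed
```

A second target immediately after the first rides through the open gate; the same target a few positions later falls into the refractory window and never reaches report.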


Author(s):  
Robin L. Zebrowski ◽  
Eli B. McGraw

Within artificial intelligence (AI) and machine consciousness research, social cognition as a whole is often ignored. When it is addressed, it is often thought of as one application of more traditional forms of cognition. However, while theoretical approaches to AI have been fairly stagnant in recent years, social cognition research has progressed in productive new ways, specifically through enactive approaches. Using participatory sense-making (PSM) as an approach, we rethink conceptions of autonomy and openness in AI and enactivism, shifting the focus away from living systems to allow incorporation of artificial systems into social forms of sense-making. PSM provides an entire level of analysis through an overlooked autonomous system produced via social interaction that can be both measured and modeled in order to instantiate and examine more robust artificial cognitive systems.


Author(s):  
Carlos Montemayor

Contemporary debates on Artificial General Intelligence (AGI) center on what philosophers classify as descriptive issues. These issues concern the architecture and style of information processing required for multiple kinds of optimal problem-solving. This paper focuses on two topics, central to developing AGI, that concern normative rather than descriptive requirements: AGI's epistemic agency and responsibility. The first is that a collective kind of epistemic agency may be the best way to model AGI. This collective approach is possible only if solipsistic considerations concerning phenomenal consciousness are ignored, thereby focusing on the cognitive foundation that attention and access consciousness provide for collective rationality and intelligence. The second is that joint attention and motivation are essential for AGI in the context of linguistic artificial intelligence. Focusing on GPT-3, this paper argues that without a satisfactory solution to this second normative issue regarding joint attention and motivation, there cannot be genuine AGI, particularly in conversational settings.


Author(s):  
Subhash Kak

It is generally accepted that machines can replicate cognitive tasks performed by conscious agents as long as those tasks do not depend on the capacity for awareness. We consider several views on the nature of subjective awareness, which is fundamental for self-reflection and review, and present reasons why this property is not computable. We argue that consciousness is more than an epiphenomenon, and that assuming it to be a separate category is consistent with both quantum mechanics and cognitive science. We speak of two kinds of consciousness, little-C and big-C, and discuss the significance of this classification in analyzing the current academic debates in the field. The interaction between the system and the measuring apparatus of the experimenter is examined from the perspectives of both decoherence and the quantum Zeno effect. These ideas are used as context to address the question of limits to machine consciousness.
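The quantum Zeno effect the abstract invokes is easy to illustrate numerically with the standard two-level-system result (this is textbook physics, not code from the paper): a state rotating toward an orthogonal state survives each of n equally spaced projective measurements with probability cos²(θ/n), so the overall survival probability (cos(θ/n))^{2n} approaches 1 as measurements become more frequent — evolution is "frozen" by observation.

```python
# Quantum Zeno effect for a two-level system (standard textbook result).
import math

def survival_probability(total_angle, n_measurements):
    """P(system still found in its initial state) after n equally
    spaced projective measurements during a rotation by total_angle."""
    per_step = math.cos(total_angle / n_measurements) ** 2
    return per_step ** n_measurements

theta = math.pi / 2          # unmeasured evolution fully leaves the state
p1 = survival_probability(theta, 1)     # single final measurement: ~0
p100 = survival_probability(theta, 100) # frequent measurement: near 1
```

With one measurement the state has essentially left; with a hundred interspersed measurements it is almost certainly still there, which is the sense in which observation inhibits evolution.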


Author(s):  
Zohar Bronfman ◽  
Simona Ginsburg ◽  
Eva Jablonka

The current failure to construct an artificial intelligence (AI) agent with the capacity for domain-general learning is a major stumbling block in the attempt to build conscious robots. Taking an evolutionary approach, we previously suggested that the emergence of consciousness was entailed by the evolution of an open-ended domain-general form of learning, which we call unlimited associative learning (UAL). Here, we outline the UAL theory and discuss the constraints and affordances that seem necessary for constructing an AI machine exhibiting UAL. We argue that a machine that is capable of domain-general learning requires the dynamics of a UAL architecture and that a UAL architecture requires, in turn, that the machine is highly sensitive to the environment and has an ultimate value (like self-persistence) that provides shared context to all its behaviors and learning outputs. The implementation of UAL in a machine may require that it is made of “soft” materials, which are sensitive to a large range of environmental conditions, and that it undergoes sequential morphological and behavioral co-development. We suggest that the implementation of these requirements in a human-made robot will lead to its ability to perform domain-general learning and will bring us closer to the construction of a sentient machine.
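One UAL requirement above — an ultimate value, such as self-persistence, providing shared context for all learning — can be given a minimal toy reading (our sketch, not the authors' architecture): every action-outcome pair is evaluated against a single quantity, here an assumed energy reserve, and associations are strengthened or weakened accordingly, whatever domain the action came from.

```python
# Toy "ultimate value" learner (illustrative sketch; names are assumptions).
def update_associations(assoc, action, delta_energy, lr=0.1):
    """Strengthen or weaken an action association in proportion to its
    effect on the agent's self-persistence value (energy reserve)."""
    assoc[action] = assoc.get(action, 0.0) + lr * delta_energy
    return assoc

assoc = {}
# Outcomes from any domain share one context: their effect on energy.
for action, delta in [("eat", +1.0), ("touch_fire", -2.0), ("eat", +1.0)]:
    update_associations(assoc, action, delta)

best = max(assoc, key=assoc.get)   # the action most favorable to persistence
```

The point of the sketch is only the shared context: a single scalar value ties together learning episodes that would otherwise be domain-specific.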


Author(s):  
Hongzhi Wang ◽  
Bozhou Chen ◽  
Yueyang Xu ◽  
Kaixin Zhang ◽  
Shengwen Zheng

The major criterion distinguishing conscious Artificial Intelligence (AI) from non-conscious AI is whether its consciousness arises from needs. Based on this criterion, we develop ConsciousControlFlow (CCF) to demonstrate need-based conscious AI. The system is based on a computational model with a short-term memory (STM) and long-term memories (LTMs) for consciousness, together with a hierarchy of needs. To generate AI based on the real needs of the agent, we developed several LTMs for special functions such as feeling and sensing. Experiments demonstrate that the agents in the proposed system behave according to their needs, which coincides with the prediction.
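The STM/LTM split and the hierarchy of needs can be sketched as follows. This is our own minimal toy, not the authors' ConsciousControlFlow code, and the specific needs, levels, and behaviors are assumptions: long-term memory maps needs to learned behaviors, and the short-term memory holds the currently conscious need — the most basic unsatisfied one — which drives action.

```python
# Toy need-driven behavior selection (illustrative; needs/actions assumed).
# Lower level number = more basic need in the hierarchy.
NEED_LEVELS = {"energy": 0, "safety": 1, "curiosity": 2}

LTM = {  # long-term memory: need -> learned behavior
    "energy": "seek_food",
    "safety": "retreat",
    "curiosity": "explore",
}

def select_behavior(need_satisfaction, threshold=0.5):
    """Place the most basic unsatisfied need into STM and return the
    behavior LTM associates with it."""
    unmet = [n for n, s in need_satisfaction.items() if s < threshold]
    if not unmet:
        return {"focus": None, "action": "idle"}
    focus = min(unmet, key=lambda n: NEED_LEVELS[n])  # most basic first
    return {"focus": focus, "action": LTM[focus]}     # conscious content (STM)

# Safety (level 1) is unmet and more basic than curiosity (level 2).
stm = select_behavior({"energy": 0.9, "safety": 0.2, "curiosity": 0.1})
```

Behavior follows need: change the satisfaction levels and the conscious focus, and hence the action, changes with them.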


Author(s):  
Joshua Bensemann ◽  
Qiming Bao ◽  
Gaël Gendron ◽  
Tim Hartill ◽  
Michael Witbrock

Processes occurring in brains, a.k.a. biological neural networks, can and have been modeled within artificial neural network architectures. Due to this, we have conducted a review of research on the phenomenon of blindsight in an attempt to generate ideas for artificial intelligence models. Blindsight can be considered as a diminished form of visual experience. If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks. This paper has been structured into three parts. Section 2 is a review of blindsight research, looking specifically at the errors occurring during this condition compared to normal vision. Section 3 identifies overall patterns from Sec. 2 to generate insights for computational models of vision. Section 4 demonstrates the utility of examining biological research to inform artificial intelligence research by examining computational models of visual attention relevant to one of the insights generated in Sec. 3. The research covered in Sec. 4 shows that incorporating one of our insights into computational vision does benefit those models. Future research will be required to determine whether our other insights are as valuable.
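The dissociation blindsight exhibits — patients who localize stimuli they deny seeing — is often glossed as a coarse "where" route surviving while a fine "what" route is lesioned. The following toy (our construction, not one of the reviewed models) shows that split in code: localization still succeeds when identification is disabled.

```python
# Toy dual-route vision sketch (illustrative; routes and lesion are assumed).
def where_route(image):
    """Coarse localization: index of the strongest-intensity cell.
    Requires no access to the 'what' route."""
    return max((v, i) for i, v in enumerate(image))[1]

def what_route(image, lesioned=False):
    """Fine identification of the stimulus; a lesion removes the
    percept entirely, as in blindsight."""
    if lesioned:
        return None
    return "target" if max(image) > 0.5 else "noise"

image = [0.1, 0.9, 0.2, 0.0]
location = where_route(image)                 # localization survives
identity = what_route(image, lesioned=True)   # no conscious percept
```

The lesioned agent can still "point" (return an index) while reporting nothing — a behavioral pattern an artificial network could be tested for directly.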


Author(s):  
Jeffrey L. Krichmar

In 2006, during a meeting of a working group of scientists in La Jolla, California at The Neurosciences Institute (NSI), Gerald Edelman described a roadmap towards the creation of a Conscious Artifact. As far as I know, this roadmap was not published. However, it did shape my thinking and that of many others in the years since that meeting. This short paper, which is based on my notes taken during the meeting, describes the key steps in this roadmap. I believe it is as groundbreaking today as it was more than 15 years ago.


Author(s):  
Garret Merriam

Artificial Emotional Intelligence research has focused on emotions in a limited “black box” sense, concerned only with emotions as ‘inputs/outputs’ for the system, disregarding the processes and structures that constitute the emotion itself. We’re teaching machines to act as if they can feel emotions without the capacity to actually feel emotions. Serious moral and social problems will arise if we stick with the black box approach. As AIs become more integrated with our lives, humans will require more than mere emulation of emotion; we’ll need them to have ‘the real thing.’ Moral psychology suggests emotions are necessary for moral reasoning and moral behavior. Socially, the role of ‘affective computing’ foreshadows the intimate ways humans will expect emotional reciprocity from their machines. Three objections are considered and responded to: that giving machines genuine emotions is (1) not possible, (2) not necessary, and (3) too dangerous.

