Visions of Mind
Latest Publications


TOTAL DOCUMENTS: 15 (five years: 0)
H-INDEX: 4 (five years: 0)

Published by IGI Global
ISBN: 9781591404828, 9781591404842

2011, pp. 204-224
Author(s): Fernand Gobet, Peter C.R. Logan

This chapter provides an introduction to the CHREST architecture of cognition and shows how this architecture can help develop a full theory of mind. After describing the main components and mechanisms of the architecture, we discuss several domains where it has already been successfully applied, such as in the psychology of expert behaviour, the acquisition of language by children, and the learning of multiple representations in physics. We highlight the characteristics of CHREST that enable it to account for empirical data, including self-organisation, an emphasis on cognitive limitations, the presence of a perception-learning cycle, and the use of naturalistic data as input for learning. We argue that some of these characteristics can help shed light on the hard questions facing theorists developing a full theory of mind, such as intuition, the acquisition and use of concepts, the link between cognition and emotions, and the role of embodiment.
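The perception-learning cycle described above can be sketched as a minimal discrimination network, the kind of structure CHREST grows through repeated exposure to patterns. This is a toy illustration, not the CHREST implementation; the class names and the chess-like pattern are assumptions.

```python
# Toy sketch of a CHREST-style discrimination network: patterns are sorted
# down the tree along matching links; a novel pattern triggers learning of
# one new node per exposure (the perception-learning cycle).

class Node:
    def __init__(self, image=()):
        self.image = tuple(image)   # the chunk stored at this node
        self.children = {}          # feature -> child Node

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def recognise(self, pattern):
        """Sort a pattern down the net; return the deepest matching chunk."""
        node = self.root
        for feature in pattern:
            if feature not in node.children:
                break
            node = node.children[feature]
        return node

    def learn(self, pattern):
        """Recognise, then extend the net by one feature if the pattern
        is not yet fully familiar (discrimination)."""
        node = self.recognise(pattern)
        depth = len(node.image)
        if depth < len(pattern):
            feature = pattern[depth]
            node.children[feature] = Node(node.image + (feature,))

net = DiscriminationNet()
for _ in range(3):                  # repeated exposure grows the chunk
    net.learn(("N", "f", "3"))      # e.g. a chess-like pattern
print(net.recognise(("N", "f", "3")).image)   # -> ('N', 'f', '3')
```

Note how cognitive limitations are built in: each exposure adds only one feature, so a three-element chunk needs three passes through the cycle.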


2011, pp. 108-124
Author(s): Bruce Edmonds

Free will is described in terms of the useful properties that it could confer, explaining why it might have been selected for over the course of evolution. These properties are exterior unpredictability, interior rationality, and social accountability. A process is described that might bring it about when deployed in a suitable social context. It is suggested that this process could be of an evolutionary nature: free will might “evolve” in the brain during development. This mental evolution effectively separates the internal and external contexts, while retaining the coherency between individuals’ public accounts of their actions. The process is supported by the properties of evolutionary algorithms and possesses the three desired properties. Some objections to the possibility of free will are addressed by pointing to the prima facie evidence for it and by showing that the assumption that everything must be either deterministic or random rests on an unsupported assumption of universalism.


2011, pp. 66-89
Author(s): Joanna J. Bryson

Many architectures of mind assume some form of modularity, but what is meant by the term ‘module’? This chapter creates a framework for understanding current modularity research in three subdisciplines of cognitive science: psychology, artificial intelligence (AI), and neuroscience. This framework starts from the distinction between horizontal modules that support all expressed behaviors vs. vertical modules that support individual domain-specific capacities. The framework is used to discuss innateness, automaticity, compositionality, representations, massive modularity, behavior-based and multi-agent AI systems, and correspondence to physiological neurosystems. There is also a brief discussion of the relevance of modularity to conscious experience.


2011, pp. 21-44
Author(s): Michel Aube

This chapter proposes a model of emotions relying upon an analysis of the requirements that are to be met by individuals of nurturing species, so as to adapt themselves to their social environments. It closely reflects the structure of other motivational systems, which consist of control structures dedicated to the management of resources critical for survival. The particular resources emotional systems seem to handle have to do with social bonding and collaborative behaviors. Herein, they are called second-order resources. They refer to the resources made available by other agents, and they are captured in the model through the concept of commitments. Emotions thus appear as computational control systems that handle the variation of commitments lying at the root of interactive and collaborative behaviors. Some critical consequences of the model for the implementation of emotions in artificial systems are drawn at the end of the chapter.
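The idea of emotions as control systems that monitor variations in commitments might be sketched as follows. All names, the status values, and the emotion rules here are illustrative assumptions, not the chapter's formal model.

```python
# Toy sketch: a commitment as a second-order resource (a resource made
# available by another agent), with emotions raised as control signals
# when the commitment's status varies.

class Commitment:
    def __init__(self, debtor, creditor, content):
        self.debtor, self.creditor, self.content = debtor, creditor, content
        self.status = "active"

class Agent:
    def __init__(self, name):
        self.name = name
        self.emotions = []          # emotional reactions raised so far

    def monitor(self, commitment):
        """Emotional control: react to variation in a commitment I hold."""
        if commitment.creditor is not self:
            return
        if commitment.status == "fulfilled":
            self.emotions.append(("gratitude", commitment.debtor.name))
        elif commitment.status == "violated":
            self.emotions.append(("anger", commitment.debtor.name))

alice, bob = Agent("alice"), Agent("bob")
c = Commitment(debtor=bob, creditor=alice, content="share food")
c.status = "violated"               # the second-order resource is withdrawn
alice.monitor(c)
print(alice.emotions)               # -> [('anger', 'bob')]
```

The point of the sketch is the control-structure reading: the emotion is not free-floating affect but a reaction keyed to the state of a resource another agent had committed.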


2011, pp. 1-20
Author(s): Andy Adamatzky

We portray mind as an imaginary chemical reactor, where discrete entities of emotions and beliefs diffuse and react as molecules. We discuss two models of mind: a doxastic solution, where quasi-chemical species represent knowledge, ignorance, delusion, doubt, and misbelief; and an affective solution, where reaction mixtures include happiness, anger, confusion, and fear. Using numerical and cellular-automaton techniques, we demonstrate a rich spectrum of nontrivial phenomena in the spatiotemporal dynamics of the affective and doxastic mixtures. This paradigm of a nonlinear-medium-based mind will be used in future studies to develop intelligent robotic systems, artificial organic creatures with liquid brains, and the diffusive intelligence of agent collectives.
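A minimal version of such a quasi-chemical affective mixture can be simulated on a one-dimensional lattice. The species, the seeding, and the reaction A + H -> 2C ("anger" plus "happiness" yields "confusion") are illustrative assumptions, not the chapter's exact rules.

```python
# Toy reaction-diffusion mixture: "happiness" (H) and "anger" (A) diffuse
# along a periodic 1-D lattice and react where they meet, producing
# "confusion" (C). Total quasi-chemical mass is conserved by construction.

import numpy as np

N, steps, D, k = 50, 100, 0.2, 0.5
H = np.zeros(N); H[:10] = 1.0       # happiness seeded at the left
A = np.zeros(N); A[-10:] = 1.0      # anger seeded at the right
C = np.zeros(N)

def diffuse(u, D):
    # discrete Laplacian with periodic boundaries
    return u + D * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

for _ in range(steps):
    H, A, C = diffuse(H, D), diffuse(A, D), diffuse(C, D)
    r = k * H * A                    # reaction rate where species overlap
    H, A, C = H - r, A - r, C + 2 * r

print(round(float(C.sum()), 3))      # confusion produced at the fronts
```

Even this crude mixture shows the qualitative behaviour the abstract points to: structure emerges in space and time at the fronts where affective species collide.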


2011, pp. 225-253
Author(s): Elizabeth Gordon, Brian Logan

A key problem for agents is responding in a timely and appropriate way to multiple, often conflicting goals in a complex, dynamic environment. In this chapter, we propose a novel goal-processing architecture that allows an agent to arbitrate between multiple conflicting goals. Building on the teleo-reactive programming framework originally developed in robotics, we introduce the notion of a resource, which represents a condition that must be true for the safe concurrent execution of a durative action. We briefly outline a goal arbitration architecture for teleo-reactive programs with resources that allows an agent to respond flexibly to multiple competing goals with conflicting resource requirements.
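The notion of a resource guarding a durative action might be sketched like this. The rule format, the resource names, and the simple claim-and-release lock discipline are assumptions for illustration, not the authors' formalism.

```python
# Toy teleo-reactive step: a program is an ordered list of
# (condition, action, resources) rules; the first rule whose condition
# holds AND whose resources are free fires, so conflicting actions that
# need the same resource cannot run concurrently.

resources = set()                     # currently claimed resources

def tr_step(program, state):
    """Fire the first applicable rule whose resources are available."""
    for condition, action, needs in program:
        if condition(state) and resources.isdisjoint(needs):
            resources.update(needs)   # claim resources for the action
            action(state)
            resources.difference_update(needs)   # release when done
            return action.__name__
    return None                       # no rule applicable this cycle

def grasp(state): state["holding"] = True
def move_to(state): state["at_target"] = True

program = [
    (lambda s: s["at_target"] and not s["holding"], grasp, {"gripper"}),
    (lambda s: not s["at_target"], move_to, {"wheels"}),
]

state = {"at_target": False, "holding": False}
print(tr_step(program, state))   # -> move_to
print(tr_step(program, state))   # -> grasp
```

Ordering the rules encodes priority, as in classic teleo-reactive programs; the resource sets are what lets several such programs run side by side without unsafe interleavings.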


2011, pp. 254-274
Author(s): Pentti O. Haikonen

The following fundamental issues of artificial minds and conscious machines are considered here: representation and symbolic processing of information with meaning and significance in the human sense; the perception process; a neural cognitive architecture; system reactions and emotions; consciousness in the machine; and artificial minds as a content-level phenomenon. Solutions are proposed for related problems, and a cognitive machine is outlined. An artificial mind is seen to arise within this machine through learning and experience as higher-level content is constructed, eventually coming to control the machine.


2011, pp. 312-331
Author(s): Push Singh

To build systems as resourceful and adaptive as people, we must develop cognitive architectures that support great procedural and representational diversity. No single technique is by itself powerful enough to deal with the broad range of domains every ordinary person can understand—even as children, we can effortlessly think about complex problems involving temporal, spatial, physical, bodily, psychological, and social dimensions. In this chapter, we describe a multiagent cognitive architecture that aims for such flexibility. Rather than seeking a best way to organize agents, our architecture supports multiple “ways to think,” each a different architectural configuration of agents. Each agent may use a different way to represent and reason with knowledge, and there are special “panalogy” mechanisms that link agents that represent similar ideas in different ways. At the highest level, the architecture is arranged as a matrix of agents: Vertically, the architecture divides into a tower of reflection, including the reactive, deliberative, reflective, self-reflective, and self-conscious levels; horizontally, the architecture divides along “mental realms,” including the temporal, spatial, physical, bodily, social, and psychological realms. Our goal is to build an artificial intelligence (AI) system resourceful enough to combine the advantages of many different ways to think about things, by making use of many types of mechanisms for reasoning, representation, and reflection.


2011, pp. 45-65
Author(s): John A. Barnden

This chapter speculatively addresses the nature and effects of metaphorical views that a mind can intermittently use in thinking about itself and other minds, such as the view of mind as a physical space in which ideas have physical locations. Although such views are subjective, it is argued in this chapter that they are nevertheless part of the real nature of the conscious and unconscious mind. In particular, it is conjectured that if a mind entertains a particular (metaphorical) view at a given time, then this activity could of itself cause that mind to become more similar in the short term to how it is portrayed by the view. Hence, the views are, to an extent, self-fulfilling prophecies. In these ways, metaphorical self-reflection, even when distorting and inaccurate, is speculatively an important aspect of the true nature of mind. The chapter also outlines a theoretical approach and related implemented system (ATT-Meta) that were designed for the understanding of metaphorical discourse but that incorporate principles that could be at the core of metaphorical self-reflection in people or future artificial agents.


2011, pp. 290-311
Author(s): Matthias Scheutz

In this chapter, we introduce an architecture framework called APOC (Activating-Processing-Observing-Components) for the analysis, evaluation, and design of complex agents. APOC provides a unified framework for the specification of agent architectures at different levels of abstraction. As such, it permits intermediary levels of architectural specification between high-level functional descriptions and low-level mechanistic descriptions that can be used to connect these two levels in a systematic way.
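One toy reading of such a component-based specification is sketched below, reduced to activation and data (processing) links between components. This is an assumption-laden illustration of the general idea, not the APOC framework itself, whose link types and semantics are richer.

```python
# Toy component network: components carry an activation level and a
# processing function; activation links spread activation downstream,
# while data links pass each component's processed output onward.

class Component:
    def __init__(self, name, process=lambda xs: sum(xs, [])):
        self.name = name
        self.activation = 0.0
        self.process = process      # this component's processing function
        self.act_links = []         # activation links to other components
        self.proc_links = []        # data links to other components
        self.inbox = []
        self.output = None

    def update(self):
        if self.activation <= 0:    # inactive components do nothing
            return
        for c in self.act_links:    # spread (attenuated) activation
            c.activation += 0.5 * self.activation
        self.output = self.process(self.inbox)
        self.inbox = []
        for c in self.proc_links:   # pass processed data downstream
            c.inbox.append(self.output)

sensor = Component("sensor")
filt = Component("filter",
                 process=lambda xs: [x for x in sum(xs, []) if x > 0])
sensor.act_links.append(filt)
sensor.proc_links.append(filt)

sensor.activation = 1.0
sensor.inbox.append([-2, 3, 5])
sensor.update()
filt.update()
print(filt.output)                  # -> [3, 5]
```

The attraction of a specification at this intermediate level is visible even in the toy: the same component graph can be read functionally (what each `process` computes) or mechanistically (how activation propagates), which is the bridging role the abstract describes.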

