An Architecture for Cognitive Diversity

2011 ◽  
pp. 312-331 ◽  
Author(s):  
Push Singh

To build systems as resourceful and adaptive as people, we must develop cognitive architectures that support great procedural and representational diversity. No single technique is by itself powerful enough to deal with the broad range of domains every ordinary person can understand—even as children, we can effortlessly think about complex problems involving temporal, spatial, physical, bodily, psychological, and social dimensions. In this chapter, we describe a multiagent cognitive architecture that aims for such flexibility. Rather than seeking a best way to organize agents, our architecture supports multiple “ways to think,” each a different architectural configuration of agents. Each agent may use a different way to represent and reason with knowledge, and there are special “panalogy” mechanisms that link agents that represent similar ideas in different ways. At the highest level, the architecture is arranged as a matrix of agents: Vertically, the architecture divides into a tower of reflection, including the reactive, deliberative, reflective, self-reflective, and self-conscious levels; horizontally, the architecture divides along “mental realms,” including the temporal, spatial, physical, bodily, social, and psychological realms. Our goal is to build an artificial intelligence (AI) system resourceful enough to combine the advantages of many different ways to think about things, by making use of many types of mechanisms for reasoning, representation, and reflection.
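The matrix organization described above can be sketched in code. The following is a toy illustration only, not Singh's implementation: agents are indexed by (reflection level, mental realm), and hypothetical "panalogy" links join agents that represent the same idea in different realms.

```python
from dataclasses import dataclass, field

# Toy sketch (not the chapter's actual system) of the matrix-of-agents
# organization: a tower of reflection levels crossed with mental realms.
LEVELS = ["reactive", "deliberative", "reflective",
          "self-reflective", "self-conscious"]
REALMS = ["temporal", "spatial", "physical",
          "bodily", "social", "psychological"]

@dataclass
class Agent:
    level: str
    realm: str
    panalogy: list = field(default_factory=list)  # peer agents holding the
                                                  # same idea in other realms

# The architecture at the highest level: a levels-by-realms matrix of agents.
matrix = {(lv, rm): Agent(lv, rm) for lv in LEVELS for rm in REALMS}

# A panalogy link: one concept represented two ways at the deliberative level.
a = matrix[("deliberative", "spatial")]
b = matrix[("deliberative", "physical")]
a.panalogy.append(b)
b.panalogy.append(a)
```

A "way to think" would then correspond to activating a particular configuration of agents across this matrix; the class and link names here are assumptions made for illustration.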

Author(s):  
Pranav Gupta ◽  
Anita Williams Woolley

Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it.


Author(s):  
Alexandra D. Kaplan ◽  
Theresa T. Kessler ◽  
J. Christopher Brill ◽  
P. A. Hancock

Objective: The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). Such factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.
Background: There are many factors influencing trust in robots, automation, and technology in general, and there have been several meta-analytic attempts to understand the antecedents of trust in these areas. However, no targeted meta-analysis has been performed examining the antecedents of trust in AI.
Method: Data from 65 articles examined the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Lastly, four common uses for AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were examined as further potential moderating factors.
Results: All of the examined categories were significant predictors of trust in AI, as were many individual antecedents such as AI reliability and anthropomorphism, among others.
Conclusion: Overall, the results of this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. Additionally, we highlight the areas where there is currently no empirical research.
Application: Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as they require.


Author(s):  
K. P. V. Sai Aakarsh ◽  
Adwin Manhar

Over many centuries, tools of increasing sophistication have been developed to serve the human race. Digital computers are, in many respects, just another tool: they can perform the same sort of numerical and symbolic manipulations that an ordinary person can, but faster and more reliably. This paper reviews artificial intelligence algorithms applied in computer applications and software, including knowledge-based systems and computational intelligence. Artificial intelligence is the science of mimicking human mental faculties in a computer, and such systems can assist physicians in making decisions in medical diagnosis.


2000 ◽  
Vol 9 (2-3) ◽  
pp. 177-193 ◽  
Author(s):  
Michael Macdonald-Ross ◽  
Robert Waller

Written in 1974 while the authors were with the Open University, this paper first appeared in the 1976 Penrose Annual. The original abstract, written by the Penrose editor, read: Break down the barriers in the interests of the reader. Take responsibility for the success or failure of the communication. Do not accept a label or a slot on a production line. Be a complete human being with moral and intellectual integrity and thoroughgoing technical competence. This is the message of this article by two highly professional communicators at the Institute of Educational Technology of the Open University, Milton Keynes. It examines the range of complex problems involved in putting the expert's message in a form the ordinary person can best understand and use. It is reprinted here with minor changes that mostly reflect the current unacceptability of the pronoun 'he' used generically. The authors have also added a 2000 Postscript.


2019 ◽  
Vol 374 (1774) ◽  
pp. 20180377 ◽  
Author(s):  
Luís F. Seoane

Reservoir computing (RC) is a powerful computational paradigm that allows high versatility with cheap learning. While other artificial intelligence approaches need exhaustive resources to specify their inner workings, RC is based on a reservoir with highly nonlinear dynamics that does not require a fine tuning of its parts. These dynamics project input signals into high-dimensional spaces, where training linear readouts to extract input features is vastly simplified. Thus, inexpensive learning provides very powerful tools for decision-making, controlling dynamical systems, classification, etc. RC also facilitates solving multiple tasks in parallel, resulting in a high throughput. Existing literature focuses on applications in artificial intelligence and neuroscience. We review this literature from an evolutionary perspective. RC’s versatility makes it a great candidate to solve outstanding problems in biology, which raises relevant questions. Is RC as abundant in nature as its advantages should imply? Has it evolved? Once evolved, can it be easily sustained? Under what circumstances? (In other words, is RC an evolutionarily stable computing paradigm?) To tackle these issues, we introduce a conceptual morphospace that would map computational selective pressures that could select for or against RC and other computing paradigms. This guides a speculative discussion about the questions above and allows us to propose a solid research line that brings together computation and evolution with RC as test model of the proposed hypotheses. This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
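The mechanism the abstract describes (a fixed, untrained nonlinear reservoir projecting inputs into a high-dimensional space, with only a linear readout trained) can be sketched as a minimal echo state network, a standard instance of RC. The dimensions, spectral-radius scaling, and sine-prediction task below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a small echo state network (ESN).
n_inputs, n_reservoir = 1, 200

# Fixed random weights: the reservoir itself is never trained, only
# scaled so its spectral radius stays below 1 (echo state property).
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one.
t = np.arange(0, 30, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
washout = 50          # discard the initial transient
X, y = X[washout:], y[washout:]

# "Cheap learning": training reduces to ridge regression on the
# reservoir states, i.e. fitting only the linear readout W_out.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
```

Solving another task in parallel would mean training a second readout on the same reservoir states, which is where the high throughput the abstract mentions comes from.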


1997 ◽  
Vol 12 (4) ◽  
pp. 411-412 ◽  
Author(s):  
ROBERT MORRIS ◽  
LINA KHATIB

Artificial intelligence research in temporal reasoning focuses on designing automated solutions to complex problems in computation involving time. TIME-97, the 4th International Workshop on Temporal Representation and Reasoning, held in Daytona Beach, Florida — like the three workshops that preceded it — had the objective of creating an international forum for the exchange of information among the many researchers and knowledge engineers who are developing and applying techniques in temporal reasoning.


2010 ◽  
Vol 2010 ◽  
pp. 1-8 ◽  
Author(s):  
Troy D. Kelley ◽  
Lyle N. Long

Generalized intelligence is much more difficult than originally anticipated when Artificial Intelligence (AI) was first introduced in the early 1960s. Deep Blue, the chess-playing supercomputer, was developed to defeat the top-rated human chess player and successfully did so by defeating Garry Kasparov in 1997. However, Deep Blue only played chess; it did not play checkers, or any other games. Other examples of AI programs which learned and played games were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. So the question remains: Why is generalized intelligence so difficult? If complex tasks require a significant amount of development time, and task generalization is not easily accomplished, then a significant amount of effort is going to be required to develop an intelligent system. This will require a system-of-systems approach that uses many AI techniques: neural networks, fuzzy logic, and cognitive architectures.


Author(s):  
Hui Wei

The neural mechanism of memory has a very close relation to the problem of representation in artificial intelligence. In this paper, a computational model is proposed to simulate the network of neurons in the brain and how they process information. The model draws on morphological and electrophysiological characteristics of neural information processing, and is based on the assumption that neurons encode their firing sequence. The network structure, functions for neural encoding at different stages, the representation of stimuli in memory, and an algorithm to form a memory are presented. The stability and recall rate for learning and the capacity of memory are also analyzed. Because neural dynamic processes, one succeeding another, achieve a neuron-level and coherent form by which information is represented and processed, the model may facilitate examination of various branches of Artificial Intelligence (AI), such as inference, problem solving, pattern recognition, natural language processing, and learning. The processes of cognitive manipulation occurring in intelligent behavior have a consistent representation while all being modeled from the perspective of computational neuroscience. Thus, the dynamics of neurons make it possible to explain the inner mechanisms of different intelligent behaviors by a unified model of cognitive architecture at a micro-level.
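The abstract does not specify its encoding algorithm, but the general idea of a neural network storing stimulus representations and recalling them from partial cues can be illustrated with a classic Hopfield-style associative memory. This is a standard textbook model offered as a rough analogue, not the author's model.

```python
import numpy as np

# Hedged illustration: a Hopfield-style associative memory, a classic
# (not this paper's) model of storing patterns in a neural network
# and recalling them from corrupted cues.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian learning: each stored pattern strengthens co-active links.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iterate the network dynamics until it settles on a stored memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt one bit of the first stored pattern and recover the original.
cue = patterns[0].copy()
cue[0] = -cue[0]
recovered = recall(cue)
```

The "capacity of memory" the abstract analyzes has a direct analogue here: a Hopfield network of n binary neurons can reliably store only on the order of 0.14n patterns before recall degrades.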

