Autonomy and Openness in Human and Machine Systems: Participatory Sense-Making and Artificial Minds

Author(s):  
Robin L. Zebrowski ◽  
Eli B. McGraw

Within artificial intelligence (AI) and machine consciousness research, social cognition is largely ignored; when it is addressed, it is typically treated as one application of more traditional forms of cognition. However, while theoretical approaches to AI have stagnated in recent years, social cognition research has progressed in productive new directions, specifically through enactive approaches. Taking participatory sense-making (PSM) as our framework, we rethink conceptions of autonomy and openness in AI and enactivism, shifting the focus away from living systems so that artificial systems can be incorporated into social forms of sense-making. PSM provides an entire level of analysis through an often-overlooked autonomous system produced via social interaction, one that can be both measured and modeled in order to instantiate and examine more robust artificial cognitive systems.
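
To make the "measured and modeled" claim concrete, here is a minimal sketch, our illustration rather than the authors' model, in which the interaction between two agents is treated as a dynamical unit in its own right; the coupled-oscillator form, coupling strength, and synchrony measure are all assumptions for illustration:

```python
# A minimal sketch (an illustration, not the authors' model) of treating
# the interaction itself as a measurable dynamical unit: two coupled phase
# oscillators stand in for two agents, and their phase-locking indexes the
# coordination that PSM locates in the interaction rather than in either
# agent alone.
import numpy as np

dt, steps, coupling = 0.01, 5000, 0.8
omega = np.array([1.0, 1.3])          # the agents' intrinsic frequencies
theta = np.array([0.0, np.pi / 2])    # initial phases

sync = []
for _ in range(steps):
    # Each agent adjusts to the other: Kuramoto-style mutual coupling.
    dtheta = omega + coupling * np.sin(theta[::-1] - theta)
    theta = theta + dtheta * dt
    sync.append(np.abs(np.mean(np.exp(1j * theta))))  # order parameter

# A sustained order parameter near 1 marks an interaction-level pattern
# that neither oscillator exhibits on its own.
print(f"mean synchrony over last 1000 steps: {np.mean(sync[-1000:]):.2f}")
```

The point of the sketch is only that the interaction-level quantity (synchrony) is a property of the coupled pair, not of either component, which is the kind of autonomous interactional system PSM singles out for measurement.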

Author(s):  
Dane A. Morey ◽  
Jesse M. Marquisee ◽  
Ryan C. Gifford ◽  
Morgan C. Fitzgerald ◽  
Michael F. Rayo

Despite all of the research and investment dedicated to artificial intelligence and other automation technologies, there is a paucity of evaluation methods for how these technologies integrate into effective joint human-machine teams. Current evaluation methods, which were largely designed to measure performance on discrete representative tasks, provide little information about how a system will perform when operating outside the bounds of the evaluation. We are exploring a method of generating Extensibility Plots, which predict the ability of a human-machine system to respond to classes of challenges at intensities both within and beyond what was tested. In this paper we test and explore the method, using performance data collected from a healthcare setting in which a machine and nurse jointly detect signs of patient decompensation. We explore the validity and usefulness of these curves for predicting the graceful extensibility of the system.
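
As a rough illustration of the idea behind an Extensibility Plot, the sketch below fits observed performance against challenge intensity and then extrapolates beyond the tested range; the logistic-decay form, variable names, and data are illustrative assumptions, not the authors' published method:

```python
# A hypothetical sketch of an extensibility curve: fit observed performance
# vs. challenge intensity, then extrapolate beyond the tested range. The
# logistic-decay form and the data are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

# Observed joint human-machine performance (fraction of decompensation
# events detected) at tested challenge intensities -- illustrative data.
intensity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
performance = np.array([0.98, 0.95, 0.88, 0.74, 0.55])

def logistic_decay(x, x0, k):
    """Performance decays smoothly around a tipping intensity x0."""
    return 1.0 / (1.0 + np.exp(k * (x - x0)))

params, _ = curve_fit(logistic_decay, intensity, performance, p0=[5.0, 1.0])

# Predict the response to intensities outside the evaluated range.
untested = np.array([6.0, 8.0, 10.0])
predicted = logistic_decay(untested, *params)
for x, p in zip(untested, predicted):
    print(f"intensity {x:4.1f} -> predicted performance {p:.2f}")
```

The interesting region is precisely the extrapolated tail: the shape of the fitted curve, rather than the tested points themselves, is what speaks to graceful extensibility.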


AI Magazine ◽  
2020 ◽  
Vol 41 (2) ◽  
pp. 86-92 ◽  
Author(s):  
Melanie Mitchell

In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning” (Rota 1986). Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: Humans are able to actually understand the situations they encounter, whereas even the most advanced of today’s artificial intelligence systems do not yet have a humanlike understanding of the concepts that we are trying to teach them. This lack of understanding may underlie current limitations on the generality and reliability of modern artificial intelligence systems. In October 2018, the Santa Fe Institute held a three-day workshop, organized by Barbara Grosz, Dawn Song, and myself, called Artificial Intelligence and the Barrier of Meaning. Thirty participants from a diverse set of disciplines — artificial intelligence, robotics, cognitive and developmental psychology, animal behavior, information theory, and philosophy, among others — met to discuss questions related to the notion of understanding in living systems and the prospect for such understanding in machines. In the hope that the results of the workshop will be useful to the broader community, this article summarizes the main themes of discussion and highlights some of the ideas developed at the workshop.


Author(s):  
Rhyse Bendell ◽  
Jessica Williams ◽  
Stephen M. Fiore ◽  
Florian Jentsch

Artificial intelligence has been developed to perform all manner of tasks but has not gained the capabilities needed to support social cognition. We suggest that teams composed of both humans and artificially intelligent agents cannot achieve optimal team performance unless all teammates have the capacity to employ social-cognitive mechanisms. These mechanisms form the foundation for generating inferences about one's counterparts and enable the execution of informed, appropriate behaviors. Social intelligence and its utilization are known to be vital components of human-human teaming processes because they guide the recognition, interpretation, and use of the signals that humans naturally use to shape their exchanges. Although modern sensors and algorithms could allow AI to observe most social cues, signals, and other indicators, the approximation of human-to-human social interaction, based upon aggregation and modeling of such cues, is currently beyond the capacity of potential AI teammates; this is partly because humans are notoriously variable. We describe an approach for measuring social-cognitive features to produce the raw information needed to create human-agent profiles that can be operated upon by artificial intelligences.
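
One way to picture the raw information such an approach would produce is a profile object that accumulates sensed cues per teammate; the field names, cue types, and aggregation rule below are hypothetical stand-ins, not the authors' measures:

```python
# A hypothetical sketch of a human-agent profile built from observed social
# cues, of the kind an AI teammate could operate on. Field names, cue types,
# and the aggregation rule are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class HumanAgentProfile:
    teammate_id: str
    # Rolling observations per cue type, e.g. "gaze", "gesture", "prosody".
    cue_history: dict[str, list[float]] = field(default_factory=dict)

    def observe(self, cue: str, value: float) -> None:
        """Record one sensed social signal, normalized to [0, 1]."""
        self.cue_history.setdefault(cue, []).append(value)

    def feature(self, cue: str) -> float:
        """Collapse a cue's history into a single profile feature.
        A plain mean stands in for whatever modeling a real system would
        use; since humans are notoriously variable, a real system would
        also need to model that spread, not just the central tendency."""
        values = self.cue_history.get(cue, [])
        return mean(values) if values else 0.0

profile = HumanAgentProfile("teammate-1")
profile.observe("gaze", 0.8)
profile.observe("gaze", 0.6)
print(profile.feature("gaze"))  # 0.7
```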


Author(s):  
Alexander Riegler

Interdisciplinary research provides inspiration and insights into how a variety of disciplines can contribute to the formulation of an alternative path to artificial cognition systems. It has been suggested that results from ethology, evolutionary theory, and epistemology can be condensed into four boundary conditions. These lead to the outline of an architecture for genuine cognitive systems, which seeks to overcome traditional problems known from artificial intelligence research. Two major points are stressed: (a) the maintenance of explanatory power by favoring an advanced rule-based system over neuronal systems, and (b) the organizational closure of the cognitive apparatus, which has far-reaching implications for the creation of meaningful agents.
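
A toy rendering of the two stressed points, under the loose reading that organizational closure means the rule set regenerates itself from within; the encoding below is an assumption for illustration, not Riegler's architecture:

```python
# A toy sketch of (a) an inspectable rule-based system and (b) a crude
# stand-in for organizational closure: rules whose firing produces new
# rules, so the rule set is maintained from within the system itself.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    # On firing, a rule may return new rules: the system's operations
    # produce its own components.
    action: Callable[[dict], list["Rule"]]

def step(state: dict, rules: list[Rule]) -> list[Rule]:
    """Fire every matching rule. Explanatory power stays intact because
    each firing is a named, inspectable event rather than a weight update."""
    new_rules = list(rules)
    for rule in rules:
        if rule.condition(state):
            print(f"fired: {rule.name}")
            new_rules.extend(rule.action(state))
    return new_rules

# Example: a rule that, once food is seen, creates a rule to approach it.
seek = Rule(
    "seek-food",
    condition=lambda s: s.get("food_visible", False),
    action=lambda s: [Rule("approach-food",
                           condition=lambda s: True,
                           action=lambda s: [])],
)
rules = step({"food_visible": True}, [seek])
print([r.name for r in rules])  # ['seek-food', 'approach-food']
```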


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


Author(s):  
Radu Mutihac

Models and algorithms designed to mimic the information processing and knowledge acquisition of the human brain are generically called artificial (or formal) neural networks (ANNs), parallel distributed processing (PDP), or neuromorphic or connectionist models. The term network is ubiquitous today: computer networks exist, communications are referred to as networking, and corporations and markets are structured in networks. The concept of the ANN was initially coined in the hopeful vision of achieving artificial intelligence (AI) by emulating the biological brain. ANNs are an alternative to symbol programming, aiming to implement neural-inspired concepts in AI environments (neural computing) (Hertz, Krogh, & Palmer, 1991), whereas cognitive systems attempt to mimic actual biological nervous systems (computational neuroscience). All conceivable neuromorphic models lie in between and are supposed to be simplified but meaningful representations of some reality. In order to establish a unifying theory of neural computing and computational neuroscience, mathematical theories should be developed along with specific methods of analysis (Amari, 1989; Amit, 1990). The following outlines a tentative, mathematically closed framework for neural modeling.
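
For the neural-computing side of this distinction, a minimal sketch: a two-layer ANN learning XOR by weight adjustment rather than by explicit symbolic rules (the layer sizes, learning rate, and iteration count are arbitrary illustrative choices):

```python
# A minimal sketch of "neural computing" as an alternative to symbol
# programming: a two-layer feedforward ANN learning XOR, a mapping that is
# awkward to capture with a single symbolic rule but learnable by plain
# gradient descent on the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):  # gradient descent on squared error
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```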


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1163
Author(s):  
Andrea Roli ◽  
Stuart A. Kauffman

Since the early cybernetics studies of Wiener, Pask, and Ashby, the properties of living systems have been the subject of deep investigation. The goals of this endeavour are both understanding and building: abstract models and general principles are sought for describing organisms, their dynamics, and their ability to produce adaptive behavior. This research has achieved prominent results in fields such as artificial intelligence and artificial life: today, for example, we have robots capable of exploring hostile environments with a high level of self-sufficiency, with planning capabilities, and with the ability to learn. Nevertheless, the discrepancy between the emergence and evolution of life and that of artificial systems is still huge. In this paper, we identify the fundamental elements that characterize the evolution of the biosphere and open-ended evolution, and we illustrate their implications for the evolution of artificial systems. Subsequently, we discuss the most relevant issues and questions that this viewpoint poses for both biological and artificial systems.


2020 ◽  
Vol 43 (8) ◽  
pp. 385-455
Author(s):  
A. Diaspro ◽  
P. Bianchini

This article deals with the development of optical microscopy towards nanoscopy. Basic concepts of the methods implemented to obtain spatial super-resolution are described, along with concepts related to the study of biological systems at the molecular level. Fluorescence as a mechanism of contrast and spatial resolution is the starting point for developing a multi-messenger optical microscope tunable down to the nanoscale in living systems. Moreover, the integration of optical nanoscopy with scanning probe microscopy, and the appealing possibility of using artificial intelligence approaches, are briefly outlined.
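
For context, the classical diffraction barrier that these super-resolution methods circumvent is the Abbe limit (a standard result, not stated in the abstract):

```latex
d = \frac{\lambda}{2\,\mathrm{NA}}
```

For visible light ($\lambda \approx 500$ nm) and a high numerical aperture ($\mathrm{NA} \approx 1.4$), $d \approx 180$ nm, far above molecular scales, which is why fluorescence-based contrast mechanisms are needed to push optical resolution down to the nanoscale.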

