artificial agent
Recently Published Documents


TOTAL DOCUMENTS: 112 (five years: 40)

H-INDEX: 11 (five years: 4)

2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Isabelle Hupont ◽  
Giovanna Varni ◽  
Mohamed Chetouani ◽  
...  

In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people’s spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents’ facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants’ facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results reveal that the agent perceived as the least uncanny and the most anthropomorphic, likable, and co-present was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry, as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.


2021 ◽  
pp. 1-36
Author(s):  
Vagan Terziyan ◽  
Olena Kaikova

Abstract: Machine learning is a good tool for simulating human cognitive skills, as it maps perceived information to labels or action choices, aiming at optimal behavior policies for a human or an artificial agent operating in the environment. In autonomous systems, objects and situations are perceived by receptors divided among sensors, and reactions to the input (e.g., actions) are distributed among particular capability providers, or actuators. Cognitive models can be trained as, for example, neural networks. We suggest training such models for cases of potential disabilities, where a disability is the absence of one or more cognitive sensors or actuators at different levels of the cognitive model. We adapt several neural network architectures to simulate various cognitive disabilities. The idea was triggered by the “coolability” (enhanced capability) paradox, according to which a person with some disability can become more efficient in using other capabilities. Therefore, an autonomous system (human or artificial) pretrained with simulated disabilities will be more efficient when acting in adversarial conditions. We consider these coolabilities as complementary artificial intelligence and argue for the usefulness of this concept in various applications.
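The core idea of simulating a missing sensor can be sketched as masking input channels of the cognitive model, applied randomly during training much like dropout at the perception layer. This is a minimal illustration assuming a flat feature vector per percept; the function names and the channel-dropout scheme are illustrative, not the paper's actual architectures.

```python
import numpy as np

def mask_sensors(x, disabled):
    """Zero out the input channels of disabled sensors.

    x: (n_features,) perception vector; disabled: iterable of channel indices.
    Simulates a cognitive disability by removing a sensor's contribution
    to the model's input (hypothetical scheme, for illustration).
    """
    x = np.asarray(x, dtype=float).copy()
    x[list(disabled)] = 0.0
    return x

def random_disability_batch(X, p=0.2, rng=None):
    """During training, disable each sensor channel with probability p,
    so the model learns to compensate with the remaining channels."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(X.shape[1]) >= p  # True = sensor available
    return X * mask

# Example: a 4-sensor percept with sensor 1 disabled
x = mask_sensors([0.5, 0.9, 0.1, 0.7], disabled=[1])
```

A model trained on such masked batches is forced to distribute its policy across the surviving channels, which is one plausible reading of the "coolability" effect described above.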


2021 ◽  
Author(s):  
David Antonio Gomez Jauregui ◽  
Felix Dollack ◽  
Monica Perusquia-Hernandez
Keyword(s):  
The Self ◽  

2021 ◽  
Vol 8 ◽  
Author(s):  
Maria Lombardi ◽  
Davide Liuzza ◽  
Mario di Bernardo

In many real-world scenarios, humans and robots are required to coordinate their movements in joint tasks to fulfil a common goal. While several examples of dyadic human-robot interaction exist in the current literature, multi-agent scenarios in which one or more artificial agents need to interact with many humans are still seldom investigated. In this paper we address the problem of synthesizing an autonomous artificial agent to perform a paradigmatic oscillatory joint task in human ensembles while exhibiting some desired human kinematic features. We propose an architecture based on deep reinforcement learning which is flexible enough to make the artificial agent interact with human groups of different sizes. As a paradigmatic coordination task we consider a multi-agent version of the mirror game, an oscillatory motor task widely used in the literature to study human motor coordination.
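A coordination objective for such an oscillatory task can be sketched as a reward that peaks when the agent's oscillation phase matches the group's. The phase representation, error metric, and exponential reward shape below are assumptions for illustration; the abstract does not specify the paper's actual reward function.

```python
import math

def group_phase_error(agent_phase, human_phases):
    """Mean absolute circular difference (radians) between the agent's
    oscillation phase and each human player's phase."""
    def circ_diff(a, b):
        # Wrap the difference into (-pi, pi] before taking magnitude
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)
    return sum(circ_diff(agent_phase, h) for h in human_phases) / len(human_phases)

def reward(agent_phase, human_phases):
    """Toy RL reward: 1 at perfect synchrony, decaying with phase error.
    Works for any group size, matching the flexibility requirement."""
    return math.exp(-group_phase_error(agent_phase, human_phases))
```

Because the error is averaged over all human players, the same reward applies unchanged to ensembles of different sizes.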


2021 ◽  
Vol 12 ◽  
Author(s):  
Sung-Phil Kim ◽  
Minju Kim ◽  
Jongmin Lee ◽  
Yang Seok Cho ◽  
Oh-Sang Kwon

The present study develops an artificial agent that plays the iterative chicken game based on a computational model that describes human behavior in competitive social interactions in terms of fairness. The computational model we adopted in this study, named the self-concept fairness model, decides the agent’s action according to an evaluation of the fairness of both opponent and self. We implemented the artificial agent in a computer program with a set of parameters adjustable by researchers. These parameters allow researchers to determine the extent to which the agent behaves aggressively or cooperatively. To demonstrate the use of the proposed method for the investigation of human behavior, we performed an experiment in which human participants played the iterative chicken game against the artificial agent. Participants were divided into two groups, each informed that they were playing against either a person or the computer. The behavioral analysis showed that the proposed method can induce changes in the behavioral pattern of human players by changing the agent’s behavioral pattern. We also found that participants tended to be more sensitive to fairness when they played against a human opponent than against a computer opponent. These results suggest that the artificial agent developed in this study will be useful for investigating human behavior in competitive social interactions.
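The game structure can be sketched with a standard chicken payoff matrix and a toy decision rule exposing an adjustable aggression parameter, in the spirit of the researcher-tunable parameters described above. The payoff values and the decision rule are illustrative assumptions; they are not the paper's self-concept fairness model.

```python
# Standard chicken-game payoffs (hypothetical values): C = swerve, D = dare
PAYOFF = {  # (my action, opponent action) -> (my payoff, opponent payoff)
    ("C", "C"): (3, 3),  # both swerve: modest mutual payoff
    ("C", "D"): (1, 5),  # I swerve, opponent dares: opponent wins
    ("D", "C"): (5, 1),  # I dare, opponent swerves: I win
    ("D", "D"): (0, 0),  # both dare: crash, worst outcome for both
}

def agent_action(opponent_dare_rate, aggression=0.5):
    """Toy decision rule: dare more when the opponent has dared less,
    scaled by a researcher-set aggression parameter in [0, 1].
    (An illustrative stand-in for the fairness-based evaluation.)"""
    dare_prob = aggression * (1.0 - opponent_dare_rate)
    return "D" if dare_prob > 0.25 else "C"
```

Raising `aggression` makes the agent dare against all but the most daring opponents, which is one way a researcher-adjustable parameter could shift the agent along the aggressive-cooperative spectrum.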


2021 ◽  
pp. 89-102
Author(s):  
Matthias Scheutz ◽  
Bertram F. Malle

In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial (compared to human) agents that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions, and when asked how the artificial and human agents should decide, they impose the same norms on them. However, when confronted with how the agents did in fact decide, people judge the artificial agents’ decisions differently from those of humans. This difference is best explained by justifications people grant the human agents (imagining their experience of the decision situation) but do not grant the artificial agent (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to explicitly communicate such justifications to people, so they can understand and accept their decisions.


2021 ◽  
Vol 3 ◽  
Author(s):  
Kosmas Kritsis ◽  
Theatina Kylafi ◽  
Maximos Kaliakatsos-Papakostas ◽  
Aggelos Pikrakis ◽  
Vassilis Katsouros

Jazz improvisation on a given lead sheet with chords is an interesting scenario for studying the behaviour of artificial agents when they collaborate with humans. In jazz improvisation specifically, the role of the accompanist is crucial for reflecting the harmonic and metric characteristics of a jazz standard, while identifying the intentions of the soloist in real time and adapting the accompanying performance parameters accordingly. This paper presents a study of a basic implementation of an artificial jazz accompanist, which provides accompanying chord voicings to a human soloist, conditioned on the soloing input and on the harmonic and metric information provided in a lead sheet chart. The model of the artificial agent includes a separate model for predicting the intentions of the human soloist, towards providing proper accompaniment to the human performer in real time. Simple implementations of Recurrent Neural Networks are employed both for modeling the predictions of the artificial agent and for modeling the expectations of human intention. A publicly available dataset is modified with a probabilistic refinement process to include all the necessary information for the task at hand. Test-case compositions on two jazz standards show the ability of the system to comply with the harmonic constraints within the chart. Furthermore, the system is shown to provide varying output under different soloing conditions, with no significant sacrifice of “musicality” in the generated music, as shown in subjective evaluations. Some important limitations that need to be addressed to obtain more informative results on the potential of the examined approach are also discussed.
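The basic mechanics of an RNN accompanist can be sketched as a recurrent step that folds each soloist feature vector into a hidden state, followed by a readout over a vocabulary of chord voicings. The dimensions, weight initialization, and vocabulary size below are arbitrary assumptions; this is a generic vanilla-RNN illustration, not the paper's trained models.

```python
import numpy as np

def rnn_step(x, h, Wxh, Whh, bh):
    """One vanilla RNN step: combine the current soloist feature vector x
    with the hidden state h summarizing the performance so far."""
    return np.tanh(Wxh @ x + Whh @ h + bh)

def predict_voicing(h, Why, by):
    """Map the hidden state to scores over a (hypothetical) vocabulary
    of chord voicings; the arg-max is the accompanist's next voicing."""
    return int(np.argmax(Why @ h + by))

rng = np.random.default_rng(42)
n_in, n_h, n_voicings = 8, 16, 12  # arbitrary sizes for illustration
Wxh = rng.normal(0, 0.1, (n_h, n_in))
Whh = rng.normal(0, 0.1, (n_h, n_h))
Why = rng.normal(0, 0.1, (n_voicings, n_h))
bh, by = np.zeros(n_h), np.zeros(n_voicings)

h = np.zeros(n_h)
for _ in range(4):  # four beats of (random, stand-in) soloist input
    x = rng.normal(size=n_in)
    h = rnn_step(x, h, Wxh, Whh, bh)
voicing = predict_voicing(h, Why, by)
```

Because the hidden state accumulates the soloing input, different soloist streams steer the readout toward different voicings, which matches the varying-output behavior reported in the evaluation.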

