Allocentric Emotional Affordances in HRI: The Multimodal Binding

Author(s):  
Jordi Vallverdú ◽  
Gabriele Trovato ◽  
Lorenzo Jamone

Affordances are an important concept in cognition that can be applied to robots to enable successful human-robot interaction (HRI). In this paper we explore and discuss the idea of emotional affordances and propose a viable model for implementation in HRI. We consider “two-way” affordances: a perceived object triggering an emotion, and a perceived human emotion expression triggering an action. To keep the implementation generic, the proposed model includes a library that can be customised for the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik’s Wheel of Emotions, and show some examples of simulation and possible scenarios.

2018 ◽  
Vol 2 (4) ◽  
pp. 78 ◽  
Author(s):  
Jordi Vallverdú ◽  
Gabriele Trovato ◽  
Lorenzo Jamone

The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human-robot interaction (HRI). In this paper we explore and discuss the idea of “emotional affordances” by proposing a viable model for implementation in HRI, one that considers allocentric and multimodal perception. We consider “two-way” affordances: a perceived object triggering an emotion, and a perceived human emotion expression triggering an action. To keep the implementation generic, the proposed model includes a library that can be customised for the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik’s Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.
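The customisable “two-way” affordance library described in the abstract can be sketched as a pair of lookup tables over Plutchik’s eight basic emotions. This is an illustrative sketch only, not the authors’ implementation; all class and binding names are hypothetical.

```python
# Hypothetical sketch of a two-way emotional affordance library:
# way 1: a perceived object triggers an emotion;
# way 2: a perceived human emotion expression triggers a robot action.

PLUTCHIK_BASIC = {"joy", "trust", "fear", "surprise",
                  "sadness", "disgust", "anger", "anticipation"}

class AffordanceLibrary:
    def __init__(self):
        self.object_to_emotion = {}   # way 1: object -> emotion
        self.emotion_to_action = {}   # way 2: human emotion -> action

    def bind_object(self, obj, emotion):
        if emotion not in PLUTCHIK_BASIC:
            raise ValueError(f"unknown basic emotion: {emotion}")
        self.object_to_emotion[obj] = emotion

    def bind_emotion(self, emotion, action):
        if emotion not in PLUTCHIK_BASIC:
            raise ValueError(f"unknown basic emotion: {emotion}")
        self.emotion_to_action[emotion] = action

    def appraise_object(self, obj):
        return self.object_to_emotion.get(obj)

    def react_to_human(self, emotion):
        return self.emotion_to_action.get(emotion)

# Customising the library for a specific robot and scenario
# (the bindings below are invented examples):
lib = AffordanceLibrary()
lib.bind_object("knife", "fear")
lib.bind_emotion("sadness", "approach_and_comfort")
```

Keeping both directions in one library object is what makes the model generic: only the bindings change between robots and scenarios.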


2009 ◽  
Vol 02 (01) ◽  
pp. 1-7 ◽  
Author(s):  
VLADIMIR G. IVANCEVIC ◽  
EUGENE V. AIDMAN ◽  
LEONG YEN

The recently developed Life-Space-Foam approach to goal-directed human action deals with individual actor dynamics. This paper applies the model to characterize the dynamics of co-action by two or more actors. These dynamics are modelled by (i) a two-term joint action (including a cognitive/motivational potential and kinetic energy), and (ii) its associated adaptive path integral, representing an infinite-dimensional neural network. Its feedback adaptation loop is derived from Bernstein's concept of the sensory-corrections loop in human motor control and Brooks' subsumption architecture in robotics. Potential applications of the proposed model in human-robot interaction research are discussed.
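The two-term action and its path integral described above can be written schematically as follows; this is a reconstruction from the abstract's wording, with illustrative symbols, not the paper's exact formulation.

```latex
% Two-term joint action: kinetic energy minus cognitive/motivational potential
A[q] = \int \Big( \tfrac{1}{2}\sum_{ij} g_{ij}\,\dot q^{\,i} \dot q^{\,j}
       \;-\; \Phi(q) \Big)\, dt
% Associated adaptive path integral over all co-action trajectories
\langle \mathrm{out} \mid \mathrm{in} \rangle
       = \int \mathcal{D}[q]\; e^{\,iA[q]}
```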


Author(s):  
Yuan Wei ◽  
Jing Zhao

Purpose — This paper aims to address the problem of designing robot behaviors (mainly for robotic arms) that express emotions. The authors study the effects of behaviors of the humanoid robot NAO on subjects' recognition of emotion expression in human-robot interaction (HRI). Design/methodology/approach — A method to design robot behavior through movement primitives is proposed. Then, a novel dimensional affective model is built. Finally, the concept of action semantics is adopted to combine robot behaviors with emotion expression. Findings — To evaluate this combination, the authors assessed positive (excited and happy) and negative (frightened and sad) emotional patterns with 20 subjects, divided into two groups according to whether they were familiar with robots. The results show no difference between the two groups in recognizing the different emotion patterns, and the subjects could recognize the robot behaviors with emotions. Practical implications — Using affective models to guide robots' behavior or express their intentions is highly beneficial in human-robot interaction. Several applications of emotional motion are envisaged: improving efficiency in HRI, directing people during disasters, better understanding with human partners, and helping people perform their tasks better. Originality/value — This paper presents a method to design robot behaviors with emotion expression. A similar methodology can be used for other parts (legs, torso, head, and so on) of humanoid robots, or for non-humanoid robots such as industrial robots.
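The pairing of a dimensional affective model with movement primitives can be sketched as a nearest-neighbour lookup in valence-arousal space. The anchor points and primitive parameters below are invented for illustration; the paper's actual model and values are not reproduced here.

```python
# Illustrative sketch (not the authors' implementation): selecting a
# movement-primitive parameterisation for the four emotion patterns
# tested in the paper, via a dimensional (valence-arousal) model.

# Hypothetical primitive parameters: (speed, amplitude) in [0, 1].
PRIMITIVES = {
    "excited":    (0.9, 0.9),
    "happy":      (0.7, 0.6),
    "frightened": (0.8, 0.3),
    "sad":        (0.2, 0.2),
}

# Hypothetical anchor points in (valence, arousal) space.
ANCHORS = {
    "excited":    ( 0.8,  0.8),
    "happy":      ( 0.8,  0.2),
    "frightened": (-0.8,  0.8),
    "sad":        (-0.8, -0.6),
}

def select_behaviour(valence, arousal):
    """Pick the emotion pattern whose anchor is nearest in affect space."""
    return min(ANCHORS, key=lambda e: (ANCHORS[e][0] - valence) ** 2
                                      + (ANCHORS[e][1] - arousal) ** 2)

# A high-valence, high-arousal affective state maps to an energetic primitive:
speed, amplitude = PRIMITIVES[select_behaviour(0.9, 0.7)]
```

The design choice here is that emotion recognition by observers depends on continuous motion qualities (speed, amplitude) rather than on discrete gesture labels, which is what a dimensional model captures.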


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5357 ◽  
Author(s):  
Yuqiang Wu ◽  
Fei Zhao ◽  
Wansoo Kim ◽  
Arash Ajoudani

In this work, we propose an intuitive and real-time model of human arm active endpoint stiffness. In our model, the symmetric and positive-definite stiffness matrix is constructed through the eigendecomposition Kc = V D V^T, where V is an orthonormal matrix whose columns are the normalized eigenvectors of Kc, and D is a diagonal matrix whose entries are the eigenvalues of Kc. In this formulation, we propose to construct V and D directly by exploiting geometric information from a reduced 3D human arm skeleton structure and the assumption that human arm muscles work synergistically when co-contracted. Through perturbation experiments across multiple subjects under different arm configurations and muscle activation states, we identified the model parameters and examined the modeling accuracy. Compared to our previous models for predicting human active arm endpoint stiffness, the new model offers significant advantages, such as fast identification and personalization, due to its principled simplicity. The proposed model is suitable for applications such as teleoperation, human-robot interaction and collaboration, and human ergonomic assessments, where a personalizable and real-time human kinodynamic model is a crucial requirement.


2015 ◽  
Author(s):  
Noraidah Blar ◽  
Fairul Azni Jafar ◽  
Nurhidayu Abdullah ◽  
Mohd Nazrin Muhammad ◽  
Anuar Muhamed Kassim

2009 ◽  
Author(s):  
Matthew S. Prewett ◽  
Kristin N. Saboe ◽  
Ryan C. Johnson ◽  
Michael D. Coovert ◽  
Linda R. Elliott
