I Show You How I Like You: Human-Robot Interaction through Emotional Expression and Tactile Stimulation

2000 ◽  
Vol 29 (544) ◽  
Author(s):  
Dolores Canamero ◽  
Jakob Fredslund

We report work on a LEGO robot capable of displaying several emotional expressions in response to physical contact. Our motivation has been to explore believable emotional exchanges to achieve plausible interaction with a simple robot. We have worked toward this goal in two ways. First, acknowledging the importance of physical manipulation in children's interactions, interaction with the robot is through tactile stimulation; the various kinds of stimulation that can elicit the robot's emotions are grounded in a model of emotion activation based on different stimulation patterns. Second, emotional states need to be clearly conveyed. We have drawn inspiration from theories of human basic emotions with associated universal facial expressions, which we have implemented in a caricaturized face. We have conducted experiments on both children and adults to assess the recognizability of these expressions.
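As an illustration of the pattern-based emotion activation idea described above, the following Python sketch maps a few hypothetical tactile stimulation patterns to emotion labels. The feature names, thresholds and rules are invented for illustration and are not the authors' model.

```python
# Hypothetical sketch: mapping tactile stimulation patterns to emotion
# activation. Thresholds and emotion rules are illustrative assumptions,
# not the model described in the paper.
from dataclasses import dataclass

@dataclass
class Stimulation:
    count: int         # number of touches in the observation window
    mean_force: float  # average contact force (arbitrary units, 0-1)
    duration_s: float  # total contact time in seconds

def activate_emotion(stim: Stimulation) -> str:
    """Return the emotion label elicited by a stimulation pattern."""
    if stim.count == 0:
        return "neutral"
    if stim.mean_force > 0.8:                        # hard poking or hitting
        return "anger"
    if stim.count > 5 and stim.duration_s < 2.0:     # rapid, unexpected taps
        return "surprise"
    if stim.duration_s > 3.0 and stim.mean_force < 0.4:  # gentle stroking
        return "happiness"
    return "sadness"                                 # sparse, brief contact

print(activate_emotion(Stimulation(count=2, mean_force=0.2, duration_s=4.0)))
# -> happiness
```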

Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors such as gaze, voice and facial expressions can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.


Author(s):  
Eleonora Cannoni ◽  
Giuliana Pinto ◽  
Anna Silvia Bombi

This study was aimed at verifying whether children introduce emotional expressions in their drawings of human faces, whether a preferential expression exists, and whether children's pictorial choices change with increasing age. To this end we examined the human figure drawings made by 160 boys and 160 girls, equally divided into 4 age groups: 6–7, 8–9, 10–11 and 12–13 years; mean ages in months (SD in parentheses) were 83.30 (6.54), 106.14 (7.16), 130.49 (8.26) and 155.40 (6.66). Drawings were collected with the Draw-a-Man test instructions, i.e. without mentioning an emotional characterization. In the light of data from previous studies of emotion drawing on request, and of the literature about preferred emotional expressions, we expected that an emotion would be portrayed even by the younger participants, and that the preferred emotion would be happiness. We also expected that, with the improving ability to take into account the appearance of both mouth and eyes, other expressions would be found besides the smiling face. Data were submitted to non-parametric tests to compare the frequencies of expressions (absolute and by age) and the frequencies of visual cues (absolute and by age and expression). The results confirmed that only a small number of faces were expressionless, and that the most frequent emotion was happiness. However, with increasing age this representation gave way to a variety of basic emotions (sadness, fear, anger, surprise), whose representation may depend on the ability to modify the shapes of both eyes and mouth and on the changing communicative aims of the child.
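The kind of non-parametric frequency comparison mentioned above can be illustrated with a chi-square goodness-of-fit test; the counts in the sketch below are invented for demonstration and are not the study's data.

```python
# Illustrative non-parametric comparison of expression frequencies.
# The observed counts are hypothetical, not the study's results.
from scipy.stats import chisquare

# Hypothetical counts across 320 drawings:
# happiness, sadness, fear, anger, surprise, neutral
observed = [180, 45, 30, 25, 20, 20]

stat, p = chisquare(observed)  # tests against equal frequencies
print(f"chi-square = {stat:.2f}, p = {p:.4g}")
```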


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 134051-134066 ◽  
Author(s):  
Mikel Val-Calvo ◽  
Jose Ramon Alvarez-Sanchez ◽  
Jose Manuel Ferrandez-Vicente ◽  
Eduardo Fernandez

2014 ◽  
Vol 5 (1) ◽  
pp. 1-11 ◽  
Author(s):  
Mohammad Rabiei ◽  
Alessandro Gasparetto

A system for recognition of emotions based on speech analysis can have interesting applications in human-robot interaction. In this paper, we carry out an exploratory study on the possibility of using a proposed methodology to recognize basic emotions (sadness, surprise, happiness, anger, fear and disgust) based on the phonetic and acoustic properties of emotive speech, with minimal use of signal processing algorithms. We set up an experimental test with three groups of speakers: (i) five adult European speakers, (ii) five adult Asian (Middle East) speakers and (iii) five adult American speakers. The speakers had to repeat 6 sentences in English (with durations typically between 1 s and 3 s) in order to emphasize rising-falling intonation and pitch movement. Intensity, pitch peak, pitch range and speech rate were evaluated. The proposed methodology consists of generating and analyzing a graph of formant, pitch and intensity, using the open-source PRAAT program. From the experimental results, it was possible to recognize the basic emotions in most of the cases.
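A minimal sketch of extracting the acoustic features mentioned above (pitch peak, pitch range, intensity, and a rough speech-rate proxy) is shown below. It assumes the parselmouth library, a Python interface to Praat; the paper itself used the PRAAT program directly, so the file name and workflow here are illustrative.

```python
# Sketch of pitch/intensity feature extraction via parselmouth (Praat in Python).
# "utterance.wav" is a hypothetical recording of one emotive sentence.
import parselmouth

snd = parselmouth.Sound("utterance.wav")

pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                         # keep voiced frames only

intensity = snd.to_intensity()

print(f"pitch peak:        {f0.max():.1f} Hz")
print(f"pitch range:       {f0.max() - f0.min():.1f} Hz")
print(f"mean intensity:    {intensity.values.mean():.1f} dB")
print(f"speech-rate proxy: {len(f0) / snd.duration:.1f} voiced frames/s")
```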


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As with person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that is able to further enhance the NAO robot’s awareness of human facial expressions and provide the robot with an interlocutor’s arousal level detection capability. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
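For readers unfamiliar with the approach, the sketch below shows what a small convolutional facial-expression classifier of this general kind can look like in Keras. The architecture, input size and class list are assumptions for illustration and do not reproduce the model reported in the paper.

```python
# Minimal CNN facial-expression classifier sketch (illustrative only;
# not the architecture described in the paper).
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["happy", "sad", "surprised", "scared", "neutral", "angry"]

def build_fer_model(input_shape=(48, 48, 1), num_classes=len(CLASSES)):
    """Small CNN: conv/pool blocks followed by a dense softmax head."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_fer_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```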


2020 ◽  
Vol 32 (1) ◽  
pp. 7-7
Author(s):  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Hiroshi Ishiguro

As social robot research advances, the interaction distance between people and robots is decreasing. Indeed, although we were once required to maintain a certain physical distance from traditional industrial robots for safety, we can now interact with social robots at such a close distance that we can touch them. The physical existence of social robots will be essential to realizing natural and acceptable interactions with people in daily environments. Because social robots function in our daily environments, we must design scenarios where robots interact closely with humans by considering various viewpoints. Interactions that involve touching robots strongly influence changes in a person's behavior, so robotics researchers and developers need to design such scenarios carefully. Based on these considerations, this special issue focuses on close human-robot interactions. This special issue on “Human-Robot Interaction in Close Distance” includes a review paper and 11 other interesting papers covering various topics such as social touch interactions, non-verbal behavior design for touch interactions, child-robot interactions including physical contact, conversations with physical interactions, motion copying systems, and mobile human-robot interactions. We thank all the authors and reviewers of the papers and hope this special issue will help readers better understand human-robot interaction at close distance.


2002 ◽  
Vol 14 (2) ◽  
pp. 210-227 ◽  
Author(s):  
S. Campanella ◽  
P. Quinet ◽  
R. Bruyer ◽  
M. Crommelinck ◽  
J.-M. Guerit

Behavioral studies have shown that two different morphed faces perceived as reflecting the same emotional expression are harder to discriminate than two morphed faces perceived as reflecting two different expressions. This advantage of between-category differences over within-category ones is classically referred to as the categorical perception effect. The temporal course of this effect for fearful and happy facial expressions was explored through event-related potentials (ERPs). Three kinds of pairs were presented in a delayed same–different matching task: (1) two different morphed faces perceived as the same emotional expression (within-category differences), (2) two different morphed faces reflecting two different emotions (between-category differences), and (3) two identical morphed faces (same faces, included for methodological purposes). Following the onset of the second face in the pair, the amplitude of the bilateral occipito-temporal negativities (N170) and of the vertex positive potential (P150 or VPP) was reduced for within and same pairs relative to between pairs, suggesting a repetition priming effect. We also observed a modulation of the P3b wave, as the amplitude of the responses for the between pairs was higher than for the within and same pairs. These results indicate that the categorical perception of human facial emotional expressions has a perceptual origin in the bilateral occipito-temporal regions, whereas typical prior studies found emotion-modulated ERP components considerably later.


2012 ◽  
Vol 25 (0) ◽  
pp. 97-98
Author(s):  
Brianna Beck ◽  
Caterina Bertini ◽  
Elisabetta Ladavas

Prior studies have identified an ‘enfacement effect’ in which participants incorporate another’s face into their self-face representation after observing that face touched repeatedly in synchrony with touch on their own face (Sforza et al., 2010; Tsakiris, 2008). The degree of self-face/other-face merging is positively correlated with participants’ trait-level empathy scores (Sforza et al., 2010) and affects judgments of the other’s personality (Paladino et al., 2010), suggesting that enfacement also modulates higher-order representations of ‘self’ and ‘other’ involved in social and emotional evaluations. To test this hypothesis, we varied not only whether visuo-tactile stimulation was synchronous or asynchronous but also whether the person being touched in the video displayed an emotional expression indicative of threat, either fear or anger. We hypothesized that participants would incorporate the faces of fearful others more than the faces of angry others after a shared visuo-tactile experience because of a potentially stronger representation of the sight of fear in somatosensory cortices compared to the sight of anger (Cardini et al., 2012). Instead, we found that the enfacement effect (i.e., greater self-face/other-face merging following synchronous compared to asynchronous visuo-tactile stimulation) was abolished if the other person displayed fear but remained if they expressed anger. This nonetheless suggests that enfacement operates on an evaluative self-representation as well as a physical one because the effect changes with the emotional content of the other’s face. Further research into the neural mechanism behind the enfacement effect is needed to determine why sight of fear diminishes it rather than enhancing it.

