Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness

2017 · Vol. 354 · pp. 64-72
Author(s): Emmanuèle Ambert-Dahan, Anne-Lise Giraud, Halima Mecheri, Olivier Sterkers, Isabelle Mosnier, et al.

2021 · Vol. 5 (3) · pp. 13
Author(s): Heting Wang, Vidya Gaddy, James Ross Beveridge, Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduces an avatar named Diana that expresses a higher level of emotional intelligence. To adapt to the user's varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants instead collaborated with Diana, their subjective responses were collected and their task completion time was recorded. Three modes of Diana were tested: a flat-faced Diana, a Diana with mimicry facial expressions, and a Diana with emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. The questionnaire results were not statistically different across modes; however, the emotionally responsive Diana obtained more positive responses, and participants spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned uncomfortable feelings caused by the Uncanny Valley effect.
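A minimal sketch of the kind of between-mode comparison described above, assuming a nonparametric Kruskal-Wallis test over the five-point Likert ratings of the three Diana modes; the ratings and group sizes below are placeholders, not the study's data:

```python
# Hypothetical sketch: comparing five-point Likert ratings across the
# three Diana modes (flat-faced, mimicry, emotionally responsive).
# The ratings below are illustrative placeholders, not the study's data.
from scipy.stats import kruskal

flat_faced = [3, 4, 3, 2, 4, 3, 3]   # placeholder ratings
mimicry = [4, 4, 3, 4, 5, 3, 4]      # placeholder ratings
responsive = [4, 5, 4, 4, 5, 4, 5]   # placeholder ratings

# Kruskal-Wallis is a common nonparametric choice for ordinal Likert data
# from independent groups; it tests whether any mode's ratings differ.
statistic, p_value = kruskal(flat_faced, mimicry, responsive)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the reported finding that
# questionnaire results were not statistically different across modes.
```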


2021 · Vol. 151 · pp. 107734
Author(s): Katia M. Harlé, Alan N. Simmons, Jessica Bomyea, Andrea D. Spadoni, Charles T. Taylor

2011 · Vol. 24 (2) · pp. 149-163
Author(s): Marie Arsalidou, Drew Morris, Margot J. Taylor

2021 · Vol. 8 (1)
Author(s): Andry Chowanda

Abstract
Social interactions are important to us humans as social creatures, and emotions play an important part in them: they usually convey meaning alongside spoken utterances to the interlocutor. Automatic facial expression recognition is one technique for automatically capturing, recognising, and understanding emotions from an interlocutor. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues. Architectures such as convolutional neural networks have demonstrated promising results for emotion recognition. However, most current convolutional neural network models require enormous computational power to train and to run emotion recognition. This research aims to build compact networks with depthwise separable layers while maintaining performance. The proposed architecture was evaluated on three datasets and compared with three similar architectures. The results show that the proposed architecture performed best: it achieved up to 13% better accuracy and was 6-71% smaller than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
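The abstract does not give the exact layer configuration, but a depthwise separable convolution block, the building unit it names, can be sketched in PyTorch as below; channel counts, kernel size, and input size are assumptions for illustration, not the paper's values:

```python
# Illustrative sketch (PyTorch) of a depthwise separable convolution block,
# the building unit used to shrink an emotion-recognition CNN.
# Channel counts and kernel size are assumptions, not the paper's values.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 64 to 128 channels needs 3*3*64*128 = 73,728
# weights; the separable version needs 3*3*64 + 64*128 = 8,768, roughly
# an 8x reduction, which is how such networks stay compact.
block = DepthwiseSeparableConv(64, 128)
out = block(torch.randn(1, 64, 48, 48))  # e.g., 48x48 face crops
print(out.shape)  # torch.Size([1, 128, 48, 48])
```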


2021 ◽  
Vol 12 ◽  
Author(s):  
Keita Tsukada ◽  
Shin-ichi Usami

Background: The development of less traumatic surgical techniques, such as the round window approach (RWA), as well as the use of flexible electrodes and post-operative steroid administration, has enabled the preservation of residual hearing after cochlear implantation (CI) surgery. However, consideration must still be given to the complications that can accompany CI. One such potential complication is the impairment of vestibular function with resulting vertigo symptoms. The aim of our current study was to examine the changes in vestibular function after implantation in patients who received CI using less traumatic surgery, particularly the RWA technique.

Methods: Sixty-six patients who received CI in our center were examined by caloric testing, cervical vestibular evoked myogenic potential (cVEMP), and ocular VEMP (oVEMP) before or after implantation, or both, to obtain data on semicircular canal, saccular, and utricular function, respectively. Less traumatic CI surgery was performed using the RWA and insertion of flexible electrodes such as the MED-EL FLEXSOFT, FLEX28, and FLEX24 (MED-EL, Innsbruck, Austria).

Results: Caloric responses and the asymmetry ratios of cVEMP and oVEMP were examined before and after implantation using less traumatic surgical techniques. Compared with before implantation, 93.9%, 82.4%, and 92.5% of patients showed preserved vestibular function after implantation based on caloric testing, cVEMP, and oVEMP results, respectively. We also compared vestibular function between the 66 patients who underwent the RWA with flexible electrodes and 17 patients who underwent cochleostomy with insertion of conventional or hard electrodes, measuring responses by caloric testing, cVEMP, and oVEMP after CI. There were no differences between the RWA and cochleostomy groups in the frequencies of abnormal caloric and oVEMP results in the implanted ears. On the other hand, the frequency of abnormal cVEMP responses in the implanted ears was significantly higher in patients implanted by cochleostomy than in patients who underwent surgery using the RWA.

Conclusion: Patients receiving CI using less traumatic surgical techniques, such as the RWA with flexible electrodes, have a reduced risk of damage to vestibular function.
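The cVEMP and oVEMP asymmetry ratio referenced above is conventionally computed from the response amplitudes of the two ears; a minimal sketch follows, assuming the standard |left - right| / (left + right) formulation, with the amplitudes and the abnormality cutoff as illustrative placeholders rather than this study's values:

```python
# Sketch of the standard VEMP asymmetry ratio (AR) calculation used to
# compare responses between ears. The amplitudes and abnormality cutoff
# are illustrative assumptions; clinics calibrate their own norms.
def asymmetry_ratio(amp_left: float, amp_right: float) -> float:
    """AR (%) = 100 * |A_left - A_right| / (A_left + A_right)."""
    if amp_left + amp_right == 0:
        return 100.0  # no response on either side; treat as maximal asymmetry
    return 100.0 * abs(amp_left - amp_right) / (amp_left + amp_right)

# Hypothetical corrected p13-n23 (cVEMP) amplitudes for the two ears:
ar = asymmetry_ratio(amp_left=1.8, amp_right=1.2)
print(f"AR = {ar:.1f}%")  # 20.0% here; a commonly assumed cutoff is ~34%
```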


2018 · Vol. 115 (43) · pp. E10013-E10021
Author(s): Chaona Chen, Carlos Crivelli, Oliver G. B. Garrod, Philippe G. Schyns, José-Miguel Fernández-Dols, et al.

Real-world studies show that the facial expressions produced during pain and orgasm—two different and intense affective experiences—are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
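As a rough illustration of the machine-learning step described above (testing whether the modeled pain and orgasm face movements are physically distinct), the sketch below trains a linear classifier on face-movement feature vectors; the random features stand in for the paper's data-driven action-unit models and are purely hypothetical:

```python
# Hypothetical sketch of the discrimination step: a classifier tests
# whether dynamic face-movement patterns (e.g., action-unit activation
# vectors) modeled for pain vs. orgasm are separable. The random features
# below stand in for the paper's data-driven facial-movement models.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_models, n_action_units = 80, 42   # assumed: 40 observers x 2 cultures
X_pain = rng.normal(0.0, 1.0, (n_models, n_action_units))
X_orgasm = rng.normal(0.5, 1.0, (n_models, n_action_units))  # shifted mean
X = np.vstack([X_pain, X_orgasm])
y = np.array([0] * n_models + [1] * n_models)  # 0 = pain, 1 = orgasm

# Cross-validated accuracy well above chance (0.5) would indicate that the
# two sets of mental representations are physically distinct.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```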

