Integrated Person Identification and Emotion Recognition from Facial Images

Author(s):  
Dadet PRAMADIHANTO ◽  
Yoshio IWAI ◽  
Masahiko YACHIDA

Author(s):  
Smitha Engoor ◽  
Sendhilkumar Selvaraju ◽  
Hepsibah Sharon Christopher ◽  
Mahalakshmi Guruvayur Suryanarayanan ◽  
Bhuvaneshwari Ranganathan

Author(s):  
Laura Steenbergen ◽  
María J. Maraver ◽  
Rossana Actis-Grosso ◽  
Paola Ricciardelli ◽  
Lorenza S. Colzato

According to the Polyvagal theory, the vagus nerve is the key phylogenetic substrate that supports efficient emotion recognition for promoting safety and survival. Previous studies showed that the vagus nerve affects people’s ability to recognize emotions from eye regions and whole facial images, but not from static bodies. The purpose of this study was to verify whether the previously suggested causal link between vagal activity and emotion recognition generalizes to situations in which emotions must be inferred from images of whole moving bodies. We employed transcutaneous vagus nerve stimulation (tVNS), a noninvasive brain stimulation technique that stimulates the vagus nerve by applying mild electrical stimulation to its auricular branch, located in the anterior protuberance of the outer ear. In two sessions, participants received active or sham tVNS before and while performing three emotion recognition tasks designed to index their ability to recognize emotions from static or moving bodily expressions performed by actors. Active tVNS, compared to sham stimulation, enhanced the recognition of anger but reduced the ability to recognize sadness, regardless of the type of stimulus (static vs. moving). Consistent with the idea of hierarchical involvement of the vagus in establishing safety, as put forward by the Polyvagal theory, we argue that our findings may be explained by vagus-evoked differential adjustment strategies to emotional expressions. Taken together, our findings fit with an evolutionary perspective on the vagus nerve and its involvement in emotion recognition for the benefit of survival.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2847
Author(s):  
Dorota Kamińska ◽  
Kadir Aktas ◽  
Davit Rizhinashvili ◽  
Danila Kuklyanov ◽  
Abdallah Hussein Sham ◽  
...  

Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and to racial and cultural differences. Moreover, facial expressions typically reflect a mixture of emotional states, which can be expressed as compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. To address compound emotion recognition, we created a database of 31,250 facial images showing different emotions of 115 subjects, with an almost uniform gender distribution. In addition, we organized a competition based on the proposed dataset, held at the FG 2020 workshop. This paper analyzes the winner’s approach: a two-stage recognition method (1st stage, coarse recognition; 2nd stage, fine recognition) that enhances the classification of symmetrical emotion labels.
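
For intuition only, here is a minimal sketch of the coarse-to-fine decomposition described above, written with scikit-learn over synthetic stand-in features. The label sets, classifier choice, and per-class fine models are assumptions for illustration, not the winner's actual architecture.

```python
# Hypothetical coarse -> fine compound-emotion pipeline (sketch only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
DOMINANT = ["happy", "sad", "angry"]        # assumed coarse labels
COMPLEMENT = ["surprised", "disgusted"]     # assumed fine labels

# Synthetic "facial feature" vectors standing in for real embeddings.
X = rng.normal(size=(600, 128))
y_dom = rng.integers(0, len(DOMINANT), 600)
y_cmp = rng.integers(0, len(COMPLEMENT), 600)

# Stage 1: coarse classifier predicts the dominant emotion.
coarse = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_dom)

# Stage 2: one fine classifier per dominant class disambiguates the
# complementary emotion, conditioned on the stage-1 prediction.
fine = {
    d: RandomForestClassifier(n_estimators=100, random_state=0).fit(
        X[y_dom == d], y_cmp[y_dom == d]
    )
    for d in range(len(DOMINANT))
}

def predict_compound(x):
    """Return a (dominant, complementary) label pair for one feature vector."""
    d = int(coarse.predict(x[None, :])[0])
    c = int(fine[d].predict(x[None, :])[0])
    return DOMINANT[d], COMPLEMENT[c]

print(predict_compound(X[0]))
```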


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2026
Author(s):  
Jung Hwan Kim ◽  
Alwin Poulose ◽  
Dong Seog Han

Facial emotion recognition (FER) systems play a significant role in identifying driver emotions. Accurate facial emotion recognition of drivers in autonomous vehicles can reduce road rage. However, training even an advanced FER model without proper datasets causes poor performance in real-time testing. FER system performance is affected more by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition and training from the Xception algorithm. The FIT machine involves removing irrelevant facial images, collecting facial images, correcting misplaced face data, and merging original datasets on a massive scale, in addition to a data-augmentation technique. The final FER results of the proposed method improved validation accuracy by 16.95% over the conventional approach on the FER 2013 dataset. A confusion-matrix evaluation on an unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming real-time performance.
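
The filtering step of such a threshing pass can be sketched as below. OpenCV's Haar cascade stands in for the pre-trained facial recognition the paper mentions, and the relabeling, merging, and augmentation steps are omitted; paths and crop size are placeholders.

```python
# Hypothetical FIT-style filtering pass: drop images with no detectable
# face and keep cropped face regions for FER training (sketch only).
import os
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def thresh_dataset(src_dir, dst_dir, size=(48, 48)):
    """Copy face crops to dst_dir; discard images without a detectable face."""
    os.makedirs(dst_dir, exist_ok=True)
    kept = dropped = 0
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
        if img is None:              # unreadable file
            dropped += 1
            continue
        faces = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:          # irrelevant image: no face found
            dropped += 1
            continue
        x, y, w, h = faces[0]        # keep the first detected face
        crop = cv2.resize(img[y:y + h, x:x + w], size)
        cv2.imwrite(os.path.join(dst_dir, name), crop)
        kept += 1
    return kept, dropped

# Example usage (placeholder paths):
# kept, dropped = thresh_dataset("fer2013_raw", "fer2013_threshed")
```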


Author(s):  
Wenming Zheng ◽  
Hao Tang ◽  
Zhouchen Lin ◽  
Thomas S. Huang

Algorithms ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 70 ◽  
Author(s):  
Kudakwashe Zvarevashe ◽  
Oludayo Olugbara

Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot towards the full realization of a smart society. Signal processing and machine learning methods are widely applied to recognize human emotions based on features extracted from facial images, video files, or speech signals. However, these features have not been able to recognize the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features that improve emotion recognition. Experiments were performed to test the effectiveness of the proposed features, extracted from speech files of two public databases and used to train five popular ensemble learning algorithms. Results show that random decision forest ensemble learning over the proposed hybrid acoustic features is highly effective for speech emotion recognition.
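
A minimal sketch of how prosodic descriptors (pitch, energy) might be agglutinated with spectral ones (MFCCs) into a single hybrid vector and fed to a random forest, assuming librosa and scikit-learn as stand-ins; the paper's exact feature set and selection procedure are not reproduced here.

```python
# Hypothetical hybrid acoustic feature extraction (sketch only).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def hybrid_features(path):
    """Extract one fixed-length prosodic + spectral feature vector."""
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # pitch contour (prosodic)
    rms = librosa.feature.rms(y=y)[0]                   # energy (prosodic)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral envelope
    # Agglutinate summary statistics into a single vector.
    return np.concatenate([
        [f0.mean(), f0.std(), rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

# Training on labeled speech files (wav_paths and labels are placeholders):
# X = np.stack([hybrid_features(p) for p in wav_paths])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
```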

