Enhancing CNN with Pre-processing Stage in Illumination-Invariant Automatic Expression Recognition

2021, pp. 95-106
Author(s): Hiral A. Patel, Nidhi Khatri, Keyur Suthar, Hiral R. Patel
Optik, 2018, Vol 158, pp. 1016-1025
Author(s): Asim Munir, Ayyaz Hussain, Sajid Ali Khan, Muhammad Nadeem, Sadia Arshid

2019, Vol 8 (4), pp. 6140-6144

In this work, we propose a novel method for an illumination-invariant facial expression recognition system. Facial expressions convey nonverbal visual information among humans and also play a vital role in human-machine interfaces, a topic that has attracted the attention of many researchers. Earlier machine learning approaches require complex feature extraction algorithms and depend on the size and uniqueness of the features extracted for each subject. In this paper, a deep convolutional neural network is proposed for facial expression recognition; it is trained on two publicly available datasets, the JAFFE and Yale face databases, under different illumination conditions. Furthermore, transfer learning is applied with AlexNet and ResNet-101 networks pre-trained on the ImageNet database. Experimental results show that the designed network tolerates up to 30% variation in illumination and achieves an accuracy of 92%.
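The abstract names the building blocks but not the implementation. Below is a minimal sketch of the two described ingredients, assuming PyTorch/torchvision and using CLAHE as a stand-in for the unspecified illumination pre-processing stage; the seven-class label set, frozen backbone, and learning rate are likewise assumptions, not the paper's settings:

    import cv2
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 7  # assumed: seven basic expression classes (JAFFE-style labels)

    def normalize_illumination(gray_face):
        """Assumed stand-in for the paper's pre-processing stage: contrast-limited
        adaptive histogram equalization (CLAHE) on an 8-bit grayscale face crop."""
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray_face)

    # Transfer learning as the abstract describes: a ResNet-101 backbone
    # pre-trained on ImageNet, with a new expression-classification head.
    model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                               # freeze pre-trained weights
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # trainable head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # lr assumed
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One fine-tuning step on a batch of pre-processed face images
        (shape [N, 3, 224, 224], normalized as the ImageNet weights expect)."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Freezing the backbone and retraining only the classifier head is the simplest transfer-learning variant; the paper may well fine-tune more layers.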


2018
Author(s): Jiayin Zhao, Yifang Wang, Licong An

Faces play important roles in the social lives of humans. In addition to real faces, people also encounter numerous cartoon faces in daily life. These cartoon faces convey basic emotional states through facial expressions. Using a behavioral research methodology and event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as the N170, vertex positive potential (VPP), and late positive potential (LPP) were used as dependent variables. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes as well as a shorter N170 latency than did real faces; that real faces induced larger LPP amplitudes than did cartoon faces; and that angry faces induced larger LPP amplitudes than did happy faces. In addition, the results showed a significant difference in the brain regions associated with face processing, reflected in a right hemispheric advantage. The behavioral results showed that the reaction times for happy faces were shorter than those for angry faces; that females showed a higher facial expression recognition accuracy than did males; and that males showed a higher recognition accuracy for angry faces than for happy faces. These results demonstrate differences in facial expression recognition and neurological processing between cartoon faces and real faces among adults. Cartoon faces showed a higher processing intensity and speed than real faces during the early processing stage. However, more attentional resources were allocated for real faces during the late processing stage.
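The amplitude and latency measures used as dependent variables are conventionally read off the averaged waveform within a fixed post-stimulus window. A minimal numpy sketch, with the sampling rate, baseline length, and N170 window all assumed for illustration:

    import numpy as np

    FS = 500           # assumed sampling rate (Hz)
    BASELINE_MS = 200  # assumed pre-stimulus baseline length

    rng = np.random.default_rng(0)
    erp = rng.standard_normal(int(0.9 * FS))  # placeholder averaged waveform (uV)

    def peak_in_window(wave, t0_ms, t1_ms, negative=True):
        """Peak amplitude (uV) and latency (ms post-stimulus) of an averaged
        ERP waveform inside the window [t0_ms, t1_ms]."""
        i0 = int((BASELINE_MS + t0_ms) * FS / 1000)
        i1 = int((BASELINE_MS + t1_ms) * FS / 1000)
        idx = np.argmin(wave[i0:i1]) if negative else np.argmax(wave[i0:i1])
        return wave[i0 + idx], (i0 + idx) * 1000 / FS - BASELINE_MS

    # N170: a negativity roughly 130-200 ms after stimulus onset; the VPP and
    # LPP would use positive polarity and their own (assumed) windows.
    n170_amp, n170_lat = peak_in_window(erp, 130, 200, negative=True)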


Author(s): Rachel L. C. Mitchell, Rachel A. Kingston

It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.
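The pitch, amplitude, and duration manipulations described above map directly onto pure-tone synthesis. A minimal numpy sketch (the frequencies, levels, and durations are illustrative assumptions, not the study's stimulus values):

    import numpy as np

    FS = 44100  # audio sampling rate (Hz)

    def tone(freq_hz, dur_s, amp=0.5):
        """Synthesize a pure tone; varying exactly one argument per trial pair
        yields pitch, amplitude, or duration discrimination stimuli."""
        t = np.linspace(0.0, dur_s, int(FS * dur_s), endpoint=False)
        return amp * np.sin(2.0 * np.pi * freq_hz * t)

    standard  = tone(440.0, 0.5)
    pitch_cmp = tone(444.0, 0.5)            # pitch task: frequency differs
    level_cmp = tone(440.0, 0.5, amp=0.55)  # amplitude task: level differs
    dur_cmp   = tone(440.0, 0.55)           # duration task: length differs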


2012, Vol 43 (1), pp. 14-27
Author(s): Silvia Tomelleri, Luigi Castelli

In the present paper, relying on event-related brain potentials (ERPs), we investigated the automatic nature of gender categorization, focusing on different stages of the ongoing process. In particular, we explored the degree to which gender categorization occurs automatically by manipulating the semantic vs. nonsemantic processing goals requested by the task (Study 1) and the complexity of the task itself (Study 2). Results of Study 1 highlighted the automatic nature of categorization at both an early (N170) and a later (P300) processing stage. Findings of Study 2 showed that at an early stage categorization was automatically driven by the ease of extraction of category-based knowledge from faces while, at a later stage, categorization was more influenced by situational constraints.

