emotion recognition
Recently Published Documents





2022 ◽  
Vol 27 ◽  
pp. 100225
Michael S. Kraus ◽  
Trina M. Walker ◽  
Diana Perkins ◽  
Richard S.E. Keefe

2022 ◽  
Vol 63 ◽  
pp. 101000
Florence Yik Nam Leung ◽  
Jacqueline Sin ◽  
Caitlin Dawson ◽  
Jia Hoong Ong ◽  
Chen Zhao ◽  

I Made Agus Wirawan ◽  
Retantyo Wardoyo ◽  
Danang Lelono

Electroencephalogram (EEG) signals offer several advantages for emotion recognition. However, the success of such studies is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three important points that affect the success of emotion recognition, framed as research questions: i) What factors need to be considered when generating and distributing EEG data? ii) How can EEG signals be processed while accounting for differences in participant characteristics? and iii) Which characteristics of EEG signal features matter for emotion recognition? The results indicate several important challenges for further study in EEG-based emotion recognition research: i) determining robust methods for imbalanced EEG data, ii) determining appropriate smoothing methods to eliminate disturbances in the baseline signals, iii) determining the best baseline-reduction methods to reduce participant-specific differences in the EEG signals, and iv) determining a robust capsule-network architecture that avoids the loss of knowledge information, and applying it to more diverse datasets.
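The baseline-reduction challenge mentioned above can be sketched in its simplest form: subtract the mean of a pre-stimulus baseline segment from the trial signal to remove a participant-specific offset. This is a minimal illustration only; the function name and data are hypothetical and not taken from the paper.

```python
def baseline_reduce(trial, baseline):
    """Subtract the mean of a pre-stimulus baseline segment from an EEG trial.

    trial    -- list of samples recorded during the stimulus
    baseline -- list of samples recorded just before the stimulus
    """
    mean_b = sum(baseline) / len(baseline)
    return [s - mean_b for s in trial]

# Example: a trial riding on a participant-specific offset of about 2.0 uV
baseline = [2.1, 1.9, 2.0]
trial = [3.0, 2.5, 4.0]
print(baseline_reduce(trial, baseline))  # [1.0, 0.5, 2.0]
```

Real pipelines reduce the baseline per channel and per trial; the paper's open question is which of the many possible reduction schemes is most robust across participants.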

2022 ◽  
Vol 72 ◽  
pp. 103289
Swapnil Bhosale ◽  
Rupayan Chakraborty ◽  
Sunil Kumar Kopparapu

2022 ◽  
Vol 12 (2) ◽  
pp. 807
Huafei Xiao ◽  
Wenbo Li ◽  
Guanzhong Zeng ◽  
Yingzhang Wu ◽  
Jiyong Xue ◽  

With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial-expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data; however, such data cannot represent real driving situations. To address this, this paper proposes FERDERnet, a facial-expression-based on-road driver emotion recognition network. The method divides the on-road driver facial expression recognition task into three modules: a face detection module that locates the driver's face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on the FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. Five different backbone networks are adopted, as well as an ensemble method. Furthermore, to evaluate the proposed method, the authors collected an on-road driver facial expression dataset containing various road scenarios and the corresponding driver facial expressions during the driving task. Experiments on this dataset show that, in terms of efficiency and accuracy, FERDERnet with an Xception backbone effectively identifies on-road driver facial expressions and obtains superior performance compared to the baseline networks and some state-of-the-art networks.
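The three-module design described above (detect face, then augment/resample, then classify) composes naturally as a pipeline. The skeleton below is a hypothetical sketch of that composition; each stage is a placeholder, not the paper's implementation.

```python
# Hypothetical skeleton of the three-stage pipeline described above:
# detect face -> augment/resample -> classify emotion.

def detect_face(frame):
    # Placeholder: a real detector would return the cropped face region.
    return frame

def augment_resample(samples):
    # Placeholder: a real module would balance classes via augmentation
    # and resampling before classification.
    return samples

def classify_emotion(face):
    # Placeholder: a fine-tuned CNN backbone (e.g. Xception) would go here.
    return "neutral"

def recognize(frames):
    faces = [detect_face(f) for f in frames]
    faces = augment_resample(faces)
    return [classify_emotion(f) for f in faces]

print(recognize(["frame1", "frame2"]))  # ['neutral', 'neutral']
```

Separating the stages this way lets each backbone (or an ensemble of backbones) be swapped into the final stage without touching detection or resampling.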

2022 ◽  
Vol 12 ◽  
Gada Musa Salech ◽  
Patricia Lillo ◽  
Karin van der Hiele ◽  
Carolina Méndez-Orellana ◽  
Agustín Ibáñez ◽  

Background: The cognitive and neuropsychiatric deficits present in patients with behavioral variant frontotemporal dementia (bvFTD) are associated with loss of functionality in the activities of daily living (ADLs). The main purpose of this study was to examine and explore the association between the cognitive and neuropsychiatric features that might prompt functional impairment of basic, instrumental, and advanced ADL domains in patients with bvFTD.

Methods: A retrospective cross-sectional study was conducted with 27 patients with early-stage bvFTD (<2 years of evolution) and 32 healthy control subjects. A neuropsychological assessment was carried out wherein measures of cognitive function and neuropsychiatric symptoms were obtained. The informant-report Technology–Activities of Daily Living Questionnaire was used to assess the percentage of functional impairment in the different ADL domains. To identify the best determinants, three separate multiple regression analyses were performed, considering each functional impairment as the dependent variable and executive function, emotion recognition, disinhibition, and apathy as independent variables.

Results: For the basic ADLs, a model that explains 28.2% of the variability was found, in which the presence of apathy (β = 0.33, p = 0.02) and disinhibition (β = 0.29, p = 0.04) were significant factors. Concerning instrumental ADLs, the model produced accounted for 63.7% of the functional variability, with the presence of apathy (β = 0.71, p < 0.001), deficits in executive function (β = −0.36, p = 0.002), and lack of emotion recognition (β = 0.28, p = 0.017) as the main contributors. Finally, in terms of advanced ADLs, the model found explained 52.6% of the variance, wherein only the presence of apathy acted as a significant factor (β = 0.59, p < 0.001).

Conclusions: The results of this study show the prominent and transverse effect of apathy on the loss of functionality throughout all the ADL domains. Apart from that, this is the first study to show that the factors associated with loss of functionality differ according to the functional domain in patients with early-stage bvFTD. Finally, no other study has analyzed the impact of impaired emotion recognition on the functionality of ADLs. These results could guide the planning of tailored interventions that might enhance everyday activities and improve quality of life.
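The multiple regression analyses reported above fit a linear model of functional impairment on several predictors and read off the coefficients (β). A minimal sketch of ordinary least squares via the normal equations is shown below; the solver and the toy data are illustrative only, not the study's analysis code.

```python
def ols_betas(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.

    X -- list of rows, each a list of predictor values (intercept included
         as a leading 1.0); y -- list of outcomes. Tiny-example solver only.
    """
    n, k = len(X), len(X[0])
    # Build X'X and X'y
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gauss-Jordan elimination
    for p in range(k):
        piv = A[p][p]
        A[p] = [v / piv for v in A[p]]
        b[p] /= piv
        for q in range(k):
            if q != p:
                f = A[q][p]
                A[q] = [A[q][j] - f * A[p][j] for j in range(k)]
                b[q] -= f * b[p]
    return b

# Toy data generated by y = 1 + 2*x, so the fit recovers those coefficients:
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
y = [1.0, 3.0, 5.0]
print([round(v, 6) for v in ols_betas(X, y)])  # [1.0, 2.0]
```

In the study, each ADL domain's impairment score plays the role of `y`, with executive function, emotion recognition, disinhibition, and apathy as the predictor columns, and the reported β values are standardized coefficients from such fits.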

2022 ◽  
Vol 12 ◽  
Jiangsheng Cao ◽  
Xueqin He ◽  
Chenhui Yang ◽  
Sifang Chen ◽  
Zhangyu Li ◽  

Due to the non-invasiveness and high precision of electroencephalography (EEG), the combination of EEG and artificial intelligence (AI) is often used for emotion recognition. However, internal differences in EEG data have become an obstacle to classification accuracy. To address this problem, given labeled data of a similar nature but from different domains, domain adaptation provides an attractive option. Most existing research aggregates the EEG data from different subjects and sessions into a single source domain, which ignores the assumption that the source has a certain marginal distribution. Moreover, existing methods often align only the representation distributions extracted from a single structure, which may contain only partial information. We therefore propose multi-source and multi-representation adaptation (MSMRA) for cross-domain EEG emotion recognition, which divides the EEG data from different subjects and sessions into multiple domains and aligns the distributions of multiple representations extracted from a hybrid structure. Two datasets, SEED and SEED IV, are used to validate the proposed method in cross-session and cross-subject transfer scenarios; experimental results demonstrate the superior performance of our model over state-of-the-art models in most settings.
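Distribution alignment of the kind described above is typically driven by a discrepancy measure between source- and target-domain features. The simplest such measure, sketched below, is the squared distance between the two domains' feature means (the linear-kernel special case of losses such as MMD); this is an illustration of the general idea, not the MSMRA loss itself.

```python
def mean_discrepancy(source, target):
    """Squared distance between the feature means of two domains -- the
    simplest (linear-kernel) form of the discrepancy that alignment losses
    such as MMD penalize. Illustrative only, not the MSMRA objective.
    """
    d = len(source[0])
    mu_s = [sum(x[j] for x in source) / len(source) for j in range(d)]
    mu_t = [sum(x[j] for x in target) / len(target) for j in range(d)]
    return sum((a - b) ** 2 for a, b in zip(mu_s, mu_t))

src = [[0.0, 0.0], [2.0, 2.0]]   # source-domain features, mean (1, 1)
tgt = [[1.0, 3.0], [3.0, 5.0]]   # target-domain features, mean (2, 4)
print(mean_discrepancy(src, tgt))  # 10.0
```

A multi-source method in this spirit would compute one such term per source domain (per subject or session) and per representation, rather than a single term over pooled data, so that each domain's own distribution is matched to the target.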
