Improving Emotion Recognition Performance by Random-Forest-Based Feature Selection

Author(s): Olga Egorow, Ingo Siegert, Andreas Wendemuth
2019

Author(s): Alex Bertrams, Katja Schlegel

People high in autistic-like traits have been found to have difficulties recognizing emotions from nonverbal expressions. However, findings on the relationship between autism and emotion recognition are inconsistent. In the present study, we investigated whether speeded reasoning ability (reasoning performance under time pressure) moderates the inverse relationship between autistic-like traits and emotion recognition performance. We expected the negative correlation between autistic-like traits and emotion recognition to be weaker when speeded reasoning ability was high. MTurkers (N = 217) completed the ten-item version of the Autism Spectrum Quotient (AQ-10), two emotion recognition tests using videos with sound (Geneva Emotion Recognition Test, GERT-S) and pictures (Reading the Mind in the Eyes Test, RMET), and Baddeley's Grammatical Reasoning test to measure speeded reasoning. As expected, the higher the speeded reasoning ability, the weaker the association between higher autistic-like traits and lower emotion recognition performance. These results suggest that a high ability to make quick mental inferences may (partly) compensate for difficulties with intuitive emotion recognition related to autistic-like traits.
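The moderation effect described above is typically tested by regressing emotion recognition scores on autistic-like traits, speeded reasoning, and their interaction. The sketch below illustrates such a moderated regression in Python with statsmodels; the variable names and the simulated data are assumptions for illustration, not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch of a moderation analysis (not the authors' code).
# Simulated, standardized scores: AQ-10, speeded reasoning, emotion recognition.
rng = np.random.default_rng(0)
n = 217
aq = rng.normal(size=n)           # autistic-like traits
reasoning = rng.normal(size=n)    # speeded reasoning (Grammatical Reasoning)
# Hypothetical data-generating process with a positive interaction:
# the negative AQ effect weakens as speeded reasoning increases.
emotion = -0.3 * aq + 0.2 * reasoning + 0.15 * aq * reasoning \
          + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"aq": aq, "reasoning": reasoning, "emotion": emotion})

# Moderated regression: the aq:reasoning term tests whether speeded
# reasoning moderates the AQ -> emotion recognition relationship.
model = smf.ols("emotion ~ aq * reasoning", data=df).fit()
print(model.summary())
```

In this kind of model, a significant positive interaction coefficient would indicate that the negative association between autistic-like traits and emotion recognition becomes weaker at higher levels of speeded reasoning, which is the pattern the abstract reports.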


Sensors, 2020, Vol 21 (1), pp. 52
Author(s): Tianyi Zhang, Abdallah El Ali, Chen Wang, Alan Hanjalic, Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most existing works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) that recognizes the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE) and then on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that (1) instance segment lengths between 1 and 4 s result in the highest recognition accuracies; (2) accuracies obtained with laboratory-grade and wearable sensors are comparable, even at low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
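The core idea of the approach, splitting a physiological signal into fixed-length instances, extracting per-instance (intra-modality) features, and augmenting them with correlations against the other instances of the same stimulus, can be sketched roughly as follows. This is a minimal illustrative simplification with assumed function names, segment length, and feature choices, not the published CorrNet implementation.

```python
import numpy as np

def segment_instances(signal: np.ndarray, sr: int, seg_len_s: float = 2.0) -> np.ndarray:
    """Split a 1-D physiological signal (e.g., EDA or heart rate) into
    fixed-length, non-overlapping instances of seg_len_s seconds."""
    step = int(sr * seg_len_s)
    n_full = len(signal) // step
    return signal[: n_full * step].reshape(n_full, step)

def intra_features(instances: np.ndarray) -> np.ndarray:
    """Simple per-instance (intra-modality) statistics: mean, std, slope."""
    t = np.arange(instances.shape[1])
    slope = np.polyfit(t, instances.T, deg=1)[0]  # linear trend per instance
    return np.column_stack([instances.mean(axis=1), instances.std(axis=1), slope])

def correlation_features(instances: np.ndarray) -> np.ndarray:
    """For each instance, Pearson correlations with every instance of the
    same stimulus (a simplified stand-in for the correlation-based features)."""
    return np.corrcoef(instances)  # shape: (n_instances, n_instances)

# Example: a synthetic 60 s EDA-like signal sampled at 32 Hz (below the 64 Hz bound).
sr = 32
signal = np.cumsum(np.random.default_rng(0).normal(size=60 * sr)) * 0.01
inst = segment_instances(signal, sr, seg_len_s=2.0)
features = np.hstack([intra_features(inst), correlation_features(inst)])
print(features.shape)  # (n_instances, 3 + n_instances)
```

A classifier for high/low valence or arousal would then be trained on such per-instance feature vectors; the abstract's finding that 1-4 s segments work best corresponds to the choice of seg_len_s here.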

