Emotion classification with multichannel physiological signals using hybrid feature and adaptive decision fusion

2022 ◽  
Vol 71 ◽  
pp. 103235
Author(s):  
MaoSong Yan ◽  
Zhen Deng ◽  
BingWei He ◽  
ChengSheng Zou ◽  
Jie Wu ◽  
...  
Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3510 ◽  
Author(s):  
Gisela Pinto ◽  
João M. Carvalho ◽  
Filipa Barros ◽  
Sandra C. Soares ◽  
Armando J. Pinho ◽  
...  

Emotional responses are associated with distinct bodily alterations and are crucial for fostering adaptive responses, well-being, and survival. Emotion identification may improve people's emotion-regulation strategies and their interaction with multiple life contexts. Several studies have investigated emotion classification systems, but most are based on the analysis of only one, a few, or isolated physiological signals. Understanding how informative the individual signals are, and how their combination performs, would allow the development of more cost-effective, informative, and objective systems for emotion detection, processing, and interpretation. In the present work, the electrocardiogram (ECG), electromyogram (EMG), and electrodermal activity (EDA) were processed in order to find a physiological model of emotions. Both a unimodal and a multimodal approach were used to analyze which signal, or combination of signals, may best describe an emotional response, using a sample of 55 healthy subjects. The method was divided into: (1) signal preprocessing; (2) feature extraction; and (3) classification using random forests and neural networks. Results suggest that the ECG signal is the most effective for emotion classification. Yet the combination of all signals provides the best emotion identification performance, with each signal contributing crucial information to the system. This physiological model of emotions has important research and clinical implications, providing valuable information about the value and weight of physiological signals for emotion classification, which can critically drive effective evaluation, monitoring, and intervention regarding emotional processing and regulation across multiple contexts.
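The three-stage pipeline described above (preprocessing, feature extraction, classification) can be sketched as follows. This is a minimal illustration only: the specific features, the synthetic signals, and the binary label are assumptions for the sketch, not the features or data the authors actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(ecg, emg, eda):
    """Toy per-trial features from three 1-D signal windows."""
    return np.array([
        ecg.mean(), ecg.std(),   # crude heart-activity level / variability proxy
        emg.mean(), emg.std(),   # muscle-activation level
        eda.mean(), eda.std(),   # skin-conductance level
    ])

# Synthetic recordings standing in for real ECG/EMG/EDA trials.
n_trials, sig_len = 200, 500
X = np.array([
    extract_features(rng.normal(size=sig_len),
                     rng.normal(size=sig_len),
                     rng.normal(size=sig_len))
    for _ in range(n_trials)
])
y = rng.integers(0, 2, size=n_trials)  # toy binary emotion label

# Stage 3: random-forest classification, as in the study's first classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In a real multimodal comparison, the same pipeline would be run on each signal's feature subset alone (unimodal) and on the concatenated feature vector (multimodal), and the test accuracies compared.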


Safety ◽  
2020 ◽  
Vol 6 (4) ◽  
pp. 55
Author(s):  
Luca Davoli ◽  
Marco Martalò ◽  
Antonio Cilfone ◽  
Laura Belli ◽  
Gianluigi Ferrari ◽  
...  

Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs operate without taking the driver's state into account, e.g., whether he or she is emotionally fit to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADASs: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and safety for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver's state and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.
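A multi-sensor driver-state estimate of the kind the DCS concept describes can be sketched as a weighted fusion of per-sensor classifier outputs. The sensor names, state labels, probabilities, and weights below are illustrative assumptions, not the NextPerception design:

```python
import numpy as np

STATES = ["calm", "stressed", "drowsy"]  # hypothetical driver states

def fuse_driver_state(sensor_probs, weights):
    """Weighted average of per-sensor probability vectors over STATES."""
    names = list(weights)
    probs = np.array([sensor_probs[n] for n in names])       # (S, 3)
    w = np.array([weights[n] for n in names])[:, None]       # (S, 1)
    fused = (probs * w).sum(axis=0) / w.sum()                # normalized mix
    return STATES[int(fused.argmax())], fused

# Each upstream classifier emits a probability vector over STATES.
sensor_probs = {
    "camera":    [0.2, 0.5, 0.3],   # facial-expression classifier
    "hr_sensor": [0.1, 0.8, 0.1],   # heart-rate-based classifier
    "steering":  [0.4, 0.4, 0.2],   # driving-behaviour classifier
}
weights = {"camera": 0.4, "hr_sensor": 0.4, "steering": 0.2}
state, fused = fuse_driver_state(sensor_probs, weights)  # → "stressed"
```

An HMI layer could then react to the fused label, e.g., by suggesting a break when the "drowsy" probability dominates.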


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 866 ◽  
Author(s):  
SeungJun Oh ◽  
Jun-Young Lee ◽  
Dong Keun Kim

This study aimed to design an optimal emotion recognition method using multiple physiological signal parameters acquired by bio-signal sensors, in order to improve the accuracy of classifying individual emotional responses. Multiple physiological signals, such as respiration (RSP) and heart rate variability (HRV), were acquired in an experiment with 53 participants while six basic emotional states were induced. Two RSP parameters were acquired from a chest-band respiration sensor, and five HRV parameters were acquired from a finger-clip blood volume pulse (BVP) sensor. A newly designed deep-learning model based on a convolutional neural network (CNN) was adopted to evaluate the accuracy of identifying individual emotions. Additionally, combinations of the acquired parameters were proposed to obtain high classification accuracy. Furthermore, the dominant factor influencing accuracy was found by comparing the relative contributions of the parameters, providing a basis for supporting the emotion classification results. Users of the proposed model should be able to further improve CNN-based emotion recognition using multimodal physiological signals and their sensors.
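The core operation of a CNN over stacked physiological parameter streams can be illustrated with a minimal valid-mode 1-D convolution written in NumPy. The channel count (2 RSP + 5 HRV = 7), window length, and filter shapes below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv1d_multichannel(x, kernels, bias):
    """Valid-mode 1-D convolution over a (channels, time) input.

    x:       (C, T) array, e.g. RSP and HRV parameter streams as channels.
    kernels: (F, C, K) array of F filters, each spanning all C channels.
    bias:    (F,) array.
    Returns a (F, T - K + 1) feature map after a ReLU activation.
    """
    F, C, K = kernels.shape
    T = x.shape[1]
    out = np.zeros((F, T - K + 1))
    for f in range(F):
        for t in range(T - K + 1):
            # Dot product of the filter with a length-K window of all channels.
            out[f, t] = (x[:, t:t + K] * kernels[f]).sum() + bias[f]
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(1)
x = rng.normal(size=(7, 32))           # 7 parameter channels, 32 time samples
kernels = rng.normal(size=(4, 7, 5))   # 4 filters of width 5
bias = np.zeros(4)
features = conv1d_multichannel(x, kernels, bias)  # shape (4, 28)
```

In a full model, such feature maps would be pooled and passed to dense layers that output one of the six emotion classes; comparing accuracy when individual channels are dropped is one way to probe each parameter's relative contribution.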


2020 ◽  
Vol 37 (2) ◽  
Author(s):  
Arturo Martínez‐Rodrigo ◽  
Luz Fernández‐Aguilar ◽  
Roberto Zangróniz ◽  
José M. Latorre ◽  
José M. Pastor ◽  
...  
