Emotion Recognition Based on Weighted Fusion Strategy of Multichannel Physiological Signals

2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Wei Wei ◽  
Qingxuan Jia ◽  
Yongli Feng ◽  
Gang Chen

Emotion recognition is an important pattern recognition problem that has inspired researchers in several areas. Various kinds of human data have been used for emotion recognition, including visual, audio, and physiological signals. This paper proposes a decision-level weighted fusion strategy for emotion recognition from multichannel physiological signals. First, we selected four kinds of physiological signals: Electroencephalography (EEG), Electrocardiogram (ECG), Respiration Amplitude (RA), and Galvanic Skin Response (GSR), and extracted emotion features from several analysis domains. Second, we adopted a feedback strategy for weight definition, based on the recognition rate of each emotion for each physiological signal, obtained independently with a Support Vector Machine (SVM) classifier. Finally, we introduced the weights at the decision level by linearly fusing the weight matrix with the classification result of each SVM classifier. Experiments on the MAHNOB-HCI database show that the proposed strategy achieves the highest accuracy. The results also provide evidence for, and suggest a way toward, a more specialized emotion recognition system based on multichannel data using a weighted fusion strategy.
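The decision-level weighting described in this abstract can be sketched roughly as follows. All numbers below (per-channel recognition rates, per-sample scores) are invented toy values, and the per-class normalization of the weight matrix is an assumption; the paper's exact fusion rule may differ.

```python
import numpy as np

# Hypothetical per-channel recognition rates used as feedback weights
# (rows: EEG, ECG, RA, GSR channels; columns: three emotion classes).
rates = np.array([
    [0.70, 0.65, 0.60],   # EEG
    [0.55, 0.60, 0.50],   # ECG
    [0.50, 0.45, 0.55],   # RA
    [0.45, 0.50, 0.40],   # GSR
])
# Normalize per class so the channel weights for each class sum to 1.
weights = rates / rates.sum(axis=0, keepdims=True)

# Per-channel classifier scores for one sample (e.g., SVM outputs mapped
# to class probabilities), shape (channels, classes).
scores = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.3, 0.3],
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
])

# Linear decision-level fusion: weight each channel's score per class, sum
# over channels, and pick the class with the highest fused score.
fused = (weights * scores).sum(axis=0)
pred = int(np.argmax(fused))
```

With these toy numbers the second class wins because three of the four channels favor it and the EEG channel, which carries the largest weight, does too.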

Author(s):  
Raveendra K ◽  
Ravi J

The face biometric system is one of the successful applications of image processing. Person recognition using the face is a challenging task, since it involves identifying a 3D object from its 2D projection. Feature extraction plays a very important role in face recognition, and extracting features in both the spatial and frequency domains has advantages over features obtained from a single domain alone. The proposed work performs spatial-domain feature extraction using the Asymmetric Region Local Binary Pattern (ARLBP) and frequency-domain feature extraction using the Fast Discrete Curvelet Transform (FDCT). The obtained features are fused by concatenation and compared with a trained set of features using different distance metrics and a Support Vector Machine (SVM) classifier. Experiments are conducted on several face databases. The proposed work yields 95.48% accuracy for FERET, 92.18% for L-space k, 76.55% for JAFFE, and 81.44% for the NIR database using the SVM classifier. The results show that the proposed system provides a better recognition rate with the SVM classifier than with the distance metrics. Further, the work is also compared with existing work for performance evaluation.
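The concatenation fusion and distance-metric matching described above can be sketched as follows. The feature vectors here are toy stand-ins (a real system would compute ARLBP histograms and FDCT coefficients from face images), and the tiny gallery is invented for illustration.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

def chi_square(a, b, eps=1e-10):
    """Chi-square distance, commonly used for histogram features."""
    return float(0.5 * np.sum((a - b) ** 2 / (a + b + eps)))

# Toy stand-ins for the two descriptors.
spatial_feat = np.array([0.2, 0.5, 0.3])          # e.g., ARLBP histogram
frequency_feat = np.array([0.1, 0.4, 0.4, 0.1])   # e.g., FDCT magnitudes

# Fusion by concatenation, as in the paper.
probe = np.concatenate([spatial_feat, frequency_feat])

# Match against a tiny gallery of fused training features.
gallery = {
    "person_a": np.array([0.2, 0.5, 0.3, 0.1, 0.4, 0.4, 0.1]),
    "person_b": np.array([0.6, 0.2, 0.2, 0.3, 0.3, 0.2, 0.2]),
}
best = min(gallery, key=lambda name: chi_square(probe, gallery[name]))
```

An SVM trained on the same fused vectors would replace the nearest-gallery step in the classifier-based variant the paper reports as most accurate.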


2013 ◽  
Vol 380-384 ◽  
pp. 3750-3753 ◽  
Author(s):  
Chun Yan Nie ◽  
Rui Li ◽  
Ju Wang

Changes in physiological signals are affected by human emotions, and emotional fluctuations are in turn reflected in variations of the physiological signals' features. Physiological signals are non-linear, and nonlinear dynamics and biomedical engineering based on chaos theory provide a new method for studying the parameters of these complex signals, which can hardly be described by classical theory. This paper presents a physiological emotion signal recognition system based on chaotic characteristics, and then describes some current applications of chaotic characteristics of multiple physiological signals for emotion recognition.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Haixia Yang ◽  
Zhaohui Ji ◽  
Jun Sun ◽  
Fanan Xing ◽  
Yixian Shen ◽  
...  

Human gestures have been considered one of the important human-computer interaction modes. With the fast development of wireless technology in urban Internet of Things (IoT) environments, Wi-Fi can not only provide high-speed network communication but also has great potential in the field of environmental perception. This paper proposes a gesture recognition system based on the channel state information (CSI) within the physical layer of Wi-Fi transmission. To address noise interference and phase offset in the CSI, we adopt a model based on the CSI quotient. The amplitude and phase curves of the CSI are then smoothed with a Savitzky-Golay filter, and a one-dimensional convolutional neural network (1D-CNN) is used to extract the gesture features. Finally, a support vector machine (SVM) classifier is adopted to recognize the gestures. The experimental results show that our system can achieve a recognition rate of about 90% for three common gestures: pushing forward, left stroke, and waving. The effects of different human orientations and model parameters on the recognition results are analyzed as well.
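The CSI-quotient and smoothing steps can be illustrated with synthetic data. The two antenna signals below are invented; the point is that a random phase offset shared by both antennas cancels in the ratio, after which the amplitude and phase can be smoothed with a Savitzky-Golay filter as the abstract describes.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 1, 200)
rng = np.random.default_rng(0)

# Random phase offset common to both receive antennas (models the
# unsynchronized transmitter/receiver oscillators in commodity Wi-Fi).
phase_offset = np.exp(1j * rng.uniform(0, 2 * np.pi, t.size))

# Toy CSI streams: antenna 1 carries a gesture-induced amplitude
# variation, antenna 2 is a steadier reference.
csi_ant1 = (1.0 + 0.2 * np.sin(2 * np.pi * 5 * t)) * phase_offset
csi_ant2 = 0.8 * phase_offset

# CSI quotient: the shared random phase offset cancels in the ratio.
csi_quotient = csi_ant1 / csi_ant2

# Smooth amplitude and (unwrapped) phase with a Savitzky-Golay filter.
amp = savgol_filter(np.abs(csi_quotient), window_length=11, polyorder=2)
phase = savgol_filter(np.unwrap(np.angle(csi_quotient)), 11, 2)
```

The smoothed curves would then be fed to the 1D-CNN feature extractor and the SVM classifier.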


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 540 ◽  
Author(s):  
Qiang Guo ◽  
Xin Yu ◽  
Guoqing Ruan

Low Probability of Intercept (LPI) radar waveform recognition is not only an important branch of the electronic reconnaissance field but also an important means of obtaining non-cooperative radar information. To address the low recognition rate of LPI radar waveforms, the difficulty of feature extraction, and the large number of samples needed, an automatic classification and recognition system based on the Choi-Williams distribution (CWD) and transfer learning with deep convolutional neural networks is proposed in this paper. First, the system applies the CWD time-frequency transform to the LPI radar waveform to obtain a 2-D time-frequency image. The system then preprocesses the original time-frequency image and sends the preprocessed image to a pre-trained deep convolutional network (Inception-v3 or ResNet-152) for feature extraction. Finally, the extracted features are sent to a Support Vector Machine (SVM) classifier to realize offline training and online recognition of radar waveforms. The simulation results show that at an SNR of −2 dB the overall recognition rate for the eight LPI radar signals (LFM, BPSK, Costas, Frank, and T1–T4) reaches 97.8% for the ResNet-152-SVM system and 96.2% for the Inception-v3-SVM system.
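The first stage of this pipeline, turning a waveform into a time-frequency image, can be sketched with a toy LFM (linear chirp) signal. A plain short-time Fourier transform is used below as a simple stand-in for the CWD, and the parameters are invented; the real system would compute the CWD, resize the image for the pre-trained network, and feed pooled CNN features to the SVM.

```python
import numpy as np

fs = 1000
t = np.arange(0, 0.5, 1 / fs)
# Toy LFM (linear chirp) waveform, one of the eight signal types.
lfm = np.exp(1j * np.pi * 400 * t ** 2)

def spectrogram(x, nfft=64, hop=16):
    """Magnitude time-frequency image via a short-time Fourier transform
    (a simple stand-in for the Choi-Williams distribution)."""
    win = np.hanning(nfft)
    frames = [x[i:i + nfft] * win for i in range(0, len(x) - nfft, hop)]
    # Keep only the positive-frequency half; transpose so rows = frequency.
    return np.abs(np.fft.fft(frames, axis=1))[:, : nfft // 2].T

tf_image = spectrogram(lfm)   # (frequency bins, time frames)
```

For an LFM signal the image shows energy along a diagonal line, which is exactly the kind of shape cue a pre-trained CNN can extract features from.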


2020 ◽  
Vol 40 (2) ◽  
pp. 149-157 ◽  
Author(s):  
Değer Ayata ◽  
Yusuf Yaslan ◽  
Mustafa E. Kamasak

Purpose: The purpose of this paper is to propose a novel emotion recognition algorithm from multimodal physiological signals for emotion-aware healthcare systems. In this work, physiological signals are collected from respiratory belt (RB), photoplethysmography (PPG), and fingertip temperature (FTT) sensors. These signals are used because their collection has become easy with advances in ergonomic wearable technologies. Methods: Arousal and valence levels are recognized from the fused physiological signals using the relationship between physiological signals and emotions. This recognition is performed with various machine learning methods, such as random forest, support vector machine, and logistic regression, and the performance of these methods is studied. Results: Using decision-level fusion, the accuracy improved from 69.86% to 73.08% for arousal and from 69.53% to 72.18% for valence. The results indicate that using multiple sources of physiological signals and fusing them increases the accuracy of emotion recognition. Conclusion: This study demonstrated a framework for emotion recognition using multimodal physiological signals from a respiratory belt, photoplethysmography, and fingertip temperature. Decision-level fusion of multiple classifiers (one per signal source) improved the accuracy of emotion recognition for both the arousal and valence dimensions.
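One simple form of the decision-level fusion used here (one classifier per signal source, predictions combined afterwards) is a majority vote. The abstract does not state the exact fusion rule, so the sketch below is only an illustration; the sensor names and predictions are toy values.

```python
from collections import Counter

# Hypothetical per-signal classifier outputs for one sample
# (arousal level predicted independently from each sensor stream).
preds = {"RB": "high", "PPG": "low", "FTT": "high"}

def majority_vote(predictions):
    """Fuse per-classifier labels by taking the most common one."""
    votes = Counter(predictions.values())
    return votes.most_common(1)[0][0]

fused = majority_vote(preds)
```

Because two of the three per-sensor classifiers agree, the fused decision follows them even though the PPG classifier disagrees, which is how fusion can correct single-sensor errors.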


2020 ◽  
pp. 283-293
Author(s):  
Imen Trabelsi ◽  
Med Salim Bouhlel

Automatic Speech Emotion Recognition (SER) is a current research topic in the field of Human-Computer Interaction (HCI) with a wide range of applications. The purpose of a speech emotion recognition system is to automatically classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral, and happiness. The speech samples in this paper are from the Berlin emotional database. Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), Perceptual Linear Prediction (PLP), and Relative Spectral Perceptual Linear Prediction (RASTA-PLP) features are used to characterize the emotional utterances, using a combination of Gaussian Mixture Models (GMM) and Support Vector Machines (SVM) based on the Kullback-Leibler divergence kernel. In this study, the effects of feature type and dimension are comparatively investigated. The best results are obtained with 12-coefficient MFCCs. Using the proposed features, a recognition rate of 84% has been achieved, which is close to human performance on this database.
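A KL-divergence kernel of the kind used here maps a (symmetrized) KL divergence between per-utterance distributions to a similarity value an SVM can use. A closed-form KL exists between single diagonal Gaussians, so the sketch below uses those rather than full GMMs (for which the KL must be approximated); the statistics and the scale parameter `a` are toy assumptions.

```python
import numpy as np

def kl_diag_gauss(m1, v1, m2, v2):
    """KL divergence between two diagonal Gaussians (means m, variances v)."""
    return 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def kl_kernel(m1, v1, m2, v2, a=0.5):
    """Symmetrized KL divergence mapped to a kernel value via exp(-a * D)."""
    d = kl_diag_gauss(m1, v1, m2, v2) + kl_diag_gauss(m2, v2, m1, v1)
    return float(np.exp(-a * d))

# Toy per-utterance MFCC statistics (mean/variance per coefficient,
# 12 coefficients as in the paper's best configuration).
m_a, v_a = np.zeros(12), np.ones(12)
m_b, v_b = np.full(12, 0.1), np.ones(12)
k = kl_kernel(m_a, v_a, m_b, v_b)
```

The kernel equals 1 for identical distributions and decays toward 0 as the utterance-level distributions diverge, making it usable as a precomputed SVM kernel.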


2021 ◽  
Vol 10 (5) ◽  
pp. 2539-2547
Author(s):  
Valentina Markova ◽  
Todor Ganchev ◽  
Kalin Kalinkov ◽  
Miroslav Markov

We report on the development of an automated detector of acute stress based on physiological signals. Our detector discriminates between high and low levels of acute stress accumulated by students when performing cognitive tasks on a computer. The proposed detector builds on well-known physiological signal processing principles combined with the state-of-the-art support vector machine (SVM) classifier. The novelty here comes from the design and implementation of the signal pre-processing and feature extraction stages, which were purposely designed and fine-tuned for the specific needs of acute stress detection, and from applying existing algorithms to a new problem. The proposed acute stress detector was evaluated in person-specific and person-independent experimental setups using the publicly available CLAS dataset. Each setup involved three cognitive tasks of a different nature and complexity. The experimental results indicated very high detection accuracy when discriminating between acute stress conditions due to significant cognitive load and conditions elicited by two typical emotion elicitation tasks. Such functionality would also contribute toward a multi-faceted analysis of the dependence of work efficiency on personal traits, cognitive load, and acute stress level.
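The abstract does not list the exact features its extraction stage computes, but stress detectors built on cardiac signals commonly use time-domain heart-rate-variability (HRV) statistics. The sketch below shows an assumed feature set of that kind over toy inter-beat intervals; it is an illustration of the feature-extraction stage, not the paper's implementation.

```python
import numpy as np

def hrv_features(ibi_ms):
    """Common time-domain HRV features from inter-beat intervals (ms).
    An assumed feature set; acute stress typically lowers SDNN/RMSSD."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diff = np.diff(ibi)
    return {
        "mean_ibi": ibi.mean(),                       # mean interval
        "sdnn": ibi.std(ddof=1),                      # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),         # beat-to-beat variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100,    # % successive diffs > 50 ms
    }

feats = hrv_features([820, 810, 845, 790, 805, 830])
```

Feature vectors like this, computed per window, would be fed to the SVM for high/low stress classification.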


2020 ◽  
Vol 5 (2) ◽  
pp. 609
Author(s):  
Segun Aina ◽  
Kofoworola V. Sholesi ◽  
Aderonke R. Lawal ◽  
Samuel D. Okegbile ◽  
Adeniran I. Oluwaranti

This paper presents the application of Gaussian blur filters and Support Vector Machine (SVM) techniques for greeting recognition among the Yoruba tribe of Nigeria. Existing efforts have considered the recognition of various gestures; however, the recognition of tribal greeting postures or gestures in the Nigerian geographical space has not been studied before. Some cultural gestures are not correctly identified by people of the same tribe, let alone people from other tribes, posing a risk of misinterpretation. Also, some cultural gestures are unknown to most people outside a tribe, which can further hinder human interaction; hence there is a need to automate the recognition of Nigerian tribal greeting gestures. This work therefore develops a Gaussian blur-SVM based system capable of recognizing the Yoruba tribe greeting postures for men and women. Videos of individuals performing various greeting gestures were collected and processed into image frames. The images were resized, and a Gaussian blur filter was used to remove noise from them. A moment-based feature extraction algorithm was used to extract shape features that were passed as input to an SVM, which was trained to recognize two Nigerian tribal greeting postures. To confirm the robustness of the system, 20%, 25%, and 30% of the dataset acquired from the preprocessed images were used to test the system. A recognition rate of 94% was achieved with the SVM, which shows that the proposed method is efficient.
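The two preprocessing/feature steps named above, Gaussian blurring and moment-based shape features, can be sketched from first principles. The kernel size, sigma, and the toy rectangle "posture" image below are invented for illustration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel for blurring."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, kernel):
    """Naive 2-D convolution with zero padding (same-size output)."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
    return out

def raw_moment(img, p, q):
    """Raw image moment M_pq, the basis of moment-based shape features."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.sum((x ** p) * (y ** q) * img)

# Toy silhouette standing in for a segmented greeting posture.
img = np.zeros((16, 16))
img[4:12, 6:10] = 1.0

smoothed = blur(img, gaussian_kernel())
cx = raw_moment(smoothed, 1, 0) / raw_moment(smoothed, 0, 0)  # centroid x
```

Moments (and centroids/central moments derived from them) summarize the blob's shape and location, and a vector of such moments is what the SVM would classify.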


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Chenchen Huang ◽  
Wei Gong ◽  
Wenlong Fu ◽  
Dongyu Feng

Feature extraction is a very important part of speech emotion recognition. Addressing this problem, this paper proposes a new feature extraction method that uses Deep Belief Networks (DBNs) to extract emotional features from the speech signal automatically. A 5-layer DBN is trained to extract speech emotion features, and multiple consecutive frames are incorporated to form a high-dimensional feature. The features produced by the DBN are the input to a nonlinear SVM classifier, yielding a multiple-classifier speech emotion recognition system. The speech emotion recognition rate of the system reached 86.5%, which was 7% higher than the original method.
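The "incorporate multiple consecutive frames" step can be illustrated by frame stacking: each frame is concatenated with its neighbors to form a higher-dimensional feature. The context width and edge-padding choice below are assumptions; the DBN itself is omitted.

```python
import numpy as np

def stack_frames(frames, context=2):
    """Concatenate each frame with its +-context neighbours to form a
    high-dimensional feature vector per frame (edges are repeated)."""
    n, d = frames.shape
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + n] for i in range(2 * context + 1)])

frames = np.arange(20.0).reshape(5, 4)   # 5 frames, 4-dim features each
stacked = stack_frames(frames)           # 5 frames, 20-dim features each
```

Vectors like these would be passed through the trained DBN, and the resulting features fed to the nonlinear SVM.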


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Hasan Mahmud ◽  
Md. Kamrul Hasan ◽  
Abdullah-Al-Tariq ◽  
Md. Hasanul Kabir ◽  
M. A. Mottalib

Symbolic gestures are hand postures with conventionalized meanings. They are static gestures that one can perform in a very complex environment containing variations in rotation and scale, without using voice. The gestures may be produced under different illumination conditions or occluding background scenarios. Any hand gesture recognition system should find enough discriminative features, such as hand-finger contextual information. However, in existing approaches, the depth information of hand fingers that represents finger shapes is utilized only in a limited capacity to extract discriminative features. Nevertheless, if we consider finger bending information (i.e., a finger that overlaps the palm) extracted from the depth map and use it as a local feature, static gestures that vary only slightly become distinguishable. Our work corroborates this idea: we generated depth silhouettes with contrast variation to obtain more discriminative keypoints, which in turn improved the recognition accuracy to 96.84%. We applied the Scale-Invariant Feature Transform (SIFT) algorithm, which takes the generated depth silhouettes as input and produces robust feature descriptors as output. These features (after conversion into unified-dimensional feature vectors) are fed into a multiclass Support Vector Machine (SVM) classifier to measure the accuracy. We tested our results on a standard dataset containing 10 symbolic gestures representing the 10 numeric symbols (0-9), and then verified and compared our results among depth images, binary images, and images consisting of hand-finger edge information generated from the same dataset. Our results show higher accuracy when applying SIFT features to depth images. Accurately recognizing numeric symbols performed through hand gestures has a huge impact on different Human-Computer Interaction (HCI) applications, including augmented reality, virtual reality, and other fields.
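The "contrast variation" applied to depth silhouettes before SIFT can be illustrated with a simple percentile-based contrast stretch. The percentile thresholds and the toy depth ramp below are assumptions; the paper's exact contrast transformation is not specified in the abstract.

```python
import numpy as np

def contrast_stretch(depth, low_pct=2, high_pct=98):
    """Rescale a depth silhouette's values to the full 8-bit range,
    increasing contrast so keypoint detectors such as SIFT can find
    more (and more stable) keypoints."""
    lo, hi = np.percentile(depth, [low_pct, high_pct])
    out = np.clip((depth - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

# Toy depth silhouette: a smooth ramp of raw depth values.
depth = np.linspace(100, 140, 64).reshape(8, 8)
img = contrast_stretch(depth)
```

The stretched 8-bit image would then be passed to a SIFT keypoint detector/descriptor, and the descriptors (pooled into fixed-length vectors) to the multiclass SVM.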

