Personnel Emotion Recognition Model for Internet of Vehicles Security Monitoring

Author(s): Erkang Fu, Xi Li, Zhi Yao, Yuxin Ren, Yuanhao Wu, ...

Abstract: In recent years, the Internet of Vehicles (IoV), with intelligent networked automobiles as its terminal nodes, has gradually become a development trend of the automotive industry and a research hotspot in related fields, owing to its intelligent, networked, low-carbon, and energy-saving characteristics. Real-time emotion recognition for drivers and for pedestrians in the community can be used to prevent fatigue driving and malicious collisions, and to support safety verification and pedestrian safety detection. This paper studies a facial emotion recognition model that can be used in the IoV. Considering the fluctuation of image acquisition angle and image quality in IoV application scenes, natural-scene videos resembling the vehicle environment, together with the corresponding galvanic skin response (GSR) signals, are used to build the emotion recognition test set. An expression recognition model combining a codec (encoder-decoder) with a Support Vector Machine (SVM) classifier is then proposed, and emotion recognition testing is completed on the basis of Algorithm 1. The matching accuracy between the emotion recognition model and the GSR is 82.01%: of the 189 effective videos involved in testing, 155 were correctly identified.
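The codec-plus-SVM pipeline described above is an encode-then-classify chain. The abstract does not specify the codec architecture, so the sketch below uses PCA as a placeholder encoder and synthetic data in place of real face descriptors; all names, dimensions, and the two-class setup are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for per-frame face descriptors: 200 samples, 64 dims,
# with the first dimension made dominant so the placeholder encoder keeps it.
X = rng.normal(size=(200, 64))
X[:, 0] *= 5.0
y = (X[:, 0] > 0).astype(int)  # two hypothetical emotion classes

# Encode (PCA stands in for the paper's unspecified codec), then classify.
model = make_pipeline(PCA(n_components=16), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
train_acc = model.score(X, y)
```

In the paper the encoder would be learned from video frames rather than fitted by PCA, but the downstream SVM stage is wired the same way.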



Symmetry, 2019, Vol 11 (4), pp. 497
Author(s): Yue Zhao, Jiancheng Xu

Micro-expression is a spontaneous emotional representation that is not controlled by logic. A micro-expression is both transitory (short duration) and subtle (small intensity), so it is difficult to detect. Micro-expression detection is widely used in psychological analysis, criminal justice, and human-computer interaction. Like traditional facial expressions, micro-expressions involve local muscle movement. Psychologists have shown that micro-expressions have necessary morphological patches (NMPs), which are triggered by emotion. The objective of this paper is to sort and filter these NMPs and extract features from them to train classifiers to recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of the micro-expression sequences, which reveals the facial active patches. Secondly, to find the NMPs, this study calculates local binary pattern from three orthogonal planes (LBP-TOP) operators and cascades them with optical flow histograms to form the fused features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then characterized via a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases, CASME II and SMIC. The results show that the NMPs are statistically determined and contribute significant discriminative ability, rather than requiring holistic use of all facial regions.
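LBP-TOP extends the ordinary local binary pattern from the spatial plane to the XT and YT planes of a video volume. A minimal sketch of the basic 8-neighbour spatial LBP, the XY-plane building block only (patch contents and sizes are illustrative):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern for one grey-scale patch.
    LBP-TOP applies the same coding on the XY, XT, and YT planes of the
    video volume; this sketch covers only the spatial (XY) plane."""
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit wherever the neighbour is >= the centre pixel.
        codes += (neigh >= center).astype(np.uint8) * np.uint8(1 << bit)
    return codes

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(16, 16))  # synthetic grey-scale patch
codes = lbp_codes(patch)
hist = np.bincount(codes.ravel(), minlength=256)  # per-patch LBP histogram
```

The per-patch histograms from the three planes would then be cascaded with the optical flow histograms to form the fused feature the abstract describes.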


2020, Vol 11 (1), pp. 48-70
Author(s): Sivaiah Bellamkonda, Gopalan N.P

Facial expression analysis and recognition has gained popularity in the last few years for its challenging nature and its broad range of applications, such as human-computer interaction (HCI), pain detection, operator fatigue detection, and surveillance. The key to a real-time facial expression recognition (FER) system is exploiting the variety of features extracted from the source image. In this article, three different features, viz. local binary pattern (LBP), Gabor, and local directional pattern (LDP), were exploited to perform feature fusion, and two classification algorithms, viz. support vector machines (SVM) and artificial neural networks, were used to validate the proposed model on benchmark datasets. Classification accuracy improved with the proposed fusion of Gabor and LDP features and an SVM classifier, which recorded an average accuracy of 93.83% on JAFFE, 95.83% on CK, and 96.50% on MMI. The recognition rates were compared with existing studies in the literature, confirming that the proposed feature fusion model improves performance.
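Feature-level fusion of this kind amounts to concatenating the per-image descriptors before classification. A minimal sketch with synthetic stand-ins for the Gabor and LDP descriptor sets (all dimensions and names are assumptions, not the paper's actual extraction code):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 300
gabor_feats = rng.normal(size=(n, 40))  # stand-in for Gabor magnitudes
ldp_feats = rng.normal(size=(n, 56))    # stand-in for LDP histograms
labels = (gabor_feats[:, 0] + ldp_feats[:, 0] > 0).astype(int)

# Feature-level fusion: concatenate the two descriptor vectors per image.
fused = np.hstack([gabor_feats, ldp_feats])
Xtr, Xte, ytr, yte = train_test_split(fused, labels, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

With real descriptors, normalising each feature family before concatenation is usually worthwhile so that neither dominates the SVM kernel.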


2020, Vol 2020, pp. 1-13
Author(s): Paweł Tarnowski, Marcin Kołodziej, Andrzej Majkowski, Remigiusz Jan Rak

This article reports the results of a study on emotion recognition using eye-tracking. Emotions were evoked by presenting dynamic movie material in the form of 21 video fragments. Eye-tracking signals recorded from 30 participants were used to calculate 18 features associated with eye movements (fixations and saccades) and pupil diameter. To ensure that the features were related to emotions, we investigated the influence of the luminance and the dynamics of the presented movies. Three classes of emotions were considered: high arousal and low valence, low arousal and moderate valence, and high arousal and high valence. A maximum of 80% classification accuracy was obtained using the support vector machine (SVM) classifier and the leave-one-subject-out validation method.
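Leave-one-subject-out validation holds out all recordings of one participant per fold, so the classifier is always tested on an unseen person. A sketch with scikit-learn's `LeaveOneGroupOut` and synthetic eye-movement features (the 18-feature width mirrors the abstract; the data, subject count, and labels are invented):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_subjects, per_subject = 6, 21  # 21 fragments per person, as in the study
X = rng.normal(size=(n_subjects * per_subject, 18))  # 18 eye-movement features
y = rng.integers(0, 3, size=n_subjects * per_subject)  # 3 emotion classes
groups = np.repeat(np.arange(n_subjects), per_subject)  # subject id per sample

# Each fold trains on all-but-one subject and tests on the held-out subject.
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups, cv=logo)
```

One score per held-out subject is returned; the paper's 80% figure would correspond to the best such configuration on real features.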


Due to highly variant face geometry and appearance, Facial Expression Recognition (FER) is still a challenging problem. Convolutional neural networks (CNNs) can characterize 2-D signals; therefore, for emotion recognition in video, the authors propose a feature selection model in the AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition in audio, they use a deep LSTM-RNN. Finally, they propose a probabilistic model for the fusion of the audio and visual models using the facial features and speech of a subject. The model combines all the extracted features and uses them to train linear Support Vector Machine (SVM) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
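The audio-visual fusion step can be illustrated as a late, probabilistic combination of two modality-specific classifiers. The sketch below averages class posteriors from two linear SVMs over synthetic stand-ins for the AlexNet and LSTM-RNN features; the paper's actual fusion model and weights are not specified in this abstract.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 240
visual = rng.normal(size=(n, 32))  # stand-in for AlexNet facial features
audio = rng.normal(size=(n, 24))   # stand-in for LSTM-RNN speech features
y = (visual[:, 0] + audio[:, 0] > 0).astype(int)  # two toy classes

# One classifier per modality; probability=True enables Platt-scaled posteriors.
vis_clf = SVC(kernel="linear", probability=True, random_state=0).fit(visual, y)
aud_clf = SVC(kernel="linear", probability=True, random_state=0).fit(audio, y)

# Simple probabilistic fusion: average the per-class posteriors and take argmax.
fused_proba = 0.5 * vis_clf.predict_proba(visual) \
            + 0.5 * aud_clf.predict_proba(audio)
fused_pred = fused_proba.argmax(axis=1)
```

Unequal weights, or a learned combiner over the two posterior vectors, are common refinements of this scheme.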


2020, Vol 37 (4), pp. 627-632
Author(s): Aihua Li, Lei An, Zihui Che

With the development of computer vision, facial expression recognition has become a research hotspot. To further improve its accuracy, this paper examines image segmentation, feature extraction, and facial expression classification. Firstly, a convolutional neural network (CNN) was adopted to accurately separate the salient regions from the face image. Next, the Gaussian Markov random field (GMRF) model was improved to enhance the ability of texture features to represent image information, and a novel feature extraction algorithm, specific angle abundance entropy (SAAE), was designed to improve the representational ability of shape features. The texture features were then combined with the shape features, and trained and classified with a support vector machine (SVM) classifier. Finally, the proposed method was compared with common facial expression recognition methods on a standard facial expression database. The results show that the method greatly improves the accuracy of facial expression recognition.
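The abstract does not define SAAE, so as a loosely analogous illustration only, the sketch below computes the Shannon entropy of a gradient-orientation histogram: a peaked orientation distribution (one dominant edge direction) yields lower entropy than a uniform one. Every name and detail here is a hypothetical stand-in, not the paper's algorithm.

```python
import numpy as np

def orientation_entropy(grad_angles, n_bins=8):
    """Shannon entropy (bits) of an edge-orientation histogram; a hedged
    stand-in for an angle-based abundance-entropy shape feature."""
    hist, _ = np.histogram(grad_angles, bins=n_bins, range=(0.0, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
uniform_angles = rng.uniform(0, np.pi, size=1000)        # edges everywhere
peaked_angles = rng.normal(np.pi / 2, 0.05, size=1000)   # one dominant direction

e_uniform = orientation_entropy(uniform_angles)
e_peaked = orientation_entropy(np.clip(peaked_angles, 0.0, np.pi - 1e-9))
```

The entropy is bounded by log2 of the bin count, and a shape with a single dominant edge direction scores lower than an isotropic one.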


2021, Vol 2021, pp. 1-10
Author(s): Zhi Yao, Hailing Sun, Guofu Zhou

Facial video big sensor data (BSD) is core data for wireless sensor network industry applications and technology research. It plays an important role in many industries, such as urban safety management, unmanned driving, senseless attendance, and venue management. Building video BSD security applications and intelligent algorithm models based on facial expression recognition has become a hot and difficult topic in related fields. This paper focuses on experimental analysis of the extended Cohn–Kanade (CK+) dataset, which has frontal poses and high image clarity. Firstly, face alignment and peak-image selection were used to preprocess the expression sequences. Then, the output vectors from convolution network 1 and a β-VAE were concatenated proportionally and input to a support vector machine (SVM) classifier to complete facial expression recognition. The testing accuracy of the proposed model on the CK+ dataset reaches 99.615%; 2417 expression sequences were used for training and 519 for testing.
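The proportional concatenation of the convolution-network output and the β-VAE latent vector can be sketched as a weighted `hstack` feeding an SVM. The mixing proportion, feature widths, and data below are all illustrative assumptions; the paper's actual ratio is not given in this abstract.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 400
cnn_feat = rng.normal(size=(n, 128))   # stand-in for convolution network 1 output
vae_latent = rng.normal(size=(n, 32))  # stand-in for the beta-VAE latent code
y = rng.integers(0, 7, size=n)         # 7 expression classes, as in CK+

# Proportional concatenation: scale each branch before joining, then classify.
alpha = 0.7  # hypothetical mixing proportion
fused = np.hstack([alpha * cnn_feat, (1 - alpha) * vae_latent])
clf = SVC(kernel="rbf").fit(fused, y)
pred = clf.predict(fused)
```

Scaling the two branches before concatenation controls how much each representation influences the SVM kernel, which is one plausible reading of "connected proportionally".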

