Emotion Recognition Model Based on RBF Neural Network in E-Learning

Author(s):  
Wansen Wang ◽  
Rui Li
2018 ◽  
Vol 30 (4) ◽  
pp. 407-417
Author(s):  
Yifan Sun ◽  
Jinglei Zhang ◽  
Xiaoyuan Wang ◽  
Zhangu Wang ◽  
Jie Yu

Drinking-driving behaviors are an important cause of road traffic injuries, which seriously threaten the lives and property of traffic participants. Therefore, reducing the occurrence of drinking-driving behaviors has become an important problem in traffic safety research. Forty-eight male drivers and six female drivers who could drink a moderate amount of alcohol were chosen as participants. The drivers’ physiological data, operation behavior data, car running data, and driving environment data were collected by designing various virtual traffic scenes and organizing the drivers to conduct driving simulation experiments. The original variables were analyzed by Principal Component Analysis (PCA), and seven principal components were extracted as the input vector of a Radial Basis Function (RBF) neural network. The principal component data were used to train and verify the RBF neural network. The Levenberg-Marquardt (LM) algorithm was chosen to train the parameters of the network, and a drinking-driving recognition model based on PCA and the RBF neural network was built to realize accurate recognition of drinking-driving behaviors. The test results showed that the model could identify drinking drivers accurately during the driving process with a recognition accuracy of 92.01%, and that the operation efficiency of the model was high. The research can provide a useful reference for the prevention of drinking and driving and for traffic safety maintenance.
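The pipeline described above reduces the collected driver measurements to seven principal components and classifies them with an RBF network. The sketch below illustrates that structure on synthetic data; the feature dimensions, the k-means choice of RBF centers, and the least-squares fit of the output weights are illustrative assumptions (the paper trains the network with the Levenberg-Marquardt algorithm).

```python
# Minimal sketch of the PCA + RBF-network pipeline, assuming synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rbf_features(X, centers, gamma):
    """Gaussian RBF activations of X with respect to the hidden-layer centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# X: hypothetical physiological / operation / vehicle / environment measurements
# y: 1 = drinking-driving, 0 = normal driving (hypothetical labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (rng.random(200) > 0.5).astype(float)

# 1) Standardize and reduce to seven principal components, as in the paper.
X_std = StandardScaler().fit_transform(X)
Z = PCA(n_components=7).fit_transform(X_std)

# 2) Hidden layer: RBF centers chosen by k-means (one common heuristic).
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Z).cluster_centers_
H = rbf_features(Z, centers, gamma=0.5)

# 3) Output layer: linear weights fitted by least squares
#    (a stand-in for the Levenberg-Marquardt training used in the paper).
A = np.c_[H, np.ones(len(H))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = (A @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```

In practice the number of RBF centers and the width parameter gamma would be tuned on a validation split rather than fixed as above.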


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Mingyong Li ◽  
Xue Qiu ◽  
Shuang Peng ◽  
Lirong Tang ◽  
Qiqi Li ◽  
...  

With the rapid development of deep learning and wireless communication technology, emotion recognition has received more and more attention from researchers. Computers can only be truly intelligent when they can perceive human emotions, and emotion recognition is therefore a primary consideration. This paper proposes a multimodal emotion recognition model based on a multiobjective optimization algorithm. The model combines voice information and facial information and can optimize the accuracy and uniformity of recognition at the same time. The speech modality is based on an improved deep convolutional neural network (DCNN); the video image modality is based on an improved depthwise separable convolutional network (DSCNN). After single-modality recognition, a multiobjective optimization algorithm is used to fuse the two modalities at the decision level. The experimental results show that the proposed model improves on every evaluation metric, and its emotion recognition accuracy is 2.88% higher than that of the ISMS_ALA model. The results show that the multiobjective optimization algorithm can effectively improve the performance of the multimodal emotion recognition model.
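Decision-level fusion of the kind described above combines the per-class outputs of the two single-modality networks. The sketch below shows one hedged interpretation on random data: a weighted average of the speech and face probability vectors, with the fusion weight chosen by a grid search that scalarizes the two objectives (accuracy and per-class uniformity). The paper uses a dedicated multiobjective optimization algorithm rather than this simple scalarization, and the softmax outputs here are synthetic placeholders.

```python
# Minimal sketch of decision-level fusion of two modalities on synthetic outputs.
import numpy as np

def per_class_recall(y_true, y_pred, n_classes):
    """Recall for each emotion class; its spread measures recognition uniformity."""
    return np.array([(y_pred[y_true == c] == c).mean() for c in range(n_classes)])

def fuse(p_speech, p_face, alpha):
    """Weighted decision-level fusion of the speech and face probability outputs."""
    return alpha * p_speech + (1.0 - alpha) * p_face

# Hypothetical validation-set softmax outputs of the speech DCNN and face DSCNN.
rng = np.random.default_rng(0)
n, n_classes = 300, 6
y_true = rng.integers(0, n_classes, size=n)
p_speech = rng.dirichlet(np.ones(n_classes), size=n)
p_face = rng.dirichlet(np.ones(n_classes), size=n)

best = None
for alpha in np.linspace(0.0, 1.0, 21):
    y_pred = fuse(p_speech, p_face, alpha).argmax(axis=1)
    acc = (y_pred == y_true).mean()
    evenness = -per_class_recall(y_true, y_pred, n_classes).std()  # higher = more uniform
    score = acc + evenness  # simple scalarization of the two objectives
    if best is None or score > best[0]:
        best = (score, alpha, acc)

print(f"chosen fusion weight alpha={best[1]:.2f}, accuracy={best[2]:.3f}")
```

A true multiobjective search would instead return a Pareto front of (accuracy, uniformity) trade-offs and select an operating point from it.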


2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm that can understand and model the relationships between faces and facial expressions and recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting, and they perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are conducted on a benchmark dataset against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competing models.
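The preprocessing-plus-CNN pipeline described above can be sketched with OpenCV and Keras as below. A standard bilateral filter stands in for the joint trilateral filter (which is not a stock OpenCV function), and the 48x48 grayscale input, layer sizes, and seven-class output are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch: denoise, apply CLAHE, then train a small CNN with Nadam.
import cv2
import numpy as np
import tensorflow as tf

def preprocess(img_gray):
    """Denoise (bilateral filter as a stand-in), then apply CLAHE for contrast."""
    denoised = cv2.bilateralFilter(img_gray, 5, 50, 50)  # d, sigmaColor, sigmaSpace
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised).astype(np.float32) / 255.0

def build_model(n_classes=7, input_shape=(48, 48, 1)):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),  # mitigates the overfitting issue noted above
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    # Nadam optimizer, as in the paper, with cross-entropy as the cost function.
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: preprocess one random grayscale image and run a forward pass.
img = np.random.default_rng(0).integers(0, 256, size=(48, 48)).astype(np.uint8)
x = preprocess(img)[None, ..., None]      # shape (1, 48, 48, 1)
model = build_model()
print(model.predict(x, verbose=0).shape)  # (1, 7) class probabilities
```

Training would replace the single forward pass with model.fit on the preprocessed benchmark images and their emotion labels.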


2020 ◽  
Vol 29 (02) ◽  
pp. 1
Author(s):  
Qihua Xu ◽  
Chunyue Zhang ◽  
Bo Sun
