An efficient facial emotion recognition system using novel deep learning neural network-regression activation classifier

Author(s): Anjani Suputri Devi D, Satyanarayana Ch
2021
Author(s): Naveen Kumari, Rekha Bhatia

Abstract: Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm that can understand and model the relationships between faces and facial expressions, and recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting, and they perform poorly on images with noise and poor visibility. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to minimize its cost function. Experiments are conducted on a benchmark dataset and against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
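As an illustration of the pipeline described in this abstract, the sketch below chains a denoising filter, CLAHE contrast enhancement, and a small CNN compiled with the Nadam optimizer. It is a minimal sketch under stated assumptions: OpenCV's bilateral filter stands in for the joint trilateral filter, and the 48x48 grayscale input size, layer sizes, and learning rate are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of the preprocessing-plus-CNN pipeline described above.
# Assumptions (not from the paper): OpenCV's bilateral filter stands in for the
# joint trilateral filter, images are 48x48 grayscale, and the CNN topology is
# illustrative only. Requires opencv-python and tensorflow.
import cv2
from tensorflow import keras
from tensorflow.keras import layers

def preprocess(gray_img):
    """Denoise, then boost local contrast with CLAHE."""
    # Stand-in for the joint trilateral filter described in the paper.
    denoised = cv2.bilateralFilter(gray_img, d=5, sigmaColor=50, sigmaSpace=50)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

def build_cnn(num_classes=7, input_shape=(48, 48, 1)):
    """Small illustrative CNN compiled with the Nadam optimizer."""
    model = keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # dropout to guard against the overfitting noted above
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Nadam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```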


2021, pp. 397-402
Author(s): Vimal Singh, Sonal Gandhi, Rajiv Kumar, Ramashankar Yadav, Shivani Joshi

2021, Vol 11 (11), pp. 4782
Author(s): Huan-Chung Li, Telung Pan, Man-Hua Lee, Hung-Wen Chiu

In recent years, research has continued to improve human speech and emotion recognition. As facial emotion recognition has gradually matured, speech emotion recognition provides a complementary route to more accurate recognition of complex human emotional expression, shifting speech emotion identification from subjective human interpretation to automatic interpretation of the speaker's emotional expression by computers. This study focuses on medical care, where the system can be used to understand the current feelings of physicians and patients during a visit and to improve treatment through the relationship between illness and interaction. The voice data are transformed into one observation segment per second, and the first to the thirteenth dimensions of the frequency cepstrum coefficients are used as speech emotion recognition feature vectors. For each dimension, the maximum, minimum, average, median, and standard deviation are computed, giving 65 feature values in total for the construction of an artificial neural network. The emotion recognition system developed by the hospital is used as a baseline for comparison with the recognition results of the artificial neural network classifier, and the combined results are analyzed to understand the interaction between the doctor and the patient. Using this experimental module, the speech emotion recognition rate is 93.34%, and the accuracy of the facial emotion recognition results reaches 86.3%.
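The 65-value feature described above (13 cepstral dimensions summarized by five statistics per one-second segment) could be built roughly as sketched below. This is a minimal sketch, assuming librosa for the cepstral coefficients and scikit-learn for the classifier; the sample rate, segment handling, and network size are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of the 65-value speech feature described above: 13 MFCC
# dimensions per one-second segment, summarized by max, min, mean, median and
# standard deviation (13 x 5 = 65), then fed to a small neural network.
# Library choices (librosa, scikit-learn) and all parameter values are
# assumptions, not taken from the paper.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def segment_features(path, sr=16000):
    """Return one 65-dimensional feature vector per one-second segment."""
    y, sr = librosa.load(path, sr=sr)
    seg_len = sr  # one second of samples
    feats = []
    for start in range(0, len(y) - seg_len + 1, seg_len):
        seg = y[start:start + seg_len]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)  # shape (13, frames)
        stats = np.concatenate([
            mfcc.max(axis=1), mfcc.min(axis=1), mfcc.mean(axis=1),
            np.median(mfcc, axis=1), mfcc.std(axis=1),
        ])
        feats.append(stats)  # 65 values per segment
    return np.array(feats)

# Illustrative classifier; the paper's network architecture is not specified here.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
```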


Recognition of facial emotion has been a challenging task for many years. This work applies machine learning algorithms to facial emotion recognition for both real-time images and stored database images, an area in which deep learning has become important for human-computer interaction (HCI) applications. The proposed system has two parts: a real-time facial emotion recognition system and an image-based facial emotion recognition system. A convolutional neural network (CNN) model is used to train and test different facial emotion images in this research work. The work was implemented on the Python 3.7.6 platform. The input face image of a person was taken either from a webcam video stream or from a standard research database. The five facial emotions considered in this work are happy, surprise, angry, sad, and neutral. The best recognition accuracy with the proposed system is 91.2% for the webcam video stream and 90.08% for the input database images.
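The real-time path could look roughly like the sketch below: read webcam frames with OpenCV, detect a face, crop and resize it, and classify it into the five emotions with a previously trained CNN. The model file name "emotion_cnn.h5", the 48x48 input size, and the Haar-cascade detector are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of the real-time webcam path described above.
import cv2
import numpy as np
from tensorflow import keras

EMOTIONS = ["happy", "surprise", "angry", "sad", "neutral"]
model = keras.models.load_model("emotion_cnn.h5")  # hypothetical trained weights
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```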


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a task-assistant model based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s is able to detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals who are learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
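For the detection step, a YOLOv5s model fine-tuned on the annotated parts could be run on a live video stream roughly as sketched below, following the ultralytics/yolov5 repository's documented torch.hub usage. The custom weights file name "engine_parts.pt" is a hypothetical placeholder, not an artifact from the paper.

```python
# Hedged sketch of YOLOv5 inference on a live video stream, as described above.
import cv2
import torch

# Load a YOLOv5 model with custom weights (hypothetical file name).
model = torch.hub.load("ultralytics/yolov5", "custom", path="engine_parts.pt")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # BGR -> RGB, per the repo's documented usage
    # results.render() draws boxes and labels onto copies of the input images.
    annotated = results.render()[0][:, :, ::-1].copy()  # back to BGR for display
    cv2.imshow("parts", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```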


2021, Vol 1827 (1), pp. 012130
Author(s): Qi Li, Yun Qing Liu, Yue Qi Peng, Cong Liu, Jun Shi, ...
