Enhancement of Patient Facial Recognition through Deep Learning Algorithm: ConvNet

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Edeh Michael Onyema ◽  
Piyush Kumar Shukla ◽  
Surjeet Dalal ◽  
Mayuri Neeraj Mathur ◽  
Mohammed Zakariah ◽  
...  

The use of machine learning algorithms for facial expression recognition and patient monitoring is a growing area of research interest. In this study, we present a technique for facial expression recognition based on a deep learning algorithm: the convolutional neural network (ConvNet). Training data were collected from the FER2013 dataset, which contains samples of the seven universal facial expressions. The results show that the presented technique improves facial expression recognition accuracy without stacking many CNN layers, which would lead to a computationally costly model. The study thus addresses the high computational cost of facial expression recognition by providing a model whose accuracy approaches that of the state-of-the-art model. The study concludes that deep learning-enabled facial expression recognition techniques improve accuracy, facial recognition, and the interpretation of facial expressions and features, thereby promoting efficiency and prediction in the health sector.
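The abstract does not give the exact layer configuration, so the following is a minimal sketch of a compact ConvNet of the kind described: a few convolutional blocks over 48x48 grayscale FER2013 images with a seven-class output. The layer sizes and depths are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class CompactFERConvNet(nn.Module):
    """Shallow ConvNet for 7-class facial expression recognition on 48x48 grayscale images."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one forward pass on a dummy batch of FER2013-sized images.
model = CompactFERConvNet()
logits = model(torch.randn(8, 1, 48, 48))   # -> shape (8, 7)
```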

2021 ◽  
Vol 5 (12) ◽  
pp. 63-68
Author(s):  
Jun Mao

The classroom is an important environment for communication in teaching, so both schools and society should pay closer attention to it. In the traditional classroom, however, there is a relative lack of communication and exchange. Facial expression recognition is a branch of facial recognition technology with high precision: even in large teaching scenes, it can capture changes in students' facial expressions and accurately analyze their concentration. This paper explains the concept of this technology and studies the evaluation of classroom teaching effects based on facial expression recognition.
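The paper does not publish its concentration-scoring rule; as a purely hypothetical illustration of the idea, the sketch below maps per-student expression probabilities from any FER model to a frame-level concentration index using made-up weights.

```python
import numpy as np

# Hypothetical weights mapping the seven basic expressions to a "concentration"
# contribution; the paper gives no such scoring rule, so these values are
# purely illustrative.
EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
CONCENTRATION_WEIGHTS = np.array([0.2, 0.1, 0.1, 0.7, 0.2, 0.5, 0.9])

def concentration_score(expression_probs: np.ndarray) -> float:
    """Average weighted concentration over all detected faces in one frame.

    expression_probs: array of shape (num_students, 7) holding per-face
    softmax probabilities produced by any FER model.
    """
    per_student = expression_probs @ CONCENTRATION_WEIGHTS
    return float(per_student.mean())

# Example: three students whose dominant expressions are neutral, happy, sad.
frame_probs = np.array([
    [0.02, 0.01, 0.02, 0.05, 0.05, 0.05, 0.80],
    [0.01, 0.01, 0.01, 0.85, 0.02, 0.05, 0.05],
    [0.05, 0.02, 0.03, 0.05, 0.75, 0.05, 0.05],
])
print(round(concentration_score(frame_probs), 3))
```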


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2166
Author(s):  
Geesung Oh ◽  
Junghwan Ryu ◽  
Euiseok Jeong ◽  
Ji Hyun Yang ◽  
Sungwook Hwang ◽  
...  

In intelligent vehicles, it is essential to monitor the driver's condition; however, recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state, but while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the deep learning-based driver's real emotion recognizer (DRER), an algorithm that recognizes drivers' real emotions that cannot be completely identified from their facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, the proposed method achieves 86.8% accuracy in recognizing the driver's induced emotion while driving.
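As a rough illustration of the fusion idea (not the published DRER design), the sketch below concatenates the facial-expression probabilities from a FER backbone with features from a small 1D CNN over an electrodermal activity window and classifies the driver's emotional state; all dimensions and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SensorFusionEmotionNet(nn.Module):
    """Illustrative late-fusion model: concatenates the facial-expression
    probabilities produced by a FER backbone with features extracted from an
    electrodermal activity (EDA) window, then classifies the driver's emotion.
    Dimensions and layer sizes are assumptions, not the published DRER design."""

    def __init__(self, num_expressions: int = 7, eda_len: int = 256, num_emotions: int = 4):
        super().__init__()
        # 1D CNN over a raw EDA window shaped (batch, 1, eda_len).
        self.eda_encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),                        # -> (batch, 32)
        )
        self.classifier = nn.Sequential(
            nn.Linear(num_expressions + 32, 64), nn.ReLU(),
            nn.Linear(64, num_emotions),
        )

    def forward(self, expr_probs: torch.Tensor, eda: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([expr_probs, self.eda_encoder(eda)], dim=1)
        return self.classifier(fused)

# Example: a batch of 4 drivers, each with FER softmax output and an EDA window.
model = SensorFusionEmotionNet()
logits = model(torch.softmax(torch.randn(4, 7), dim=1), torch.randn(4, 1, 256))
```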


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Meiqi Zhuang ◽  
Lang Yin ◽  
Youhua Wang ◽  
Yunzhao Bai ◽  
Jian Zhan ◽  
...  

Facial expressions are a mirror of the elusive emotions hidden in the mind, and capturing expressions is thus a crucial way of merging the inner world and the virtual world. However, typical facial expression recognition (FER) systems are restricted by environments in which faces must be clearly visible to computer vision, or by rigid devices that are not suited to the time-dynamic, curvilinear face. Here, we present a robust, highly wearable FER system based on deep-learning-assisted, soft epidermal electronics. The epidermal electronics, which conform fully to the face, enable high-fidelity biosignal acquisition without hindering spontaneous facial expressions, releasing the constraints of movement, space, and light. The deep learning method significantly enhances the recognition accuracy of facial expression types and intensities from a small sample. The proposed wearable FER system offers wide applicability and high accuracy. It works for an individual wearer and shows essential robustness to varying light, occlusion, and face poses. It is thus entirely different from, yet complementary to, computer vision technology, which is suited only to the simultaneous FER of multiple individuals in a fixed place. This wearable FER system is successfully applied to human-avatar emotion interaction and verbal communication disambiguation in a real-life environment, enabling promising human-computer interaction applications.
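The paper's exact network and electrode layout are not given here; the following is a hedged sketch of the general approach, a small 1D CNN over multi-channel epidermal biosignals with two heads for expression type and intensity. Channel count, window length, and class counts are assumptions.

```python
import torch
import torch.nn as nn

class EpidermalFERNet(nn.Module):
    """Sketch of a small 1D CNN over multi-channel epidermal biosignals
    (e.g. facial surface electrode recordings) with two heads: expression
    type and intensity level. All sizes here are assumptions."""

    def __init__(self, channels: int = 8, num_types: int = 7, num_levels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),                         # -> (batch, 64)
        )
        self.type_head = nn.Linear(64, num_types)        # which expression
        self.intensity_head = nn.Linear(64, num_levels)  # how strong

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)                   # x: (batch, channels, time)
        return self.type_head(feats), self.intensity_head(feats)

# Example: two biosignal windows of 8 channels x 512 samples each.
model = EpidermalFERNet()
type_logits, intensity_logits = model(torch.randn(2, 8, 512))
```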


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 375 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

As an important part of emotion research, facial expression recognition is a necessary component of human–machine interfaces. In general, a facial expression recognition system includes face detection, feature extraction, and feature classification. Although traditional machine learning methods have achieved great success, most of them are computationally complex and lack the ability to extract comprehensive and abstract features. Deep learning-based methods can achieve a higher recognition rate for facial expressions, but they require a large number of training samples and tuning parameters, and their hardware requirements are very high. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with a C4.5 classifier to recognize facial expressions, which not only addresses the incompleteness of handcrafted features but also avoids the high hardware requirements of deep learning models. Considering the overfitting and weak generalization ability of a single classifier, a random forest is also applied. In addition, this paper makes some improvements to the C4.5 classifier and the traditional random forest during the experiments. A large number of experiments have proved the effectiveness and feasibility of the proposed method.
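A minimal sketch of the described pipeline is given below, assuming the CNN is used only as a feature extractor. Since scikit-learn does not implement C4.5, an entropy-criterion decision tree stands in for it, alongside a random forest; the random feature matrix merely keeps the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for CNN-extracted expression features: in the paper these would come
# from the penultimate layer of a trained CNN; here random vectors keep the
# sketch self-contained.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))       # 500 samples, 128-D CNN features
labels = rng.integers(0, 7, size=500)        # 7 expression classes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# scikit-learn has no C4.5 implementation; an entropy-criterion decision tree
# is the closest built-in analogue to the paper's improved C4.5 classifier.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, criterion="entropy",
                                random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", accuracy_score(y_test, tree.predict(X_test)))
print("forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```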


2021 ◽  
Vol 4 (2) ◽  
pp. 192-201
Author(s):  
Denys Valeriiovych Petrosiuk ◽  
Olena Oleksandrivna Arsirii ◽  
Oksana Yurievna Babilunha ◽  
Anatolii Oleksandrovych Nikolenko

The application of deep learning convolutional neural networks to automated facial expression recognition and the determination of a person's emotions is analyzed. To address the limited volume of available image sets with different facial expressions, it is proposed to exploit the advantages of transfer learning when training deep convolutional neural networks. Most of these datasets are labeled in accordance with a facial action coding system based on the units of human facial movement. The developed technology for transfer learning of the publicly available deep convolutional neural network families DenseNet and MobileNet, with subsequent fine-tuning of the network parameters, reduced the training time and computational resources required for facial expression recognition without losing the reliability of motor unit recognition. During the development of this deep learning technology, the following tasks were solved. First, the choice of publicly available convolutional neural networks of the DenseNet and MobileNet families pre-trained on the ImageNet dataset was substantiated, taking into account the peculiarities of transfer learning for recognizing facial expressions and determining emotions. Second, a deep convolutional neural network model and a method for its training were developed for recognizing facial expressions and determining human emotions, taking into account the specifics of the selected pre-trained networks. Third, the developed deep learning technology was tested, and finally, the resource intensity and reliability of motor unit recognition were assessed on the DISFA dataset. The proposed transfer learning technology can be used to develop systems for the automatic recognition of facial expressions and the determination of human emotions on both stationary and mobile devices. Further modification of the systems for recognizing the motor units of human facial activity, in order to increase recognition reliability, is possible using data augmentation techniques.
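A minimal sketch of the described transfer-learning recipe, assuming MobileNetV2 from torchvision and a 12-unit multi-label head for DISFA action units; the optimizer settings and two-stage schedule are illustrative rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_AUS = 12   # DISFA annotates 12 facial action units (motor units)

# MobileNetV2 pre-trained on ImageNet (a DenseNet variant would be handled the
# same way via models.densenet121); downloading the weights needs network access.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Stage 1: freeze the convolutional backbone and train only a new multi-label head.
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_AUS)

criterion = nn.BCEWithLogitsLoss()      # each action unit is either active or not
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 RGB face crops.
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4, NUM_AUS)).float()
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()

# Stage 2 ("fine tuning"): unfreeze the backbone and continue training all
# parameters with a much smaller learning rate.
for param in model.features.parameters():
    param.requires_grad = True
finetune_optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```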


Author(s):  
Sharmeen M. Saleem Abdullah ◽  
Adnan Mohsin Abdulazeez

Facial emotion processing is one of the most important activities in affective computing, engagement between people and computers, machine vision, video game testing, and consumer research. Facial expressions are a form of nonverbal communication, as they reveal a person's inner feelings and emotions. Facial Expression Recognition (FER) has recently received extensive attention, since facial expressions are considered the fastest communication medium for conveying any kind of information. Facial expression recognition gives a better understanding of a person's thoughts or views, and analyzing expressions with currently trending deep learning methods raises the accuracy rate sharply compared to traditional state-of-the-art systems. This article provides a brief overview of the different FER fields of application and the publicly accessible databases used in FER, and surveys the latest work in FER using Convolutional Neural Network (CNN) algorithms. Finally, it is observed that all of the reviewed studies reached good results, especially in terms of accuracy, with different rates and on different datasets, which impacts the results.


Computers ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 113
Author(s):  
James Coe ◽  
Mustafa Atay

This research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general overview of facial recognition and discuss four problems related to it. We review our system design, development, and architectures; give an in-depth evaluation plan for each type of algorithm and dataset; and examine the software and its architecture. We explain the results and findings of our experiments in detail and provide analysis for both the machine learning and the deep learning algorithms. Concluding the investigation, we compare the two kinds of algorithms in terms of accuracy, metrics, miss rates, and performance to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates of all tested algorithms and report that SVC is the best machine learning algorithm and VGG16 is the best deep learning algorithm in our experimental study. Our findings conclude that the algorithm that mitigates bias the most is VGG16, and all of our deep learning algorithms outperformed their machine learning counterparts.
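The study's datasets and full pipeline are not reproduced here; the sketch below only illustrates the kind of per-group accuracy and miss-rate comparison described, using an SVC on synthetic stand-in embeddings with a hypothetical two-group attribute.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for face embeddings labeled with identity and a
# demographic group attribute; in the study these would come from the
# racially balanced and imbalanced face datasets.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 64))
y = rng.integers(0, 10, size=600)          # 10 identities
group = rng.integers(0, 2, size=600)       # two demographic groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=1)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)    # the study's best machine learning algorithm
pred = clf.predict(X_te)

# Per-group accuracy and miss rate expose any gap between demographic groups.
for g in np.unique(g_te):
    mask = g_te == g
    acc = accuracy_score(y_te[mask], pred[mask])
    print(f"group {g}: accuracy={acc:.3f}, miss rate={1 - acc:.3f}")
```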

