EduFERA: A Real-Time Student Facial Emotion Recognition Approach

Author(s):  
Kaouther MOUHEB ◽  
Ali YÜREKLİ ◽  
Nedzma DERVİSBEGOVİC ◽  
Ridwan Ali MOHAMMED ◽  
Burcu YILMAZEL
2021 ◽  
Vol 1827 (1) ◽  
pp. 012130
Author(s):  
Qi Li ◽  
Yun Qing Liu ◽  
Yue Qi Peng ◽  
Cong Liu ◽  
Jun Shi ◽  
...  

2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human-computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially human emotion, has evolved significantly. Standard emotion recognition methods currently rely on models deployed on remote servers. We believe that reducing the distance between the input device and the server model can improve efficiency and effectiveness in real-life applications. For this purpose, computational methodologies such as edge computing can be beneficial; they can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly crafted using a dataset of employees working in organizations. The Raspberry Pi-based standalone edge device has been implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared to other networks. The device achieved 100% accuracy in detecting faces in real time and 68% emotion recognition accuracy, i.e., higher than the accuracy reported in the state of the art on the FER 2013 dataset. Future work will implement a deep network on the Raspberry Pi with an Intel Movidius neural compute stick to reduce processing time and achieve quick real-time implementation of the facial emotion recognition system.
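The Mini-Xception network used here is commonly trained on FER 2013, which consists of 48x48 grayscale face crops scaled to [-1, 1]. As an illustration of the preprocessing such an edge pipeline needs before each inference call, here is a minimal, dependency-free sketch; the function name, the nearest-neighbour resize, and the seven-class label list are assumptions for illustration, not the authors' code.

```python
import numpy as np

# FER 2013 emotion classes in their conventional index order (assumed here).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def preprocess_face(face_bgr: np.ndarray, size: int = 48) -> np.ndarray:
    """Convert a cropped BGR face image to a (1, size, size, 1) model input."""
    # Luminance conversion using ITU-R BT.601 weights (channels are B, G, R).
    gray = (0.114 * face_bgr[..., 0]
            + 0.587 * face_bgr[..., 1]
            + 0.299 * face_bgr[..., 2])
    # Nearest-neighbour resize, to keep this sketch free of OpenCV/PIL.
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    resized = gray[np.ix_(rows, cols)]
    # Map pixel range [0, 255] to [-1, 1], as Mini-Xception expects.
    scaled = resized / 255.0 * 2.0 - 1.0
    return scaled[np.newaxis, ..., np.newaxis].astype(np.float32)
```

On the actual device, the output batch would be passed to the loaded Mini-Xception model and `EMOTIONS[np.argmax(predictions)]` read off per frame.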


Author(s):  
Suchitra Saxena ◽  
Shikha Tripathi ◽  
Sudarshan Tsb

This research work proposes a facial emotion recognition (FER) system using the deep learning algorithm Gated Recurrent Units (GRUs) and Robotic Process Automation (RPA) for real-time robotic applications. GRUs are used in the proposed architecture to reduce training time and to capture temporal information. Most work reported in the literature uses Convolutional Neural Networks (CNNs) or hybrid architectures combining CNNs with Long Short-Term Memory (LSTM) units or GRUs. In this work, GRUs are used for feature extraction from raw images and dense layers are used for classification. The performance of CNNs, GRUs, and LSTMs is compared in the context of facial emotion recognition. The proposed FER system is implemented on a Raspberry Pi 3 B+ and on Robotic Process Automation using the UiPath RPA tool for robot-human interaction, achieving 94.66% average accuracy in real time.
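Using a recurrent unit for feature extraction from raw images typically means treating each image row as one timestep of a sequence. The abstract does not publish the architecture details, so the following is a minimal numpy sketch of that idea with a hand-rolled GRU cell (no biases, random untrained weights); it shows the mechanics, not the authors' trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (illustrative sketch: no biases, untrained weights)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.normal(0, 0.1, shape)  # update-gate weights
        self.Wr = rng.normal(0, 0.1, shape)  # reset-gate weights
        self.Wh = rng.normal(0, 0.1, shape)  # candidate-state weights

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                               # update gate
        r = sigmoid(self.Wr @ xh)                               # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))  # candidate
        return (1 - z) * h + z * h_tilde

def encode_image(image, cell, hidden_dim=32):
    """Treat each row of a grayscale image as one timestep; return the
    final hidden state, which a dense softmax layer would then classify."""
    h = np.zeros(hidden_dim)
    for row in image:  # e.g. 48 timesteps of width-48 rows for FER 2013
        h = cell.step(row, h)
    return h
```

In a full system, `encode_image` would be replaced by a trained recurrent layer, and the final hidden state fed to dense layers for the emotion classes.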


2016 ◽  
Vol 139 (11) ◽  
pp. 16-19 ◽  
Author(s):  
Rituparna Halder ◽  
Sushmit Sengupta ◽  
Arnab Pal ◽  
Sudipta Ghosh ◽  
Debashish Kundu

2021 ◽  
Author(s):  
Devanshu Shah ◽  
Khushi Chavan ◽  
Sanket Shah ◽  
Pratik Kanani

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2026
Author(s):  
Jung Hwan Kim ◽  
Alwin Poulose ◽  
Dong Seog Han

Facial emotion recognition (FER) systems play a significant role in identifying driver emotions; accurate recognition of drivers' facial emotions in autonomous vehicles can reduce road rage. However, training even an advanced FER model without proper datasets leads to poor performance in real-time testing. FER system performance is affected more heavily by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition and training from the Xception algorithm. The FIT machine involves removing irrelevant facial images, collecting facial images, correcting misplaced face data, and merging original datasets on a massive scale, in addition to data augmentation. The final FER results of the proposed method improved validation accuracy by 16.95% over the conventional approach with the FER 2013 dataset. A confusion matrix evaluation on an unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming the real-time testing results.
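Two of the threshing steps named above, dropping irrelevant images and correcting misplaced face data, amount to filtering a dataset through a pre-trained model. The paper's implementation is not reproduced here; this is a hypothetical single-pass sketch in which a face-detector confidence gates each sample and a pre-trained classifier's prediction replaces suspect labels.

```python
def thresh_dataset(images, face_conf, pred_labels, conf_thresh=0.9):
    """One hypothetical FIT-style cleaning pass (illustration, not the
    paper's code). `face_conf` holds per-image face-detector confidences;
    `pred_labels` holds a pre-trained classifier's predicted emotion labels."""
    kept_imgs, kept_labels = [], []
    for img, conf, pred in zip(images, face_conf, pred_labels):
        if conf < conf_thresh:
            continue  # no confident face found: treat as irrelevant, drop it
        kept_imgs.append(img)
        kept_labels.append(pred)  # re-label with the pre-trained prediction
    return kept_imgs, kept_labels
```

Merging several source datasets would then simply concatenate the cleaned `(images, labels)` pairs before augmentation and final Xception training.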


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1289
Author(s):  
Navjot Rathour ◽  
Sultan S. Alshamrani ◽  
Rajesh Singh ◽  
Anita Gehlot ◽  
Mamoon Rashid ◽  
...  

Facial emotion recognition (FER) is the procedure of identifying human emotions from facial expressions. It is often difficult to identify an individual's stress and anxiety levels through visuals captured by computer vision alone. However, technological advances in the Internet of Medical Things (IoMT) have yielded impressive results in gathering various forms of emotional and physical health-related data. Novel deep learning (DL) algorithms make it possible to run such applications in a resource-constrained edge environment, encouraging data from IoMT devices to be processed locally at the edge. This article presents an IoMT-based facial emotion detection and recognition system implemented in real time on a small, powerful, and resource-constrained device, the Raspberry Pi, with the assistance of deep convolutional neural networks. For this purpose, we conducted an empirical study of human facial emotions alongside the emotional state of human beings measured with physiological sensors. We then propose a model for real-time emotion detection on a resource-constrained device, i.e., a Raspberry Pi, together with a co-processor, the Intel Movidius NCS2. The facial emotion detection test accuracy ranged from 56% to 73% across various models; the best model reached 73% on the FER 2013 dataset, compared with the state-of-the-art maximum of 64%. A t-test is performed to extract significant differences in the systolic blood pressure, diastolic blood pressure, and heart rate of an individual watching three different subjects (angry, happy, and neutral).
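The t-test on physiological readings compares sample means between viewing conditions. The abstract does not say which t-test variant was used, so as an illustration here is Welch's t-statistic (which does not assume equal variances) in plain numpy; in practice one would use `scipy.stats.ttest_ind` to also obtain the p-value.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic for two independent samples, e.g. heart-rate
    readings while watching 'angry' vs. 'neutral' stimuli (sketch only;
    the paper does not specify its exact t-test variant)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    var_a = a.var(ddof=1) / len(a)  # squared standard error, sample a
    var_b = b.var(ddof=1) / len(b)  # squared standard error, sample b
    return (a.mean() - b.mean()) / np.sqrt(var_a + var_b)
```

A large positive statistic on, say, heart rate in the "angry" condition versus "neutral" would support a significant physiological difference between the two viewing conditions.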

