A user-independent real-time emotion recognition system for software agents in domestic environments

2007 ◽  
Vol 20 (3) ◽  
pp. 337-345 ◽  
Author(s):  
Enrique Leon ◽  
Graham Clarke ◽  
Victor Callaghan ◽  
Francisco Sepulveda

2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With advances in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially of human emotion, has evolved significantly. Most standard emotion recognition methods currently rely on models deployed on remote servers. We believe that reducing the distance between the input device and the model can lead to better efficiency and effectiveness in real-life applications. Computational methodologies such as edge computing serve this purpose and also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that detects facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly based on a dataset of employees working in organizations. The Raspberry Pi-based standalone edge device was implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared to other networks. The device achieved 100% accuracy for detecting faces in real time and 68% accuracy for emotion recognition, which is higher than the accuracy reported in the state of the art on the FER 2013 dataset. Future work will implement the deep network on the Raspberry Pi with an Intel Movidius neural compute stick to reduce processing time and achieve fast real-time operation of the facial emotion recognition system.
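
A minimal sketch of the kind of on-device pipeline described above, assuming a pre-trained Mini-Xception-style Keras model file (the name emotion_model.hdf5 and the 64×64 grayscale input size are assumptions, not taken from the article) and OpenCV's stock Haar cascade for face detection:

```python
# Hedged sketch: real-time facial emotion recognition on an edge device such as
# a Raspberry Pi. The model file name and input size are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("emotion_model.hdf5", compile=False)  # hypothetical file

cap = cv2.VideoCapture(0)  # Pi camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))  # assumed input size
        face = face.astype("float32") / 255.0
        face = np.expand_dims(np.expand_dims(face, -1), 0)   # shape (1, 64, 64, 1)
        label = EMOTIONS[int(np.argmax(emotion_model.predict(face, verbose=0)))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```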


2016 ◽  
Vol 11 (5) ◽  
pp. 456 ◽  
Author(s):  
Riyanarto Sarno ◽  
Muhammad Nadzeri Munawar ◽  
Brilian T. Nugraha

2020 ◽  
Vol 10 (1) ◽  
pp. 259-269
Author(s):  
Akansha Singh ◽  
Surbhi Dewan

Assistive technology has proven to be one of the most significant inventions for helping people with autism improve their quality of life. In this study, a real-time emotion recognition system for autistic children has been developed. Emotion recognition is implemented in three stages: face identification, facial feature extraction, and feature classification. The objective is to build a system that covers all three stages of emotion recognition and executes expeditiously in real time; a pipeline of this form is sketched below. For this reason, the Affectiva SDK is used in the application. The proposed system detects seven facial emotions: anger, disgust, fear, joy, sadness, contempt, and surprise. The purpose of this study is to teach emotions to individuals with autism, as they often lack the ability to respond appropriately to others' emotions. The proposed application was tested with a group of typically developing children aged 6–14 years, and positive outcomes were achieved.
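
The following is not the Affectiva SDK used in the study; it is a generic illustration of the same three-stage decomposition (face identification, facial feature extraction, feature classification), assuming OpenCV, dlib's standard 68-point landmark model, and a hypothetical pre-trained scikit-learn classifier saved as emotion_svm.joblib:

```python
# Illustrative three-stage emotion recognition pipeline (not the Affectiva SDK).
import cv2
import dlib
import numpy as np
import joblib

detector = dlib.get_frontal_face_detector()
landmarker = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
classifier = joblib.load("emotion_svm.joblib")  # hypothetical pre-trained model

def identify_faces(gray):
    """Stage 1: locate faces in a grayscale frame."""
    return detector(gray, 1)

def extract_features(gray, face_rect):
    """Stage 2: 68 facial landmarks, normalized to the face bounding box."""
    shape = landmarker(gray, face_rect)
    pts = np.array([[p.x, p.y] for p in shape.parts()], dtype="float32")
    pts -= [face_rect.left(), face_rect.top()]            # translation invariance
    pts /= max(face_rect.width(), face_rect.height())     # scale invariance
    return pts.flatten()

def classify_emotion(features):
    """Stage 3: map the feature vector to an emotion label."""
    return classifier.predict([features])[0]

frame = cv2.imread("frame.jpg")                            # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for rect in identify_faces(gray):
    print(classify_emotion(extract_features(gray, rect)))
```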


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1289
Author(s):  
Navjot Rathour ◽  
Sultan S. Alshamrani ◽  
Rajesh Singh ◽  
Anita Gehlot ◽  
Mamoon Rashid ◽  
...  

Facial emotion recognition (FER) is the process of identifying human emotions from facial expressions. It is often difficult to identify the stress and anxiety levels of an individual from visuals captured through computer vision alone. However, technological advances in the Internet of Medical Things (IoMT) have yielded impressive results in gathering various forms of emotional and physical health-related data. Novel deep learning (DL) algorithms now make it possible to run applications in resource-constrained edge environments, allowing data from IoMT devices to be processed locally at the edge. This article presents an IoMT-based facial emotion detection and recognition system implemented in real time on a small, powerful, resource-constrained device, the Raspberry Pi, with the assistance of deep convolutional neural networks. For this purpose, we conducted an empirical study of human facial emotions together with the participants' emotional state measured using physiological sensors. The article then proposes a model for real-time emotion detection on a resource-constrained device, i.e., a Raspberry Pi, together with a co-processor, the Intel Movidius NCS2. Facial emotion detection test accuracy ranged from 56% to 73% across the evaluated models; the best model reached 73% on the FER 2013 dataset, exceeding the state-of-the-art result reported as a maximum of 64%. A t-test was performed to assess significant differences in the systolic blood pressure, diastolic blood pressure, and heart rate of an individual watching three different subjects (angry, happy, and neutral).
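
An illustrative sketch of the kind of t-test described above, comparing heart-rate readings recorded while a participant watches two stimuli. The numbers are invented for illustration; the study's own sensor data are not reproduced here:

```python
# Hedged sketch of a paired t-test on physiological readings (made-up values).
from scipy import stats

hr_happy = [72, 75, 71, 74, 73, 76, 72, 74]   # hypothetical heart-rate samples (bpm)
hr_angry = [81, 84, 79, 83, 85, 82, 80, 84]

# Paired t-test when the same participant provides matched measurement windows;
# use stats.ttest_ind instead for independent samples.
t_stat, p_value = stats.ttest_rel(hr_happy, hr_angry)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in heart rate between the two stimuli.")
```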


2019 ◽  
Vol 13 (4) ◽  
pp. JAMDSM0075-JAMDSM0075 ◽  
Author(s):  
Jyun-Rong ZHUANG ◽  
Ya-Jing GUAN ◽  
Hayato NAGAYOSHI ◽  
Keiichi MURAMATSU ◽  
Keiichi WATANUKI ◽  
...  
