Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System

Sensors ◽  
2018 ◽  
Vol 18 (5) ◽  
pp. 1383 ◽  
Author(s):  
Yea-Hoon Kwon ◽  
Sae-Byuk Shin ◽  
Shin-Dug Kim
Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1289
Author(s):  
Navjot Rathour ◽  
Sultan S. Alshamrani ◽  
Rajesh Singh ◽  
Anita Gehlot ◽  
Mamoon Rashid ◽  
...  

Facial emotion recognition (FER) is the process of identifying human emotions from facial expressions. It is often difficult to assess an individual's stress and anxiety levels from visuals captured by computer vision alone. However, advances in the Internet of Medical Things (IoMT) have yielded impressive results in gathering various forms of emotional and physical health-related data. Novel deep learning (DL) algorithms make it possible to run such applications in a resource-constrained edge environment, allowing data from IoMT devices to be processed locally at the edge. This article presents an IoMT-based facial emotion detection and recognition system implemented in real time on a small, powerful, resource-constrained device, the Raspberry Pi, with the assistance of deep convolutional neural networks. For this purpose, we conducted an empirical study of human facial emotions together with emotional state as measured by physiological sensors. We then propose a model for real-time emotion detection on the Raspberry Pi paired with an Intel Movidius NCS2 co-processor. Facial emotion detection test accuracy ranged from 56% to 73% across the models evaluated; the best model reached 73% on the FER 2013 dataset, compared with a reported state-of-the-art maximum of 64%. A t-test was performed to extract the significant differences in systolic blood pressure, diastolic blood pressure, and heart rate of individuals watching three different stimuli (angry, happy, and neutral).
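The t-test step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the heart-rate values are hypothetical, and Welch's two-sample t-test is one reasonable choice of test (the abstract does not specify which variant was used).

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t-statistic and approximate degrees of freedom."""
    va, vb = variance(a), variance(b)          # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb                    # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical heart-rate readings (bpm) while watching two stimulus types
hr_angry = [82, 85, 88, 84, 87, 86]
hr_neutral = [74, 76, 73, 75, 77, 74]

t, df = welch_t(hr_angry, hr_neutral)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large |t| relative to the degrees of freedom indicates a significant difference between the two viewing conditions; the same procedure would be repeated for systolic and diastolic blood pressure.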


2021 ◽  
Author(s):  
Tharindu Bathigama ◽  
Sarasi Madushika

This research studies how different representations of audio affect music emotion recognition. We take an experimental approach, using deep learning to build models that recognise the emotion conveyed by music.
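One common audio representation used as input to such models is the magnitude spectrogram. The abstract does not name the representations studied, so the following is only an illustrative sketch of computing one candidate representation with NumPy's short-time FFT; the test signal and frame parameters are assumptions.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via short-time FFT with a Hann window."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # shape: (num_frames, frame_len // 2 + 1)

# Hypothetical 1-second 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin * sr / 256)  # dominant frequency, close to 440 Hz
```

A 2D array like this can be fed to a convolutional network in the same way as an image, which is one standard way deep learning models consume audio.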

