A comparative study on different approaches of real time human emotion recognition based on facial expression detection

Author(s):  
Anurag De ◽  
Ashim Saha
2019 ◽  
Author(s):  
Abdultaofeek Abayomi

This research work investigates physiological-signal-based human emotion recognition and its incorporation into an affective system architecture for real-time tracking of persons in distress phase situations, with the aim of preventing casualties. In a casualty situation, a mishap has already occurred, leaving life, limb and valuables in peril. In a distress phase situation, by contrast, a tragedy is likely to occur unless immediate assistance is rendered. Distress phase situations include kidnapping, human trafficking and terrorism-related crimes that can lead to casualties such as loss of life, property and finances and the destruction of infrastructure. These situations are a global concern and necessitate a system that can mitigate the alarming trend of such social crimes. The novel idea of deploying a combination of data-driven and knowledge-driven approaches, using wearable sensor devices supported by machine learning methods, could prove useful as a preventive mechanism in a distress phase situation. Such a system can be achieved by modelling human emotion recognition, including the harvesting and recognition of emotion-related physiological signals. Many methods have been applied in the emotion recognition domain because the extraction of relevant discriminating features remains unresolved and is one of the most daunting aspects of a physiological-signal-based human emotion recognition system. In this thesis, emotion physiological signals, an image processing technique and shallow learning based on a radial basis function neural network were used to construct a system for real-time tracking of persons in distress phase situations. The system was tested on the Database for Emotion Analysis using Physiological Signals (DEAP) to ascertain the achievable recognition performance. Emotion representations such as Arousal, Valence, Dominance and Liking were mapped to different human safety and survival states, namely the happy phase, distress phase and casualty phase, in a real-time person-tracking system. The constructed system can benefit security agencies, emergency services and rescue teams, and can restore confidence to potential victims and their families by proactively providing assistance in the event of a distress phase situation. Moreover, the system could help stem the tide of the identified societal crimes and tragedies by thwarting the progress of a distress phase situation, applying information and communication technology to a critical societal challenge. The recognition component of the constructed system achieves accuracy that outperforms state-of-the-art results based on deep learning techniques.
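To make the shallow-learning idea concrete, the sketch below shows a minimal radial basis function (RBF) classifier that maps DEAP-style Arousal/Valence/Dominance/Liking scores onto the three safety states named in the abstract. The centres, kernel width, 1–9 feature scale and toy data are illustrative assumptions, not values taken from the thesis.

```python
# Hypothetical sketch: an RBF network mapping (arousal, valence, dominance,
# liking) ratings onto happy / distress / casualty phases. All numbers and
# hyperparameters below are assumptions for illustration only.
import numpy as np

class RBFEmotionClassifier:
    def __init__(self, centers, gamma=0.5, n_classes=3):
        self.centers = np.asarray(centers, dtype=float)   # (k, d) RBF centres
        self.gamma = gamma                                 # Gaussian kernel width
        self.W = np.zeros((len(self.centers), n_classes))  # hidden-to-output weights

    def _hidden(self, X):
        # Gaussian activations exp(-gamma * ||x - c||^2) for every centre
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        # Closed-form least-squares fit of the output layer on one-hot targets
        H = self._hidden(np.asarray(X, dtype=float))
        T = np.eye(self.W.shape[1])[y]
        self.W = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, dtype=float)) @ self.W

# Toy usage: 4-D feature vector = (arousal, valence, dominance, liking), scale 1-9
labels = {0: "happy phase", 1: "distress phase", 2: "casualty phase"}
X = np.array([[7, 8, 6, 7], [8, 2, 3, 2], [9, 1, 1, 1]], dtype=float)
y = np.array([0, 1, 2])
clf = RBFEmotionClassifier(centers=X).fit(X, y)
print(labels[int(np.argmax(clf.predict(np.array([[8.0, 2.5, 3.0, 2.0]]))))])
```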


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Xiaodong Liu ◽  
Miao Wang

Recognition of human emotion from facial expressions is affected by distortions in picture quality and facial pose, which traditional video emotion recognition methods often ignore. Context information, on the other hand, can provide additional clues of varying strength that further improve recognition accuracy. In this paper, we first build a video dataset with seven categories of human emotion, named human emotion in the video (HEIV). With the HEIV dataset, we train a context-aware attention network (CAAN) to recognize human emotion. The network consists of two subnetworks that process face and context information, respectively. Features from facial expressions and context clues are fused to represent the emotion of each video frame and are then passed through an attention network to generate emotion scores. The features of all frames are finally aggregated according to their emotion scores. Experimental results show that our proposed method is effective on the HEIV dataset.
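The following PyTorch sketch illustrates the two-stream, attention-weighted fusion the abstract describes: face and context features are fused per frame, an attention layer scores each frame, and the scores weight the aggregation into a video-level prediction. The feature dimensions, the linear stand-ins for the face/context backbones, and the seven-way label space are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a context-aware attention network for video emotion
# recognition. Backbones are replaced by linear layers; sizes are assumed.
import torch
import torch.nn as nn

class ContextAwareAttention(nn.Module):
    def __init__(self, face_dim=512, ctx_dim=512, fused_dim=256, n_emotions=7):
        super().__init__()
        self.face_net = nn.Linear(face_dim, fused_dim)   # stands in for a face CNN
        self.ctx_net = nn.Linear(ctx_dim, fused_dim)     # stands in for a context CNN
        self.attn = nn.Linear(2 * fused_dim, 1)          # per-frame attention score
        self.classifier = nn.Linear(2 * fused_dim, n_emotions)

    def forward(self, face_feats, ctx_feats):
        # face_feats, ctx_feats: (num_frames, dim) precomputed features of one video
        fused = torch.cat([torch.relu(self.face_net(face_feats)),
                           torch.relu(self.ctx_net(ctx_feats))], dim=-1)
        weights = torch.softmax(self.attn(fused), dim=0)  # attention over frames
        video_repr = (weights * fused).sum(dim=0)         # score-weighted aggregation
        return self.classifier(video_repr)                # video-level emotion logits

# Toy usage: 30 frames of 512-D face and context features
model = ContextAwareAttention()
logits = model(torch.randn(30, 512), torch.randn(30, 512))
print(logits.shape)  # torch.Size([7])
```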

