Group emotion estimation using Bayesian network based on facial expression and prosodic information

Author(s):
Tatsuya Hayamizu
Mutsuo Sano
Kenzaburo Miyawaki
Hiroaki Mori
Satoshi Nishiguchi
...
Author(s):
Afizan Azman
Mohd. Fikri Azli Abdullah
Sumendra Yogarayan
Siti Fatimah Abdul Razak
Hartini Azman
...

Cognitive distraction is one of several contributory factors in road accidents, and a number of methods for detecting it have been developed. One of the most popular approaches is based on physiological measurement: head orientation, gaze rotation, blinking, and pupil diameter are among the parameters commonly measured to detect driver cognitive distraction. In this paper, lips and eyebrows are studied. These facial-expression features are visually prominent and can be measured easily when a person is cognitively distracted, and several types of lip and eyebrow movement can be captured as indicators of distraction. Correlation and classification techniques are used for performance measurement and comparison. A real-time driving experiment was set up, and faceAPI was installed in the car to capture the driver’s facial expression. Linear regression, support vector machine (SVM), static Bayesian network (SBN), and logistic regression (LR) models are used in this study. Results showed that lip and eyebrow movements are strongly correlated and play a significant role in improving cognitive distraction detection. A dynamic Bayesian network (DBN) with different confidence levels was also used to classify whether a driver is distracted.
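A minimal sketch of the correlation analysis and classifier comparison the abstract describes, using scikit-learn. The feature names (lip/eyebrow displacement statistics) and the synthetic data are illustrative assumptions; the paper's actual features come from faceAPI tracking during real driving, and only two of the four listed models (SVM and logistic regression) are shown here.

```python
# Sketch: correlate hypothetical lip/eyebrow features with a distraction
# label, then compare two classifiers by cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-window features: mean lip-corner displacement,
# lip-movement frequency, eyebrow-raise amplitude, eyebrow-raise frequency.
X_normal = rng.normal(loc=[0.2, 0.1, 0.3, 0.2], scale=0.10, size=(n // 2, 4))
X_distracted = rng.normal(loc=[0.5, 0.4, 0.6, 0.5], scale=0.15, size=(n // 2, 4))
X = np.vstack([X_normal, X_distracted])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 1 = cognitively distracted

# Pearson correlation of each feature with the label, mirroring the
# paper's correlation analysis of lip and eyebrow movements.
for i, name in enumerate(["lip_disp", "lip_freq", "brow_amp", "brow_freq"]):
    r = np.corrcoef(X[:, i], y)[0, 1]
    print(f"corr({name}, label) = {r:+.3f}")

# Classifier comparison, scored by 5-fold cross-validation accuracy.
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```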


Author(s):
Yongmian Zhang
Jixu Chen
Yan Tong
Qiang Ji

This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesis end, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current state of facial expression and pose information output by the analysis end. The two BNs are connected statically through a data stream link. The coupled BN brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during analysis. Third, a very low transmission bitrate (9 bytes per frame) can be achieved.
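A minimal sketch of two ideas from the abstract under simplifying assumptions: temporal (forward) filtering over a discrete expression state, the simplest instance of DBN inference, followed by packing the result into a 9-byte frame. The state set, transition matrix, AU likelihoods, and frame layout are illustrative, not the chapter's actual model.

```python
# Sketch: one-state-variable DBN filtering plus a 9-byte frame encoding.
import struct
import numpy as np

STATES = ["neutral", "happy", "surprise"]          # hidden expression states
T = np.array([[0.90, 0.05, 0.05],                  # P(x_t | x_{t-1}):
              [0.10, 0.85, 0.05],                  # expressions evolve slowly,
              [0.10, 0.05, 0.85]])                 # favoring self-transitions

def filter_step(belief, au_likelihood):
    """One DBN filtering step: predict with T, then weight by the
    likelihood of the observed AU measurements and renormalize."""
    predicted = belief @ T
    posterior = predicted * au_likelihood
    return posterior / posterior.sum()

def pack_frame(state_idx, faps):
    """Pack one animation frame into 9 bytes: 1 byte for the MAP
    expression state and 8 bytes for eight FAP intensities (0-255)."""
    assert len(faps) == 8
    return struct.pack("B8B", state_idx, *faps)

belief = np.array([1.0, 0.0, 0.0])                 # start in "neutral"
# Fake AU likelihoods P(observed AUs | state) for three frames.
for au_lik in ([0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.8, 0.1]):
    belief = filter_step(belief, np.array(au_lik))
    state = int(np.argmax(belief))
    frame = pack_frame(state, [128] * 8)           # constant FAPs for demo
    print(STATES[state], belief.round(3), len(frame), "bytes")
```

The temporal smoothing in `filter_step` illustrates why a misdetected AU in one frame need not corrupt the animation: the prior carried over from previous frames dominates a single noisy observation.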

