Estimation of Emotional Scenes from Lifelog Videos with Moving Average of Facial Expression Intensity

2021
Author(s): Tomoya Nishimura, Hiroki Nomiya, Teruhisa Hochin

2013, Vol 1 (4), pp. 1-15
Author(s): Hiroki Nomiya, Atsushi Morikuni, Teruhisa Hochin

An emotional scene detection method is proposed in order to retrieve impressive scenes from lifelog videos. The method is based on facial expression recognition, since a wide variety of facial expressions can be observed in impressive scenes. Conventional facial expression recognition techniques, which focus on discriminating a few typical facial expressions, are inadequate for lifelog video retrieval because of this diversity. The authors therefore propose a more flexible and efficient emotional scene detection method that uses unsupervised facial expression recognition based on cluster ensembles. The approach does not require predefined facial expression classes and can detect emotional scenes containing a wide variety of facial expressions. The detection performance of the proposed method is evaluated through emotional scene detection experiments.
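As a rough illustration of the cluster-ensemble idea, the sketch below groups per-frame facial feature vectors with several K-means runs, accumulates a co-association matrix, and derives a consensus partition. It is only a minimal sketch under assumed settings: the feature extraction, ensemble design, and scene-level decision used by the authors are not reproduced, and every function name and parameter here is a hypothetical stand-in.

```python
# Hypothetical sketch of cluster-ensemble grouping of per-frame facial features.
# Requires scikit-learn >= 1.2 (older versions use affinity="precomputed").
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def cluster_ensemble(features, n_runs=10, k_range=(3, 8), final_k=5, seed=0):
    """Cluster facial feature vectors with an ensemble of K-means runs,
    then derive a consensus partition from the co-association matrix."""
    rng = np.random.default_rng(seed)
    n = len(features)
    co_assoc = np.zeros((n, n))
    for _ in range(n_runs):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(1_000_000))).fit_predict(features)
        # Count how often each pair of frames falls in the same cluster.
        co_assoc += (labels[:, None] == labels[None, :]).astype(float)
    co_assoc /= n_runs
    # Consensus clustering on the co-association matrix (similarity -> distance).
    consensus = AgglomerativeClustering(n_clusters=final_k,
                                        metric="precomputed", linkage="average")
    return consensus.fit_predict(1.0 - co_assoc)

# Frames whose consensus cluster is small (rare, atypical expressions) could be
# treated as candidates for emotional scenes.
```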


2020, Vol 2020, pp. 1-8
Author(s): Xueping Su, Meng Gao, Jie Ren, Yunhong Li, Matthias Rätsch

With the continuous development of the economy, consumers pay increasing attention to personalized clothing. However, the recommendation quality of existing clothing recommendation systems is not sufficient to meet users' needs. When browsing clothing online, facial expression is salient information for understanding the user's preference. In this paper, we propose a novel method for automatic personalized clothing recommendation based on user emotion analysis. First, the facial expression is classified by a multiclass SVM. Next, the user's multi-interest value is calculated from the expression intensity obtained by a hybrid RCNN. Finally, the multi-interest values are fused to produce a personalized recommendation. The experimental results show that the proposed method achieves a significant improvement over other algorithms.
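A minimal sketch of the classification-plus-fusion step, assuming scikit-learn and a placeholder intensity score in place of the hybrid RCNN; the expression label set and the fusion weights below are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch: multiclass SVM expression classification fused with an
# externally supplied expression intensity to form a simple interest value.
import numpy as np
from sklearn.svm import SVC

EXPRESSIONS = ["neutral", "happy", "surprised", "disgusted"]   # assumed label set
# Assumed per-expression weights mapping an expression to preference strength.
EXPRESSION_WEIGHTS = {"neutral": 0.1, "happy": 1.0, "surprised": 0.7, "disgusted": -0.5}

def train_expression_svm(features, labels):
    """Train a multiclass SVM (scikit-learn uses one-vs-one internally)."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(features, labels)
    return clf

def interest_value(clf, face_feature, intensity):
    """Fuse the predicted expression with its intensity (0..1) into a score."""
    expr = clf.predict(face_feature.reshape(1, -1))[0]
    return EXPRESSION_WEIGHTS.get(expr, 0.0) * float(intensity)
```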


2020, Vol 528, pp. 113-132
Author(s): Mingliang Xue, Xiaodong Duan, Wanquan Liu, Yan Ren

2019, Vol 9 (16), pp. 3379
Author(s): Hyun-Jun Hyung, Han Ul Yoon, Dongwoon Choi, Duk-Yeon Lee, Dong-Wook Lee

Because the internal structure, degrees of freedom, skin control positions and ranges differ among android faces, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robot (an older man and a young woman) that can simulate human skin movements, and selected 16 control positions to generate their facial expressions. The expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm and was then used to generate robot facial expressions. To determine the fitness of the generated facial expressions, expression intensity was evaluated with a facial expression recognizer. The proposed system was used to generate six facial expressions (angry, disgust, fear, happy, sad, surprised); the results confirmed that they were more appropriate than manually generated facial expressions.
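The search can be illustrated with a small real-coded genetic algorithm over 16 motor displacements. The sketch below is an assumption-laden stand-in: `expression_recognizer` plays the role of the facial expression recognizer returning the target expression's intensity, and the population size, selection, crossover, and mutation settings are not taken from the paper.

```python
# Minimal real-coded genetic algorithm sketch over 16 motor displacements;
# `expression_recognizer(motors, target)` is a hypothetical stand-in returning
# the intensity of the target expression for a candidate motor configuration.
import numpy as np

N_MOTORS = 16

def evolve_expression(expression_recognizer, target, motor_low, motor_high,
                      pop_size=30, generations=50, mut_sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Each chromosome is one candidate set of 16 motor displacements.
    pop = rng.uniform(motor_low, motor_high, size=(pop_size, N_MOTORS))
    for _ in range(generations):
        # Fitness: intensity of the target expression for each candidate.
        fitness = np.array([expression_recognizer(ind, target) for ind in pop])
        order = np.argsort(fitness)[::-1]
        elite = pop[order[: pop_size // 2]]            # keep the better half
        # Blend (arithmetic) crossover between randomly paired elite parents.
        parents = elite[rng.integers(0, len(elite), size=(pop_size - len(elite), 2))]
        alpha = rng.uniform(size=(len(parents), 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        # Gaussian mutation, clipped to the valid motor range.
        children += rng.normal(0.0, mut_sigma, size=children.shape)
        children = np.clip(children, motor_low, motor_high)
        pop = np.vstack([elite, children])
    best = pop[np.argmax([expression_recognizer(ind, target) for ind in pop])]
    return best
```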


2021, Vol 7, pp. e736
Author(s): Olufisayo Ekundayo, Serestina Viriri

Facial Expression Recognition (FER) has gained considerable attention in affective computing due to its vast range of applications. Diverse approaches and methods have been considered for robust FER, but only a few works consider the intensity of the emotion embedded in the expression. Even the available studies on expression intensity estimation only assign a nominal/regression value or classify emotion into a range of intervals. Most existing works on facial expression intensity estimation present only the intensity estimate, while others propose methods that predict emotion and its intensity in separate channels. These multiclass approaches and extensions do not conform to the heuristic manner in which humans recognise an emotion and estimate its intensity. This work presents a Multilabel Convolutional Neural Network (ML-CNN)-based model that simultaneously recognises an emotion and provides an ordinal metric as the intensity estimate of that emotion. The proposed ML-CNN is enhanced with an aggregation of the Binary Cross-Entropy (BCE) loss and Island Loss (IL) functions to minimise intraclass and interclass variations, and the model is pre-trained with the Visual Geometry Group network (VGG-16) to control overfitting. In experiments conducted on the Binghamton University 3D Facial Expression (BU-3DFE) and extended Cohn-Kanade (CK+) datasets, we evaluate ML-CNN's performance in terms of accuracy and loss, and we carry out a comparative study of our model against some popular multilabel algorithms using standard multilabel metrics. The ML-CNN model simultaneously predicts emotion and intensity using ordinal metrics, and it shows appreciable, superior performance over four standard multilabel algorithms: Chain Classifier (CC), distinct Random K label set (RAKEL), Multilabel K Nearest Neighbour (MLKNN) and Multilabel ARAM (MLARAM).
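A minimal PyTorch sketch of the general idea, assuming a VGG-16 backbone with a multilabel head trained with binary cross-entropy; the Island Loss term, the ordinal encoding of intensity, and the layer sizes here are illustrative assumptions rather than the ML-CNN configuration reported in the paper.

```python
# Hypothetical multilabel head on a pre-trained VGG-16, trained with BCE;
# the Island Loss term used in the paper is omitted for brevity.
import torch.nn as nn
from torchvision import models

N_EMOTIONS = 6       # assumed: angry, disgust, fear, happy, sad, surprise
N_INTENSITY = 4      # assumed number of ordinal intensity levels

class MultilabelExpressionNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = backbone.features              # pre-trained VGG-16 features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # One logit per emotion label and per intensity level (multilabel output).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, N_EMOTIONS + N_INTENSITY),
        )

    def forward(self, x):
        return self.head(self.pool(self.features(x)))

model = MultilabelExpressionNet()
criterion = nn.BCEWithLogitsLoss()   # multilabel targets in {0, 1}
# Training step (sketch): loss = criterion(model(images), multilabel_targets)
```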

