Facial Action
Recently Published Documents

Total documents: 469 (five years: 159)
H-index: 46 (five years: 6)

2022 · Vol. 15 · Author(s): Chongwen Wang, Zicheng Wang

Facial action unit (AU) detection is an important task in affective computing and has attracted extensive attention in computer vision and artificial intelligence. Previous studies typically encode complex regional feature representations from manually defined facial landmarks and model the relationships among AUs with graph neural networks. Although some progress has been achieved, existing methods still struggle to capture the exclusive and concurrent relationships among different combinations of facial AUs. To address this issue, we propose a new progressive multi-scale vision transformer (PMVT) that captures the complex relationships among different AUs across a wide range of expressions in a data-driven fashion. PMVT is built on a multi-scale self-attention mechanism that can flexibly attend to a sequence of image patches to encode the critical cues for AUs. Compared with previous AU detection methods, PMVT offers two benefits: (i) it does not rely on manually defined facial landmarks to extract regional representations, and (ii) it encodes facial regions with adaptive receptive fields, facilitating flexible representation of different AUs. Experimental results show that PMVT improves AU detection accuracy on the popular BP4D and DISFA datasets and obtains consistent improvements over other state-of-the-art AU detection methods. Visualization results show that PMVT automatically attends to the discriminative facial regions for robust AU detection.
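The PMVT code itself is not reproduced on this page, but the core mechanism the abstract describes, self-attention over image-patch tokens at multiple scales feeding a multi-label AU head, can be sketched in a few lines of PyTorch. Everything below (patch sizes, embedding width, depth, a 12-AU output) is an illustrative assumption, not the authors' configuration:

import torch
import torch.nn as nn

class MultiScaleAUDetector(nn.Module):
    """Toy multi-scale patch-attention model for multi-label AU detection."""
    def __init__(self, patch_sizes=(16, 32), dim=256, num_aus=12):
        super().__init__()
        # One patch-embedding branch per scale: coarse patches cover broad
        # facial regions, fine patches cover local muscle movements.
        self.embeds = nn.ModuleList(
            [nn.Conv2d(3, dim, kernel_size=p, stride=p) for p in patch_sizes]
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_aus)  # one logit per AU (multi-label)

    def forward(self, x):
        # Flatten each scale's patch grid into tokens and concatenate them,
        # so self-attention can relate patches within and across scales.
        tokens = torch.cat(
            [e(x).flatten(2).transpose(1, 2) for e in self.embeds], dim=1
        )
        z = self.encoder(tokens).mean(dim=1)  # pool over all tokens
        return self.head(z)

model = MultiScaleAUDetector()
logits = model(torch.randn(2, 3, 224, 224))
probs = torch.sigmoid(logits)  # per-AU occurrence probabilities

A sigmoid head rather than softmax is the natural choice here because AUs co-occur: each unit is an independent binary decision, which is what lets such a model express concurrent AU combinations.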


2021 · Vol. 57 (25) · Author(s): Chuangao Tang, Cheng Lu, Wenming Zheng, Yuan Zong, Sunan Li

2021 · Vol. 11 (23) · pp. 11171 · Author(s): Shushi Namba, Wataru Sato, Sakiko Yoshikawa

Automatic facial action detection is important, but no previous study has evaluated how accurately pre-trained models detect facial actions as the face rotates from frontal toward profile. Using static facial images captured at four angles (0°, 15°, 30°, and 45°), we investigated the performance of three automated facial action detection systems (FaceReader, OpenFace, and Py-Feat). Overall performance was best for OpenFace, followed by FaceReader and Py-Feat. The performance of FaceReader decreased significantly at 45° compared with the other angles, while the performance of Py-Feat did not differ among the four angles. The performance of OpenFace decreased as the target face turned sideways. Prediction accuracy and robustness to angle changes varied with the target facial components and the detection system.
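Two of the three systems compared here are freely scriptable, so a spot check of this kind of angle experiment is straightforward to set up. A hedged Py-Feat sketch follows; the method names track py-feat's documented API but may differ between versions, and the image path is a placeholder:

# pip install py-feat
from feat import Detector

detector = Detector()  # default face, landmark, and AU models
fex = detector.detect_image("face_45deg.jpg")  # placeholder path; returns a Fex dataframe
print(fex.aus)  # per-AU columns for the detected face

# OpenFace, by contrast, is usually driven from the command line, e.g.:
#   FeatureExtraction -f clip.mp4 -aus -out_dir out/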


2021 · Author(s): Patama Gomutbutra, Adisak Kittisares, Atigorn Sanguansri, Noppon Choosri, Passakorn Sawaddiruk, ...

Abstract
Background: There is growing interest in monitoring pain severity in elderly individuals with machine learning models. Previous studies used OpenFace©, a well-known automated facial analysis toolkit, to detect facial action units (FAUs), which would otherwise require long hours of human coding. However, OpenFace© was developed from datasets dominated by young Caucasians in whom pain was elicited in the laboratory. This study therefore evaluates the accuracy and feasibility of models built on OpenFace© output for classifying pain severity in elderly Asian patients in clinical settings.
Methods: Data from 255 Thai individuals with chronic pain were collected at Chiang Mai Medical School Hospital. A phone camera recorded each patient's face for 10 seconds at a distance of 1 meter shortly after the patient self-rated pain severity. For patients unable to self-rate, the video was recorded just after a movement that elicited pain. A trained assistant rated each video clip on the Pain Assessment in Advanced Dementia (PAINAD) scale, and pain severity was classified as mild, moderate, or severe. OpenFace© processed each video clip into 18 FAUs. Six classification models were compared: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM).
Results: Among models restricted to the FAUs described in the pain literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27, and 45), the multilayer perceptron yielded the highest accuracy, 50%. With machine-learning-based feature selection, an SVM over FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, and 45 plus gender yielded the best accuracy, 58%.
Conclusion: Our open-source, automatic video-clip facial action unit analysis was not robust enough for classifying pain in the elderly. Retraining the facial action unit detection algorithms, improving frame selection strategies, and adding pain-related features may improve the accuracy and feasibility of the model.
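The modeling step described above is a standard multi-class pipeline over per-clip FAU features. A minimal scikit-learn sketch of the best-reported configuration (an SVM over the selected FAUs plus gender) is given below; the CSV layout, column names, and numeric gender encoding are assumptions for illustration, not the study's materials:

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-clip feature table: one row per video clip, OpenFace AU
# estimates in columns, gender encoded numerically, PAINAD-derived labels.
df = pd.read_csv("fau_features.csv")
selected = ["AU01", "AU02", "AU04", "AU07", "AU09",
            "AU10", "AU12", "AU20", "AU25", "AU45", "gender"]
X, y = df[selected], df["painad_severity"]  # mild / moderate / severe

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold accuracy
print(scores.mean())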


Author(s): Habibullah Akbar, Sintia Dewi, Yuli Azmi Rozali, Lita Patricia Lunanta, Nizirwan Anwar, ...

2021 · Author(s): Bo Yang, Jianming Wu, Zhiguang Zhou, Megumi Komiya, Koki Kishimoto, ...

2021 · Author(s): Jingwei Yan, Jingjing Wang, Qiang Li, Chunmao Wang, Shiliang Pu
