facial landmarks
Recently Published Documents


TOTAL DOCUMENTS

233
(FIVE YEARS 107)

H-INDEX

17
(FIVE YEARS 4)

2022 ◽  
Vol 15 ◽  
Author(s):  
Chongwen Wang ◽  
Zicheng Wang

Facial action unit (AU) detection is an important task in affective computing and has attracted extensive attention in computer vision and artificial intelligence. Previous studies of AU detection usually encode complex regional feature representations with manually defined facial landmarks and learn to model the relationships among AUs via graph neural networks. Although some progress has been achieved, existing methods still struggle to capture the exclusive and concurrent relationships among different combinations of facial AUs. To circumvent this issue, we propose a new progressive multi-scale vision transformer (PMVT) to capture the complex relationships among different AUs for a wide range of expressions in a data-driven fashion. PMVT is based on a multi-scale self-attention mechanism that can flexibly attend to a sequence of image patches to encode the critical cues for AUs. Compared with previous AU detection methods, the benefits of PMVT are twofold: (i) PMVT does not rely on manually defined facial landmarks to extract regional representations, and (ii) PMVT encodes facial regions with adaptive receptive fields, thus facilitating flexible representation of different AUs. Experimental results show that PMVT improves AU detection accuracy on the popular BP4D and DISFA datasets. Compared with other state-of-the-art AU detection methods, PMVT obtains consistent improvements. Visualization results show that PMVT automatically perceives the discriminative facial regions for robust AU detection.
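As a rough illustration of the core mechanism described above (not the authors' implementation), the following minimal NumPy sketch applies scaled dot-product self-attention over image patches extracted at two scales; all variable names and dimensions are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, patch):
    """Split an HxW image into non-overlapping patch x patch tiles, flattened."""
    h, w = img.shape
    tiles = [img[i:i + patch, j:j + patch].ravel()
             for i in range(0, h, patch) for j in range(0, w, patch)]
    return np.stack(tiles)  # (num_patches, patch*patch)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over patch embeddings."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
# Two scales: coarse 16x16 patches (large receptive field) and fine 8x8 patches
for patch in (16, 8):
    x = extract_patches(img, patch)                # (n_patches, patch*patch)
    wq, wk, wv = (rng.standard_normal((x.shape[1], 16)) for _ in range(3))
    out = self_attention(x, wq, wk, wv)
    print(patch, x.shape[0], out.shape)
```

Attending at several patch sizes is one simple way to realize the "adaptive receptive fields" idea: coarse patches capture whole-region cues while fine patches capture local muscle movements.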


2022 ◽  
Vol 12 (1) ◽  
pp. 60
Author(s):  
Zhouxiao Li ◽  
Yimin Liang ◽  
Thilo Ludwig Schenck ◽  
Konstantin Frank ◽  
Riccardo Enzo Giunta ◽  
...  

Three-dimensional surface imaging systems (3DSI) provide an effective and applicable approach to the quantification of facial morphology. Several researchers have implemented 3D techniques for nasal anthropometry; however, they included only a limited set of classic nasal landmarks and parameters. In our clinical routine, we have identified a considerable number of novel facial landmarks and nasal anthropometric parameters, which could be of great benefit to personalized rhinoplasty. Our aim is to verify their reliability, thus laying the foundation for the comprehensive application of 3DSI in personalized rhinoplasty. We determined 46 facial landmarks and 57 anthropometric parameters. A total of 110 volunteers were recruited, and the intra-assessor, inter-assessor, and intra-method reliability of nasal anthropometry were assessed through 3DSI. Our results showed high intra-assessor reliability: MAD (0.012–0.29°, 0.003–0.758 mm), REM (0.008–1.958%), TEM (0–0.06), rTEM (0.001–0.155%), and ICC (0.77–0.995); inter-assessor reliability of 0.216–1.476°, 0.003–2.013 mm; 0.01–7.552%; 0–0.161; 0.001–1.481%; and 0.732–0.985, respectively; and intra-method reliability of 0.006–0.598°, 0–0.379 mm; 0–0.984%; 0–0.047; 0–0.078%; and 0.996–0.998, respectively. This study provides conclusive evidence for the high reliability of the novel facial landmarks and anthropometric parameters for comprehensive nasal measurements using the 3DSI system. Accordingly, the proposed landmarks and parameters could be widely used for digital planning and evaluation in personalized rhinoplasty, otorhinolaryngology, and oral and maxillofacial surgery.
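The reliability statistics listed above follow standard anthropometric definitions. As an illustrative sketch (not the authors' code, and with made-up measurement values), MAD, TEM, and rTEM can be computed from paired repeat measurements as follows:

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two sets of repeated measurements."""
    return float(np.mean(np.abs(a - b)))

def tem(a, b):
    """Technical error of measurement: sqrt(sum(d^2) / 2n) for paired repeats."""
    d = a - b
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

def rtem(a, b):
    """Relative TEM, expressed as a percentage of the grand mean."""
    return 100 * tem(a, b) / float(np.mean(np.concatenate([a, b])))

# Hypothetical repeated measurements of one nasal parameter (mm)
a = np.array([30.1, 31.4, 29.8, 30.6])   # first session
b = np.array([30.0, 31.6, 29.9, 30.5])   # repeat session
print(mad(a, b), tem(a, b), rtem(a, b))
```

Lower MAD/TEM/rTEM values and ICC values near 1 indicate that the new landmarks can be re-measured consistently, which is what the reported ranges demonstrate.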


2021 ◽  
Author(s):  
Askat Kuzdeuov ◽  
Dana Aubakirova ◽  
Darina Koishigarina ◽  
Hüseyin Atakan Varol

Face detection and localization of facial landmarks are the primary steps in building many face applications in computer vision. Numerous algorithms and benchmark datasets have been proposed to develop accurate face and facial landmark detection models in the visual domain. However, varying illumination conditions still pose challenging problems. Thermal cameras can address this problem because they operate at longer wavelengths. However, thermal face detection and localization of facial landmarks in wild conditions have been overlooked, mainly because most existing thermal face datasets have been collected in controlled environments. In addition, many of them contain no annotations of face bounding boxes and facial landmarks. In this work, we present a thermal face dataset with manually labeled bounding boxes and facial landmarks to address these problems. The dataset contains 9,202 images of 145 subjects, collected in both controlled and wild conditions. As a baseline, we trained the YOLOv5 object detection model and its adaptation for face detection, YOLO5Face, on our dataset. To show the efficacy of our dataset, we evaluated these models on the RWTH-Aachen thermal face dataset in addition to our test set. We have made the dataset, source code, and pretrained models publicly available at https://github.com/IS2AI/TFW to bolster research in thermal face analysis.
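YOLOv5-style training data typically stores boxes as normalized center coordinates in plain-text label files. As a hedged sketch (the exact label layout of this dataset is an assumption, not confirmed by the abstract), converting one such label line to pixel coordinates looks like this:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert a normalized YOLO label line 'cls cx cy w h' to a pixel box.

    Returns (cls, x_min, y_min, x_max, y_max) for an img_w x img_h frame.
    """
    parts = line.split()
    cls = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:5])
    return (cls,
            round((cx - w / 2) * img_w),
            round((cy - h / 2) * img_h),
            round((cx + w / 2) * img_w),
            round((cy + h / 2) * img_h))

# A face centered in a hypothetical 640x512 thermal frame, spanning half of
# each dimension
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 512))
```

Storing coordinates normalized to [0, 1] keeps annotations valid across resized training inputs, which is why the YOLO family uses this convention.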


2021 ◽  
Vol 2089 (1) ◽  
pp. 012039
Author(s):  
P Ramesh Naidu ◽  
S Pruthvi Sagar ◽  
K Praveen ◽  
K Kiran ◽  
K Khalandar

Stress is a psychological disorder that affects every aspect of life and diminishes the quality of sleep. This paper presents a successful strategy for detecting cognitive stress levels using facial landmarks. The major goal of this system was to employ visual technology to detect stress using a machine learning methodology. The novelty of this work lies in the fact that a stress detection system should be as non-invasive as possible for the user. User stress and the extracted visual evidence are modelled using machine learning. We discuss the computer vision techniques used to extract visual evidence, the machine learning model used to forecast stress and related parameters, and the active sensing strategy used to collect the most valuable evidence for efficient stress inference. Our findings show that the stress level identified by our method is accurate and consistent with what psychological theories predict. We present a stress recognition approach based on facial photographs and landmarks using the AlexNet architecture. It is vital to have a device that can collect the appropriate data. The use of biological signals or thermal images to identify stress is currently being investigated; to address this limitation, we devised an algorithm that can detect stress in photographs taken with a standard camera. We created a DNN that takes facial landmark positions as input, exploiting the fact that when a person is stressed, their eye, mouth, and head movements differ from their usual patterns. The proposed algorithm senses stress more efficiently, according to experimental data.
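The abstract does not specify the network's architecture, so the following is a purely hypothetical sketch of the general idea: flattened landmark coordinates fed to a small network that outputs a stress probability. Layer sizes, weights, and the 68-point landmark layout are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(landmarks, w1, b1, w2, b2):
    """Tiny MLP: flattened (x, y) landmark coordinates -> stress probability."""
    x = landmarks.ravel()                  # (n_landmarks * 2,)
    h = np.maximum(0, x @ w1 + b1)         # ReLU hidden layer
    logit = h @ w2 + b2
    return 1 / (1 + np.exp(-logit))        # sigmoid -> P(stressed)

n_landmarks = 68                           # dlib-style layout (assumption)
w1 = rng.standard_normal((n_landmarks * 2, 16)) * 0.1
b1 = np.zeros(16)
w2 = rng.standard_normal(16) * 0.1
b2 = 0.0

# Random normalized landmark coordinates stand in for a detector's output
landmarks = rng.uniform(0, 1, size=(n_landmarks, 2))
p = mlp_forward(landmarks, w1, b1, w2, b2)
print(p)
```

In a trained system the weights would be learned from labeled stressed/unstressed images, and the landmarks would come from a face detector rather than random values.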


2021 ◽  
Author(s):  
Jiangming Shi ◽  
Zixian Gao ◽  
Hao Liu ◽  
Zekuan Yu ◽  
Fengjun Li
